From Pixels to Posture: BIOTONIX's Innovation in AI-driven Anatomical Detection

Identifying objects and their points of interest in raw images is an arduous, time-consuming task, often requiring highly specialized skills. This task represents a first step in interpreting images and analyzing them quantitatively (Lindner 2017). 

An autonomous system capable of performing such a task accurately and reliably could be of significant value in various fields of scientific investigation, such as morphometric analysis, statistical shape analysis, pose estimation and 3D reconstruction, as well as in commercial applications.

To name just a few industry examples:

Modern medicine and physiotherapy rely heavily on the precise identification of anatomical points (depending on the field of application, several synonyms for “point” are used, such as marker, landmark, key point, etc.) for diagnosis, treatment and rehabilitation (e.g., Ghesu et al. 2016, Fig. 1). 

Figure 1 – Landmarks detected in medical images. Source: Ghesu et al. 2016.
 
The fashion and garment industries can take advantage of this technology to take precise biometric measurements. By identifying key anatomical points, designers can create better-fitting garments, reducing the number of returns and increasing customer satisfaction. It could also revolutionize online shopping, enabling virtual fittings with precise sizing recommendations (e.g., Liao et al. 2023, Fig. 2). 
 
Figure 2 – Body point detection for tailoring services. Source: Liao et al. 2023.

Ergonomics and workstation design rely crucially on an understanding of the individual’s anatomy. Accurate measurements can guide the design of office furniture, tools and workspaces that reduce the risk of musculoskeletal disorders and increase productivity (e.g. Kim et al. 2021, Fig. 3).
 
Figure 3 – Skeletal tracking using three different systems. Source: Kim et al. 2021.

Over the years, a number of ingenious methods have been developed for automatic image annotation, whether for segmentation or for detection of the object or its points of interest (e.g. feature point detection, group image registration, thresholding, edge detection, sliding window approaches). These traditional methods require features to be defined manually, making them less flexible in the face of variable image conditions (O’Mahony et al. 2020).

Machine learning (ML)-based methods such as deep learning (DL) have revolutionized this field in recent years. Object detection, semantic segmentation and key point detection are now standard computer vision tasks that benefit from DL. However, these methods require a large volume of annotated data, which is not always available, as annotation is time-consuming and can be very costly.
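
To give a concrete sense of what off-the-shelf DL-based key point detection looks like in practice (a generic illustration, not the BIOTONIX pipeline), the sketch below runs torchvision's pretrained Keypoint R-CNN model, which predicts 17 COCO body keypoints per detected person; the image path is a placeholder.

    # Generic sketch of DL-based human keypoint detection with a pretrained model.
    # Not the BIOTONIX model; it only illustrates the class of methods discussed above.
    import torch
    from torchvision.io import read_image
    from torchvision.models.detection import (
        keypointrcnn_resnet50_fpn,
        KeypointRCNN_ResNet50_FPN_Weights,
    )

    weights = KeypointRCNN_ResNet50_FPN_Weights.DEFAULT
    model = keypointrcnn_resnet50_fpn(weights=weights).eval()

    img = read_image("subject_photo.jpg")        # placeholder image path
    batch = [weights.transforms()(img)]          # preprocessing defined by the weights

    with torch.no_grad():
        preds = model(batch)[0]

    # Keypoints of the highest-scoring detection (assumes at least one person is found):
    # 17 rows of (x, y, visibility).
    person = preds["keypoints"][preds["scores"].argmax()]
    for x, y, visible in person.tolist():
        print(f"x={x:.1f}  y={y:.1f}  visible={bool(visible)}")

A general-purpose model like this predicts only coarse body joints; the point of a specialized system is to locate clinically meaningful landmarks that such generic models were never trained to find.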

The Evolution of Posture Assessment: From Visual to Digital Observation

Although some more objective methods of assessing standing posture from photographs were proposed as early as the first half of the 20th century (e.g. Kraus & Eisenmenger-Weber 1945; MacEwan & Howe 1932), before the advent of modern digital technology posture assessment relied primarily on anamnestic examinations and visual observation by practitioners, generally based on qualitative inspection of spinal curvatures and body misalignment relative to the plumb line in anterior, posterior and lateral views (Iunes et al. 2009). Although this method provided valuable information, it remained inevitably subjective and largely dependent on the individual expertise of the assessor.

With the popularization of digital cameras in the 1990s, the field saw significant progress. Photographic systems for assessing static posture emerged as a new standard, paving the way for quantitative assessment methods. These systems aimed to reduce the subjectivity inherent in visual assessments and provide a more standardized and reproducible measurement process.

Zonnenberg et al. (1996) highlighted the potential of these methods by focusing on their intra- and inter-rater reliability, showing how digital tools could offer consistency in measurements taken on photographs of body posture (Fig. 4). However, quantitative assessment of postural deviations requires the precise identification of specific anatomical points on the human body.

Figure 4 – Basic photographic setup for quantitative posture analysis. Source: Zonnenberg et al. (1996).

 

BIOTONIX Posture’s Pioneering First Steps

A pioneer in the development of photographic systems for the quantitative assessment of posture, BIOTONIX came into being in the early 2000s, and its history is closely linked to the scientific development of this field (Guimond et al. 2003). At the heart of the BIOTONIX system was the paradigm established by Kendall and colleagues (Kendall et al. 2005).

Instead of visually estimating deviations from plumb-line alignment, as prescribed by these authors, BIOTONIX’s marker-based photogrammetric system represented an innovation: it quantified these deviations by recording images and the coordinates of anatomical reference points in metric units.

Although innovative and effective for its time, this approach was not without its challenges. Reliance on manual palpation meant inherent variations between assessments, influenced by the individual expertise and experience of the assessing physiotherapist.

One of the outstanding features of the original BIOTONIX system was its attention to detail during the photography phase. To ensure that anatomical points were clearly visible and distinct in the photos, retro-reflective surface markers were used (Fig. 5). When the camera flash was activated, these markers reflected the light intensely, creating high-contrast dots on the captured images.

Figure 5 – Assembly of the retro-reflective marker sphere and application sticker (left) and construction of the retro-reflective marker sticker (right). Source: Guimond et al. 2003.

The striking contrast ensured that anatomical points were unmistakable, reducing the risk of misinterpretation during the assessment phase. This was particularly crucial given the number of points assessed and the need for high precision in determining postural alignment.

Using conventional computer vision algorithms, the software analyzed the images, detecting and recording the coordinates of each high-contrast anatomical point. This automation not only accelerated the process, but also introduced a new level of consistency and precision to the procedure. Once the coordinates had been recorded, the software could then calculate postural deviations, comparing the observed postural alignment with the pre-established ideal.
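
To give a flavour of how such conventional algorithms can work (a generic sketch, not the actual patented BIOTONIX software), the snippet below thresholds a flash photograph and records the centroid of each bright, high-contrast blob; the filename, threshold value and area bounds are illustrative assumptions.

    # Generic sketch of classical marker detection: threshold bright retro-reflective
    # dots, then record each blob's centroid. Not the original BIOTONIX implementation.
    import cv2

    img = cv2.imread("flash_photo.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder filename

    # The markers are far brighter than skin or clothing under flash, so a high fixed
    # threshold (value chosen here purely for illustration) isolates them.
    _, mask = cv2.threshold(img, 220, 255, cv2.THRESH_BINARY)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    marker_coords = []
    for c in contours:
        if 10 < cv2.contourArea(c) < 500:        # reject noise and large bright regions
            m = cv2.moments(c)
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            marker_coords.append((cx, cy))       # pixel coordinates of one marker

    print(marker_coords)

From pixel coordinates of this kind, deviations can then be expressed in metric units once the image has been calibrated.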

The innovative approach and methodology behind BIOTONIX’s original system were patented in 2003 by Sylvain Guimond, Ph.D., and his colleagues (Guimond et al. 2003). Entitled “System and method for automated biomechanical analysis and the detection and correction of postural deviations”, this patent testifies to the pioneering spirit and technical achievements of the system at the time. Over the following years, various research groups validated the system’s reliability (Harrison et al. 2008, 2007; Janik et al. 2007; Normand et al. 2007, 2002).

This patented methodology set a new standard in the world of biomechanical analysis, marking a shift from purely manual methods to more automated and standardized approaches, such as DIPA (Furlanetto et al. 2012) and SAPO (Ferreira et al. 2011).

A related study using a pain scale questionnaire also investigated the incidence of body pain among subjects with different posture types. Interestingly, those with ideal posture reported lower pain scores than individuals in the other posture categories. This finding highlights the importance of maintaining proper posture to reduce the risk of pain and discomfort.

The Problem: The Challenge of Identifying Anatomical Points for Postural Assessment

Although photographic methods for quantitative posture analysis represented a considerable advance in clinical practice, identifying specific anatomical points on the patient’s body prior to taking photos remained a task based on visual inspection and palpation, which was time-consuming and dependent on the practitioner’s expertise.

As technology advanced, a revolutionary solution emerged: an AI-based system hyper-specialized in the identification of anatomical points on digital photos.

BIOTONIX 2.0: Embracing the Future with AI-powered Autonomy

Building on its rich heritage and proven expertise, BIOTONIX embarked on an ambitious transformation in 2020. With the support of the Canadian federal government’s Scientific Research and Experimental Development (SR&ED) program, the company set itself the goal of revolutionizing postural assessment by developing a new autonomous system.

BIOTONIX set out to create an autonomous postural assessment system, capable of bypassing the need for human intervention. To achieve this, BIOTONIX turned to advanced artificial intelligence (AI) techniques, seeking to harness the power of AI for unrivalled accuracy and efficiency.

Why the move to AI?

Although the original BIOTONIX system was revolutionary for its time, there were some obvious areas for improvement:

  • Efficiency: The manual marking process, though meticulous, was time-consuming.
  • Consistency: Human intervention, despite best efforts, introduced variables that could influence results.
  • Scalability: An autonomous system offers wider application potential, without being limited by manual processes.

AI, with its ability to process data quickly, learn from large datasets and make accurate identifications, presented a solution to these challenges. By integrating AI, BIOTONIX aimed to redefine postural assessment, making it faster, more consistent and universally applicable.

Harnessing Data: The Foundation of BIOTONIX's AI Revolution

In the field of AI, data is everything. In recognition of this, BIOTONIX took advantage of an invaluable asset: a carefully selected database containing thousands of postural assessments. This vast collection of data represented a wealth of information, laying the foundations for the development of AI-powered postural assessment techniques.

Training Specialized AI Models

Using this vast database, BIOTONIX set about training highly specialized AI models. The aim was to teach the AI how to accurately identify anatomical points on photos of people in a relaxed standing position.

The data were carefully annotated and used in iterative training processes, enabling the model to learn to discern subtle nuances, recognize patterns, and accurately identify crucial anatomical points. The AI’s predictions were not only accurate, but also consistent across a wide range of postures and body types.
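
The exact architecture and training procedure are proprietary, so the sketch below is only an illustration of the general approach described above: fine-tuning a standard backbone to regress (x, y) coordinates for a fixed set of annotated points. The backbone choice, loss, tensor shapes and hyperparameters are all assumptions.

    # Illustrative sketch of supervised keypoint regression, assuming photos annotated
    # with N_POINTS (x, y) coordinates each. Not BIOTONIX's actual model or code.
    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    N_POINTS = 16                                 # e.g. the 16 frontal-view markers

    model = resnet18(weights="IMAGENET1K_V1")     # pretrained feature extractor
    model.fc = nn.Linear(model.fc.in_features, N_POINTS * 2)   # regress (x, y) per point

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.SmoothL1Loss()                 # robust to small annotation noise

    def train_step(images, targets):
        """images: (B, 3, H, W) float tensor; targets: (B, N_POINTS, 2) normalized coords."""
        preds = model(images).view(-1, N_POINTS, 2)
        loss = criterion(preds, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

In practice, heatmap-based detectors and heavier data augmentation are common refinements, but the training loop keeps the same shape: annotated coordinates in, a coordinate-wise loss out.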


All BIOTONIX anatomical markers. Source: Guimond et al. 2003.

Anatomical points recognized in RIGHT LATERAL view by BIOTONIX AI

The right lateral view offers a unique perspective, particularly for assessing sagittal plane alignment. The BIOTONIX AI system, with its commitment to comprehensive evaluation, identifies 9 critical anatomical points for this view (Fig. 6); a simple worked sketch of how such coordinates can be expressed as plumb-line deviations follows the list:

  • Tragus of the right ear (SD01)
  • Glabella (SD02)
  • Middle of chin (SD03)
  • Right shoulder above acromion (SD04)
  • Right posterosuperior iliac spine & Right anterosuperior iliac spine (SD05 and SD08)
  • Greater trochanter (SD09)
  • Gerdy’s tubercle (SD10)
  • Transverse tarsal joint (SD11)
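
Purely as a hypothetical illustration of how these lateral markers relate to the plumb-line paradigm described earlier (not the BIOTONIX deviation algorithm), the sketch below measures the horizontal offset of a few markers from a vertical line dropped through the foot reference point; all coordinates and the pixel-to-centimetre scale are invented.

    # Hypothetical illustration: horizontal deviation of lateral-view markers from a
    # vertical plumb line through the foot reference marker. Coordinates, the marker
    # subset and the scale factor are invented; this is not BIOTONIX's scoring method.

    markers = {
        "SD01": (412.0, 180.0),   # tragus of the right ear
        "SD04": (405.0, 340.0),   # right shoulder above acromion
        "SD09": (398.0, 620.0),   # greater trochanter
        "SD11": (395.0, 905.0),   # transverse tarsal joint (plumb-line reference)
    }
    PIXELS_PER_CM = 4.2           # assumed calibration factor

    plumb_x = markers["SD11"][0]  # vertical reference line through the foot marker

    for code, (x, _) in markers.items():
        deviation_cm = (x - plumb_x) / PIXELS_PER_CM
        print(f"{code}: {deviation_cm:+.1f} cm from the plumb line")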

 

Anatomical points recognized in FRONTAL view by BIOTONIX AI

Based on the analytical paradigm established by Kendall et al. (2005), the BIOTONIX system’s frontal view model is designed to recognize 16 specific anatomical points (Fig. 7). Each of these points plays a key role in understanding and assessing postural alignment in the frontal plane (a short asymmetry sketch follows Figure 7):

  • Glabella (FA02)
  • Middle of chin (FA04)
  • Right and left shoulders above acromion (FA05 and FA07)
  • Jugular notch (FA06)
  • Umbilicus (FA08)
  • Right and left anterior superior iliac spines (FA09 and FA11)
  • Right and left wrists above the styloid process of the radius (FA12 and FA13)
  • Right and left patellas (FA14 and FA15)
  • Centered between right and left medial and lateral malleoli (FA16 and FA18)
  • Anterior aspects of right and left distal phalanges of big toe (FA19 and FA20)

Figure 7 – Example of AI detection of BIOTONIX markers in frontal view. Source: BIOTONIX Posture.
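
As a purely hypothetical illustration of how paired frontal markers can be turned into asymmetry measures (again, not the actual BIOTONIX analysis), the sketch below compares the heights of the left and right shoulder and ASIS markers; coordinates and scale are invented, and image y-coordinates increase downwards.

    # Hypothetical illustration: frontal-plane left/right asymmetry from paired markers.
    # Coordinates and scale are invented; this is not the BIOTONIX analysis.

    markers = {
        "FA05": (310.0, 402.0),   # right shoulder above acromion
        "FA07": (608.0, 396.0),   # left shoulder above acromion
        "FA09": (358.0, 655.0),   # right anterior superior iliac spine
        "FA11": (560.0, 661.0),   # left anterior superior iliac spine
    }
    PIXELS_PER_CM = 4.2           # assumed calibration factor

    def height_difference_cm(right_code, left_code):
        """Vertical offset between a left/right marker pair, in cm.
        Positive means the left-side marker sits lower in the image."""
        dy = markers[left_code][1] - markers[right_code][1]
        return dy / PIXELS_PER_CM

    print(f"Shoulder height difference: {height_difference_cm('FA05', 'FA07'):+.1f} cm")
    print(f"Pelvis (ASIS) height difference: {height_difference_cm('FA09', 'FA11'):+.1f} cm")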

Anatomical points recognized in POSTERIOR view by BIOTONIX AI


The system also automatically detects, with high precision, the four calibration markers located on the panel installed behind the subject in the clinical environment.
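
To illustrate what such calibration markers make possible (a generic sketch under assumed panel dimensions and coordinates, not BIOTONIX’s actual calibration procedure), the snippet below maps detected pixel positions to metric panel coordinates with a homography.

    # Generic illustration of metric calibration from four detected panel markers with a
    # known physical layout. All coordinates and dimensions are invented; this is not
    # BIOTONIX's calibration procedure.
    import cv2
    import numpy as np

    # Detected marker centres in the photo, in pixels (assumed values).
    image_pts = np.float32([[210, 120], [880, 118], [885, 840], [205, 845]])

    # Their known positions on the physical panel, in centimetres (assumed 60 x 90 cm grid).
    panel_pts = np.float32([[0, 0], [60, 0], [60, 90], [0, 90]])

    # Homography mapping image pixels to panel centimetres.
    H = cv2.getPerspectiveTransform(image_pts, panel_pts)

    # Convert a detected point from pixel to metric panel coordinates.
    point_px = np.float32([[[500, 480]]])          # e.g. one body marker (assumed)
    point_cm = cv2.perspectiveTransform(point_px, H)
    print(point_cm)                                # [[[x_cm, y_cm]]]

Because the subject stands in front of the panel, a real system would also account for the subject-to-panel distance; the homography above only conveys the basic idea of converting pixels into metric units.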

Diverse applications for a modern era

BIOTONIX’s innovative approach goes beyond the stand-alone model. The AI trained to detect anatomical points has been integrated into a broader ecosystem, tailored to different user experiences and applications:

  • BIOTONIX Posture: For a more complete experience, the BIOTONIX Posture web application offers in-depth analysis, tracking and reporting capabilities. It provides a robust platform for detailed postural assessments and recommendations.

Conclusion

From its pioneering beginnings in postural assessment systems in the early 2000s to its leading position in the AI revolution in biomechanical analysis, BIOTONIX remains at the forefront of innovation. With its suite of applications and cutting-edge AI capabilities, it promises a future where postural health is accessible, accurate and seamlessly integrated into our digital lives.

With BIOTONIX, the future promises innovation, precision and a relentless quest for excellence.

To find out more about our AI system, or to see it in action, visit https://biotonixposture.com. Interested in integrating it into your practice? Contact our team at https://biotonixposture.com/get-in-touch

 

Resources

AAOS, P.C., 1947. Posture and its Relationship to Orthopaedic Disabilities. A Report of the Posture Committee of the American Academy of Orthopaedic Surgeons.

Ferreira, Elizabeth A., et al. “Quantitative assessment of postural alignment in young adults based on photographs of anterior, posterior, and lateral views.” Journal of Manipulative and Physiological Therapeutics 34.6 (2011): 371-380.

Furlanetto, Tássia Silveira, et al. “Validating a postural evaluation method developed using a Digital Image-based Postural Assessment (DIPA) software.” Computer Methods and Programs in Biomedicine 108.1 (2012): 203-212.

Ghesu, Florin C., et al. “An artificial agent for anatomical landmark detection in medical images.” Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016: 19th International Conference, Athens, Greece, October 17-21, 2016, Proceedings, Part III. Springer International Publishing, 2016.

Guimond, Sylvain, et al. “System and method for automated biomechanical analysis and the detection and correction of postural deviations.” U.S. Patent No. 6,514,219. 4 Feb. 2003.

Harrison, Deed E., et al. “Upright static pelvic posture as rotations and translations in 3-dimensional from three 2-dimensional digital images: validation of a computerized analysis.” Journal of Manipulative and Physiological Therapeutics 31.2 (2008): 137-145.

Harrison, Deed E., et al. “Validation of a computer analysis to determine 3-D rotations and translations of the rib cage in upright posture from three 2-D digital images.” European Spine Journal 16 (2007): 213-218.

Iunes, D. H., et al. “Comparative analysis between visual and computerized photogrammetry postural assessment.” Brazilian Journal of Physical Therapy 13 (2009): 308-315.

Janik, Tadeusz J., et al. “Validity of a computer postural analysis to estimate 3-dimensional rotations and translations of the head from three 2-dimensional digital images.” Journal of Manipulative and Physiological Therapeutics 30.2 (2007): 124-129.

Kendall, Florence Peterson, et al. Muscles: Testing and Function with Posture and Pain. 5th ed. Baltimore, MD: Lippincott Williams & Wilkins, 2005.

Kim, Woojoo, et al. “Ergonomic postural assessment using a new open-source human pose estimation technology (OpenPose).” International Journal of Industrial Ergonomics 84 (2021): 103164.

Kraus, Hans, and S. Eisenmenger-Weber. “Evaluation of posture based on structural and functional measurements.” Physical Therapy 25.6 (1945): 267-271.

Liao, Iman Yi, Eric Savero Hermawan, and Munir Zaman. “Body landmark detection with an extremely small dataset using transfer learning.” Pattern Analysis and Applications 26.1 (2023): 163-199.

Lindner, Claudia. “Automated image interpretation using statistical shape models.” Statistical Shape and Deformation Analysis. Academic Press, 2017. 3-32.

MacEwan, Charlotte G., and Eugene C. Howe. “An objective method of grading posture.” Research Quarterly. American Physical Education Association 3.3 (1932): 144-157.

Normand, Martin C., et al. “Three dimensional evaluation of posture in standing with the PosturePrint: an intra- and inter-examiner reliability study.” Chiropractic & Osteopathy 15 (2007): 1-11.

Normand, Martin C., et al. “Reliability and measurement error of the Biotonix video posture evaluation system – Part I: inanimate objects.” Journal of Manipulative and Physiological Therapeutics 25.4 (2002): 246-250.

O’Mahony, Niall, et al. “Deep learning vs. traditional computer vision.” Advances in Computer Vision: Proceedings of the 2019 Computer Vision Conference (CVC), Volume 1. Springer International Publishing, 2020.

Zonnenberg, A. J. J., et al. “Intra/interrater reliability of measurements on body posture photographs.” Cranio® 14.4 (1996): 326-331.
