Next generation centering devices
AI is taking over
The process of measuring the position of the spectacle frame in relation to a user's face and pupils is called the centering process. Its purpose is to provide the optician with the measurements needed to correctly order the lenses. For a long time, this process was carried out by hand, but numerous digital methods, and now even AI-based technology, are available to improve it.
The result of this centering process can be divided into two sets of data. First, the coordinates of the pupil relative to the frame edges, known as the frame box, are obtained. These coordinates include the fitting height and nasopupillary distances (Fig. 1). Second, there is another set of data that involves the position of the final lens in relation to the user's face, including the distance to the cornea (back vertex distance), the pantoscopic angle, and the wrap angle. Both sets of data can significantly impact the comfort of the spectacle wearer if the measurements are not taken with sufficient precision, especially the first set. Misalignment of the optical centers of the lenses with the pupil position can cause a range of symptoms, from double vision and difficulty finding the intermediate and near zones of PAL lenses to headaches in extreme cases.
Errors in the second set of parameters may have less severe consequences, but inadequate measurements can still lead to incorrect optical power reaching the retina and insufficient compensation for oblique astigmatism when the lens sits at extreme orientation angles or distances in front of the eye.
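As a rough illustration of the first set of data, the sketch below computes a fitting height and a nasopupillary distance from a pupil coordinate and a frame box, all expressed in millimetres in the frame plane. The conventions used here (box edges, bridge centre as horizontal reference) are simplified assumptions for illustration, not the exact definitions used by any particular device.

```python
from dataclasses import dataclass


@dataclass
class FrameBox:
    # Edges of one lens aperture in the frame plane, in millimetres.
    left: float      # nasal edge of the aperture in this example
    right: float
    top: float
    bottom: float


def fitting_height(pupil_y: float, box: FrameBox) -> float:
    """Vertical distance from the pupil centre to the lower edge of the box."""
    return pupil_y - box.bottom


def nasopupillary_distance(pupil_x: float, bridge_center_x: float) -> float:
    """Horizontal distance from the middle of the bridge to the pupil centre,
    measured separately for each eye."""
    return abs(pupil_x - bridge_center_x)


# Illustrative numbers only (millimetres):
box = FrameBox(left=34.0, right=84.0, top=48.0, bottom=10.0)
print(fitting_height(pupil_y=32.0, box=box))                       # 22.0 mm
print(nasopupillary_distance(pupil_x=62.0, bridge_center_x=30.0))  # 32.0 mm
```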
Manual centering process
Traditionally, optometrists have been trained in schools and universities to perform the centering process manually. They often use a marker pen to mark the position of the pupil on the sample lens of the frame being measured. Then, a ruler is used to determine the coordinates of that mark relative to the frame box. Additionally, special rulers are used to determine the second set of parameters, such as back vertex distance, pantoscopic angle, and wrap angle.
This method usually lacks precision and repeatability and, just as importantly, does not project a particularly sophisticated image for the optical store.
Advantages of digital methods
Digital methods have been introduced over the years that can both improve precision and enhance the technological image of the optical store. In particular, centering methods based on tablets have become quite common. This technique involves taking pictures of the user wearing the frame with a special mask added to provide visual references for the graphical analysis software.
The outcome is usually very reliable in terms of precision, but the measuring process can be tedious, involving the adjustment of the mask to the frame, taking pictures from the front and side of the user, and finally making manual adjustments on the tablet software to ensure all visual references on the mask are correctly identified.
The frame-face object as a 3D system
Other digital methods have been developed over the years with a more sophisticated approach, using ideas that date back more than 20 years but required technological maturity in other areas, such as electronics, to finally succeed and become commercially competitive. Stereoscopic pairs of cameras that take synchronized pictures can be used to reconstruct the frame-face object into a 3D system by inferring the depth position of each pixel.
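The underlying principle is standard stereo triangulation: once the same feature (a pupil centre or a point on the frame rim) is located in both synchronized images, its depth follows from the disparity between the two views. Below is a minimal sketch, assuming rectified cameras with a known focal length and baseline; the numbers are illustrative only.

```python
def depth_from_disparity(x_left: float, x_right: float,
                         focal_px: float, baseline_mm: float) -> float:
    """Depth of a feature seen in a rectified stereo pair.

    x_left, x_right: horizontal pixel coordinates of the same feature
    focal_px:        focal length expressed in pixels
    baseline_mm:     distance between the two camera centres
    """
    disparity = x_left - x_right            # pixels; larger for closer points
    if disparity <= 0:
        raise ValueError("feature must be in front of both cameras")
    return focal_px * baseline_mm / disparity   # depth in millimetres


# Illustrative values: a 1200 px focal length, 100 mm baseline and a
# 240 px disparity place the feature roughly 500 mm from the cameras.
print(depth_from_disparity(x_left=860.0, x_right=620.0,
                           focal_px=1200.0, baseline_mm=100.0))  # 500.0 mm
```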
These systems have been on the market for a few years and have been improving with each new version. They offer significant advantages over the previously mentioned methods in terms of precision and simplification of the process from the user's standpoint. Their measurement process can be divided into three main steps: photography capture, review of the detected features, and calculation of the results (Fig. 2).
Photography capture process
Photography capture is one of the strong points of this technology compared with tablets: a single shot is taken that, including positioning the user, takes only a few seconds and requires no mask or other gadget. The optometrist only needs to ensure the patient is correctly placed in front of the equipment in a relaxed, neutral position. Once that is done, the capture itself takes less than one second. After the pictures have been acquired, the internal algorithm positions each pixel from each camera in 3D space and then identifies the pupils within the frame limits. Pupil positioning can be based on different available technologies, such as graphical analysis using trained AI models or direct corneal reflection using IR light; both turn out to be very reliable.
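As an illustration of the corneal-reflection approach, the sketch below locates the brightest compact spot in a grayscale IR crop of the eye region using OpenCV. This is only a simplified assumption of how such a detector might start; a real device would combine it with calibration and the 3D reconstruction described above.

```python
import cv2
import numpy as np


def find_corneal_reflection(eye_gray: np.ndarray) -> tuple[int, int]:
    """Return the (x, y) pixel of the brightest compact spot in an IR eye crop.

    The corneal reflection of an IR source appears as a small, very bright
    blob; smoothing first avoids locking onto single noisy pixels.
    """
    blurred = cv2.GaussianBlur(eye_gray, (7, 7), 0)
    _, _, _, max_loc = cv2.minMaxLoc(blurred)
    return max_loc  # (x, y) in image coordinates


# Usage with a hypothetical grayscale IR crop of the eye region:
# eye = cv2.imread("eye_ir.png", cv2.IMREAD_GRAYSCALE)
# print(find_corneal_reflection(eye))
```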
Frame detection is required to locate the limits of the frame in the pictures and determine the box size, but the difficulty lies in graphically distinguishing the frame from background textures, such as shadows on the skin, the eyebrows, or even objects placed behind the user. The practitioner has no choice but to review the boxes picture by picture to ensure they are well located and to validate all the shots before the results can be calculated. Once this is done, all parameters are readily calculated on the same device or later on a back-office web page where all the results can be consulted.
AI as a fundamental pillar
In today's world, AI is increasingly becoming a fundamental pillar for enhancing products and processes across many services and industries. AI is used to create new solutions or predict outcomes in many fields, making it an excellent tool to simplify people's lives. The field of centering devices is no exception: AI can provide an innovative solution that incorporates this advanced technology to facilitate the work of opticians. In fact, the time-consuming task of validating frame boxes can be greatly optimized with the incorporation of AI, leading to significant improvements. At Horizons Optical, we have implemented this type of solution to verify its suitability by studying the algorithm's precision and measuring the time savings from the practitioner's perspective.
To achieve these advancements, a predictive AI can be employed to enhance the detection of frame boxes using computer vision algorithms and convolutional neural networks (CNN), an advanced deep learning technique. The model can be trained with an extensive dataset consisting of thousands of images of boxes previously adjusted by opticians, allowing the system to learn specific and relevant patterns for the task. Specifically, a bounding box detection model can be used to accurately identify relevant areas in the images and automatically adjust them (Fig. 3).
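A minimal sketch of such a bounding-box detector is shown below, assuming PyTorch/torchvision and a single "frame box" class. The dataset, training loop, and hyperparameters are omitted, and none of this is Horizons Optical's actual implementation; it only illustrates the general fine-tuning pattern described above.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor


def build_frame_box_detector(num_classes: int = 2) -> torch.nn.Module:
    """Faster R-CNN adapted to detect a single object class: the frame box.

    num_classes counts the background as class 0, so a one-class detector
    uses num_classes=2. The pretrained backbone is then fine-tuned on
    images of boxes previously adjusted by opticians.
    """
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model


model = build_frame_box_detector()
model.eval()
with torch.no_grad():
    # One dummy RGB image; a real system would feed the captured photographs.
    prediction = model([torch.rand(3, 480, 640)])[0]
    # prediction["boxes"] holds [x1, y1, x2, y2] candidates for the frame box,
    # prediction["scores"] their confidences.
```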
Once the solution was implemented, two experiments were conducted to gather objective data:
A. 220 measurements were performed by two different practitioners on various users who were unaware of whether AI was enabled or not. Half of the measurements had AI enabled, while the other half did not.
B. 64 measurements were performed by two practitioners on four users with a variety of frames, including plastic, metal, full frame, rimless, and semi-rimless. Here too, the practitioners did not know when AI assistance was activated.
The results were evaluated based on:
- Improvement in precision before any manual correction
- Number of measurements that did not require any correction
- Overall decrease in adjustment time
The calculated error of the box adjustment, in pixels, improved significantly in both mean and standard deviation when AI was enabled, meaning the results were consistently closer to zero error.
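For context, the kind of per-measurement error summarised here can be expressed as the pixel offset between the automatically proposed box and the box finally validated by the optician. The sketch below uses hypothetical numbers purely to show how such mean and standard-deviation figures would be obtained.

```python
import numpy as np


def box_error_px(proposed: np.ndarray, validated: np.ndarray) -> float:
    """Mean absolute offset, in pixels, between the four edges
    [left, top, right, bottom] of the proposed and validated boxes."""
    return float(np.mean(np.abs(proposed - validated)))


# Hypothetical per-measurement errors for the two conditions:
errors_ai_off = np.array([6.1, 4.8, 7.3, 5.5])
errors_ai_on = np.array([1.2, 0.0, 2.1, 0.9])
for label, errors in (("AI off", errors_ai_off), ("AI on", errors_ai_on)):
    print(label, "mean:", errors.mean(), "std:", errors.std())
```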
Consequently, the proportion of measurements in which the practitioner needed to make an adjustment decreased from 60% to 24%. Interestingly, even within this 24%, the adjustments required were much smaller (Fig. 4).
More importantly, the time required for the box adjustment step decreased by 70%, from an average of 31 seconds down to only 9 seconds. Therefore, AI assistance resulted in a much more robust system and a more appealing experience that fully achieved the desired functionality while improving optometrist operations.
Increased practitioner engagement
These results demonstrate the profound impact this technology can have on the use of advanced centering devices. In fact, an extended analysis of device use in stores after enabling AI assistance clearly revealed increased practitioner engagement. The average number of monthly measurements per store increased by nearly 50% after AI was enabled.
In conclusion
The integration of AI into tools like centering devices is a clear example of how technology can solve problems and transform traditional industries such as optics. Specifically, AI has led to:
- Improved automatic detection: The accuracy of box detection at the pixel level has increased by 75%.
- Fewer manual adjustments: Previously, opticians had to make readjustments in 60% of cases. Now, only 24% of detections require human intervention.
- Time savings: Necessary adjustments are now made 70% faster thanks to usability improvements in the interface. The new adjustments screen design reduces the number of clicks and time required.
In the future, it will become increasingly common to find applications with AI layers that enhance functionalities and optimize processes. The advancements achieved with AI are not only real but also mark a significant improvement in the efficiency of the optical industry, demonstrating the true potential of these technologies.
Pau Artús, Chief Innovation Officer at Horizons Optical, received his Bachelor's Degree in Chemistry from the Universitat de Barcelona. He went on to obtain an M.Sc. in Molecular Magnetism at Indiana University in Bloomington, Indiana, in 2000. He obtained a Ph.D. in mechanical properties of plastic materials for ophthalmic lenses in 2009 (Universitat Politècnica de Catalunya) and in 2011 completed a Master's Degree in Innovation Management (Universitat Pompeu Fabra). His professional career in the ophthalmic field started in the Lens R&D Department of Indo, where he later became the Lens R&D Department Manager. In 2017 the whole Lens R&D Department of Indo became the Innovation Department of the newly created Horizons Optical. There, Pau initially took the role of Technical Operations Director and became the Chief Innovation Officer of the company in 2019.