Improving Detection Accuracy and Training
AR Toolbox delivers strong detection accuracy right out of the box, but as with any machine learning system, the quality of its results depends on the conditions under which it operates. The difference between a scan that catches every tool perfectly and one that misses a few items often comes down to controllable factors like lighting, camera angle, and how tools are arranged. In this post, we will walk through practical steps you can take to get the best possible results from the detection model, and explain how the system learns and improves over time.
Lighting and Camera Angle Best Practices
Lighting is the single most impactful factor for detection accuracy. The YOLO model relies on visual features like edges, shapes, and color gradients to distinguish one tool from another. In dim or uneven lighting, those features become harder to extract, and confidence scores drop accordingly. For the best results, make sure the tools you are scanning are well-lit with diffuse light. Overhead shop lights, natural daylight from a garage door, or even a work lamp pointed at the scanning area all work well. Avoid strong shadows that fall across tools, as a shadow can split a single tool into what the model perceives as two separate objects.
Camera angle matters too. Holding your phone directly above the tools, looking straight down, provides the model with the clearest silhouette of each item. This top-down perspective minimizes overlap and gives the model the most surface area to work with. Angles between 30 and 90 degrees from the surface generally perform well. Very shallow angles, where you are looking across the surface at nearly eye level, tend to cause tools to overlap visually and make detection significantly harder.
Distance also plays a role. Hold your phone close enough that tools fill a meaningful portion of the frame, but far enough away that the items you want to scan are all visible at once. For a single toolbox drawer, arm's length is usually ideal. For a larger workbench surface, you may need to step back slightly or scan in sections.
Handling Similar-Looking Tools and Corrections
Some tools are genuinely difficult to distinguish visually. A Phillips head screwdriver and a Pozidriv screwdriver look nearly identical from most angles. Combination wrenches in similar sizes can be hard to tell apart without reading the stamped markings. Needle-nose pliers and long-nose pliers share the same basic shape. The model handles these cases by assigning its best guess along with a confidence score that reflects the ambiguity.
When you notice an incorrect classification during the review step, correcting it takes only a few taps. Select the wrong entry, choose the correct tool type from the searchable list, and save. This correction is recorded in your inventory as the accurate value. Over time, these corrections also serve as valuable feedback signals. When correction data is aggregated across the user base in anonymized form, it highlights specific tool pairs that the model struggles with, which directly informs the next round of model training. The result is that the tools users correct most frequently become the tools the model improves on fastest.
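To make the feedback loop concrete, here is a minimal sketch of how anonymized corrections might be aggregated into a list of frequently confused tool pairs. The `Correction` record, its field names, and the `confused_pairs` helper are all hypothetical illustrations, not AR Toolbox's actual schema or pipeline.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Correction:
    predicted: str   # label the model originally assigned
    corrected: str   # label the user chose in the review step

def confused_pairs(corrections):
    """Count how often each (predicted, corrected) pair occurs.

    The most frequent pairs indicate which tool types the model
    confuses, which can prioritize the next round of training data.
    """
    return Counter((c.predicted, c.corrected) for c in corrections)

# Illustrative data: two Pozidriv corrections and one pliers correction.
corrections = [
    Correction("phillips_screwdriver", "pozidriv_screwdriver"),
    Correction("phillips_screwdriver", "pozidriv_screwdriver"),
    Correction("needle_nose_pliers", "long_nose_pliers"),
]
print(confused_pairs(corrections).most_common(1))
# [(('phillips_screwdriver', 'pozidriv_screwdriver'), 2)]
```

The same counting idea scales to any number of users once the records are anonymized: only the label pair survives, not who scanned what.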
Think of corrections not as a failure of the system, but as a collaboration with it. Every time you fix a misidentification, you are contributing to a smarter model for everyone. The more the community scans and corrects, the better AR Toolbox gets.
Confidence Thresholds and Tuning
AR Toolbox uses a confidence threshold to determine which detections are shown on screen. By default, this threshold filters out low-confidence predictions so that only reasonably certain identifications appear as AR overlays. You can adjust this threshold in the app settings to suit your preference.
Raising the threshold means fewer detections but higher accuracy among the ones that do appear. This is useful when you want to quickly scan a well-organized toolbox and trust every result without much review. Lowering the threshold surfaces more detections, including uncertain ones, which is helpful when you are scanning a cluttered surface and want to catch as many tools as possible, accepting that some will need manual correction.
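The filtering itself is simple to picture. The sketch below shows the idea with a hypothetical `Detection` structure and an assumed default threshold of 0.5; the real app's internals and default value may differ.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float  # 0.0 to 1.0, as reported by the detection model

def visible_detections(detections, threshold=0.5):
    """Keep only detections at or above the confidence threshold."""
    return [d for d in detections if d.confidence >= threshold]

# Illustrative scan results for a single frame.
raw = [
    Detection("combination_wrench_10mm", 0.91),
    Detection("combination_wrench_11mm", 0.47),  # ambiguous size
    Detection("needle_nose_pliers", 0.78),
]
print(len(visible_detections(raw, threshold=0.5)))  # 2
print(len(visible_detections(raw, threshold=0.3)))  # 3
```

Raising the threshold to 0.5 hides the uncertain 11 mm wrench; lowering it to 0.3 surfaces it for manual review, which is exactly the trade-off described above.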
There is no single right answer for the threshold setting. Technicians who scan the same toolbox daily and know its contents well might prefer a higher threshold for speed. Someone inventorying an unfamiliar collection of tools for the first time might lower it to make sure nothing is missed. The flexibility is there so you can match the app's behavior to your workflow rather than the other way around.
Another useful technique is rescanning from a different angle. If a tool was missed on the first pass, shifting your position by even 15 to 20 degrees can change the visual profile enough for the model to pick it up. The app is designed for multiple quick scans rather than one perfect scan, and the review screen merges results intelligently so you do not end up with duplicate entries.
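One way to picture how multiple passes can be merged without duplicates is overlap-based deduplication: two detections with the same label whose regions overlap heavily are treated as the same physical tool. The sketch below uses 2D bounding boxes and intersection-over-union for simplicity; a real AR merge would work in tracked world coordinates, and the function names here are illustrative only.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def merge_scans(passes, iou_threshold=0.5):
    """Combine detections from several passes, keeping one entry per tool.

    Each detection is a (label, box, confidence) tuple. Overlapping
    same-label detections are merged; the higher confidence wins.
    """
    merged = []
    for detections in passes:
        for label, box, conf in detections:
            for i, (mlabel, mbox, mconf) in enumerate(merged):
                if mlabel == label and iou(box, mbox) >= iou_threshold:
                    if conf > mconf:
                        merged[i] = (label, box, conf)
                    break
            else:
                merged.append((label, box, conf))
    return merged

pass1 = [("hammer", (0, 0, 10, 10), 0.9)]
pass2 = [("hammer", (1, 0, 11, 10), 0.8),          # same hammer, shifted view
         ("tape_measure", (20, 20, 30, 30), 0.7)]  # missed in pass 1
print(len(merge_scans([pass1, pass2])))  # 2
```

The hammer seen from two angles collapses into one inventory entry, while the tape measure caught only on the second pass is still added, which is the behavior the review screen is aiming for.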
What's Next
With a solid understanding of how to optimize your scanning environment and how the model improves through corrections, you are well equipped to get accurate results in any condition. Next, we will take a closer look at the AR overlay system itself, examining what information is displayed, how overlays are positioned and color-coded, and how performance varies across different devices.