ABI Research expects over 200 million active users of augmented reality (AR) applications that leverage artificial intelligence (AI) in some form by 2026. These include foundational AR technologies such as machine vision and Simultaneous Localisation and Mapping (SLAM) tracking, as well as value-added applications such as image and object recognition, semantic labelling, and expert system analytics.
“The combination of AI, machine learning (ML), and AR is an incredibly potent one,” says Eric Abbruzzese, augmented and virtual reality research director at ABI Research. “At the core, the capabilities of augmented reality get stronger with more data available. This data comes from location data, sensor data, environmental dynamics, and integrated systems such as the Internet of Things (IoT). AR can also serve as a data collection enabler for these data types. Weaving AI into these areas brings high-value and often critical AR capabilities to market.”
AR's need for visual and spatial data is often met by AI-enabled technologies that capture, process, and contextualise that data in an actionable way. As a result, these two markets continue to overlap and create substantial opportunity.
According to ABI Research, while machine vision isn't inherently required for AR (assisted reality hardware and applications, for example, can do without it), it is increasingly a necessity for most use cases. Machine vision enables SLAM tracking, which allows precise tracking of the user in space and can also capture spatial data for later use.
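As a concrete illustration of the machine-vision step that underpins visual SLAM, the sketch below uses OpenCV's ORB detector to find and match keypoints between two camera frames; those correspondences are the raw material from which a tracker estimates camera motion. This is a minimal, generic fragment for exposition, not any vendor's implementation, and a real SLAM system layers pose estimation, mapping, and loop closure on top of this step.

```python
# Minimal sketch: the feature-matching step underlying visual SLAM.
# Illustrative only -- a real tracker adds pose estimation, mapping,
# and loop closure. Requires the opencv-python package.
import cv2

def match_frames(frame_a, frame_b, max_matches=50):
    """Detect ORB keypoints in two camera frames and match them.

    The matched keypoint pairs are what a SLAM tracker uses to
    estimate how the camera moved between the two frames.
    """
    orb = cv2.ORB_create(nfeatures=1000)
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)
    if des_a is None or des_b is None:
        return []  # not enough visual texture in one of the frames

    # Hamming distance suits ORB's binary descriptors; cross-checking
    # keeps only mutually-best matches, which reduces outliers.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt)
            for m in matches[:max_matches]]
```

Feeding successive camera frames through a step like this yields point correspondences from which camera pose, and hence the user's position in space, can be estimated.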
ABI Research expects nearly 20 million shipments of AR smart glasses with local on-device AI chipsets in 2026, accounting for 70% of total smart glasses shipments that year. On-device AI processing is most common today, but some AI processing types are increasingly moving to the cloud. For instance, SLAM tracking can stay on-device for reliability and low latency, while semantic labelling can run in the cloud, trading latency away for a data type that is not latency-sensitive. Cloud and hybrid compute scenarios balance AI processing performance against device performance and battery life, with flexibility depending on the application and environment.
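To make the on-device versus cloud split concrete, the sketch below expresses it as a per-workload routing decision in Python. The class names, latency budgets, and the 80 ms round-trip figure are illustrative assumptions for exposition, not measurements or any real product's API.

```python
# Hypothetical sketch of hybrid AR compute routing: latency-critical
# workloads stay on-device, latency-tolerant ones go to the cloud.
# All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum

class Placement(Enum):
    ON_DEVICE = "on-device"
    CLOUD = "cloud"

@dataclass
class Workload:
    name: str
    latency_budget_ms: float  # how long the app can wait for a result

def place_workload(w: Workload, round_trip_ms: float = 80.0) -> Placement:
    """Route a workload based on its latency budget.

    If a cloud round trip would blow the budget (e.g. SLAM tracking,
    which must keep pace with head motion), run locally; otherwise
    offload to spare the device's battery and thermal headroom
    (e.g. semantic labelling of the scene).
    """
    if w.latency_budget_ms < round_trip_ms:
        return Placement.ON_DEVICE
    return Placement.CLOUD

if __name__ == "__main__":
    slam = Workload("SLAM tracking", latency_budget_ms=16.0)
    labels = Workload("semantic labelling", latency_budget_ms=500.0)
    for w in (slam, labels):
        print(f"{w.name}: {place_workload(w).value}")
```

The design choice this illustrates is the one described above: the routing criterion is the workload's latency tolerance, not its compute cost, which is why tracking stays local even on power-constrained glasses while heavier but patient workloads can move off-device.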
Many companies in the AR space have been leveraging AI in numerous ways for years, and this usage is growing in both the number of companies and the scope of use. At the hardware level, Qualcomm has baked AI enhancements specifically for AR and VR into its XR chipset line, to improve tracking accuracy and performance, for instance. NVIDIA leverages AI in its CloudXR product as well as in Omniverse, for which it most recently announced automated simulation and content creation capabilities driven by AI. Enterprise players like PTC and TeamViewer use machine vision for device tracking, as well as for backend processing, analytics, predictive processes, and more.
Together, these elements add up to a valuable enabling technology that complements the entire augmented reality value chain.
“Point to a use case, application, service, or vertical, and AI is already being leveraged there; its role will evolve substantially over the next 5 to 10 years. The value adds commonly cited for augmented reality, including increased worker efficiency and safety as well as novel collaboration and remote enablement capabilities, are all enhanced with AI. More accurate and predictable tracking and data gathering, automated and targeted content delivery, and newly uncovered data and usage trends all contribute,” concludes Abbruzzese.