XRAI AR One AI-Enabled Smart Glasses Enable Enhanced Communication Options for the Hard-of-Hearing
NEWS
In June 2024, XRAI Glass launched the XRAI AR One, a pair of state-of-the-art smart glasses that uses Artificial Intelligence (AI) to translate and subtitle conversations in real time, layering the resulting captions over the natural environment via Augmented Reality (AR). Built around cloud-based AI that translates and interprets speech data, the XRAI AR One was designed to meet the needs of the global deaf and hard-of-hearing community. It is not the first assistive device of its kind, but it is one of the pioneering efforts to apply AI as an enabling technology for disadvantaged communities.
Potential for Enabling AI Set to Grow with Investments and Partnerships
IMPACT
The prospect of AI as an enabling force for disadvantaged communities is exciting, and the XRAI AR One’s use case demonstrates the potential of AI sensing software in human applications. AI sensing software is software that derives information (e.g., objects, people, and speech) from image, video, sound, and other inputs; it includes computer/machine vision and automatic speech recognition. Integrating AI sensing software into adjacent technologies, such as AR, allows societies to better realize AI’s place in daily applications. As a result, ABI Research expects AI sensing software to grow at a Compound Annual Growth Rate (CAGR) of 30.3% between 2023 and 2030, particularly in enterprise services.
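To put that forecast in perspective, the arithmetic behind a 30.3% CAGR can be sketched in a few lines; the normalized base value below is illustrative, not an ABI Research figure.

```python
# Worked example: what a 30.3% CAGR implies over 2023-2030 (7 years).
def project(base, cagr, years):
    """Compound a base value at a constant annual growth rate."""
    return base * (1 + cagr) ** years

base_2023 = 1.0   # normalized 2023 market size (illustrative)
cagr = 0.303      # 30.3% compound annual growth rate
multiple = project(base_2023, cagr, 2030 - 2023)
print(f"2030 market is {multiple:.1f}x the 2023 base")  # roughly 6.4x
```

In other words, sustaining that growth rate would leave the 2030 market at well over six times its 2023 size.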
ABI Research believes that AI applications built to support community inclusiveness (i.e., inclusive AI) have a strong growth trajectory, especially as investments continue to pour into this market. XRAI Glass is far from the only company designing inclusive AI systems for diverse needs: many other enterprises, from niche companies like Parrot AI to big tech firms like OpenAI and Google, have driven developments in inclusive AI applications through independent launches and partnerships. These sustained investments and partnerships are expected to drive the adoption of inclusive AI systems and the development of new use cases.
Enabling AI's Use Case Scenarios for Disabled Communities
RECOMMENDATIONS
AI as an enabling force can become a new reality for disadvantaged communities. However, because AI models are trained on human-created data, generic models may, without a guided fine-tuning process, replicate the same biases humans hold toward data from underrepresented groups such as the disabled community. Enterprises harnessing Generative Artificial Intelligence (Gen AI) to build enabling technologies for these communities must incorporate ethical, supervised fine-tuning measures that accurately capture the communities’ most urgent needs. An inclusive AI system trained on a diverse spread of human-centric speech, text, and image data can then strengthen its role as assistive technology and ultimately become a crucial enabler that levels the playing field.
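One concrete safeguard in a supervised fine-tuning pipeline is auditing how well each community is represented in the training data before tuning begins. The sketch below is a minimal illustration of that idea; the field names, sample labels, and 30% threshold are assumptions for the example, not part of any product described above.

```python
from collections import Counter

def representation_report(samples, key="community"):
    """Return each group's share of a labeled fine-tuning dataset."""
    counts = Counter(s[key] for s in samples)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Illustrative labeled samples; "..." stands in for real text data.
data = [
    {"text": "...", "community": "general"},
    {"text": "...", "community": "general"},
    {"text": "...", "community": "general"},
    {"text": "...", "community": "speech-impaired"},
]

shares = representation_report(data)
# Flag groups below an (assumed) minimum 30% share for rebalancing.
underrepresented = [g for g, s in shares.items() if s < 0.3]
print(shares, underrepresented)
```

A real pipeline would act on the flagged groups, for example by collecting more samples or reweighting them during fine-tuning.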
Inclusive AI systems vary in applicability and development purpose, particularly because the range of disabilities and their core challenges differ widely. Enterprises exploring inclusive AI systems can consider a host of use cases that enable and assist people with specific disabilities, such as:
- Communication technologies that primarily target the speech-impaired community. These impairments range from difficulty vocalizing words and phrases, to disorders involving pitch, loudness, and quality, to being entirely non-verbal. Developers of AI-driven speech recognition and text-to-speech synthesis products face limited training datasets for such impairments, which risks excluding these communities from current state-of-the-art tools and experiences. Effective communication and speech transcription tools would let affected individuals participate equally in collaborative settings and share ideas seamlessly.
- Educational technology built on AI tools is not a new concept, but increasing accessibility in learning should be a key target to bridge the skill and knowledge gap between disabled communities and the broader population. AI-powered tutor apps such as ObjectiveEd’s Braille AI Tutor, supported by Microsoft’s AI for Accessibility program, help disabled individuals learn braille and access education at different levels. The solution uses Optical Character Recognition (OCR) to convert physical braille characters into digital text on an iPad via a refreshable braille display, paired with a speech recognition service that assesses students’ reading skills. Such innovations can make learning far more accessible to the visually impaired community, among others.
- Visualization technologies such as the XRAI AR One, Rogervoice, and Ava use AI to transcribe conversations for those with hearing impairments. In these solutions, AI assistants are embedded through combined software and hardware implementations, including eyewear.
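The transcription products above broadly share one pattern: audio arrives in short chunks, a speech-to-text engine converts each chunk, and the result is rendered as timestamped captions. The sketch below illustrates that loop only; the recognizer is a stand-in stub (real products use production speech-to-text engines), and all names here are hypothetical, not XRAI Glass APIs.

```python
def stub_recognizer(audio_chunk):
    """Placeholder for a real speech-to-text engine; the 'audio' is
    already text here so the caption loop can run standalone."""
    return audio_chunk

def caption_stream(chunks, recognize, start=0.0, chunk_seconds=2.0):
    """Turn a stream of audio chunks into timestamped caption lines,
    the basic loop behind a live-subtitle overlay."""
    captions = []
    t = start
    for chunk in chunks:
        text = recognize(chunk)
        if text:  # skip silent chunks
            captions.append(f"[{t:05.1f}s] {text}")
        t += chunk_seconds
    return captions

# Simulated conversation audio, two seconds per chunk.
chunks = ["Hello there", "", "How can I help you today?"]
for line in caption_stream(chunks, stub_recognizer):
    print(line)
```

An AR device would additionally anchor each caption line in the wearer's field of view, but the chunk-recognize-render loop is the common core.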
With Natural Language Processing (NLP) and AI sensing at the forefront of AI development, AI’s potential as an enabling force can significantly level the playing field for the disabled community, allowing for more equal access to opportunities above all else.