Apple has published research papers detailing its efforts to advance artificial intelligence (AI). The work highlights on-device AI techniques, including methods for running large language models (LLMs) efficiently and for creating animatable avatars.
The research includes a method called “LLM in a flash,” which keeps model parameters in flash storage and loads them into memory only as needed, enabling complex AI models to run smoothly on iPhones and iPads and potentially paving the way for an improved, on-device generative AI-powered Siri.
Another significant advancement is HUGS, a method for generating animatable avatars from short video clips captured on an iPhone. These avatars could bring a new level of personalization and realism to social media, gaming, educational, and augmented reality applications.
Together, the papers suggest the possibility of a generative AI-powered Siri, along with broader advancements in mobile technology and improved performance in everyday devices through the efficient LLM inference strategy described in LLM in a Flash.
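The core idea behind LLM in a Flash is to keep model weights in flash storage and pull only the parameters needed for the current computation into memory. The toy sketch below illustrates that pattern with a memory-mapped weight matrix, reading only the rows of a feed-forward layer that are predicted to be active for a given input. The dimensions, file handling, and sparsity pattern here are invented for illustration and are not taken from the paper.

```python
import os
import tempfile
import numpy as np

# Hypothetical dimensions for a single feed-forward layer.
D_MODEL, D_FF = 64, 256

# Write a weight matrix to disk to stand in for flash storage.
tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".npy")
tmp.close()
rng = np.random.default_rng(0)
np.save(tmp.name, rng.standard_normal((D_FF, D_MODEL)).astype(np.float32))

# Memory-map the weights: nothing is copied into DRAM yet.
W = np.load(tmp.name, mmap_mode="r")

def sparse_ffn(x, active_rows):
    """Compute only the rows predicted to be active, reading just
    those rows from 'flash' instead of the whole matrix."""
    W_active = np.asarray(W[active_rows])  # loads only these rows
    return W_active @ x, W_active.nbytes

x = rng.standard_normal(D_MODEL).astype(np.float32)
active = np.arange(0, D_FF, 8)  # pretend 1 in 8 neurons fires
y, loaded = sparse_ffn(x, active)
full = W.shape[0] * W.shape[1] * 4  # bytes if fully loaded
print(f"loaded {loaded} of {full} bytes")

os.unlink(tmp.name)
```

Here only a fraction of the layer's bytes ever leave storage, which is the kind of saving that makes large models feasible on memory-constrained devices.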
HUGS renders avatars 100 times faster than previous methods. That speed could significantly enhance social, gaming, and professional applications with realistic, user-controlled avatars, and could greatly benefit the Apple Vision Pro’s Digital Persona.
While Apple tends to speak of machine learning rather than using the buzzword ‘AI’, the research papers indicate deep involvement in new AI technologies. This suggests Apple is likely to bring generative AI to its products, although the company has yet to officially confirm this.