Apple hasn’t said much publicly about its plans to join the many companies offering generative AI products, but this week it did open up a window into its behind-the-scenes work on the kind of system that powers AI chatbots and image generators.
On Monday, it released OpenELM, which it calls a “state-of-the-art open language model.” Language models are the AI systems, trained on massive amounts of data, that power tools like ChatGPT, Gemini, Perplexity and Dall-E to respond to the prompts you type when you want an AI to whip up an email, write computer code or create a fanciful image.
So it’s not yet the Apple AI product we’ve all been waiting for, but it is a logical step in that direction — and potentially hints at the AI capabilities Apple might offer in its upcoming iOS 18 software for iPhones.
OpenELM’s release comes just weeks ahead of Apple’s WWDC event in early June, where the company traditionally talks about its next wave of software offerings.
Apple did not respond to a request for comment.
But during a quarterly earnings call in February, CEO Tim Cook hinted that Apple would reveal its plans for generative AI at some point in 2024. Also around that time, Apple reportedly shuttered its long-running electric car project to focus on generative AI and the Apple Vision Pro, the wearable that went on sale that same month and that CNET reviewer Scott Stein calls “Apple’s wildest and strangest device.”
It’s not clear yet how OpenELM fits into these plans. However, in a research paper posted in March, Apple discussed multimodal large language models, or models that can understand and generate multiple content formats, such as text and images.
While Apple has been holding fire, most tech giants and a rash of startups have already rushed out one or more generations of gen AI products. Adobe, Anthropic, Google and OpenAI are in a race to release increasingly capable models that not only understand a wider variety of queries, but produce more realistic images and videos. They’re even keen to highlight internal research…