Apple’s new artificial intelligence models may well find a place in the iPhone

While waiting for the general availability of its generative AI features, Apple is working on the engines, in other words the AI models, that will power the company’s strategy. It is now offering the open source community a new family of large language models, OpenELM, whose distinguishing feature is that they run locally.

WWDC, which begins on June 10, will not mark the debut of generative AI at Apple, even if the manufacturer is expected to introduce new AI features in iOS 18. Apple has been developing this technology for several months, and part of that work happens in the open, as its experts regularly publish their research.

Privacy-Respecting Artificial Intelligence

The company has posted on Hugging Face, a hub for the artificial intelligence community, a family of large language models (LLMs) grouped under the name OpenELM, which stands for Open Source Efficient Language Models. And yes, the license allows this code to be used as is or modified, including for commercial purposes.

There are eight models in total. Four are pre-trained, meaning they were trained on large data sets and are intended to serve as a base for developing more specialized models. The other four are instruction-tuned versions: they start from the pre-trained models and receive additional training so they can follow instructions and answer specific queries.
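For readers who want to experiment, here is a minimal sketch of how one of these models could be loaded from Hugging Face with the transformers library. The model identifier, the choice of the smallest instruction-tuned variant, the Llama 2 tokenizer pairing and the generation settings are illustrative assumptions, not an official recipe from Apple.

```python
# Minimal sketch: loading an OpenELM variant from Hugging Face with transformers.
# The model ID, tokenizer choice and generation settings below are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "apple/OpenELM-270M-Instruct"  # assumed ID of the smallest instruction-tuned variant

# The OpenELM repositories ship custom model code, hence trust_remote_code=True,
# and rely on an external Llama-style tokenizer rather than bundling their own.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Running a language model locally on a phone means"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```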

These OpenELM models come in several sizes, from 270 million to 3 billion parameters, the parameter count being the number of connections between the artificial neurons of the LLM. Each parameter can be thought of as a kind of “weight” that influences how the model processes information. You might assume that more parameters is always better, but size alone does not guarantee better performance; other factors, such as the quality of the training data and the efficiency of the algorithm, also play a decisive role.
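To put those parameter counts in perspective for on-device use, here is a rough back-of-the-envelope estimate of the memory needed just to hold the weights, ignoring activations and runtime overhead. The precisions shown are common choices for local inference, not details Apple has specified.

```python
# Rough memory estimate for storing model weights at different numeric precisions.
# Parameter counts are those quoted above; the precisions are common examples.
BYTES_PER_PARAM = {"float32": 4, "float16": 2, "int8": 1, "int4": 0.5}

for name, params in [("OpenELM-270M", 270e6), ("OpenELM-3B", 3e9)]:
    for precision, nbytes in BYTES_PER_PARAM.items():
        gib = params * nbytes / 1024**3
        print(f"{name} @ {precision}: ~{gib:.2f} GiB for the weights alone")
```

At 16-bit precision, the 3-billion-parameter model already needs roughly 5.6 GiB for its weights, while the 270-million-parameter version fits in about half a gigabyte, which is why small models and aggressive quantization matter so much on a smartphone.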

The distinguishing feature of these models is that they all run locally, that is, directly on the device. Apple conducted its tests on computers (Mac and PC), and everything suggests that they could also perform well on a smartphone. This is reminiscent of Google’s Gemini Nano or Microsoft’s recent Phi-3 Mini (3.8 billion parameters).

Apple is expected to favor local processing of AI tasks for data privacy reasons, and it is quite possible that these OpenELM models will be put to use.
