Apple researchers have published a paper on an artificial intelligence model, called ReALM, that can understand the visual and textual context on a screen and outperforms GPT-4 in benchmark tests, according to TechStartups.
Apple is working on its own artificial intelligence solutions, which could be unveiled at its annual developer conference in June, and the paper, first reported by VentureBeat, confirms this direction.
The paper describes the new model, ReALM (Reference Resolution As Language Modeling), which proposes a novel approach to understanding the context behind user requests. The model adapts as needed to conversational or visual context, so ReALM can resolve references both to information mentioned in a conversation and to content displayed on a screen.
What sets ReALM apart from other AI systems that understand visual context is its approach: Apple's model converts everything visible on the screen, including its text and layout, into a plain-text representation. According to the researchers, this gives ReALM better visual grounding than competing models, notably GPT-4.
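To make the idea concrete, here is a minimal sketch of what such a screen-to-text conversion might look like. It assumes the screen has already been parsed into UI elements with bounding boxes; the data shapes, function names, and row-grouping heuristic are illustrative assumptions, not Apple's actual implementation.

```python
# Illustrative sketch only: ReALM-style encoding of on-screen entities as
# plain text. All names and parameters here are hypothetical, not Apple's.
from dataclasses import dataclass

@dataclass
class ScreenEntity:
    text: str    # visible text of the UI element
    top: float   # normalized bounding-box coordinates (0..1)
    left: float

def screen_to_text(entities: list[ScreenEntity],
                   row_tolerance: float = 0.02) -> str:
    """Render on-screen entities as one text block in reading order:
    top-to-bottom, then left-to-right within each row."""
    ordered = sorted(entities, key=lambda e: (e.top, e.left))
    rows: list[list[ScreenEntity]] = []
    for ent in ordered:
        # Group entities with nearly equal vertical positions into one row.
        if rows and abs(rows[-1][0].top - ent.top) <= row_tolerance:
            rows[-1].append(ent)
        else:
            rows.append([ent])
    lines = [" ".join(e.text for e in sorted(row, key=lambda e: e.left))
             for row in rows]
    return "\n".join(lines)

if __name__ == "__main__":
    screen = [
        ScreenEntity("Pizza Palace", 0.10, 0.05),
        ScreenEntity("(555) 010-2368", 0.10, 0.60),
        ScreenEntity("Open until 10pm", 0.14, 0.05),
    ]
    # The resulting text, combined with the user's request
    # (e.g. "call this place"), would be passed to the language
    # model so it can resolve the on-screen reference.
    print(screen_to_text(screen))
```

Because the screen ends up as ordinary text, a language model can resolve references like "the number at the top" without any image input, which is the core of the paper's approach.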
The model could improve the Siri voice assistant by giving it better context about the information displayed on the screens of Apple devices, according to the cited source.
It remains to be seen whether Apple will ship this AI model as part of the company's broader push to integrate artificial intelligence into phones, tablets, computers and more.