At the Meta Connect 2024 conference, Meta announced that its AI assistant, Meta AI, is gaining multimodal capabilities through the new Llama 3.2 models. This means Meta AI can now assist with photo editing and answer questions about shared photos, similar to Google Gemini and OpenAI's ChatGPT. With this update, users can share photos in chats and ask Meta AI about the contents of an image, such as identifying a type of flower or explaining how to make a dish. How accurate Meta AI's responses are in practice, however, remains to be seen.
In addition to photo support, Meta AI can also be used on Instagram to generate backgrounds for Stories when resharing a photo from the feed. Meta is also testing translation tools for Facebook and Instagram Reels, including automatic dubbing and lip-syncing, in small groups in the U.S. and Latin America.
Meta AI is also expanding its generative AI features, and Meta is testing the sharing of Meta AI-generated images to Facebook and Instagram feeds to prompt users to try the feature. During the conference, Meta CEO Mark Zuckerberg argued that Meta AI stands out by offering state-of-the-art AI models that are free and easily integrated into the company's products and apps. He also said Meta AI is on track to become the most used AI assistant in the world by the end of the year, with almost 500 million monthly active users. The editing features, however, will initially be available only in English in the U.S.
In summary, Meta AI is catching up with Google on AI-powered photo editing and continues to expand its capabilities. With its integration across Meta's platforms and its free access, Meta AI is positioning itself to become the most used AI assistant in the world.
Read More @ techcrunch.com