Meta announced its new free and open-source large language model (LLM), Llama 3.2, at Meta Connect 2024.
About Llama 3.2:
It is an open-source large language model developed by Meta.
Features
It has both image and text processing abilities.
It includes voice interaction features, allowing users to engage in conversations with the AI using voice commands.
The Llama 3.2 family includes small and medium-sized variants at 11 billion and 90 billion parameters, as well as lightweight text-only models at 1 billion and 3 billion parameters that fit on select mobile and edge devices.
Among the Llama 3.2 variants, the 11-billion and 90-billion parameter models are vision models: they can understand charts and graphs, caption images, and locate objects from natural-language prompts.
The larger 90-billion parameter model can also pinpoint fine details in images when generating captions.
Applications: The models will help developers create more advanced AI applications, such as AR apps with real-time understanding of video, visual search engines that sort images based on content, or document analysis tools that can summarise large portions of text.
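To make the multimodal use case concrete, here is a minimal sketch of how a developer might pair an image with a text prompt for a Llama 3.2 vision model. It only builds the request payload in the OpenAI-style chat format that many Llama hosting providers accept; the model name `llama3.2-vision` and the payload shape are assumptions for illustration, so check your provider's documentation for the exact API.

```python
import base64
import json

def build_vision_request(image_bytes: bytes, prompt: str,
                         model: str = "llama3.2-vision") -> str:
    """Build an OpenAI-style chat payload pairing an image with a text prompt.

    The model name and payload shape are illustrative assumptions; the
    provider actually hosting Llama 3.2 defines the real request format.
    """
    # Images are commonly sent inline as a base64 data URL.
    image_b64 = base64.b64encode(image_bytes).decode("ascii")
    payload = {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                    },
                ],
            }
        ],
    }
    return json.dumps(payload)

# Example: ask the model to caption a (placeholder) image.
request_body = build_vision_request(b"\x89PNG...", "Caption this chart.")
```

The returned JSON string would then be POSTed to the provider's chat-completions endpoint; the same structure covers captioning, chart understanding, and object-location prompts described above.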