META has unveiled its most powerful AI model yet, and CEO Mark Zuckerberg says it’s on track to surpass its fiercest competitor.
After months of development, the tech giant released Llama 3.1, a free and open-source AI model.
Llama 3.1 is significantly more advanced than the smaller Llama 3 models released just a few months ago.
It’s so advanced that it outperforms OpenAI’s GPT-4o in several benchmarks, including those designed to test reasoning.
Zuckerberg made the formal announcement in a letter published today on the Meta company blog.
The mogul also advocated for an open-source AI model that companies can train on personalized data and tweak to their liking.
Meta has partnered with industry leaders like Microsoft, Google, and Nvidia to begin this process.
“People often ask if I’m worried about giving up a technical advantage by open-sourcing Llama, but I think that misses the bigger picture,” Zuckerberg wrote.
Releasing Llama 3.1 to the world will help it “develop into a complete ecosystem of tools, efficiency improvements, silicon optimizations, and other integrations,” he continued.
To make his case, Zuckerberg pointed to Meta's "long history of open source projects and successes."
The company "saved billions of dollars" by making its server, network, and data center designs public.
“This approach has worked consistently for us when we’ve stuck with it over the long term,” Zuckerberg professed.
Unlike its competitors, Meta's business model does not depend on selling access to AI models, meaning that open releases do not hurt its "revenue, sustainability or ability to invest in research."
The company did not reveal where it obtained the data to train Llama 3.1, but it has already been criticized for extracting information from Instagram and Facebook.
In other blog posts, Meta says the model was also trained on synthetic, or AI-generated, data.
Zuckerberg ended the letter by addressing concerns about AI, which he divided into two categories: unintentional and intentional harms.
Examples of unintentional harm are “bad health advice” or futuristic concern that models can “unintentionally self-replicate or hyper-optimize goals to the detriment of humanity.”
Meanwhile, intentional harm can be summarized as the work of a “bad actor.”
Because most of the "concerns people have about AI" fall into the unintentional category, Zuckerberg continued, open-source software is the best way forward.
To mitigate risks, Llama 3.1 ships with safeguards like Llama Guard, which moderates conversations between humans and the AI.
Zuckerberg is optimistic about the future.
In a video posted on Instagram, the tech mogul claimed that Meta AI was on track to become "the most used AI assistant in the world by the end of the year."
That title is currently held by OpenAI's ChatGPT, which has more than 100 million active users.
Despite fierce competition, Meta aims to establish itself as a pioneer by implementing a range of AI capabilities across platforms and services.
The company’s flagship model has also received some updates, with Meta calling it “more creative and smart” than ever.
What are the arguments against AI?
Artificial intelligence is a highly controversial issue, and it seems like everyone has a position on it. Here are some common arguments against it:
Job Loss – Some industry experts argue that AI will create new niches in the job market, and that as some roles are eliminated, others will appear. However, many artists and writers insist the issue is an ethical one, since generative AI tools are trained on their work and would not function otherwise.
Ethics – When AI is trained on a dataset, much of the content is scraped from the internet. This is almost always, if not exclusively, done without notifying the people whose work is being used.
Privacy – Content from personal social media accounts can be fed into language models to train them. Concerns have grown as Meta rolls out its AI assistants on platforms like Facebook and Instagram. There have been legal responses to this issue: in 2016, the EU adopted legislation to protect personal data, and similar laws are in the works in the United States.
Misinformation – Because AI tools pull information from the internet, they may take things out of context or "hallucinate," producing absurd responses. Tools like Copilot on Bing and Google's generative search AI are always at risk of getting things wrong. Some critics argue this could have lethal effects, such as AI giving out erroneous health information.
The Meta AI assistant is available in more languages, including French, German, and Hindi.
There is also the “Imagine Me” feature, which scans a user’s face through the phone’s camera and inserts their image into AI-generated images.
The tool is available in beta and can be accessed by typing “Imagine me” followed by any safe-for-work activity.
An experimental version of Meta AI will be released for the Quest headset next month, starting in the United States and Canada.
Users can ask simple questions, translate text, and get real-time information through Bing integration.
This story originally appeared on The-sun.com.