Source: http://www.mashable.com
Mashable: Meta is democratizing access in this crucial, fast-changing sector by making it available as a research tool.
The past few weeks in tech have been dominated by discussions of the language models developed and deployed by giants like Microsoft, Google, and OpenAI. Yet Facebook's parent company, Meta, is still making tremendous strides in this area, and it has just released a new artificial-intelligence language generator called LLaMA.
LLaMA isn't a conversational system like Bing or ChatGPT. Instead, Meta is democratizing access in this crucial, fast-changing sector by making it available as a research tool.
Meta CEO Mark Zuckerberg announced on Friday that the company had trained its new large language model, LLaMA. Researchers and developers can use the model to discover new applications for artificial intelligence beyond traditional tasks such as question answering and document summarization.
The new model developed by Meta's Fundamental AI Research (FAIR) team has just been revealed at a time when large tech corporations and well-funded startups are competing to demonstrate the most cutting-edge AI approaches and incorporate them into their own commercial products.
Meta claims in a paper that LLaMA-13B, the second-smallest version of the LLaMA model, outperforms OpenAI's popular GPT-3 model "on most benchmarks," and that LLaMA-65B, the largest version, is "competitive with the best models," such as DeepMind's Chinchilla-70B and Google's PaLM-540B. The numbers in the names denote the billions of parameters in each model; parameter count is a measure of a system's size and a rough proxy for its capability.
After training, LLaMA-13B can be deployed on a single enterprise-grade Nvidia Tesla V100 GPU in a data center. That's great news for labs without the resources to run these systems at scale, though such hardware is still out of reach for most individual researchers.
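A rough sketch of the arithmetic behind that single-GPU claim, assuming weights are stored in 16-bit (2-byte) precision and a 32 GB V100 (the helper function below is purely illustrative, not part of any Meta release; real serving also needs extra memory for activations and caching):

```python
# Back-of-the-envelope estimate of how much GPU memory a model's weights
# occupy, based only on parameter count and bytes per parameter.

def weight_memory_gb(num_params_billions: float, bytes_per_param: int = 2) -> float:
    """Memory needed just to hold the model weights, in gigabytes."""
    return num_params_billions * 1e9 * bytes_per_param / 1e9

llama_13b = weight_memory_gb(13)   # 13B params at 2 bytes each -> 26.0 GB
gpt3_175b = weight_memory_gb(175)  # 175B params -> 350.0 GB

print(f"LLaMA-13B weights: {llama_13b:.0f} GB")  # fits a 32 GB V100
print(f"GPT-3 weights:     {gpt3_175b:.0f} GB")  # far beyond one GPU
```

Under these assumptions, the 13B model's weights fit within a single high-memory V100, while a 175B-parameter model like GPT-3 would need a multi-GPU cluster just to hold its weights.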
Meta's release is particularly noteworthy, in part because it arrives at the height of the AI chatbot boom. Large language models are the basis for applications like OpenAI's ChatGPT, Microsoft's Bing AI, and Google's forthcoming Bard.
According to Zuckerberg's post, large language model (LLM) technology could one day be used to solve mathematical problems and conduct scientific research.
LLMs have shown a lot of potential in generating text, holding conversations, summarizing written material, and even more sophisticated tasks like solving math theorems or predicting protein structures, Zuckerberg said on Friday.
Meta claims that its LLM stands out from the crowd in several ways. For starters, it will be available in a range of sizes, from 7 billion to 65 billion parameters. In recent years, researchers have successfully enhanced the technology's capabilities by using bigger models, although running them, the "inference" phase, comes at a higher expense. For comparison, OpenAI's GPT-3 has 175 billion parameters.
Meta has also announced that it is accepting applications from researchers and will make its models available to the research community, unlike the models behind Google's LaMDA and OpenAI's ChatGPT, which are kept private.
According to Zuckerberg, Meta is dedicated to this open style of research, and the company plans to share its new model with the AI research community.