Meta, the parent company of Facebook, has announced the release of LLaMA (Large Language Model Meta AI), a state-of-the-art large language model designed to help researchers advance their work in natural language processing. As part of its commitment to open science, Meta is making LLaMA publicly available to democratize access to these models and further research in this fast-changing field.

LLaMA is a foundational model trained on a large set of unlabeled data. It is available in four sizes (7B, 13B, 33B, and 65B parameters), enabling researchers to fine-tune it for a variety of tasks. Training smaller foundation models like LLaMA requires far less computing power and resources, making it easier for researchers to test new approaches, validate others' work, and explore new use cases.
Large language models, natural language processing systems with billions of parameters, have shown new capabilities to generate creative text, predict protein structures, solve mathematical theorems, answer reading comprehension questions, and more. Despite these advancements, full research access to large language models remains limited due to the resources required to train and run such large models.
Smaller models like LLaMA are easier to retrain and fine-tune for specific use cases. LLaMA was trained on 1.4 trillion tokens, focusing on the 20 languages with the most speakers, including those with Latin and Cyrillic alphabets. It works by taking a sequence of words as input and predicting the next word, recursively generating text.
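The recursive generation loop described above can be sketched in a few lines. This is a toy illustration, not LLaMA's actual implementation: the hypothetical BIGRAM_COUNTS table stands in for the neural network that would normally score candidate next tokens, and the loop greedily appends the highest-scoring token until no continuation is found.

```python
# Toy sketch of autoregressive (next-word) generation: score candidate
# next words given the context, append the chosen word, and repeat.
# The "model" here is a hypothetical bigram frequency table, standing in
# for a real language model's learned probability distribution.

BIGRAM_COUNTS = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 1},
    "sat": {"on": 2},
    "on": {"the": 2},
}

def predict_next(tokens):
    """Greedy prediction: return the most frequent follower of the last token."""
    followers = BIGRAM_COUNTS.get(tokens[-1], {})
    if not followers:
        return None
    return max(followers, key=followers.get)

def generate(prompt, max_new_tokens=5):
    """Recursively extend the prompt one predicted word at a time."""
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        nxt = predict_next(tokens)
        if nxt is None:  # no known continuation
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))  # → "the cat sat on the cat"
```

A real model replaces the lookup table with a transformer that conditions on the entire input sequence, and production systems typically sample from the predicted distribution rather than always taking the single most likely word.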
There are still risks associated with large language models, such as bias, toxicity, and the potential to generate misinformation. Because LLaMA is a versatile foundation model that can be applied to many different use cases, Meta is sharing its code so that other researchers can test new approaches to limiting or eliminating these risks.
Meta is releasing LLaMA under a non-commercial license focused on research use cases to prevent misuse and maintain integrity. Access to the model will be granted on a case-by-case basis to academic researchers; those affiliated with organizations in government, civil society, and academia; and industry research laboratories around the world.
Meta believes that the entire AI community, including academic researchers, civil society, policymakers, and industry, must work together to develop clear guidelines around responsible AI and responsible large language models in particular. By releasing LLaMA, Meta hopes to advance research in this crucial area and ultimately build more robust and reliable large language models.
In conclusion, Meta’s public release of LLaMA is a significant step towards democratizing access to large language models and advancing research in this field. As more researchers gain access to LLaMA, it will be interesting to see what new insights and developments will emerge in the future.