The Wall Street Journal also reports on Meta’s plans for Llama 3, which is said to be many times more capable than Llama 2.
According to the WSJ, Meta is planning the new model for 2024 and is upgrading its data centers with Nvidia H100 GPUs specifically for AI training. Training is expected to begin in early 2024, and the resulting model is said to be "roughly" on par with GPT-4 and significantly better than Llama 2. The WSJ cites sources familiar with the project.
Despite the expected performance jump, Meta CEO Mark Zuckerberg is reportedly sticking to the open-source strategy of making the Llama models available for free for many, though not all, use cases – including commercial ones.
The release of the new model is said to be part of Zuckerberg’s strategy to reposition Meta as a leading AI company. By then, however, Google is expected to have released Gemini, and OpenAI will likely have shipped further innovations of its own – though probably not GPT-5.
WSJ confirms rumors about Llama 3
The Wall Street Journal report coincides with statements made by OpenAI engineer Jason Wei. In late August, he reported on a group meeting with Meta AI developers who claimed to have enough computing power to train Llama 3 and 4. Llama 3 is expected to reach the level of GPT-4 but remain open source.
Llama 2 reaches the level of GPT-3.5 in some applications and is also being optimized by the open-source community through fine-tuning and additional applications. Reaching the level of GPT-4 is likely to be much more challenging: the current OpenAI model is rumored to use a more complex mixture-of-experts architecture of 16 expert networks with about 111 billion parameters each.
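Taking the rumored figures at face value (they are unconfirmed leaks, not official numbers), a quick back-of-the-envelope calculation shows the scale gap such an architecture would imply over Llama 2:

```python
# Back-of-the-envelope estimate based on the *rumored* GPT-4 configuration:
# 16 expert networks at roughly 111 billion parameters each. These figures
# are leaks, not confirmed specifications.
NUM_EXPERTS = 16
PARAMS_PER_EXPERT_B = 111  # billions of parameters per expert, per the rumor

total_params_b = NUM_EXPERTS * PARAMS_PER_EXPERT_B
print(f"Rumored GPT-4 total: ~{total_params_b / 1000:.1f} trillion parameters")

# For comparison, the largest released Llama 2 variant is a dense 70B model.
LLAMA2_PARAMS_B = 70
print(f"Roughly {total_params_b / LLAMA2_PARAMS_B:.0f}x the parameters of Llama 2 70B")
```

Even though a mixture-of-experts model activates only a subset of those parameters per token, storing and distributing weights of that size is a very different proposition from shipping a single dense 70B checkpoint.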
This may explain why Meta’s goal with an open-source approach is not to surpass GPT-4, but only to roughly match it: openly distributing a model of GPT-4’s complexity could be much more difficult.