Code Llama is Meta’s refined Llama 2 variant for code generation.
According to Meta, Code Llama is an evolution of Llama 2 that was further trained on 500 billion tokens of code and code-related data. To train Code Llama, Meta used more code data over a longer period of time.
Compared to Llama 2, Code Llama has enhanced programming capabilities and can, for example, generate appropriate code in response to a natural language prompt such as “Write me a function that outputs the Fibonacci sequence.” Similar to GitHub Copilot, it can also complete and debug code.
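As a rough sketch of what such a prompt is asking for, a plain Python implementation might look like this (the function name and interface are illustrative, not actual model output):

```python
def fibonacci(n):
    """Return the first n numbers of the Fibonacci sequence."""
    sequence = []
    a, b = 0, 1
    for _ in range(n):
        sequence.append(a)
        a, b = b, a + b
    return sequence

print(fibonacci(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```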
Code Llama supports popular programming languages such as Python, C++, Java, PHP, TypeScript, C#, and Bash.
Three models and two variants
Meta releases Code Llama in three sizes with 7 billion, 13 billion, and 34 billion parameters. All models support a large context window of up to 100,000 tokens, which makes them particularly interesting for processing large amounts of code at once.
“When developers need to debug a large chunk of code, they can pass the entire code length to the model,” Meta AI writes.
The 34-billion-parameter variant is said to provide the highest code quality, making it suitable as a code assistant. The smaller models are optimized for real-time code completion: they have lower latency and support fill-in-the-middle (FIM) completion by default.
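Fill-in-the-middle means the model sees the code before and after a gap and generates what belongs in between. A minimal sketch of building such an infilling prompt, using the `<PRE>`/`<SUF>`/`<MID>` sentinel tokens from Code Llama's published infilling format (the helper function itself is illustrative):

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    # Code Llama's infilling format: the model generates the text
    # that belongs between the prefix and the suffix, starting at
    # the <MID> marker.
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

prompt = build_fim_prompt(
    prefix="def add(a, b):\n    return ",
    suffix="\n\nprint(add(2, 3))",
)
```

The model's completion for this prompt would then be inserted between the prefix and suffix in the editor.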
In addition, Meta releases a Code Llama variant optimized for Python and trained with an additional 100 billion Python code tokens, as well as an Instruct variant optimized with code tasks and their sample solutions. This version is recommended by Meta for code generation because it is the best at following instructions.
Code Llama outperforms other open-source models, but GPT-4 stays ahead
In the HumanEval and Mostly Basic Python Programming (MBPP) benchmarks, Code Llama 34B achieves results on par with GPT-3.5, but is far behind GPT-4 in HumanEval. Code Llama outperforms Llama 2, which is not optimized for code, and the other open-source models tested.
The Open Source Initiative criticized Meta for marketing the models as open source, saying the license restricts commercial use and certain areas of application and thus does not fully meet the open-source definition.