# Llama 3

Llama 3 is a family of models developed by Meta. They are considered among the most capable LLMs published to date.
:::note
| Model | Params | Context Length | GQA | Token Count | Knowledge Cutoff |
|---|---|---|---|---|---|
| Llama-3 8B Instruct | 8B | 8k | Yes | 15T+ | March 2023 |
| Llama-3 70B Instruct | 70B | 8k | Yes | 15T+ | December 2023 |
:::
:::tip
We recommend this model for complex tool-calling scenarios, but be aware of its relatively short 8k context length.
:::
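Tool calling is typically driven through an OpenAI-style chat completions request. As a minimal sketch, the snippet below only constructs such a payload; the `get_current_weather` tool and the exact serving setup are illustrative assumptions, not part of this documentation.

```python
import json

# Hypothetical tool schema in the OpenAI function-calling format,
# which OpenAI-compatible servers commonly accept.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",  # placeholder tool for illustration
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

# Chat completion request with tool calling enabled; the model ID is the
# Rubra enhanced 8B checkpoint, the rest of the setup is assumed.
payload = {
    "model": "rubra-ai/Meta-Llama-3-8B-Instruct",
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": tools,
    "tool_choice": "auto",
}

print(json.dumps(payload, indent=2))
```

A server that understands this format would respond with a `tool_calls` entry naming the function and its JSON arguments, which the client then executes.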
## Llama-3 8B Instruct
The *Win* through *Adjusted Win Rate* columns report the MT-bench pairwise comparison between the two models.

| Model | Function Calling | MMLU | GPQA | GSM-8K | MATH | MT-bench | Win | Loss | Tie | Win Rate | Loss Rate | Adjusted Win Rate |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Llama-3 8B Instruct | - | 65.69 | 31.47 | 77.41 | 27.58 | 8.07 | 41 | 42 | 77 | 0.25625 | 0.2625 | 0.496875 |
| Rubra Enhanced Llama-3 8B Instruct | 89.28% | 64.39 | 31.70 | 68.99 | 23.76 | 8.03 | 42 | 41 | 77 | 0.2625 | 0.25625 | 0.503125 |
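The rate columns follow directly from the win/loss/tie counts: win rate is wins divided by total comparisons, and the adjusted win rate credits each tie as half a win, which is consistent with every row in these tables. A quick check against the Rubra 8B row:

```python
def adjusted_win_rate(wins: int, losses: int, ties: int) -> float:
    """Ties count as half a win: (wins + ties / 2) / total comparisons."""
    total = wins + losses + ties
    return (wins + ties / 2) / total

# Rubra Enhanced Llama-3 8B Instruct row: 42 wins, 41 losses, 77 ties.
print(42 / (42 + 41 + 77))            # win rate -> 0.2625
print(adjusted_win_rate(42, 41, 77))  # -> 0.503125
```

The same formula reproduces the 70B numbers (e.g. 28 wins, 58 losses, 74 ties gives 0.40625).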
## Llama-3 70B Instruct

The Rubra enhanced model is published in the following formats:
- rubra-ai/Meta-Llama-3-70B-Instruct
- rubra-ai/Meta-Llama-3-70B-Instruct-GGUF
- rubra-ai/Meta-Llama-3-70B-Instruct-AWQ
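The three checkpoints serve different runtimes: the base repo for standard full-precision serving, the GGUF build for llama.cpp-based stacks, and the AWQ build for quantized inference. A minimal sketch of selecting a repo by backend; the backend-to-repo mapping reflects typical usage of each format and is an assumption, not Rubra guidance.

```python
# Repo IDs from the list above; the mapping is an illustrative assumption.
CHECKPOINTS = {
    "transformers": "rubra-ai/Meta-Llama-3-70B-Instruct",    # full-precision weights
    "llama.cpp": "rubra-ai/Meta-Llama-3-70B-Instruct-GGUF",  # GGUF for llama.cpp
    "awq": "rubra-ai/Meta-Llama-3-70B-Instruct-AWQ",         # AWQ quantized weights
}

def pick_checkpoint(backend: str) -> str:
    """Return the Hugging Face repo ID for the given inference backend."""
    try:
        return CHECKPOINTS[backend]
    except KeyError:
        raise ValueError(f"unknown backend {backend!r}; choose from {sorted(CHECKPOINTS)}")

print(pick_checkpoint("llama.cpp"))  # -> rubra-ai/Meta-Llama-3-70B-Instruct-GGUF
```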
The *Win* through *Adjusted Win Rate* columns again report the MT-bench pairwise comparison between the two models.

| Model | Function Calling | MMLU | GPQA | GSM-8K | MATH | MT-bench | Win | Loss | Tie | Win Rate | Loss Rate | Adjusted Win Rate |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Llama-3 70B Instruct | - | 79.90 | 38.17 | 90.67 | 44.24 | 8.88 | 58 | 28 | 74 | 0.3625 | 0.1750 | 0.59375 |
| Rubra Enhanced Llama-3 70B Instruct | 97.85% | 75.90 | 33.93 | 82.26 | 34.24 | 8.36 | 28 | 58 | 74 | 0.1750 | 0.3625 | 0.40625 |