Rubra
Rubra is a collection of open-weight, tool-calling LLMs.
Rubra enhances the top open-weight large language models with tool-calling capability. The ability to call user-defined external tools in a deterministic manner while reasoning and chatting makes Rubra models ideal for agentic use cases.
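For instance, a user-defined get_weather function can be exposed to the model through the standard OpenAI tool schema, and the model replies with a structured tool call instead of free-form text. The snippet below is a minimal sketch using the OpenAI Python client; the endpoint URL, the model id, and the get_weather tool are illustrative placeholders, not fixed names (see Run Rubra Models Locally below for serving options).

```python
from openai import OpenAI

# Assumes a Rubra model is already served behind an OpenAI-compatible API;
# the base URL and model id below are placeholders, not fixed values.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# A user-defined tool, described with a standard JSON schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="rubra-model",  # placeholder: use the Rubra model id you are serving
    messages=[{"role": "user", "content": "What's the weather in Tokyo right now?"}],
    tools=tools,
)

# Rather than answering directly, the model returns a structured request such as
# get_weather({"city": "Tokyo"}) that your application can execute.
print(response.choices[0].message.tool_calls)
```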
All models are built from top open-weight LLMs with further post-training, using methods that teach instruct-tuned models new skills while mitigating catastrophic forgetting. For ease of use, we extend popular inference projects so you can run Rubra models locally with minimal setup.
Enhanced Models
Demo
Try out the models immediately in Hugging Face Spaces without downloading anything! It's free and requires no login.
Run Rubra Models Locally
We extend the following inference tools to run Rubra models in an OpenAI-compatible tool-calling format for local use: llama.cpp and vLLM.
Note: Llama3 models, including the 8B and 70B variants, are known to show increased perplexity and degraded function-calling performance when quantized. We recommend serving them with vLLM or running them at fp16 precision.
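Once a Rubra model is being served locally (for example via vLLM's OpenAI-compatible server, which listens on port 8000 by default), a tool call completes in two turns: the model requests a tool, your code executes it, and the result goes back as a tool message. The sketch below assumes that setup; the endpoint URL, model id, and get_weather tool are again placeholders.

```python
import json
from openai import OpenAI

# Placeholder endpoint and model id; adjust to whatever you are serving locally.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
MODEL = "rubra-model"

def get_weather(city: str) -> str:
    # Toy local implementation of the user-defined tool.
    return json.dumps({"city": city, "temperature_c": 21, "condition": "sunny"})

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "Should I bring an umbrella in Paris today?"}]
first = client.chat.completions.create(model=MODEL, messages=messages, tools=tools)
reply = first.choices[0].message

if reply.tool_calls:
    # Record the assistant's tool request, run each tool, and return the results.
    messages.append({
        "role": "assistant",
        "content": reply.content,
        "tool_calls": [tc.model_dump() for tc in reply.tool_calls],
    })
    for tc in reply.tool_calls:
        args = json.loads(tc.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": tc.id,
            "content": get_weather(**args),
        })
    # Second turn: the model answers using the tool output.
    final = client.chat.completions.create(model=MODEL, messages=messages, tools=tools)
    print(final.choices[0].message.content)
else:
    print(reply.content)
```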
Contributing
Contributions to Rubra are welcome! We'd love to improve the models' tool-calling capability based on your feedback. Please submit issues to the GitHub repository.
License
Rubra code is licensed under the Apache 2.0 License. Rubra's enhanced models are published under the same license as their parent models.
For more details and documentation, visit the Rubra GitHub page.