
Guanaco (65B)

Access the Guanaco-65B API. Guanaco-65B is an open-source chatbot model that rivals ChatGPT-3.5 Turbo, developed with efficient 4-bit QLoRA finetuning.
Try it now


AI Playground

Test any of our API models in the sandbox environment before you integrate. We offer more than 200 models to build into your app.


Guanaco (65B)

Guanaco-65B: Powerful open-source chatbot model, competitive with ChatGPT 3.5 Turbo.

Guanaco-65B Model Overview

Basic Information

  • Model Name: Guanaco
  • Developer/Creator: Tim Dettmers
  • Release Date: 2023
  • Version: 65B
  • Model Type: Text-based LLM

Description

Overview

Guanaco-65B is a 65 billion parameter open-source chatbot model developed by Tim Dettmers. It was created by applying 4-bit QLoRA finetuning to the LLaMA base model on the OASST1 (OpenAssistant Conversations) dataset. The model demonstrates the potential of QLoRA, achieving performance comparable to top commercial chatbots such as ChatGPT and Bard.
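The idea behind QLoRA's 4-bit quantization can be illustrated with a simple blockwise absmax scheme. This is a sketch only: QLoRA itself uses the NF4 data type, which maps weights to quantiles of a normal distribution rather than the uniform grid shown here.

```python
import numpy as np

def quantize_4bit(w, block_size=64):
    """Blockwise absmax quantization to 4-bit signed integers (illustrative)."""
    w = w.reshape(-1, block_size)
    scale = np.abs(w).max(axis=1, keepdims=True)   # one scale per block
    q = np.round(w / scale * 7).astype(np.int8)    # grid values in [-7, 7]
    return q, scale

def dequantize_4bit(q, scale):
    """Recover an approximation of the original weights."""
    return q.astype(np.float32) / 7 * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 64)).astype(np.float32)
q, scale = quantize_4bit(w)
w_hat = dequantize_4bit(q, scale).reshape(w.shape)
# Rounding error is bounded by scale / 14 per block.
max_err = float(np.abs(w - w_hat).max())
```

Storing 4-bit codes plus one scale per block is what lets a 65B-parameter base model fit on far less GPU memory during finetuning, while the small LoRA adapters remain in higher precision.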

Key Features

  • Competitive performance with ChatGPT and Bard on the Vicuna and OpenAssistant benchmarks
  • Available open-source for affordable and local experimentation
  • Replicable and efficient 4-bit QLoRA training procedure
  • Lightweight adapter weights that can be used with LLaMA base models

Intended Use

The Guanaco-65B is designed for use as a powerful open-source chatbot model, enabling developers and researchers to experiment with and deploy high-performance conversational AI systems. It can be used for tasks such as:

  • Open-domain chatbots
  • Task-oriented dialogue systems
  • Question-answering
  • Summarization
  • Text generation

Language Support

The Guanaco-65B is a multilingual model, but the OASST1 dataset it was trained on is heavily weighted towards high-resource languages. The model likely performs best on English and other high-resource languages.

Technical Details

Architecture

Guanaco-65B keeps the LLaMA transformer architecture and adds LoRA (Low-Rank Adaptation) adapter weights to all linear layers of the base model. This allows for efficient finetuning while preserving the base model's capabilities.
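A LoRA adapter can be sketched in a few lines: the frozen base weight W is augmented by a low-rank update B @ A, so the effective weight is W + (alpha/r) * B @ A. The dimensions and rank below are illustrative, not Guanaco's actual configuration.

```python
import numpy as np

d_out, d_in, r = 8, 16, 2          # toy dimensions; real layers are much larger
alpha = 4.0                        # LoRA scaling factor
rng = np.random.default_rng(42)

W = rng.normal(size=(d_out, d_in))       # frozen base weight (not trained)
A = rng.normal(size=(r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection, zero-initialized

def forward(x):
    # y = Wx + (alpha/r) * B(Ax): only A and B receive gradient updates.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B initialized to zero, the adapter starts as an exact no-op.
assert np.allclose(forward(x), W @ x)
```

Because only A and B are trained, the adapter weights are tiny relative to the 65B base model, which is why they can be distributed separately and applied on top of LLaMA.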

Training Data

The model was finetuned on the OASST1 (OpenAssistant Conversations) dataset, a publicly available, crowd-sourced collection of conversations that is multilingual but skewed towards high-resource languages such as English.

Knowledge Cutoff

The knowledge cutoff date for the Guanaco-65B model is not publicly specified. It likely reflects the pretraining cutoff of the LLaMA base model together with the collection date of the OASST1 finetuning dataset.

Performance Metrics

According to the model's documentation, Guanaco-65B reaches 99.3% of ChatGPT-3.5 Turbo's performance on the Vicuna benchmark, as judged by both human raters and GPT-4.

Usage

API Usage Example
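A minimal sketch of calling the model through an OpenAI-compatible chat completions API, using only the Python standard library. The endpoint URL and model identifier below are placeholders, not confirmed values; check your provider's documentation for the exact endpoint, model name, and authentication scheme.

```python
import json
import os
import urllib.request

# Placeholder values: substitute your provider's real endpoint and model id.
API_URL = "https://api.example.com/v1/chat/completions"
MODEL_ID = "guanaco-65b"

def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Construct the JSON body for a chat completion call."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.7,
    }

def send(prompt: str) -> str:
    """POST the request and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('API_KEY', '')}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example (requires a valid API_KEY in the environment):
# print(send("Explain 4-bit QLoRA finetuning in one sentence."))
```

Most hosted providers expose Guanaco behind this OpenAI-style request shape, but field names and the model identifier vary between platforms.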

Ethical Guidelines

No specific ethical guidelines are provided for the Guanaco-65B model. As an open-source model, it is up to developers to use it responsibly and consider potential misuse.

License Type

The Guanaco adapter weights are licensed under Apache 2.0. However, using the model requires access to the LLaMA base model weights, which carry more restrictive licensing terms.

In summary, Guanaco-65B is a powerful open-source chatbot model that rivals commercial offerings like ChatGPT while demonstrating the potential of efficient 4-bit QLoRA finetuning. It offers an affordable and replicable path to high-performance conversational AI.

Try it now

The Best Growth Choice
for Enterprise

Get API Key