Context length: 32K
Input price: 0.0002625
Output price: 0.000525
Model size: 14B
Type: Chat

EVA Qwen2.5 14B

Explore the EVA Qwen2.5 14B API, a powerful language model optimized for roleplay and creative writing. Its capabilities and performance metrics are detailed below.
Try it now

AI Playground

Test all API models in the sandbox environment before you integrate. We provide more than 200 models you can build into your app.

EVA Qwen2.5 14B

EVA Qwen2.5 14B is an advanced roleplay language model optimized for creative writing tasks.

Model Overview

Basic Information

  • Model Name: EVA Qwen2.5 14B
  • Developer/Creator: EVA-UNIT-01
  • Release Date: October 31, 2024
  • Version: 0.1
  • Model Type: Text Generation (Roleplay and Storywriting)

Description

Overview

EVA Qwen2.5 14B is a specialized language model designed for roleplay (RP) and creative writing tasks. It is a full-parameter fine-tuning of the Qwen2.5 14B base model, utilizing a mixture of synthetic and natural datasets to enhance its creative capabilities.

Key Features
  • Parameter Count: 14 billion parameters for robust language understanding.
  • Context Length: Supports a maximum context length of 128K tokens, allowing for extensive input.
  • Fine-tuning: Optimized for creativity and versatility through extensive dataset training.
  • Sampling Configurations: Multiple sampling configurations to tailor output, including temperature and top-k sampling (see the sketch after this list).
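
As an illustration of the sampling point above, here is a minimal sketch using Hugging Face transformers' GenerationConfig; the preset names and values are illustrative assumptions, not published recommendations for this model.

```python
from transformers import GenerationConfig

# Illustrative presets (assumed values, not official recommendations).
balanced = GenerationConfig(
    do_sample=True,
    temperature=0.8,    # moderate randomness for coherent narration
    top_k=40,           # sample from the 40 most likely tokens
    max_new_tokens=512,
)

creative = GenerationConfig(
    do_sample=True,
    temperature=1.1,    # looser, more varied prose
    top_k=100,          # wider candidate pool
    max_new_tokens=512,
)
```

Lower temperature and top-k favor consistency; higher values trade coherence for variety.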
Intended Use

The model is primarily intended for applications in roleplay scenarios, story generation, and creative writing, making it suitable for game developers, writers, and content creators.

Language Support

Currently, the model supports English.

Technical Details

Architecture

EVA Qwen2.5 14B is built on the Qwen2 architecture, designed for causal language modeling tasks. It uses the Qwen2ForCausalLM model class together with the Qwen2Tokenizer.
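
A minimal loading sketch with Hugging Face transformers follows; the Hub repository ID is an assumption, so check it against the model's actual page.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "EVA-UNIT-01/EVA-Qwen2.5-14B-v0.1"  # assumed Hub repo ID

tokenizer = AutoTokenizer.from_pretrained(repo)  # resolves to the Qwen2 tokenizer
model = AutoModelForCausalLM.from_pretrained(    # resolves to Qwen2ForCausalLM
    repo,
    torch_dtype=torch.bfloat16,  # 16-bit weights: roughly 28 GB for 14B parameters
    device_map="auto",           # spread layers across available devices
)

prompt = "The citadel gates groaned open, and"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```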

Training Data

The model was trained on a diverse dataset that includes:

  • Celeste 70B data mixture (excluding the Opus Instruct subset).
  • Kalomaze's Opus_Instruct_25k dataset, filtered to remove refusals.
  • Selected subsets from ChatGPT writing prompts and short stories.

The total training data consists of approximately 1.5 million tokens of roleplay data, combined with synthetic data to enhance the model's storytelling capabilities.

Data Source and Size

The training data encompasses a variety of sources aimed at improving the model's ability to generate coherent and engaging narratives. The mixture of synthetic and natural data contributes to its robustness in handling various writing prompts.

Knowledge Cutoff

The model has a knowledge cutoff date of October 2023.

Diversity and Bias

The training dataset's diversity helps mitigate biases, making the model more adaptable across different contexts and narratives. Continuous efforts are made to refine the dataset for improved performance.

Performance Metrics

  • Inference Speed: Approximately 15.63 tokens/second under optimal conditions on a single GPU.
  • Latency: Average latency of around 3.03 seconds per request.
  • VRAM Requirement: Approximately 29.6 GB of VRAM for efficient operation.
  • Throughput: Handles multiple concurrent requests under high load (see the back-of-the-envelope check after this list).
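
A quick back-of-the-envelope check on these figures, assuming the quoted speed applies to decoding and the model runs with 16-bit weights:

```python
# Rough planning arithmetic based on the metrics above.
tokens_per_s = 15.63
reply_tokens = 500
print(f"~{reply_tokens / tokens_per_s:.0f} s to decode a {reply_tokens}-token reply")  # ~32 s

params = 14e9
bytes_per_param = 2  # bf16/fp16
print(f"~{params * bytes_per_param / 1e9:.0f} GB for weights alone")  # ~28 GB vs. the 29.6 GB reported
```

The weight footprint alone accounts for most of the reported VRAM figure; the remainder goes to activations and the KV cache.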

Comparison to Other Models

Advantages
  • High Performance, Moderate Size: EVA Qwen2.5 14B balances strong language processing with resource efficiency. Larger models like Llama 3 (70B) offer deeper analysis but require significantly more resources.
  • Nuanced Writing: EVA Qwen2.5 14B handles nuanced creative contexts well. GPT-4 covers a broader range of tasks and languages, but at higher computational cost.
  • Memory Efficiency: Optimized for smooth performance in resource-limited setups. Models like Falcon 40B deliver more raw power but demand far more memory.
  • Versatility: EVA Qwen2.5 14B works well across creative tasks without extensive tuning. FLAN-T5 also adapts well but may require more adjustment for specialized uses.
Limitations
  • Lower Parameter Depth: Lacks the analysis power of much larger models such as Llama 3.2 90B Vision Instruct Turbo, which are better suited to large, complex datasets.
  • Less Specialized Power: For highly specific tasks, Claude 3.5 Sonnet and GPT-4o can outperform it, thanks to larger training datasets and parameter counts.
  • Accuracy vs. Resources: Where peak accuracy matters most, higher-parameter models like Gemini 1.5 Pro are more suitable, though EVA Qwen2.5 14B is more efficient for general applications.

Usage

Code Samples

The model is available on the AI/ML API platform under the model ID "eva-unit-01/eva-qwen-2.5-14b".
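
This page does not include a snippet, so here is a minimal sketch of a chat completion call through an OpenAI-compatible client; the base URL and environment variable name are assumptions, so verify them against the API documentation.

```python
import os
from openai import OpenAI

# Assumed OpenAI-compatible endpoint; verify against the API docs.
client = OpenAI(
    base_url="https://api.aimlapi.com/v1",
    api_key=os.environ["AIML_API_KEY"],  # assumed environment variable name
)

response = client.chat.completions.create(
    model="eva-unit-01/eva-qwen-2.5-14b",
    messages=[
        {"role": "system", "content": "You are the narrator of a fantasy roleplay."},
        {"role": "user", "content": "Open the scene at the gates of a storm-lashed citadel."},
    ],
    temperature=0.8,
    max_tokens=512,
)

print(response.choices[0].message.content)
```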

API Documentation

Detailed API Documentation is available here.

Ethical Guidelines

The development of EVA Qwen2.5 adheres to ethical considerations regarding AI-generated content, including transparency in usage and potential biases in generated narratives. The creators emphasize responsible use in creative contexts.

Licensing

Apache 2.0 License. This permits both commercial and non-commercial use, allowing developers to integrate the model into their applications subject only to the license's attribution and notice requirements.

Get EVA Qwen2.5 API here.

Try it now
