
EVA Qwen2.5 14B is a specialized language model designed for roleplay (RP) and creative writing tasks. It is a full-parameter fine-tuning of the Qwen2.5 14B base model, utilizing a mixture of synthetic and natural datasets to enhance its creative capabilities.
The model is primarily intended for applications in roleplay scenarios, story generation, and creative writing, making it suitable for game developers, writers, and content creators.
Currently, the model supports English.
EVA Qwen2.5 14B is built on the Qwen2 architecture for causal language modeling. It uses the Qwen2ForCausalLM model class together with the Qwen2Tokenizer.
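As a sketch of how a Qwen2-architecture model like this is typically loaded with the Hugging Face transformers library (the repository id below is an assumption, not confirmed by this page; check the model's actual Hugging Face listing):

```python
# Hypothetical sketch: loading EVA Qwen2.5 14B via transformers.
# MODEL_ID is an assumed repository id; verify it on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2"  # assumption

def load_model(model_id: str = MODEL_ID):
    """Load the Qwen2Tokenizer and Qwen2ForCausalLM weights.

    A 14B-parameter model needs roughly 28 GB of memory in fp16,
    so this is only practical on a suitably large GPU (or sharded).
    """
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_model()
    prompt = "Write the opening scene of a mystery set in a lighthouse."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=200)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The `device_map="auto"` option (which requires the accelerate package) spreads the weights across available devices automatically.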
The model was trained on a diverse dataset: approximately 1.5 million tokens of roleplay data combined with synthetic data to strengthen its storytelling capabilities.
The training data encompasses a variety of sources aimed at improving the model's ability to generate coherent and engaging narratives. The mixture of synthetic and natural data contributes to its robustness in handling various writing prompts.
The model has a knowledge cutoff date of October 2023.
The training dataset's diversity helps mitigate biases, making the model more adaptable across different contexts and narratives. Continuous efforts are made to refine the dataset for improved performance.
The model is available on the AI/ML API platform as "eva-unit-01/eva-qwen-2.5-14b".
Detailed API Documentation is available here.
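The snippet below sketches how a request to the model might look, assuming the AI/ML API exposes an OpenAI-compatible chat-completions endpoint; the base URL and response shape are assumptions, so consult the API documentation for the authoritative details.

```python
# Hedged sketch of calling EVA Qwen2.5 14B through the AI/ML API.
# API_BASE and the response structure are assumptions based on the
# common OpenAI-compatible convention, not taken from this page.
import json
import urllib.request

API_BASE = "https://api.aimlapi.com/v1"  # assumed endpoint
MODEL = "eva-unit-01/eva-qwen-2.5-14b"   # model id from this page

def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Assemble a chat-completion payload for the model."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.9,  # a higher temperature suits creative writing
    }

def complete(prompt: str, api_key: str) -> str:
    """POST the payload and return the generated text."""
    payload = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

For example, `complete("Narrate a duel between two rival bards.", api_key)` would return the model's generated scene as a string.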
The development of EVA Qwen2.5 adheres to ethical considerations regarding AI-generated content, including transparency in usage and potential biases in generated narratives. The creators emphasize responsible use in creative contexts.
Apache 2.0 License. This permits both commercial and non-commercial use, enabling developers to integrate the model into various applications subject to the license's standard conditions, such as attribution and notice requirements.
Get EVA Qwen2.5 API here.