---
license: apache-2.0
tags:
- text-generation
- instruction-tuned
- llama
- gguf
- chatbot
library_name: llama.cpp
language: en
datasets:
- custom
model-index:
- name: Corelyn NeoMini
  results: []
base_model:
- mistralai/Ministral-3-3B-Base-2512
---

# Corelyn NeoMini GGUF Model

## Specifications

- Model Name: Corelyn NeoMini
- Base Name: NeoMini-3B
- Type: Instruct / Fine-tuned
- Architecture: Ministral-3
- Size: 3B parameters
- Organization: Corelyn
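Since GGUF files are usually quantized, a rough back-of-the-envelope estimate of file size (and the memory needed to load the weights) can be derived from the parameter count. The bits-per-weight figures below are approximations for common llama.cpp quantization types in general, not measurements of this particular release:

```python
# Rough GGUF size estimate for a 3B-parameter model at common
# quantization levels. Bits-per-weight values are approximate and
# exclude KV-cache and runtime overhead.
PARAMS = 3e9

BITS_PER_WEIGHT = {
    "F16": 16.0,     # unquantized half precision
    "Q8_0": 8.5,     # 8-bit quantization (scale overhead included)
    "Q4_K_M": 4.85,  # popular 4-bit "K-quant" mix
}

for name, bpw in BITS_PER_WEIGHT.items():
    gb = PARAMS * bpw / 8 / 1e9  # bits -> bytes -> GB
    print(f"{name}: ~{gb:.2f} GB")
```

At roughly 4-bit quantization a 3B model fits comfortably in a couple of gigabytes, which is why GGUF builds of this size run well on consumer hardware.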

## Model Overview

Corelyn NeoMini is a 3-billion-parameter instruction-tuned model built on the Ministral-3 architecture, designed for general-purpose assistant tasks and knowledge extraction. It is a fine-tuned variant optimized for instruction-following use cases.

- Fine-tuning type: Instruct
- Base architecture: Ministral-3
- Parameter count: 3B

### Use Cases

This model is suitable for applications such as:

- Chatbots and conversational AI
- Knowledge retrieval and Q&A
- Code and text generation
- Instruction-following tasks

## Usage

Download the GGUF file from: [NeoMini_3B.gguf](https://huggingface.co/CorelynAI/NeoMini/resolve/main/NeoMini_3B.gguf)

```python
# pip install llama-cpp-python

from llama_cpp import Llama

# Load the model (update the path to where your .gguf file is)
llm = Llama(model_path="path/to/the/file/NeoMini_3B.gguf")

# Create a chat completion
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Create a haiku about AI"}]
)

# create_chat_completion returns an OpenAI-style dict, so index it as one
print(response["choices"][0]["message"]["content"])
```
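`create_chat_completion` accepts an OpenAI-style `messages` list, so multi-turn conversations and system prompts are just a matter of building that list. A small illustrative helper (not part of llama-cpp-python) might look like this:

```python
def build_messages(user_prompt, system_prompt=None):
    """Build an OpenAI-style messages list for create_chat_completion().

    Illustrative helper, not part of the llama-cpp-python API.
    """
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_prompt})
    return messages


msgs = build_messages(
    "Create a haiku about AI",
    system_prompt="You are a concise assistant.",
)
print(msgs)
```

The resulting list can be passed directly as the `messages` argument; to continue a conversation, append the assistant's reply and the next user turn to the same list before calling the model again.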