Instructions to use N-Bot-Int/ZoraBetaA1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- PEFT
How to use N-Bot-Int/ZoraBetaA1 with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the Zephyr 7B Beta base model, then apply the ZoraBetaA1 LoRA adapter
base_model = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
model = PeftModel.from_pretrained(base_model, "N-Bot-Int/ZoraBetaA1")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Unsloth Studio
How to use N-Bot-Int/ZoraBetaA1 with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh
# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for N-Bot-Int/ZoraBetaA1 to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex
# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for N-Bot-Int/ZoraBetaA1 to start chatting
```
Using HuggingFace Spaces for Unsloth
```
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for N-Bot-Int/ZoraBetaA1 to start chatting
```
Load model with FastModel
```sh
pip install unsloth
```

```python
from unsloth import FastModel

# Load the model and tokenizer through Unsloth's FastModel wrapper
model, tokenizer = FastModel.from_pretrained(
    model_name="N-Bot-Int/ZoraBetaA1",
    max_seq_length=2048,
)
```
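Once the model and tokenizer are loaded (via PEFT or FastModel as above), prompts need to be in the chat format of the base model. Since ZoraBetaA1 is finetuned from HuggingFaceH4/zephyr-7b-beta, it presumably inherits Zephyr's chat template; this is a minimal sketch under that assumption — verify against `tokenizer.apply_chat_template` before relying on it:

```python
def format_zephyr_prompt(system: str, user: str) -> str:
    """Build a single-turn prompt in Zephyr Beta's documented chat format.

    Assumes ZoraBetaA1 keeps the base model's template; check the tokenizer's
    chat template to confirm.
    """
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{user}</s>\n"
        f"<|assistant|>\n"
    )

prompt = format_zephyr_prompt(
    "You are Zora, a friendly companion.",
    "Hi! How are you today?",
)
print(prompt)
```

The resulting string can be tokenized and passed to `model.generate` as usual.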
ZoraBetaA1 - SuperCompanion
ZoraBetaA1 is our brand-new AI model, finetuned from Zephyr Beta 7B on the Iris-Uncensored-Reformat-R2 dataset. It shows strong reasoning capability with a stronger finetuned bias toward roleplaying, along with great companionship capabilities, without hallucinating as much as MistThena7B (which was finetuned from Mistral 7B v0.1). Because Zephyr Beta already has a strong RP foundation, this architecture lets us scaffold on top of it and increase roleplaying capabilities further, rather than building everything from scratch.
ZoraBetaA1 was trained on a cleaned dataset, but it is still relatively unstable, so please report any issues — overfitting, or suggested improvements for future models — to nexus.networkinteractives@gmail.com. Feel free to modify the LoRA to your liking, but please credit this page. If you extend its dataset, please handle it with care and ethical consideration.
ZoraBetaA1 is:
- Developed by: N-Bot-Int
- License: apache-2.0
- Parent model: HuggingFaceH4/zephyr-7b-beta
- Dataset combined using: UltraDatasetCleanerAndMoshpit-R1 (proprietary software)
Notice
- For a good experience, please use: temperature = 1.5, min_p = 0.1, and max_new_tokens = 128
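The recommended settings above can be collected into generation kwargs for `model.generate` (a sketch; `min_p` sampling requires a reasonably recent transformers release, and `model`/`tokenizer` are assumed to have been loaded as shown earlier):

```python
# Recommended sampling settings from this model card, as model.generate kwargs
generation_kwargs = {
    "do_sample": True,        # sampling must be on for temperature/min_p to apply
    "temperature": 1.5,
    "min_p": 0.1,
    "max_new_tokens": 128,
}

# Usage (assumes `model` and `tokenizer` were loaded as shown earlier):
# inputs = tokenizer(prompt, return_tensors="pt")
# output_ids = model.generate(**inputs, **generation_kwargs)
# print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
print(generation_kwargs)
```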
Detail card:
Parameter
- 3 Billion Parameters
- (Check with your GPU vendor whether your hardware can run 3B models)
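As a rough rule of thumb for whether your GPU can hold the weights, multiply the parameter count by the bytes per parameter at your chosen precision. This is a sketch that covers weights only — activations, KV cache, and framework overhead add more on top:

```python
def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Estimate memory needed for model weights alone, in GiB."""
    return n_params * bytes_per_param / 1024**3

# Estimates for a 3B-parameter model at common precisions
for precision, nbytes in [("fp16/bf16", 2), ("int8", 1), ("4-bit", 0.5)]:
    gb = weight_memory_gb(3e9, nbytes)
    print(f"3B params at {precision}: ~{gb:.1f} GiB")
```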
Training
- 300 steps on the Iris-Dataset-Reformat-R1 dataset
Finetuning tool:
Unsloth AI
- This Zephyr model was trained 2x faster with Unsloth and Huggingface's TRL library.
Fine-tuned Using:
Google Colab
Model tree for N-Bot-Int/ZoraBetaA1
- Base model: mistralai/Mistral-7B-v0.1