---
base_model:
- mistralai/Mistral-Nemo-Instruct-2407
- LatitudeGames/Wayfarer-2-12B
- Epiculous/Violet_Twilight-v0.2
- inflatebot/MN-12B-Mag-Mell-R1
- cgato/Nemo-12b-Humanize-SFT-v0.2.5-KTO
library_name: transformers
tags:
- mergekit
- merge
- roleplay
---
# Poetic-Rune-12B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
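
For a quick smoke test, the merge should load through the standard transformers text-generation flow. A minimal sketch, assuming a recent transformers release with Mistral-Nemo support (plus accelerate for `device_map="auto"`); the repo id, prompt, and sampling settings are illustrative placeholders, not tuned recommendations:

```python
# Minimal sketch: load the merged 12B model in bfloat16 and generate a reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Poetic-Rune-12B"  # placeholder: substitute the actual Hub repo id

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

# Mistral-Nemo-Instruct ships a chat template, so format turns through it.
messages = [{"role": "user", "content": "Describe a rain-soaked harbor town."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids, max_new_tokens=256, do_sample=True, temperature=0.8
)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```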

## Merge Details
### Merge Method

This model was merged with the [Linear DELLA](https://arxiv.org/abs/2406.11617) merge method, using [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) as the base.
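
As intuition for the `weight`, `density`, and `epsilon` parameters in the config below, here is a conceptual sketch of the idea, not mergekit's implementation; in particular, the exact way `epsilon` spreads keep-probabilities around `density` is an assumption. DELLA forms a task vector per model (fine-tune minus base), stochastically drops low-magnitude entries, rescales the survivors, and sums the results linearly on top of the base:

```python
# Conceptual sketch of a della_linear-style merge for one weight tensor.
# Illustration only; mergekit's actual implementation differs in details.
import torch

def della_linear(base, tuned, weights, densities, epsilons, normalize=True):
    merged_delta = torch.zeros_like(base)
    for t, w, d, eps in zip(tuned, weights, densities, epsilons):
        delta = t - base                                  # task vector
        # rank entries by magnitude, normalized to [0, 1]
        ranks = delta.abs().flatten().argsort().argsort().float()
        ranks = ranks / max(ranks.numel() - 1, 1)
        # keep-probabilities in a window of width eps centered on density d,
        # so larger-magnitude entries survive more often (assumed scheme)
        p_keep = (d - eps / 2 + eps * ranks).clamp(0, 1).reshape(delta.shape)
        mask = torch.bernoulli(p_keep)
        merged_delta += w * (delta * mask / p_keep.clamp_min(1e-8))  # rescale
    if normalize:                                         # cf. normalize: true
        merged_delta = merged_delta / sum(weights)
    return base + merged_delta
```

In the configuration used here, the per-model `weight` values sum to 0.9, so `normalize: true` effectively rescales them to sum to 1, and `lambda: 1` applies no extra scaling to the merged task vector.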

### Models Merged

The following models were included in the merge:
* [LatitudeGames/Wayfarer-2-12B](https://huggingface.co/LatitudeGames/Wayfarer-2-12B)
* [Epiculous/Violet_Twilight-v0.2](https://huggingface.co/Epiculous/Violet_Twilight-v0.2)
* [inflatebot/MN-12B-Mag-Mell-R1](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1)
* [cgato/Nemo-12b-Humanize-SFT-v0.2.5-KTO](https://huggingface.co/cgato/Nemo-12b-Humanize-SFT-v0.2.5-KTO)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: LatitudeGames/Wayfarer-2-12B
    parameters:
      weight: 0.15
      density: 0.7
      epsilon: 0.1
  - model: inflatebot/MN-12B-Mag-Mell-R1
    parameters:
      weight: 0.3
      density: 0.6
      epsilon: 0.3
  - model: Epiculous/Violet_Twilight-v0.2
    parameters:
      weight: 0.3
      density: 0.6
      epsilon: 0.3
  - model: cgato/Nemo-12b-Humanize-SFT-v0.2.5-KTO
    parameters:
      weight: 0.15
      density: 0.65
      epsilon: 0.1
merge_method: della_linear
base_model: mistralai/Mistral-Nemo-Instruct-2407
parameters:
  lambda: 1
  normalize: true
dtype: bfloat16
tokenizer:
  source: union
```
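
To reproduce the merge, install mergekit (e.g. `pip install mergekit`, or from source), save the configuration above as `config.yaml`, and run `mergekit-yaml config.yaml ./output-directory`; at the time of writing, optional flags such as `--cuda` and `--lazy-unpickle` can speed the merge up and reduce memory use.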