Instructions for using Interplay-LM-Reasoning/extrapolation_midtrain with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Interplay-LM-Reasoning/extrapolation_midtrain with Transformers:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("Interplay-LM-Reasoning/extrapolation_midtrain", dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
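Since the PR below tags this checkpoint as `text-generation`, it can also be driven through the high-level `pipeline` API. The sketch below is an assumption, not the card's official snippet: the model id comes from this page, but the prompt format, helper names, and generation settings are illustrative.

```python
# Hypothetical usage sketch for Interplay-LM-Reasoning/extrapolation_midtrain.
# Only the model id is taken from the card; the prompt format and settings
# are assumptions for illustration.

MODEL_ID = "Interplay-LM-Reasoning/extrapolation_midtrain"

def make_prompt(question: str) -> str:
    # Assumed plain-text prompt layout; the checkpoint may expect another format.
    return f"Question: {question}\nAnswer:"

def generate(question: str, max_new_tokens: int = 64) -> str:
    # Deferred import: the pipeline downloads the weights on first use.
    from transformers import pipeline

    generator = pipeline("text-generation", model=MODEL_ID, torch_dtype="auto")
    out = generator(make_prompt(question), max_new_tokens=max_new_tokens)
    return out[0]["generated_text"]
```

`torch_dtype="auto"` mirrors the `dtype="auto"` choice in the snippet above, letting Transformers pick the precision stored in the checkpoint.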
Add pipeline tag, GitHub link, and improved model description (#1)
opened by nielsr (HF Staff)
This PR enhances the model card by:
- Adding `pipeline_tag: text-generation` to the metadata, which helps users discover the model on the Hugging Face Hub under the relevant pipeline.
- Including a direct link to the GitHub repository for easier access to the codebase.
- Significantly improving the model's description by incorporating key information, overview, and findings from the paper's GitHub README, providing a richer context for the research.
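The metadata change in the first bullet amounts to one line in the model card's YAML front matter. A minimal sketch (the `pipeline_tag` value is from this PR; any surrounding fields would be whatever the card already declares):

```yaml
---
pipeline_tag: text-generation  # added by this PR; enables Hub pipeline discovery
---
```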
Please review and merge this PR if these improvements are satisfactory.
Clockz changed pull request status to merged