Train AI models with Unsloth and Hugging Face Jobs for FREE

Published: February 19, 2026

Source: Hugging Face Blog

Overview

This post shows how to use Unsloth and Hugging Face Jobs for fast LLM fine-tuning (specifically of LiquidAI/LFM2.5-1.2B-Instruct), including through coding agents like Claude Code and Codex. Unsloth trains roughly 2× faster and uses about 60% less VRAM than standard methods, so fine-tuning a small model can cost just a few dollars.

Why a small model?

Small language models like LFM2.5-1.2B-Instruct are ideal candidates for fine-tuning:

  • Cheap to train and fast to iterate on.
  • Competitive with much larger models on focused tasks.
  • Run in under 1 GB of memory and are optimized for on-device deployment (CPU, phone, laptop).

(Screenshot: Unsloth + Hugging Face Jobs)

You will need

We are giving away free credits to fine‑tune models on Hugging Face Jobs. Join the Unsloth Jobs Explorers organization to claim your free credits and a one‑month Pro subscription.

  • A Hugging Face account (required for HF Jobs).
  • Billing setup (for verification; you can monitor usage and manage billing on your billing page).
  • A Hugging Face token with write permissions (a quick way to check yours is sketched after this list).
  • (Optional) A coding agent (OpenCode, Claude Code, or Codex).
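
Before launching anything, it's worth confirming that the token you'll pass as HF_TOKEN actually works. Here is a minimal sanity check with the huggingface_hub Python client; it assumes the token is exported as HF_TOKEN (or that you've logged in with the CLI):

import os
from huggingface_hub import whoami

# Raises if the token (or your cached login) is missing or invalid.
info = whoami(token=os.environ.get("HF_TOKEN"))
print(f"Authenticated as: {info['name']}")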

Run the Job

To train a model with HF Jobs and Unsloth, you can submit a job directly with the hf CLI.

  1. Install the hf CLI

    # macOS or Linux
    curl -LsSf https://hf.co/cli/install.sh | bash
  2. Submit the job

    hf jobs uv run https://huggingface.co/datasets/unsloth/jobs/resolve/main/sft-lfm2.5.py \
        --flavor a10g-small \
        --secrets HF_TOKEN \
        --timeout 4h \
        --dataset mlabonne/FineTome-100k \
        --num-epochs 1 \
        --eval-split 0.2 \
        --output-repo your-username/lfm-finetuned

    See the training script and the Hugging Face Jobs documentation for more details. A Python equivalent of this submission is sketched below.
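
If you'd rather stay in Python, huggingface_hub ships a Jobs API that mirrors the CLI. A sketch of the same submission follows; the run_uv_job call and its parameters are based on the huggingface_hub Jobs API, so double-check the signature against the version you have installed:

import os
from huggingface_hub import run_uv_job

# Submit the same UV script programmatically; script-specific flags go in script_args.
job = run_uv_job(
    "https://huggingface.co/datasets/unsloth/jobs/resolve/main/sft-lfm2.5.py",
    script_args=[
        "--dataset", "mlabonne/FineTome-100k",
        "--num-epochs", "1",
        "--eval-split", "0.2",
        "--output-repo", "your-username/lfm-finetuned",
    ],
    flavor="a10g-small",
    secrets={"HF_TOKEN": os.environ["HF_TOKEN"]},
    timeout="4h",  # duration string as in the CLI; check your version's accepted formats
)
print(job.id)  # keep this ID for log streaming and status checks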

Installing the Skill

The Hugging Face model‑training skill lowers the barrier to entry by letting you train a model simply by prompting. First, install the skill with your coding agent.

Claude Code

Claude Code discovers skills through its plugin system. To install the Hugging Face skills:

/plugin marketplace add huggingface/skills   # add the marketplace
/plugin                                      # browse available skills
/plugin install hugging-face-model-trainer@huggingface-skills

For more details, see the Hub‑with‑Skills documentation or the Claude Code Skills docs.

Codex

Codex discovers skills via AGENTS.md files and .agents/skills/ directories. Install the skill with $skill-installer:

$skill-installer install https://github.com/huggingface/skills/tree/main/skills/hugging-face-model-trainer

Refer to the Codex Skills docs and the AGENTS.md guide for more information.

Generic method

Clone the skills repository and copy the desired skill into your agent’s skills directory:

git clone https://github.com/huggingface/skills.git
mkdir -p ~/.agents/skills && cp -R skills/skills/hugging-face-model-trainer ~/.agents/skills/

Quick Start

Once the skill is installed, ask your coding agent to train a model:

Train LiquidAI/LFM2.5-1.2B-Instruct on mlabonne/FineTome-100k using Unsloth on HF Jobs

The agent will:

  1. Generate a training script based on an example in the skill.
  2. Submit the job to HF Jobs.
  3. Provide a monitoring link via Trackio.
  4. Push the trained model to your Hugging Face Hub repository.

How It Works

Training jobs run on Hugging Face Jobs, a fully managed cloud‑GPU service. The agent:

  • Generates a UV script with inline dependencies.
  • Submits it to HF Jobs via the hf CLI.
  • Reports the job ID and monitoring URL (you can also poll these yourself, as sketched below).
  • Pushes the trained model to your Hub repository.
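
The job ID is all you need to poll a run yourself. Here is a short sketch using the same Jobs API; inspect_job and fetch_job_logs are the huggingface_hub counterparts of hf jobs inspect and hf jobs logs, and exact field names may vary by version:

from huggingface_hub import inspect_job, fetch_job_logs

job_id = "..."  # the ID reported when the job was submitted

# Current job state (queued, running, completed, ...).
print(inspect_job(job_id=job_id).status)

# Stream the training logs line by line.
for line in fetch_job_logs(job_id=job_id):
    print(line)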


Training Script

The skill generates training scripts like the following, based on the example it bundles.

# /// script
# dependencies = ["unsloth", "trl>=0.12.0", "datasets", "trackio"]
# ///

from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig
from datasets import load_dataset

# Load the base model in 4-bit to keep VRAM usage low.
model, tokenizer = FastLanguageModel.from_pretrained(
    "LiquidAI/LFM2.5-1.2B-Instruct",
    load_in_4bit=True,
    max_seq_length=2048,
)

# Attach LoRA adapters: only these low-rank weights are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=32,
    lora_dropout=0,
    target_modules=[
        "q_proj",
        "k_proj",
        "v_proj",
        "out_proj",
        "in_proj",
        "w1",
        "w2",
        "w3",
    ],
)

dataset = load_dataset("trl-lib/Capybara", split="train")

# Supervised fine-tuning; report_to="trackio" gives live loss curves.
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="./output",
        push_to_hub=True,
        hub_model_id="username/my-model",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,  # effective batch size of 16
        num_train_epochs=1,
        learning_rate=2e-4,
        report_to="trackio",
    ),
)

trainer.train()
trainer.push_to_hub()  # upload the final model to the Hub
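
Once the job completes and the model lands on the Hub, a quick smoke test confirms the weights load. Here is a minimal inference sketch with Unsloth; the repo ID is the hypothetical hub_model_id from the config above:

from unsloth import FastLanguageModel

# Load the fine-tuned model straight from the Hub.
model, tokenizer = FastLanguageModel.from_pretrained(
    "username/my-model",  # hypothetical repo from the training config
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch Unsloth into its fast inference mode

messages = [{"role": "user", "content": "Explain LoRA fine-tuning in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))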

Model size | Recommended GPU | Approx. cost/hr
< 1 B params | t4-small | ~$0.40
1-3 B params | t4-medium | ~$0.60
3-7 B params | a10g-small | ~$1.00
7-13 B params | a10g-large | ~$3.00

You pay only for the time a job actually runs, so a one-epoch LoRA run on a10g-small that finishes in two hours costs roughly 2 × $1.00 ≈ $2. For a full overview of the hardware pricing (shared with Hugging Face Spaces), see the Spaces pricing guide.

Tips for Working with Coding Agents

  • Be specific about the model and dataset to use, and include Hub IDs (e.g., Qwen/Qwen2.5-0.5B and trl-lib/Capybara). Agents will search for and validate those combinations.
  • Mention Unsloth explicitly if you want it used; otherwise, the agent will choose a framework based on the model and budget.
  • Ask for cost estimates before launching large jobs.
  • Request Trackio monitoring for real‑time loss curves.
  • Check job status by asking the agent to inspect logs after submission.
