Fine-Tune Fridays

Each week we will take an open-source model and put it head to head against a closed-source foundation model on a specialized task. We will give you practical examples with reference code, reference data, model weights, and the end-to-end infrastructure to reproduce the experiments on your own. Join the community here: https://lu.ma/oxen
Jun 28
How to Fine-Tune a FLUX.1-dev LoRA with Code, Step by Step

FLUX.1-dev is one of the most popular open-weight models available today. Developed by Black Forest Labs, it has 12 billion parameters…
20 min read
Jun 19
How to Fine-Tune PixArt to Generate a Consistent Character

Can we fine-tune a small diffusion transformer (DiT) to generate OpenAI-level images by distilling from OpenAI images? The end…
21 min read
May 28
How to Fine-Tune Qwen3 on Text2SQL to GPT-4o level performance

Welcome to a new series from the Oxen.ai Herd called Fine-Tuning Fridays! Each week we will take an open-source model…
15 min read
May 16
Fine-Tuning Fridays

Welcome to a new series from the Oxen.ai Herd called Fine-Tuning Fridays! Each week we will take an open-source model…
4 min read
Mar 05
Training a Rust 1.5B Coder LM with Reinforcement Learning (GRPO)

Group Relative Policy Optimization (GRPO) has proven to be a useful algorithm for training LLMs to reason and improve on…
17 min read