Oxen.ai
Subscribe for the latest news, research, and updates from Oxen.ai

Latest

Sep 18
arXiv Dive: How Flux and Rectified Flow Transformers Work

Flux made quite a splash with its release on August 1st, 2024 as the new state-of-the-art generative…
9 min read
Sep 13
How Well Can Llama 3.1 8B Detect Political Spam? [4/4]

It only took about 11 minutes to fine-tune Llama 3.1 8B on our political spam synthetic dataset using ReFT.
3 min read
Sep 04
Fine-Tuning Llama 3.1 8B in Under 12 Minutes [3/4]

Meta has recently released Llama 3.1, including their 405-billion-parameter model, which is the most capable open model…
3 min read
Aug 26
arXiv Dive: How Meta Trained Llama 3.1

Llama 3.1 is a set of open-weights foundation models released by Meta, which marks the first time an…
12 min read
Aug 22
How to De-duplicate and Clean Synthetic Data [2/4]

Synthetic data has shown promising results for training and fine-tuning large models, such as Llama 3.1 and the…
6 min read
Jul 31
Create Your Own Synthetic Data With Only 5 Political Spam Texts [1/4]

With the 2024 elections coming up, spam and political texts are more prevalent than ever as political campaigns increasingly turn…
5 min read
Jul 25
Fine-tuning Llama 3 in 14 minutes using ReFT

If you have been fine-tuning models recently, you have most likely used LoRA. While LoRA has been the dominant PEFT…
8 min read
Jul 21
ArXiv Dives: How ReFT works

ArXiv Dives is a series of live meetups that take place on Fridays with the Oxen.ai community. We believe…
10 min read
Jun 26
ArXiv Dives: 💃 Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling

Modeling sequences with infinite context length is one of the dreams of large language models. Some LLMs, such as Transformers…
4 min read
Jun 04
ArXiv Dives: Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet

The ability to interpret and steer large language models is an important topic as they become more and more a…
9 min read