Greg Schoeninger

Dec 09
LLaVA-CoT: Let Vision Language Models Reason Step-By-Step
When it comes to large language models, it is still the early innings. Many of them still hallucinate, fail to…
12 min read

Nov 18
How Upcycling MoEs Beat Dense LLMs
In this Arxiv Dive, Nvidia researcher Ethan He presents his co-authored work Upcycling LLMs in Mixture of Experts (MoE). He…
1 min read

Nov 11
Thinking LLMs: General Instruction Following with Thought Generation
The release of OpenAI-O1 has motivated a lot of people to think deeply about…thoughts 💭. Thinking before you speak is…
14 min read

Oct 31
The Prompt Report Part 2: Plan and Solve, Tree of Thought, and Decomposition Prompting
In the last blog, we went over prompting techniques 1-3 of The Prompt Report. This arXiv Dive, we were lucky…
17 min read

Jul 21
ArXiv Dives: How ReFT works
ArXiv Dives is a series of live meetups that take place on Fridays with the Oxen.ai community. We believe…
10 min read

Apr 29
How to Train Diffusion for Text from Scratch
This is part two of a series on Diffusion for Text with Score Entropy Discrete Diffusion (SEDD) models. Today we…
16 min read

Apr 15
ArXiv Dives: Text Diffusion with SEDD
Diffusion models have been popular for computer vision tasks. Recently, models such as Sora show how you can apply Diffusion…
11 min read

Apr 08
ArXiv Dives: The Era of 1-bit LLMs, All Large Language Models are in 1.58 Bits
This paper presents BitNet b1.58, where every weight in a Transformer can be represented as a {-1, 0, 1}…
9 min read

Apr 01
ArXiv Dives: Evolutionary Optimization of Model Merging Recipes
Today, we’re diving into a fun paper by the team at Sakana.ai called “Evolutionary Optimization of Model Merging…
10 min read

Mar 25
ArXiv Dives: I-JEPA
Today, we’re diving into the I-JEPA paper. JEPA stands for Joint-Embedding Predictive Architecture, and if you have been following…
13 min read