Mathias

Nov 18

How Upcycling MoEs Beat Dense LLMs

In this Arxiv Dive, NVIDIA researcher Ethan He presents his co-authored work Upcycling LLMs in Mixture of Experts (MoE).
1 min read
Nov 11

Thinking LLMs: General Instruction Following with Thought Generation

The release of OpenAI o1 has motivated a lot of people to think deeply about…thoughts 💭. Thinking before you speak is…
14 min read
Oct 31

The Prompt Report Part 2: Plan and Solve, Tree of Thought, and Decomposition Prompting

In the last blog, we went over prompting techniques 1-3 of The Prompt Report. For this arXiv Dive, we were lucky…
17 min read
Oct 09

The Prompt Report Part 1: A Systematic Survey of Prompting Techniques

For this blog, we are switching it up a bit. In past Arxiv Dives, we have gone deep into the…
12 min read
Sep 18

arXiv Dive: How Flux and Rectified Flow Transformers Work

Flux made quite a splash with its release on August 1st, 2024, as the new state-of-the-art generative…
9 min read
Aug 26

arXiv Dive: How Meta Trained Llama 3.1

Llama 3.1 is a set of open-weights foundation models released by Meta, which marks the first time an…
12 min read
Jun
26
ArXiv Dives:💃 Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling

ArXiv Dives:💃 Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling

Modeling sequences with infinite context length is one of the dreams of large language models. Some LLMs, such as Transformers…
4 min read
Jun 04

ArXiv Dives: Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet

The ability to interpret and steer large language models is an important topic as they become more and more a…
9 min read
May 29

ArXiv Dives: Efficient DiT Fine-Tuning with PixART for Text to Image Generation

Diffusion Transformers have been gaining a lot of steam since OpenAI's demo of Sora back in March.
8 min read