LLaVA-CoT: Let Vision Language Models Reason Step-By-Step
When it comes to large language models, it is still the early innings. Many of them still hallucinate, fail to…
How Upcycling MoEs Beat Dense LLMs
In this arXiv Dive, Nvidia researcher Ethan He presents his co-authored work on upcycling LLMs into Mixture of Experts (MoE)…
Thinking LLMs: General Instruction Following with Thought Generation
The release of OpenAI o1 has motivated a lot of people to think deeply about… thoughts 💭. Thinking before you speak is…
The Prompt Report Part 2: Plan and Solve, Tree of Thought, and Decomposition Prompting
In the last blog, we went over prompting techniques 1-3 of The Prompt Report. For this arXiv Dive, we were lucky…
The Prompt Report Part 1: A Systematic Survey of Prompting Techniques
For this blog we are switching it up a bit. In past arXiv Dives, we have gone deep into the…
arXiv Dive: How Flux and Rectified Flow Transformers Work
Flux made quite a splash with its release on August 1st, 2024 as the new state-of-the-art generative…
arXiv Dive: How Meta Trained Llama 3.1
Llama 3.1 is a set of open-weights foundation models released by Meta, which marks the first time an…
ArXiv Dives: 💃 Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling
Modeling sequences with infinite context length is one of the dreams of large language models. Some LLMs, such as Transformers…
ArXiv Dives: Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet
The ability to interpret and steer large language models is an important topic as they become more and more a…
ArXiv Dives: Efficient DiT Fine-Tuning with PixArt for Text-to-Image Generation
Diffusion Transformers have been gaining a lot of steam since OpenAI's demo of Sora back in March…