Latest

Feb 05
Arxiv Dives - Self-Rewarding Language Models

The goal of this paper is to see if we can create a self-improving feedback loop to achieve “superhuman agents.”
13 min read
Jan 29
Arxiv Dives - Direct Preference Optimization (DPO)

This paper provides a simple and stable alternative to RLHF for aligning Large Language Models with human preferences, called "Direct Preference Optimization" (DPO).
12 min read
Jan 20
Arxiv Dives - Efficient Streaming Language Models with Attention Sinks

This paper introduces the concept of an Attention Sink, which helps Large Language Models (LLMs) maintain the coherence of text across long, streaming inputs.
12 min read
Jan 13
Arxiv Dives - How Mixture of Experts works with Mixtral 8x7B

Mixtral 8x7B is an open source mixture of experts large language model released by the team at Mistral.ai that outperforms Llama 2 70B on most benchmarks.
12 min read
Jan 07
Arxiv Dives - LLaVA 🌋 an open source Large Multimodal Model (LMM)

What is LLaVA? LLaVA is a multi-modal model that connects a Vision Encoder and an LLM for general-purpose visual and language understanding.
12 min read
Jan 06
Practical ML Dive - Building RAG from Open Source Pt 1

RAG was introduced by the Facebook AI Research (FAIR) team in May of 2020 as an end-to-end way to include retrieved knowledge in a language model's generations.
14 min read
Dec 23
Arxiv Dives - How Mistral 7B works

What is Mistral 7B? Mistral 7B is an open weights large language model by Mistral.ai that was built for performance and efficiency.
10 min read
Dec 20
Practical ML Dive - How to train Mamba for Question Answering

What is Mamba 🐍? There is a lot of hype about Mamba being a fast alternative to the Transformer architecture.
22 min read
Dec 15
Mamba: Linear-Time Sequence Modeling with Selective State Spaces - Arxiv Dives

What is Mamba 🐍? At its core, Mamba is a recurrent neural network architecture that outperforms Transformers with faster inference and linear scaling in sequence length.
15 min read
Dec 13
Practical ML Dive - How to customize a Vision Transformer on your own data

Welcome to Practical ML Dives, a series spun off of Arxiv Dives. In Arxiv Dives, we cover state-of-the-art research papers.
20 min read