Greg Schoeninger

Feb 19
ArXiv Dives - Depth Anything

This paper presents Depth Anything, a highly practical solution for robust monocular depth estimation. Depth estimation traditionally requires extra hardware…
16 min read
Feb 12
Arxiv Dives - Toolformer: Language models can teach themselves to use tools

Large Language Models (LLMs) show remarkable capabilities to solve new tasks from a few textual instructions, but they also paradoxically…
10 min read
Feb 05
Arxiv Dives - Self-Rewarding Language Models

The goal of this paper is to see if we can create a self-improving feedback loop to achieve “superhuman agents”…
13 min read
Jan 29
Arxiv Dives - Direct Preference Optimization (DPO)

This paper provides a simple and stable alternative to RLHF for aligning Large Language Models with human preferences called "Direct Preference Optimization"…
12 min read
Jan 20
Arxiv Dives - Efficient Streaming Language Models with Attention Sinks

This paper introduces the concept of an Attention Sink which helps Large Language Models (LLMs) maintain the coherence of text…
12 min read
Jan 13
Arxiv Dives - How Mixture of Experts works with Mixtral 8x7B

Mixtral 8x7B is an open source mixture of experts large language model released by the team at Mistral.ai that…
12 min read
Jan 07
Arxiv Dives - LLaVA 🌋 an open source Large Multimodal Model (LMM)

What is LLaVA? LLaVA is a Multi-Modal model that connects a Vision Encoder and an LLM for general purpose visual…
12 min read
Jan 06
Practical ML Dive - Building RAG from Open Source Pt 1

RAG was introduced by the Facebook AI Research (FAIR) team in May of 2020 as an end-to-end way to include…
14 min read
Dec 23
Arxiv Dives - How Mistral 7B works

What is Mistral 7B? Mistral 7B is an open weights large language model by Mistral.ai that was built for…
10 min read
Dec 20
Practical ML Dive - How to train Mamba for Question Answering

What is Mamba 🐍? There is a lot of hype about Mamba being a fast alternative to the Transformer architecture. The…
22 min read