Latest

Mar 18
Uploading Datasets to Oxen.ai

Oxen.ai makes it quick and easy to upload your datasets, keep track of every version and share them with
4 min read
Mar 11
ArXiv Dives - Diffusion Transformers

Diffusion transformers achieve state-of-the-art quality generating images by replacing the commonly used U-Net backbone with a transformer that operates on
14 min read
Mar 04
"Road to Sora" Paper Reading List

This post is an effort to put together a reading list for our Friday paper club called ArXiv Dives. Since
21 min read
Mar 04
ArXiv Dives - Medusa

In this paper, they present MEDUSA, an efficient method that augments LLM inference by adding extra decoding heads to
5 min read
Feb 26
ArXiv Dives - Lumiere

This paper introduces Lumiere – a text-to-video diffusion model designed for synthesizing videos that portray realistic, diverse and coherent motion – a
11 min read
Feb 19
ArXiv Dives - Depth Anything

This paper presents Depth Anything, a highly practical solution for robust monocular depth estimation. Depth estimation traditionally requires extra hardware
16 min read
Feb 12
Arxiv Dives - Toolformer: Language models can teach themselves to use tools

Large Language Models (LLMs) show remarkable capabilities to solve new tasks from a few textual instructions, but they also paradoxically
10 min read
Feb 05
Arxiv Dives - Self-Rewarding Language Models

The goal of this paper is to see if we can create a self-improving feedback loop to achieve “superhuman agents”
13 min read
Jan 29
Arxiv Dives - Direct Preference Optimization (DPO)

This paper provides a simple and stable alternative to RLHF for aligning Large Language Models with human preferences called "
12 min read
Jan 20
Arxiv Dives - Efficient Streaming Language Models with Attention Sinks

This paper introduces the concept of an Attention Sink which helps Large Language Models (LLMs) maintain the coherence of text
12 min read