Practical ML Dive - How to train Mamba for Question Answering
What is Mamba 🐍?
There is a lot of hype about Mamba being a fast alternative to the Transformer architecture. The…
Mamba: Linear-Time Sequence Modeling with Selective State Spaces - Arxiv Dives
What is Mamba 🐍?
Mamba at its core is a recurrent neural network architecture that outperforms Transformers with faster…
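To make the "recurrent" part of that teaser concrete, here is a toy NumPy sketch of the linear state-space recurrence that architectures like Mamba build on. The matrices and dimensions below are made-up illustrative values, and real Mamba makes A, B, C input-dependent ("selective") and learns them; the point is only that the sequence is processed in one linear-time pass over a fixed-size hidden state, rather than with quadratic attention.

```python
import numpy as np

# Toy linear state-space recurrence: h_t = A h_{t-1} + B x_t, y_t = C h_t.
# Mamba makes (A, B, C) input-dependent and learned; here they are fixed
# random values purely to illustrate the linear-time scan.
d_state, d_in, seq_len = 16, 4, 32
rng = np.random.default_rng(0)
A = 0.9 * np.eye(d_state)                # decaying state transition
B = 0.1 * rng.normal(size=(d_state, d_in))
C = rng.normal(size=(1, d_state))

x = rng.normal(size=(seq_len, d_in))     # input sequence
h = np.zeros(d_state)                    # fixed-size hidden state
outputs = []
for x_t in x:                            # one O(seq_len) pass
    h = A @ h + B @ x_t
    outputs.append(C @ h)
print(np.array(outputs).shape)           # (32, 1): one output per timestep
```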
Practical ML Dive - How to customize a Vision Transformer on your own data
Welcome to Practical ML Dives, a series spin-off of Arxiv Dives. In Arxiv Dives, we cover state of the…
Arxiv Dives - Zero-shot Image Classification with CLIP
CLIP explores the efficacy of learning image representations from scratch with 400 million image-text pairs, showcasing zero-shot transfer capabilities across…
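For readers who want to try the zero-shot idea right away, here is a minimal sketch using the Hugging Face transformers port of CLIP. The checkpoint, sample image URL, and candidate labels are example choices, not anything prescribed by the post: CLIP scores the image against each text prompt, and a softmax over those scores acts as a classifier that needed no task-specific training.

```python
import requests
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Sample image from the transformers documentation; swap in your own.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Candidate labels; phrasing them as captions tends to score better.
labels = ["a photo of a cat", "a photo of a dog", "a photo of an ox"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text similarity scores
probs = logits.softmax(dim=-1)[0]
for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.3f}")
```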
How NOT to store unstructured machine learning datasets
Training data is typically the most valuable part of any machine learning project. As we converge on model architectures like…
🧼 SUDS - A Guide to Structuring Unstructured Data
At Oxen.ai we value high-quality datasets. We have many years of experience training and evaluating models, and have…
Arxiv Dives - Vision Transformers (ViT)
With all of the hype around Transformers for natural language processing and text, the authors of this paper beg the…
Reading List For Andrej Karpathy's "Intro to Large Language Models" Video
Andrej Karpathy recently released an hour-long talk on "The busy person's intro to large language models" that had…
Arxiv Dives - A Mathematical Framework for Transformer Circuits - Part 2
Every Friday at Oxen.ai we host a paper club called "Arxiv Dives" to make us smarter Oxen…
Arxiv Dives - A Mathematical Framework for Transformer Circuits - Part 1
Every Friday at Oxen.ai we host a paper club called "Arxiv Dives" to make us smarter Oxen…