arXiv Dive: How Meta Trained Llama 3.1
            Llama 3.1 is a set of open-weights foundation models released by Meta, which marks the first time an

        How to De-duplicate and Clean Synthetic Data [2/4]
            Synthetic data has shown promising results for training and fine-tuning large models, such as Llama 3.1 and the

        Create Your Own Synthetic Data With Only 5 Political Spam Texts [1/4]
            With the 2024 elections coming up, spam and political texts are more prevalent than ever as political campaigns increasingly turn

        Fine-tuning Llama 3 in 14 minutes using ReFT
            If you have been fine-tuning models recently, you have most likely used LoRA. While LoRA has been the dominant PEFT

        ArXiv Dives: How ReFT works
            ArXiv Dives is a series of live meetups that take place on Fridays with the Oxen.ai community. We believe

        ArXiv Dives: 💃 Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling
            Modeling sequences with infinite context length is one of the dreams of Large Language Models. Some LLMs such as Transformers

        ArXiv Dives: Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet
            The ability to interpret and steer large language models is an important topic as they become more and more a

        ArXiv Dives: Efficient DiT Fine-Tuning with PixART for Text to Image Generation
            Diffusion Transformers have been gaining a lot of steam since OpenAI's demo of Sora back in March. The

        ArXiv Dives: Evaluating LLMs for Code Completion with HumanEval
            Large Language Models have shown very good ability to generalize within a distribution, and frontier models have shown incredible flexibility

        How to Train Diffusion for Text from Scratch
            This is part two of a series on Diffusion for Text with Score Entropy Discrete Diffusion (SEDD) models. Today we