Cassius Oldenburg
Independent LLM Safety & Red Teaming Researcher
        Welcome to my corner of the alignment frontier.
        I explore how large language models succeed, fail, and sometimes break their own rules.
        My focus is on adversarial red teaming, alignment drift, and practical risks in real-world AI systems.
      
Latest Project
        
          EXP01 — Guardrail Decay in LLMs
        
        My first public experiment documents how and when LLM safety guardrails erode under persistent prompting.
        Learning in public, sharing raw data, and inviting scrutiny.
      
Contact
- Email: [email protected]
- X (Twitter): @redcassius
Always open to feedback, critique, collaboration, and mentorship.
      Let’s build safer AI together.