Skip to main content

Experience

My Journey So Far

From Sydney beaches to San Francisco fog. A story of discovering AI safety, taking a leap, and ending up exactly where I wanted to be (even if I didn't know it at the time).

2024 - Present

Research Engineer, Alignment Science

Anthropic

San Francisco, CA

Building and running ML experiments to understand and steer the behavior of powerful AI systems. Contributing to exploratory experimental research on AI safety, with a focus on risks from powerful future systems.

  • Testing robustness of safety techniques with model organisms•
  • Running multi-agent RL experiments for scalable oversight•
  • Building tooling for automated safety evaluations•
  • Contributing to alignment assessments and safety cases•
Mid 2024

The Big Move

Sydney → San Francisco

12,000 km

Packed two suitcases, said goodbye to Sydney beaches, and moved to a city where the ocean is freezing. Ay nako, the culture shock was real, but so was the opportunity.

  • •First time living outside Australia
  • •Learned what 'fog schedule' means (still don't understand it)
  • •Found the best boba spots in SF (priorities)
  • •Miss the beach more than expected — SF water is cold
Early 2024

Got The Call

Anthropic Interview Process

Sydney, Australia (2am)

What started as 'let me just apply and see' turned into four months of interviews, coding exercises, and increasingly elaborate time zone math. The final call came at 2am Sydney time. I said yes before they finished the sentence.

  • Technical interviews that actually tested relevant skills•
  • Research presentation to future teammates•
  • Time zone coordination that aged me 5 years•
  • The most stressful waiting period of my life•
2023 - 2024

Honours Year

UNSW Sydney

Sydney, Australia

The year of 'will this thesis ever end' and discovering that alignment research is what I want to do with my life. Spent more time in the library than my apartment.

  • •Thesis on alignment techniques for language models
  • •Late nights debugging PyTorch gradient issues
  • •Discovered the AI safety research community
  • •Realized I'd found my people
2022

The Alignment Rabbit Hole

Self-directed research

Sydney, Australia

Read 'Risks from Learned Optimization' and spent the next three months in an existential spiral. In a good way. Started pivoting everything toward safety research.

  • First alignment paper: immediate obsession•
  • Started following safety researchers on Twitter•
  • Attempted to explain AI risk to family (mixed results)•
  • Built my first red-teaming experiment•
2020 - 2022

Bachelor of Science

UNSW Sydney

Sydney, Australia

Started as a generic CS student, ended as someone who couldn't stop talking about AI. Classic Kensington campus experience: lectures, boba runs, debugging, late-night gaming, more debugging.

  • •First ML course blew my mind
  • •Built cursed projects at hackathons
  • •Discovered I'm not a systems programmer
  • •Night owl tendencies became permanent

What I Do Day-to-Day

āœ“Design and run experiments to test alignment techniques and safety interventions
āœ“Build model organisms of misalignment to study potential failure modes
āœ“Develop tooling for automated safety evaluations and red-teaming
āœ“Contribute to alignment assessments and RSP evaluations
āœ“Collaborate with interpretability, fine-tuning, and frontier red team members
āœ“Write code, run experiments, and contribute to research papers and blog posts

Sample Projects

Testing safety techniques by training models designed to subvert them
Running multi-agent RL experiments for techniques like AI Debate
Building automated jailbreak generation and testing pipelines
Creating evaluation datasets for safety-relevant model capabilities
Designing experiments for AI control in agentic scenarios
Contributing to alignment assessments for model system cards