About The Role
What if your curiosity and critical thinking could make AI safer for millions of people? We're looking for AI Safety Analysts to put cutting-edge AI systems to the test: probing for unsafe outputs, pushing boundaries, and uncovering behavior that shouldn't exist.
This is your chance to work on one of the most important challenges in technology today. No cybersecurity background or AI expertise is required. Just a sharp, questioning mind and the drive to find what others miss.
- Organization: Alignerr
- Type: Hourly Contract
- Location: Remote
- Commitment: 10–40 hours/week
What You'll Do
- Probe AI systems with challenging, adversarial, and edge-case inputs to uncover unsafe or unexpected behavior
- Identify harmful, inappropriate, or policy-violating AI outputs across a range of scenarios
- Document safety issues clearly and precisely with supporting examples
- Rate AI responses against structured safety and helpfulness rubrics
- Follow red-teaming guides and testing protocols to ensure consistent, high-quality evaluations
- Work independently and asynchronously on your own schedule
Who You Are
- A natural critical thinker who enjoys questioning assumptions and exploring what could go wrong
- Comfortable venturing into edge cases, unusual scenarios, and grey areas
- A strong written communicator who can describe problems precisely and concisely
- Curious about AI safety, ethics, and responsible technology development
- Detail-oriented and consistent in your approach
- No cybersecurity, AI, or technical background required
Nice to Have
- Experience in research, journalism, policy, ethics, or quality assurance
- Familiarity with AI tools or large language models as an end user
- Background in psychology, philosophy, or social sciences — great for thinking about harm scenarios
Why Join Us
- Work on real, high-impact AI safety projects alongside leading research labs
- Fully remote and flexible — work when and where it suits you
- Freelance autonomy with the structure of meaningful, mission-driven work
- Contribute to AI development that directly shapes how safely these systems behave in the real world
- Potential for ongoing work and contract extension as new projects launch