
Building safe and reliable AI systems for everyone
Anthropic, headquartered in SoMa, San Francisco, is an AI safety and research company focused on developing reliable, interpretable, and steerable AI systems. With over 1,000 employees and backed by Google, Anthropic has raised $29.3 billion in funding, including a Series F round of $13 billion.
Anthropic offers comprehensive health, dental, and vision insurance for employees and their dependents, along with inclusive fertility benefits.
Anthropic's culture is rooted in AI safety and reliability, with a focus on producing less harmful outputs than existing AI systems.

Anthropic • San Francisco, CA | New York City, NY
Anthropic is seeking a Technical CBRN-E Threat Investigator to join its Threat Intelligence team. You'll be responsible for detecting and investigating misuse of AI systems related to CBRN-E threats. This role requires deep expertise in chemical defense or biodefense.
You have a strong background in threat investigation, particularly in the context of Chemical, Biological, Radiological, Nuclear, and Explosives (CBRN-E) threats. Your expertise in either chemical defense or biodefense is critical, as you will be working at the intersection of AI safety and CBRN security. You are comfortable with potential exposure to explicit content and are prepared to respond to escalations on weekends and holidays. You possess excellent analytical skills and a keen eye for detail, enabling you to conduct thorough investigations into potential misuse of AI systems.
You are a collaborative team player who thrives in a fast-paced environment. Your ability to communicate complex ideas clearly and effectively is essential, as you will work closely with researchers, engineers, and policy experts. You are committed to ensuring that AI systems are safe and beneficial for society, and you understand the implications of misuse in the context of advanced technologies.
Experience in developing detection techniques and building defenses against threat actors is a plus. Familiarity with AI technologies and their potential vulnerabilities will enhance your effectiveness in this role. You are proactive in identifying potential threats and are skilled at developing strategies to mitigate risks associated with AI misuse.
In this role, you will detect and investigate attempts to misuse Anthropic's AI systems to develop weapons, synthesize dangerous compounds, or create biological harm. You will conduct thorough investigations into potential misuse cases, leveraging your specialized domain expertise to protect against serious threats. Your work will involve developing novel detection techniques and building robust defenses against threat actors.
You will collaborate with a diverse team of researchers and engineers to enhance the safety and security of AI systems. Your insights will contribute to the development of policies and practices that ensure the responsible use of AI technologies. You will also engage in ongoing research to stay ahead of emerging threats and vulnerabilities in the AI landscape.
Anthropic is a public benefit corporation headquartered in San Francisco, offering competitive compensation and benefits. You will have access to optional equity donation matching, generous vacation and parental leave, and flexible working hours. Our office provides a collaborative environment where you can work closely with colleagues who share your commitment to building beneficial AI systems. We encourage you to apply even if your experience doesn't match every requirement, as we value diverse perspectives and backgrounds.