
Product Policy Manager, Bio, Chem, and Nuclear Risks

at Anthropic
Compensation
$200k - $240k per year
Type
Full Time
Experience
Manager
Benefits
  • 401k
  • Equity
  • Dental
  • Medical
  • Paid Parental Leave
  • Vision

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

As a Trust and Safety policy manager focused on bio, chemical, and nuclear security risks, you will help develop and manage policies for our products and services that address the potential misuse of AI for bio, chemical, and nuclear threats. Safety is core to our mission, and as a member of the team you'll help shape policy creation and development so that our users can safely interact with and build on top of our products in a harmless, helpful, and honest way, while mitigating the risks of such misuse.

Important context for this role: In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature.

In this role, you will:

  • Develop deep subject matter expertise in biosecurity, chemical threats, and nuclear security risks and the potential role of AI in such threats
  • Draft new policies that help govern the responsible use of our models for emerging capabilities and use cases, with a specific focus on preventing the misuse of our technology for bio, chemical and nuclear threats
  • Conduct regular reviews of existing policies to identify and address gaps and ambiguities related to biosecurity, chemical threats and nuclear security risks
  • Iterate on and help build out our comprehensive harm framework, incorporating potential bio, chemical and nuclear threats
  • Update our policies based on feedback from our enforcement team and edge cases that you will review
  • Educate and align internal stakeholders around our policies and our overall approach to product policy
  • Partner with internal and external researchers to better understand our product's limitations and risks related to bio, chemical and nuclear threats, and adapt our policies based on key findings
  • Collaborate with enforcement and detection teams and the Frontier Red Team to establish risk assessment guidelines for identifying and categorizing bio, chemical, and nuclear threats. Monitor and address policy gaps based on violations and edge cases
  • Keep up to date with new and existing AI policy norms and standards, particularly those related to bio, chemical and nuclear security, and use these to inform our decision-making on policy areas

This role will require strong communication, analytical, and problem-solving skills to balance safety and innovation through well-crafted and clearly articulated policies. If you are passionate about developing policies to guide new technology and have expertise in bio, chemical and/or nuclear security risks, we want to hear from you!

You might thrive in this role if you:

  • Have a passion for or interest in artificial intelligence and ensuring it is developed and deployed safely
  • Have awareness of and an interest in Trust and Safety policies
  • Have expertise in biosecurity, chemical threats and/or nuclear security risks and an understanding of how AI technology could potentially contribute to such threats
  • Have demonstrated expertise in stakeholder management, including identifying key stakeholders, building and maintaining strong relationships, and effectively communicating project goals and progress
  • Understand the challenges that exist in developing and implementing policies at scale
  • Love to think creatively about how to use technology in a way that is safe and beneficial, and ultimately furthers the goal of advancing safe AI systems while mitigating risks related to bio, chemical and nuclear threats