Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the Role
As Technical Policy Lead for Government and Third-Party Safety Partnerships, you'll advance Anthropic's safety partnerships with governments and the broader third-party ecosystem. This is a hybrid role that is technically grounded but drives policy impact – strengthening state capacity, building a wider ecosystem for AI safety, and developing shared standards, in order to advance Anthropic’s mission to ensure the world safely makes the transition through transformative AI.
You'll serve as the critical bridge between Anthropic's Policy, Frontier Red Team, Alignment, Safeguards, and Legal teams on one side, and government safety institutes and third-party evaluators on the other. You'll manage complex multi-stakeholder engagements spanning pre-deployment evaluations, collaborative safety research, information-sharing agreements, and regulatory harmonization efforts. As AI capabilities advance, these partnerships will be essential for providing trusted external validation of our safety efforts.
Working within our International Policy team, you'll have significant autonomy to shape these partnerships. We're looking for someone who can be both a trusted partner to external counterparts and a thoughtful steward of Anthropic's resources and priorities - someone who brings clear judgment about where to focus our efforts for maximum impact. This role exists to ensure our government and third-party safety partnerships are strategically coherent, technically sound, and structured in ways that scale globally.
In this role you will:
- Identify collaboration opportunities that advance state capacity on AI safety, strengthen the broader safety ecosystem, and support Anthropic's policy objectives for ASL-4+ readiness and standards harmonization
- Develop frameworks for engaging with emerging government safety organizations, to promote a coherent global approach that leverages each country's unique capabilities
- Evaluate and prioritize inbound requests from external safety organizations, ensuring Anthropic's engagement is focused on high-impact collaborations
- Own strategic alignment as well as day-to-day coordination with US CAISI, UK AISI, and other safety organizations relevant to policy, serving as the primary operational point of contact
- Manage research and testing collaborations covering pre- and post-deployment evaluations, safeguards testing, and alignment research
- Coordinate between internal teams, in particular Policy, Frontier Red Team, Alignment, Safeguards, Legal, and Communications, to facilitate external safety partnerships
- Support Legal in negotiating and maintaining collaboration agreements and MOUs
- Manage technical access provisions in accordance with agreement terms
- Track external safety partnerships and provide timely updates to internal stakeholders
- Position external safety partnerships to provide decision-relevant information for internal safety teams and processes, such as safety cases, and to enable trusted independent evaluation and validation of our models and processes
- Represent Anthropic at external conferences and speaking engagements
You may be a good fit if you:
- Have direct experience with government or third-party safety organizations. You may have previously worked with or at US CAISI, UK AISI, METR, Apollo, or equivalent organizations in research, technical program management, or operations roles. You understand how these organizations operate from the inside.
- Have a technical background in AI safety-relevant areas. You should have sufficient technical experience to engage credibly with evaluation methodologies, understand alignment and safeguards concepts, and partner with internal technical staff to solve problems. You should also be familiar with safety standards such as our Responsible Scaling Policy and the EU Code of Practice.
- Excel at multi-stakeholder coordination. You should have a proven track record managing complex programs that span research, policy, legal, and communications teams with differing priorities.
- Are comfortable making judgment calls in ambiguous situations. You can synthesize incomplete information from multiple teams, identify blockers, and drive decisions when strategic direction is evolving.
- Exercise sound judgment on prioritization. You can focus effort where it matters most and manage stakeholder expectations with clarity and directness.
- Communicate effectively across contexts. You are equally comfortable briefing executives on partnership strategy, explaining technical evaluation plans to policy teams, and managing day-to-day coordination with external researchers and engineers.
- Understand government contexts. You are familiar with how government agencies operate, including considerations around information security, inter-agency coordination, and policy development processes.
This role can be based in San Francisco, Washington DC, or London.
The expected base compensation for this position is below. Our total compensation package for full-time employees includes equity, benefits, and may include incentive compensation.
$230,000 – $265,000 USD
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.