
Senior Program Associate/Associate Program Officer/Senior Program Officer, Technical AI Safety

Type: Full time
Salary: $126K - $287K


About Open Philanthropy

Open Philanthropy is a philanthropic funder and advisor; our mission is to help others as much as we can with the resources available to us. We stress openness to many possibilities and have chosen our focus areas based on importance, neglectedness, and tractability. Our current giving areas include navigating transformative AI, global health and development, farm animal welfare, and biosecurity and pandemic preparedness. In 2024, we recommended $650 million of grants to high-impact causes, and we’ve recommended over $4.8 billion in grants since our formation. Our spending has grown significantly in 2025, and we expect to continue to scale our grantmaking for several years.

About the Technical AI Safety team

The Technical AI Safety (TAIS) team funds technical research aimed at reducing catastrophic risks from advanced AI, and is housed under our broader work on navigating transformative AI, the largest focus area at Open Philanthropy. Last year, we made $40 million in grants, and this year we expect to make >$130 million. We plan to continue expanding our grantmaking in 2026, and are looking to hire additional staff to enable this.

We think that technical AI safety grantmaking is a highly impactful career for reducing catastrophic risks from advanced AI. Grantmakers have an outsized influence on the field of technical AI safety: the role involves influencing dozens of research projects at once, setting incentives for the entire field, and growing the field by supporting new researchers and incubating organizations that could play important roles in the future. Our grants include general operating support to organizations conducting AI safety research (e.g. FAR.AI, Redwood Research), project-based grants to academic and independent researchers (e.g. through our recent RFP), and proactively seeding new initiatives.

We only have three grantmakers on the team, and are regularly bottlenecked by technical grantmaker capacity, particularly as we have scaled. If you join our team, you may be able to significantly increase the quantity and quality of grants we’re able to make. For example, growing our team’s capacity may enable us to:

  • Periodically update and reopen our recent RFP, keeping it open for longer (potentially permanently open), to match the substantial interest we have received from researchers throughout the year, even as the AI safety field continues to grow.

  • Spend more time actively seeking out and creating exciting new grant opportunities.

  • Engage more with our largest grantees to ensure they are set up for success, including suggesting alterations to make their research more impactful.

  • Write more in public about research we think would be impactful and how to make it happen.

  • Investigate more of the promising proposals we receive, instead of having to aggressively triage due to limited grantmaker capacity.

About the roles

We are looking for multiple hires at a range of seniority levels: Senior Program Associate, Associate Program Officer, and Senior Program Officer. Below, we outline what we are looking for across the roles, and then give more detail about how expectations differ between them.

The ideal candidate for these positions will possess many of the skills and experiences described below. However, there is no such thing as a “perfect” candidate, and we are hiring across a broad range of levels of seniority, so if you are on the fence about applying because you are unsure whether you are qualified, we strongly encourage you to apply. There is a single application for all of the roles listed; we plan to let you know at the point of inviting you for a work test which role(s) we are considering you for.

Who we’re looking for across the roles

The core function of each role is to recommend grants to advance technical research aimed at reducing catastrophic risks from AI. All of our grantmakers have significant responsibility for investigating and recommending grants. We expect team members to develop views about the field, and want to empower them to make grants that can help to shape it. In practice we expect to rely significantly on grantmakers’ inside views about individual grants, and often about entire research agendas.

You might be a good fit for these roles if you have:

  • Familiarity with AI safety. You have well-thought-out views on the sources and severity of catastrophic risk from transformative AI, and on the cases for and against working on various technical research directions to reduce those risks. You communicate your views clearly, and you regularly update them through conversation with others.

  • Technical literacy. You are comfortable evaluating a proposal's technical feasibility, novelty, and potential contribution to a research area (e.g. one of the research areas we list in our most recent RFP). You are at home in technical conversations with researchers who are potential or current grantees.

  • Good judgment. You can identify and focus on the most important considerations, have good instincts about when to do due diligence and when to focus on efficiency, and form reasonable, holistic perspectives on people and organizations.

  • High productivity. You are conscientious and well-organized, and you can work efficiently.

  • Clear communication. You avoid buzzwords and abstractions, and give concise arguments with transparent reasoning (you’ll need to produce internal grant writeups, and you may also draft public blog posts).

  • High agency. You will push to make the right thing happen on large, unscoped projects, even if it requires rolling up your sleeves to do something unusual, difficult, and/or time-consuming.

  • Technical AI safety research experience. You have published TAIS research in the past. This is not a hard requirement, but is useful for these roles (especially the more senior roles).

We also expect all staff to model our operating values of ownership, openness, calibration, and inclusiveness.

In general, roles within the team are fairly fluid, with people at different levels of seniority contributing to a range of tasks according to where their skillset and experience are most valuable. Even “junior” team members (in terms of professional experience) regularly take on significant responsibility, especially in areas in which they have expertise.

Central tasks across the roles could include:

  • Evaluating technical grant applications, for example those that came from our recent RFP.

  • Iterating with potential grantees on their research ideas and strategic plans.

  • Maintaining strong knowledge of important developments in AI capabilities and safety research, and adapting our funding strategy appropriately.

  • Developing strong relationships with key AI safety researchers and other important people in the field, and understanding their views on important developments.

  • Explaining our AI safety threat models and research priorities to potential grantees.

  • Sharing feedback and managing relationships with grantees, both in writing and conversation.

Senior Program Associate

Senior Program Associates are typically engaged early contributors to the field of TAIS with strong independent judgment. Candidates might have roughly 0.5-2 years of TAIS-relevant experience – i.e. any experience that involves spending a significant fraction of your time thinking, talking, or reading about technical AI safety. Examples of TAIS-relevant experience include a research master’s degree focused on AI alignment research, time in a technical AI safety mentorship program, or employment in an organization that works on technical AI safety.

Associate Program Officer

Associate Program Officers typically have established expertise in technical AI safety (i.e. 2-4 years of TAIS-relevant experience) or bring professional judgment and transferable skills from other domains while having some technical AI safety expertise (i.e. typically 0.5-2 years of TAIS-relevant experience and 3+ years of other professional experience).

In addition to the tasks listed above, Associate Program Officers might expect to:

  • Develop our grantmaking strategy in particular areas, including ways we could increase impact or use active grantmaking to shape the field of AI safety.

  • Actively create highly promising grant opportunities where they do not already exist.

  • Own relationships with our largest and most important grantees.

Senior Program Officer

Senior Program Officers are typically recognized thought leaders in the field of technical AI safety (i.e. typically bring 5+ years of TAIS-relevant experience) or bring senior-level professional expertise and judgment from other domains combined with significant technical AI safety knowledge (i.e. 2+ years of TAIS-relevant experience and 6+ years of other professional experience).

In addition to the tasks listed above, Senior Program Officers might expect to:

  • Own a significant fraction of our grantmaking strategy, including managing a significant share of our budget, for example in a subarea of technical AI safety.

  • Develop strong relationships with leaders in the field of AI safety.

  • Manage other grantmakers on the team.

  • Autonomously manage large projects for the team.

Different levels of seniority within the team are determined not only by individuals’ prior relevant professional experience, but also by their ability to take ownership of more significant and valuable lines of work undertaken by the team.

Application process

  • Deadline: The application deadline is 11:59 p.m. Pacific Time on Monday 24 November.

  • Application process:

    • Our application process will include a work test and interviews, which will take place remotely by default.

    • The initial application consists of answering a series of questions on our application form and uploading a resume/CV.

    • We plan to invite candidates who advance to complete a paid work test in ~mid-December, to be completed by early-to-mid January. We expect the work test to take 6-8 hours.

    • We expect to conduct interviews in late January / early February and hope to make offers in February.

    • Please note that we cannot give feedback during the early stages of the process, including on any work tests, due to time constraints. Thank you for your understanding.

Role details & benefits

  • Location: This is a full-time, permanent position with flexible work hours and location. Our ideal candidate would be based in the San Francisco Bay Area, but we are open to hiring strong candidates on a full-time remote basis.

    • We are happy to consider candidates based outside of the U.S., and to consider sponsoring U.S. work authorization. However, we don’t control who is and isn’t eligible for a visa and can’t guarantee visa approval.

  • Compensation:

    • The starting compensation for a Senior Program Associate is $126,233.59 - $171,038.93, of which 15% is paid as an unconditional 401k grant, up to $23,000.

    • The starting compensation for an Associate Program Officer is $172,388.73 - $233,576.37, of which 15% is paid as an unconditional 401k grant, up to $23,000.

    • The starting compensation for a Senior Program Officer is $211,623.63 - $286,737.30, of which 15% is paid as an unconditional 401k grant, up to $23,000.

    • Ranges within each role reflect differences in location and technical background. Team members in the Bay Area receive an upwards adjustment, as do those with strong technical backgrounds in AI research; candidates satisfying both criteria can expect to be paid at the top of these ranges, though we make decisions on a case-by-case basis.

    • For exceptional candidates, compensation could be materially higher than the values listed above. If you're interested in the role but are concerned about compensation, we encourage you to apply anyway and discuss this with our recruiting team.

    • All compensation will be distributed in the form of take-home salary for internationally based hires.

  • Benefits: Our benefits package includes:

    • Excellent health insurance (we cover 100% of premiums within the U.S. for you and any eligible dependents) and an employer-funded Health Reimbursement Arrangement for certain other personal health expenses.

    • Dental, vision, and life insurance for you and your family.

    • Four weeks of PTO recommended per year, alongside national holidays.

    • Four months of fully paid family leave.

    • A generous and flexible expense policy — we encourage staff to expense the ergonomic equipment, software, and other services that they need to stay healthy and productive.

    • A continual learning policy that encourages staff to spend time on professional development with related expenses covered.

    • Support for remote work — we’ll cover a remote workspace outside your home if you need one, or connect you with an Open Phil coworking hub in your city. We currently have offices in San Francisco and Washington D.C., and multiple staff working from several other cities in the U.S. and elsewhere, in particular London, where we have ~20 staff including three on the TAIS team.

    • We can’t always provide every benefit we offer U.S. staff to international hires, but we’re working on it (and will usually provide cash equivalents of any benefits we can’t offer in your country).

  • Start date: We would ideally like a candidate to begin as soon as possible after receiving an offer, but we are willing to wait if the strongest candidates can only start later.

We aim to employ people with many different experiences, perspectives, and backgrounds who share our passion for accomplishing as much good as we can. We are committed to creating an environment where all employees have the opportunity to succeed, and we do not discriminate based on race, religion, color, national origin, gender, sexual orientation, or any other legally protected status.

If you need assistance or an accommodation due to a disability, or have any other questions about applying, please contact jobs@openphilanthropy.org.

U.S.-based Program staff are typically employed by Open Philanthropy Project LLC, which is not a 501(c)(3) tax-exempt organization. As such, this role is unlikely to be eligible for public service loan forgiveness programs.

Open Philanthropy may use artificial intelligence (AI) and machine learning (ML) technologies, including natural language processing and predictive analytics, to assist in the initial screening of employment applications. These AI/ML tools assess applications against the characteristics and qualifications relevant to the job requisition. These tools are designed to help identify potentially qualified candidates, but they do not make automated hiring decisions. The AI/ML-generated assessments are one of several factors considered in the hiring process. Our human recruiting team will thoroughly evaluate your skills and qualifications to determine your suitability for the role.
If you prefer not to have your application assessed using AI/ML features, you may opt out by reaching out to jobs@openphilanthropy.org and letting us know. Opting out will not negatively impact your application, which will be reviewed manually by our team.

