
Applications Open for Anthropic Fellows Program 2026: Paid AI Safety Research Opportunity

Anthropic has officially opened applications for its prestigious Anthropic Fellows Program, inviting engineers, researchers, and technical talent to join full-time empirical AI safety research cohorts starting in May and July 2026.

The four-month fellowship provides generous funding, direct mentorship from Anthropic researchers, and the opportunity to work on critical AI safety challenges.

Applications for the July 20, 2026 cohort must be submitted by April 26, 2026. Later cohorts will be reviewed on a rolling basis.

Program Highlights

The Anthropic Fellows Program is designed to accelerate progress in AI safety by supporting promising talent — regardless of prior experience in the field. Fellows collaborate closely with Anthropic researchers on high-impact empirical projects, often using open-source models and public APIs.

Key research areas include:

  • Scalable oversight
  • Adversarial robustness and AI control
  • Model organisms of misalignment
  • Mechanistic interpretability
  • AI security
  • Model welfare

Participants are encouraged to produce public outputs, such as research papers. Over 80% of previous fellows have successfully published their work.

Generous Financial Support (Stipend & Research Funding)

The program offers substantial financial backing, comparable to a competitive research scholarship:

  • Weekly stipend: $3,850 USD (or equivalent: £2,310 GBP / $4,300 CAD), plus country-specific benefits.
  • Research funding: Approximately $15,000 per month for compute resources and other project expenses.
  • Total support per fellow often exceeds $75,000 for the four-month period.

Fellows also receive close mentorship and access to shared workspaces in Berkeley, California, or London, UK, with remote options available for eligible candidates in the US, UK, or Canada.

Extensions may be possible in exceptional cases.

Who Should Apply?

Anthropic emphasizes execution ability and research potential over formal credentials, prior publications, or AI safety experience. Candidates from diverse quantitative backgrounds — including physics, mathematics, computer science, cybersecurity, and engineering — are strongly encouraged to apply.

The program is open to individuals with work authorization in the United States, United Kingdom, or Canada.

Note: This is a hands-on research fellowship, not a traditional academic program. It does not include formal coursework or confer a degree, and it is not affiliated with any external university or institution; fellows work directly with Anthropic researchers.

How to Apply

Interested candidates can apply through Anthropic's careers portal. Because applications are reviewed on a rolling basis, early submission is recommended.

  • Application Deadline for July 2026 Cohort: April 26, 2026
  • Cohort Start Dates: May 2026 and July 20, 2026

For full details and to submit your application, visit the official Anthropic job posting or the program page on alignment.anthropic.com.

This fellowship represents one of the most generously funded opportunities in the AI safety field, offering both financial support and direct exposure to frontier research at Anthropic.

Aspiring researchers passionate about building safer AI systems are urged to apply before the deadline.

What's your view?