OpenAI Launches Safety Fellowship to Tackle AI Alignment Research
Caroline Bishop
Apr 08, 2026 17:45
OpenAI announces new fellowship program for external researchers focused on AI safety and alignment, running September 2026 through February 2027.
OpenAI is opening its doors to outside researchers with a new Safety Fellowship program aimed at advancing independent work on AI alignment and safety challenges. Applications are now open, with a May 3 deadline. The five-month program runs from September 14, 2026 through February 5, 2027, targeting researchers, engineers, and practitioners who want to tackle safety questions affecting both current and future AI systems. OpenAI has partnered with Constellation to provide workspace in Berkeley, though remote participation is an option.

What OpenAI Wants

The company outlined priority research areas including safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains. It is specifically seeking work that is “empirically grounded, technically strong, and relevant to the broader research community.” Fellows won’t get internal system access, a notable limitation, but they will receive API credits, compute support, a monthly stipend, and mentorship from OpenAI staff. The expectation is clear: produce something tangible by the program’s end, whether that’s a research paper, benchmark, or dataset.

Who Should Apply

OpenAI is casting a wide net on backgrounds. Computer science is the obvious fit, but applicants from social science, cybersecurity, privacy, and human-computer interaction fields are also welcome. The company explicitly stated they “prioritize research ability, technical judgment, and execution over specific credentials.” Letters of reference are required. Successful applicants will be notified by July 25.

The Bigger Picture

This fellowship arrives as AI safety concerns have moved from academic debate to mainstream regulatory discussion. OpenAI has faced criticism over the years for allegedly deprioritizing safety research in favor of capability development, a tension that led to high-profile departures from its safety team. The program…