OpenAI's new team will tackle AI risks amid growing concerns of misuse
The post OpenAI's new team will tackle AI risks amid growing concerns of misuse appeared on BitcoinEthereumNews.com.
OpenAI has confirmed plans to launch a new team dedicated to countering the risks posed by artificial intelligence (AI) systems. The "Preparedness" unit will focus on curtailing the downsides associated with "frontier" AI models. In a company blog post, OpenAI said it believes that in the coming years, future models will outperform the capabilities of present systems, opening a new can of worms for society.

Led by Aleksander Madry, OpenAI's new Preparedness team will develop guardrails against systemic risks spanning individualized persuasion, cybersecurity, autonomous replication and adaptation, and chemical, biological, radiological, and nuclear (CBRN) threats.

"We take seriously the full spectrum of safety risks related to AI, from the systems we have today to the furthest reaches of superintelligence," OpenAI said. "To support the safety of highly-capable AI systems, we are developing our approach to catastrophic risk preparedness."

To achieve its goal, the team will tackle several burning questions, including how frontier AI systems could be misused if stolen by bad actors, and how to roll out a "robust framework for monitoring." OpenAI says the new unit will begin internal red teaming of frontier models while conducting capability assessments and evaluations.

Going forward, the Preparedness team will create a Risk-Informed Development Policy (RDP) to be updated continually in line with industry requirements. The RDP will build upon OpenAI's previous risk mitigation processes to create a governance structure that promotes "accountability and oversight" throughout the development process.

OpenAI says it is currently recruiting to fill the ranks of its new team, seeking individuals with diverse technical backgrounds. Alongside the job listings, the AI developer has launched a Preparedness Challenge to identify less obvious areas of concern around AI misuse.
“We will offer $25,000 in API credits to up to 10 top submissions, publish novel ideas and entries, and look…
Filed under: News - November 5, 2023, 10:12 am