According to Wired, the company presented the study supporting the technique this week and will share additional details in a future publication.
The core idea is to push the more powerful AI model to be more transparent about its reasoning by engaging it in a dialogue with another model. That, in turn, could make it easier for people to understand how these models solve problems.
To test the method, OpenAI had AI models solve simple math problems: the more capable model explained how it worked through the problems while a second model listened for mistakes in its answers.

The superalignment team at OpenAI, which Leike and Sutskever co-led, was tasked with aligning artificial intelligence systems with human goals. Gretchen Krueger, an OpenAI policy researcher, left the company a week later, citing “overlapping concerns.”
Their departures raised concerns about OpenAI’s commitment to safety as it advances its technology. Tesla CEO Elon Musk was one of several figures who signed a letter in March of last year expressing worries about the rapid pace of AI development. More recently, Stuart Russell, an AI researcher and professor at the University of California, Berkeley, called OpenAI’s plans to develop artificial general intelligence without completing safety validation “completely unacceptable.” Business Insider reached out to OpenAI for comment but did not immediately receive a reply.