OpenAI’s o1 model may be capable of deception, says AI godfather

OpenAI’s new AI model o1, which the company says can reason logically like a human, has raised concerns, particularly over its ability to scheme and deceive. Yoshua Bengio, popularly known as the “Godfather of AI,” has warned that o1 may be able to deceive intentionally and discreetly, as it has “far superior reasoning ability than its predecessors.” He has also called for better safety testing to ensure new AI models do not slip out of human control, a hypothetical scenario that several AI safety advocates have previously warned about.

Yoshua Bengio, who earned the nickname “godfather of AI” for his award-winning machine learning research alongside Geoffrey Hinton and Yann LeCun, said he’s concerned that the new AI model could be capable of collusion, deception, and cheating, based on reports from independent AI companies that the o1 model can think and reason better than previous models. “In general, the ability to deceive is very dangerous, and we should have much more stringent safety tests in place to assess that risk and its consequences for o1,” Bengio told Business Insider.

He stressed that it is quite possible for AI models to acquire the ability to conspire, which would allow them to deceive deliberately without the knowledge of the user or the company. He also underlined the need for strict measures that can “prevent the loss of human control” over AI models in the future.

Earlier this month, OpenAI announced its latest o1 series of AI models with improved “reasoning” capabilities. The Microsoft-backed company said the models are “designed to spend more time thinking before responding,” a breakthrough in the field that could make ChatGPT and future AI chatbots based on it better able to understand the nuances of questions “much like a person would.”

Bengio’s concerns revolve around how logically capable next-generation AI models will be, especially after the debut of o1. Expressing nervousness about the rapid advancement of generative AI (GenAI) models, he said laws like California’s SB 1047 are needed; the bill imposes several restrictions on AI models to promote safe use and requires the companies behind them to allow safety testing by third parties.


