OpenAI, still haunted by its chaotic past, is trying to grow up

OpenAI, the often problematic standard-bearer of the technology industry’s push into artificial intelligence, is making substantial changes to its management team, and even to how it is organized, as it courts investments from some of the world’s richest companies.

In recent months, OpenAI, the creator of the online chatbot ChatGPT, has hired a who’s who of technology executives and experts in disinformation and AI safety. It has also added seven members to its board of directors, including a four-star Army general who ran the National Security Agency, while renewing efforts to ensure that its artificial intelligence technologies do not cause serious harm.

OpenAI is also in talks with investors including Microsoft, Apple, Nvidia and the investment firm Thrive Capital over a deal that would value it at $100 billion. And the company is considering changes to its corporate structure that would make it easier to attract investors.

The San Francisco startup, after years of public conflict between management and some of its top researchers, is trying to look more like a sensible company ready to lead the tech industry’s march toward artificial intelligence. It is also trying to put last year’s high-profile fight over the leadership of Sam Altman, its CEO, behind it.

Yet interviews with more than 20 current and former OpenAI employees and board members show that the transition has been rocky. Early employees keep leaving, even as new workers and new executives come in. And the rapid growth hasn’t resolved a fundamental question about what OpenAI is supposed to be: Is it a cutting-edge AI lab built for the benefit of humanity, or a budding industrial giant dedicated to profit?

OpenAI now has more than 1,700 employees, with 80% of them joining after ChatGPT launched in November 2022. Altman and other leaders have led executive recruitment, while new president Bret Taylor, a former Facebook executive, has overseen board expansion.

“While startups must naturally evolve and adapt as their impact grows, we recognize that OpenAI is navigating this transformation at an unprecedented pace,” Taylor said in an emailed statement to The New York Times. “Our board and the dedicated OpenAI team remain focused on safely building AI that can solve hard problems for everyone.”

Several of the new executives held prominent roles at other tech companies. Sarah Friar, OpenAI’s new chief financial officer, was the chief executive of Nextdoor. Kevin Weil, OpenAI’s new chief product officer, was the senior vice president of product at Twitter. Ben Nimmo led Facebook’s battle against deceptive social media campaigns. Joaquín Candela oversaw Facebook’s efforts to reduce the risks of artificial intelligence. Now, the two men have similar roles at OpenAI.

OpenAI also told employees on Friday that Chris Lehane, a Clinton White House veteran who played a major role at Airbnb and joined OpenAI this year, would be its head of global policy.

But of the 13 people who helped found OpenAI in late 2015 with the mission of creating artificial general intelligence (AGI) — a machine that can do everything the human brain can do — only three remain. One of them, Greg Brockman, the company’s president, has taken a leave of absence until the end of the year, citing the need to take some time off after nearly a decade on the job.

“It’s pretty common to see these kinds of additions (and subtractions, too), but we’re in a very bright light,” said Jason Kwon, OpenAI’s chief strategy officer. “Everything is magnified.”

Since its beginnings as a nonprofit research lab, OpenAI has faced arguments over its goals. In 2018, Elon Musk, its main backer, left following a dispute with its other founders. In early 2022, a group of key researchers, concerned that commercial forces were pushing OpenAI’s technologies into the market before proper guardrails were in place, left to form a rival AI company, Anthropic.

Motivated by similar concerns, OpenAI’s board suddenly fired Altman late last year. He was reinstated five days later.

OpenAI has parted ways with many of the employees who questioned Altman, and with others who were less interested in building a regular tech company than in doing advanced research. Echoing other employees’ complaints, one researcher resigned over OpenAI’s practice of clawing back former employees’ vested stock (which could be worth millions of dollars) if they spoke out publicly against the company. OpenAI has since reversed that practice.

OpenAI is driven by two forces that are not always compatible.

For one thing, the company is driven by money — lots of it. Annual revenue has already surpassed $2 billion, according to a person familiar with its revenue. ChatGPT has more than 200 million users each week, double the number it had nine months ago. It’s not clear how much the company spends each year, though one estimate puts the figure at $7 billion. Microsoft, already OpenAI’s largest investor, has committed $13 billion to the artificial intelligence company.

But OpenAI is considering making big changes to its structure as it seeks more investment. Right now, the board of directors of the original OpenAI, formed as a nonprofit, controls the organization, with no official involvement from investors. As part of its new funding discussions, OpenAI is considering changes that would make its structure more attractive to investors, according to three people familiar with the negotiations. But it has not yet settled on a new structure.

OpenAI is also driven by technologies that worry many AI researchers, including some OpenAI employees. They argue that these technologies could help spread disinformation, fuel cyberattacks or even destroy humanity. That tension led to a blowup in November, when four board members, including chief scientist and co-founder Ilya Sutskever, ousted Altman.

After Altman regained control, a cloud hung over the company: Sutskever had not returned to work.

(The Times sued OpenAI and Microsoft in December for copyright infringement of news content related to AI systems.)

Together with another researcher, Jan Leike, Sutskever created OpenAI’s “Superalignment” team, which explored ways to ensure its future technologies would not cause harm.

In May, Sutskever left OpenAI and founded his own AI company. Within minutes, Leike left as well; he later joined Anthropic. “Safety culture and processes have taken a backseat to shiny products,” he said. Sutskever and Leike did not respond to requests for comment.

Others followed them to the door.

“I still fear that OpenAI and other AI companies don’t have an adequate plan for managing the risks of the human-level and beyond-human-level AI systems they’re raising billions of dollars to build,” said William Saunders, a researcher who recently left the company.

When Sutskever and Leike left, OpenAI handed their work to another co-founder, John Schulman. While the Superalignment team had focused on harms that might occur years down the road, the new team explored both short- and long-term risks.

At the same time, OpenAI hired Friar as chief financial officer (she previously held the same position at Square) and Weil as chief product officer. Friar and Weil did not respond to requests for comment.

Some former executives, who spoke on condition of anonymity because they had signed nondisclosure agreements, expressed skepticism about whether OpenAI’s troubled past was behind it. Three of them pointed to Aleksander Madry, who once led OpenAI’s Preparedness team, which explored catastrophic AI risks. After a disagreement over how he and his team would fit into the larger organization, Madry moved on to a different research project.

Some departing employees were asked to sign legal documents stating they would lose their OpenAI shares if they spoke out against the company. This raised new concerns among staff, even after the company reversed the practice.

In early June, one researcher, Todor Markov, posted a message on the company’s internal messaging system announcing his resignation over the issue, according to a copy of the message seen by The Times.

He said OpenAI’s management had repeatedly misled employees on the issue. That’s why, he argued, the company’s management could not be trusted to develop artificial general intelligence — an echo of what the company’s board had said when it fired Altman.

“You often talk about our responsibility to develop AGI safely and to distribute the benefits widely,” he wrote. “How do you expect to be entrusted with that responsibility?”

Days later, OpenAI announced that Paul M. Nakasone, a retired U.S. Army general, had joined its board of directors. On a recent afternoon, he was asked what he thought of the environment he had entered, given that he was new to the field of AI.

“Are you new to AI? I’m not new to this,” he said in a telephone interview. “I ran the NSA and I’ve been dealing with this issue for years.”

Last month, Schulman, the co-founder who helped oversee OpenAI’s new safety efforts, also resigned from the company, saying he wanted to return to “hands-on” technical work. He, too, joined Anthropic.

“It’s very difficult to grow a company. You have to make trade-off decisions all the time, and some people may not like those decisions,” Kwon said. “Things are much more complicated.”
