Hacker stole OpenAI secrets in 2023, raising questions about foreign actors hacking AI companies in the future
Photo Illustration by Klaudia Radecka/NurPhoto via Getty Images


OpenAI did not publicly reveal that it had been hacked in 2023 because no customer data had been stolen.

The New York Times recently reported that OpenAI experienced a "major security incident," raising concerns about the security and safety of AI companies going forward. The development also suggests that AI companies could become prime targets for infiltration by foreign actors.

Former OpenAI employee Leopold Aschenbrenner alluded to the incident in a recent podcast episode. While he referred to it as a "major security incident," other unnamed company sources told the Times that the hacker gained access only to an employee discussion forum. The nature of that forum is unknown.


TechCrunch reported that no security breach should be taken lightly, but that what the hacker obtained from OpenAI was minor compared to what could have been accessed, such as models in progress, internal systems, and secret roadmaps.

The hack occurred in April 2023, but executives at the company decided not to disclose the news publicly because no information about customers or partners had been accessed during the hack. However, it appears that these executives did not consider that such a hack could represent a national security risk, according to the Times.

Furthermore, the company operated under the assumption that the hacker was an individual and did not have any ties to a foreign government. The FBI was not informed of the incident at the time, according to the report.

Aschenbrenner said he was fired by OpenAI this spring for leaking confidential information outside the company, though he argued that his firing was politically motivated. Among the claims he made during the podcast episode was that OpenAI's security infrastructure was not strong enough to keep out foreign actors seeking access to the company's secrets.

Despite Aschenbrenner's accusations, OpenAI spokeswoman Liz Bourgeois said "[w]e appreciate the concerns Leopold raised while at OpenAI, and this did not lead to his separation."

"While we share his commitment to building safe A.G.I., we disagree with many of the claims he has since made about our work. This includes his characterizations of our security, notably this incident, which we addressed and shared with our board before he joined the company," she added.

Though foreign actors did not appear to gain access to the company's most sensitive information, the incident could represent risks moving forward. As AI continues to progress rapidly, it is possible that governments unfriendly to the U.S. could try to gather secrets through organized hacking operations.
