
OpenAI

OpenAI is an American artificial intelligence (AI) research organization founded in December 2015 and headquartered in San Francisco. Its mission is to develop "safe and beneficial" artificial general intelligence, which it defines as "highly autonomous systems that outperform humans at most economically valuable work".[4] As a leading organization in the ongoing AI boom,[5] OpenAI has developed several large language models and advanced image generation models, and has previously released open-source models.[6][7] Its release of ChatGPT has been credited with catalyzing widespread interest in AI.


Founded: December 11, 2015
Headquarters: San Francisco, California, U.S.[1]
Revenue: US$28 million[2] (2022)
Net income: US$−540 million[2] (2022)
Employees: c. 1,200 (2024)[3]

The organization consists of the non-profit OpenAI, Inc.[8] registered in Delaware and its for-profit subsidiary OpenAI Global, LLC.[9] Microsoft owns roughly 49% of OpenAI's equity, having invested US$13 billion.[10] It also provides computing resources to OpenAI through its Microsoft Azure cloud platform.[11]


In 2023 and 2024, OpenAI faced multiple lawsuits for alleged copyright infringement brought by authors and media companies whose work was used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, and then reinstated him five days later after negotiations resulting in a reconstituted board. OpenAI's board has since added former US Treasury Secretary Lawrence Summers, former National Security Agency head Paul Nakasone, and a non-voting seat for Microsoft.

History[edit]

2015–2018: Non-profit beginnings[edit]

In December 2015, OpenAI was founded by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk as the co-chairs. A total of $1 billion was pledged by Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), Infosys, and YC Research.[12][13] However, only about $130 million had actually been collected by 2019.[9] According to an investigation led by TechCrunch, Musk was its largest donor, while YC Research did not contribute anything at all.[14] The organization stated it would "freely collaborate" with other institutions and researchers by making its patents and research open to the public.[15][16] OpenAI was headquartered at the Pioneer Building in the Mission District, San Francisco.[17][18]


According to Wired, Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of the "best researchers in the field".[19] Brockman was able to hire nine of them as the first employees in December 2015.[19] In 2016, OpenAI paid corporate-level (rather than nonprofit-level) salaries, but did not pay AI researchers salaries comparable to those of Facebook or Google.[19]


Microsoft's Peter Lee stated that the cost of a top AI researcher exceeds the cost of a top NFL quarterback prospect.[19] OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission."[19] Brockman stated that "the best thing that I could imagine doing was moving humanity closer to building real AI in a safe way."[19] OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead.[19]


In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research.[20] Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models with the capability of reducing processing time from six days to two hours.[21][22] In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications.[23][24][25][26]


In 2017, OpenAI spent $7.9 million, or a quarter of its functional expenses, on cloud computing alone.[27] In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, simply training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks.


In 2018, Musk resigned from his Board of Directors seat, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars.[28] Sam Altman claims that Musk believed OpenAI had fallen behind other players like Google, and that Musk proposed to take over OpenAI himself, which the board rejected. Musk subsequently left OpenAI but claimed to remain a donor, yet made no donations after his departure.[29]


In February 2019, GPT-2 was announced, which gained attention for its ability to generate human-like text.[30]

2019: Transition from non-profit[edit]

In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with the profit capped at 100 times any investment.[31] According to OpenAI, the capped-profit model allows OpenAI Global, LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company.[32] Many top researchers work for Google Brain, DeepMind, or Facebook, which offer stock options that a nonprofit would be unable to offer.[33] Before the transition, public disclosure of the compensation of top employees at OpenAI was legally required.[34]


The company then distributed equity to its employees and partnered with Microsoft,[35] which announced an investment package of $1 billion in the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft.[36][37][38]


OpenAI Global, LLC then announced its intention to commercially license its technologies.[39] It planned to spend the $1 billion "within five years, and possibly much faster."[40] Altman has stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence.[41]


The transition from a nonprofit to a capped-profit company was viewed with skepticism by Oren Etzioni of the nonprofit Allen Institute for AI, who agreed that wooing top researchers to a nonprofit is difficult, but stated "I disagree with the notion that a nonprofit can't compete" and pointed to successful low-budget projects by OpenAI and others. "If bigger and better funded was always better, then IBM would still be number one."


The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global, LLC.[32] In addition, minority members with a stake in OpenAI Global, LLC are barred from certain votes due to conflict of interest.[33] Some researchers have argued that OpenAI Global, LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI.[42]

2020–2023: ChatGPT, DALL-E, partnership with Microsoft[edit]

In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is aimed at answering questions in natural language, but it can also translate between languages and coherently generate improvised text. OpenAI also announced that an associated API, named simply "the API", would form the heart of its first commercial product.[43]


Eleven employees left OpenAI, mostly between December 2020 and January 2021, in order to establish Anthropic.[44]


In 2021, OpenAI introduced DALL-E, a specialized deep learning model adept at generating complex digital images from textual descriptions, utilizing a variant of the GPT-3 architecture.[45]


In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days.[46] According to anonymous sources cited by Reuters in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024.[47]


In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 value.[48] On January 23, 2023, Microsoft announced a new multi-year US$10 billion investment in OpenAI Global, LLC, a portion of which was needed to pay for Microsoft's cloud-computing service Azure.[49][50] Rumors of this deal suggested that Microsoft may receive 75% of OpenAI's profits until it secures its investment return, along with a 49% stake in the company.[51] The investment is believed to be part of Microsoft's efforts to integrate OpenAI's ChatGPT into the Bing search engine. Google announced a similar AI application (Bard) after ChatGPT was launched, fearing that ChatGPT could threaten Google's position as a go-to source for information.[52][53]


On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products.[54]


On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via Greylock Partners, and his co-founding of the AI startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI.[55]


On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus.[56]


On May 22, 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence.[57] They consider that superintelligence could arrive within the next 10 years, enabling a "dramatically more prosperous future", and that "given the possibility of existential risk, we can't just be reactive". They propose creating an international watchdog organization similar to the IAEA to oversee AI systems above a certain capability threshold, while suggesting that relatively weak AI systems below that threshold should not be overly regulated. They also call for more technical safety research on superintelligences, and ask for more coordination, for example through governments launching a joint project that "many current efforts become part of".[57][58]


In July 2023, OpenAI launched the superalignment project, aiming to discover within four years how to align future superintelligences by automating alignment research using AI.[59]


In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools.[60]


On September 21, 2023, Microsoft began rebranding all variants of its Copilot to Microsoft Copilot, including the former Bing Chat and Microsoft 365 Copilot.[61] In December 2023, Copilot was added to many installations of Windows 11 and Windows 10, and a standalone Microsoft Copilot app was released for Android[62] and later for iOS.[63]


In October 2023, Sam Altman and Peng Xiao, CEO of the Emirati AI firm G42, announced that OpenAI would let G42 deploy OpenAI technology.[64]


On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes, further expanding the possibilities of AI applications across various industries.[65] On November 14, 2023, OpenAI announced they temporarily suspended new sign-ups for ChatGPT Plus due to high demand.[66] Access for newer subscribers re-opened a month later on December 13.[67]

Leadership[edit]

CEO and co-founder: Sam Altman, former president of the startup accelerator Y Combinator
President and co-founder: Greg Brockman, former CTO and third employee of Stripe[114]
Chief Scientist: Jakub Pachocki, former Director of Research at OpenAI[103]
Chief Technology Officer: Mira Murati, previously at Leap Motion and Tesla, Inc.[115]
Chief Operating Officer: Brad Lightcap, previously at Y Combinator and JPMorgan Chase[115]
Chief Financial Officer: Sarah Friar, former Nextdoor CEO and former CFO at Block, Inc.[116]
Chief Product Officer: Kevin Weil, previously at Twitter, Inc. and Meta Platforms[116]

Motives[edit]

Some scientists, such as Stephen Hawking and Stuart Russell, have articulated concerns that if advanced AI gains the ability to redesign itself at an ever-increasing rate, an unstoppable "intelligence explosion" could lead to human extinction. Co-founder Musk characterizes AI as humanity's "biggest existential threat".[125]


Musk and Altman have stated they are partly motivated by concerns about AI safety and the existential risk from artificial general intelligence.[126][127] OpenAI states that "it's hard to fathom how much human-level AI could benefit society," and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly".[16] Research on safety cannot safely be postponed: "because of AI's surprising history, it's hard to predict when human-level AI might come within reach."[128] OpenAI states that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible."[16] Co-chair Sam Altman expects the decades-long project to surpass human intelligence.[129]


Vishal Sikka, former CEO of Infosys, stated that an "openness", where the endeavor would "produce results generally in the greater interest of humanity", was a fundamental requirement for his support; and that OpenAI "aligns very nicely with our long-held values" and their "endeavor to do purposeful work".[130] Cade Metz of Wired suggested that corporations such as Amazon might be motivated by a desire to use open-source software and data to level the playing field against corporations such as Google and Facebook, which own enormous supplies of proprietary data. Altman stated that Y Combinator companies would share their data with OpenAI.[129]

Strategy[edit]

In the early years before his 2018 departure, Musk posed the question: "What is the best thing we can do to ensure the future is good? We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing AI in a way that is safe and is beneficial to humanity." He acknowledged that "there is always some risk that in actually trying to advance (friendly) AI we may create the thing we are concerned about"; but nonetheless, that the best defense was "to empower as many people as possible to have AI. If everyone has AI powers, then there's not any one person or a small set of individuals who can have AI superpower."[114]


Musk and Altman's counterintuitive strategy—that of trying to reduce the risk of harm from AI by giving everyone access to it—is controversial among those concerned with existential risk from AI. Philosopher Nick Bostrom said, "If you have a button that could do bad things to the world, you don't want to give it to everyone."[127] During a 2016 conversation about technological singularity, Altman said, "We don't plan to release all of our source code" and mentioned a plan to "allow wide swaths of the world to elect representatives to a new governance board". Greg Brockman stated, "Our goal right now... is to do the best thing there is to do. It's a little vague."[131]


Conversely, OpenAI's initial decision to withhold GPT-2 around 2019, due to a wish to "err on the side of caution" given the potential for misuse, was criticized by advocates of openness. Delip Rao, an expert in text generation, stated, "I don't think [OpenAI] spent enough time proving [GPT-2] was actually dangerous." Other critics argued that open publication was necessary to replicate the research and to create countermeasures.[132]


More recently, in 2022, OpenAI published its approach to the alignment problem, anticipating that aligning AGI to human values would likely be harder than aligning current AI systems: "Unaligned AGI could pose substantial risks to humanity[,] and solving the AGI alignment problem could be so difficult that it will require all of humanity to work together". They stated that they intended to explore how to better use human feedback to train AI systems, and how to safely use AI to incrementally automate alignment research.[133] Some observers believe the company's November 2023 reorganization—including Altman's return as CEO, and the changes to its board of directors—indicated a probable shift towards a business focus and reduced influence of "cautious people" at OpenAI.[134]

Products and applications[edit]

Reinforcement learning[edit]

At its beginning, OpenAI's research included many projects focused on reinforcement learning (RL).[135] OpenAI has been viewed as an important competitor to DeepMind.[136]

Controversies[edit]

Contract with Sama[edit]

In January 2023, OpenAI was criticized for outsourcing the annotation of data sets to Sama, a company based in San Francisco but employing workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to filter out toxic content, notably from ChatGPT's training data and outputs. However, these pieces of text usually contained detailed descriptions of various types of violence, including sexual violence. The four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, of which Sama passed on the equivalent of between $1.32 and $2.00 per hour post-tax to its annotators. Sama's spokesperson said that the $12.50 also covered other implicit costs, among them infrastructure expenses, quality assurance, and management.[239]

Open-source[edit]

In March 2023, the company was also criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify this strategic turn. OpenAI's former chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models was increasingly risky, and that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years.[240]

Non-disparagement agreement[edit]

On May 17, 2024, a Vox article reported that OpenAI was asking departing employees to sign a lifelong non-disparagement agreement forbidding them from criticizing OpenAI or acknowledging the existence of the agreement. Daniel Kokotajlo, a former employee, publicly stated that he forfeited his vested equity in OpenAI in order to leave without signing the agreement.[241][242] Sam Altman stated that he was unaware of the equity cancellation provision, and that OpenAI never enforced it to cancel any employee's vested equity.[243] Vox published leaked documents and emails challenging this claim.[244] On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement.[245]

Copyrights[edit]

OpenAI was sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay and Mona Awad in July 2023.[246][247][248] In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work.[249][250] The New York Times also sued the company in late December 2023.[247][251] In May 2024 it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the training of GPT-3, and which the Authors Guild believed to have contained over 100,000 copyrighted books.[252]


In 2021, OpenAI developed a speech recognition tool called Whisper. OpenAI used it to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription of YouTube videos raised concerns within OpenAI employees regarding potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any type of automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president, Greg Brockman. The resulting dataset proved instrumental in training GPT-4.[253]


In February 2024, The Intercept, as well as Raw Story and AlterNet Media, filed lawsuits against OpenAI alleging copyright infringement.[254][255] The lawsuits are said to have charted a new legal strategy for digital-only publishers to sue OpenAI.[256]


On April 30, 2024, eight newspapers filed a lawsuit in the Southern District of New York against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications included The Mercury News, The Denver Post, The Orange County Register, St. Paul Pioneer Press, Chicago Tribune, Orlando Sentinel, Sun Sentinel, and New York Daily News.[257]

GDPR compliance[edit]

In April 2023, the EU's European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities" based on the "enforcement action undertaken by the Italian data protection authority against Open AI about the Chat GPT service".[258]


In late April 2024, NOYB filed a complaint with the Austrian Datenschutzbehörde against OpenAI for violating the European General Data Protection Regulation. Text generated by ChatGPT gave a false date of birth for a living person without giving the individual the option to see the personal data used in the process. A request to correct the mistake was denied. Additionally, OpenAI claimed that it could disclose neither the recipients of ChatGPT's output nor the sources used.[259]

Removal of military and warfare clause[edit]

OpenAI quietly deleted its ban on using ChatGPT for "military and warfare". Until January 10, 2024, its "usage policies" included a ban on "activity that has high risk of physical harm, including," specifically, "weapons development" and "military and warfare." Its new policies prohibit "[using] our service to harm yourself or others" and to "develop or use weapons".[260][261] As one of the industry collaborators, OpenAI provides LLMs to the Artificial Intelligence Cyber Challenge (AIxCC), sponsored by the Defense Advanced Research Projects Agency (DARPA) and the Advanced Research Projects Agency for Health, to protect software critical to Americans.[262]

See also[edit]

Anthropic – Artificial intelligence research company
Center for AI Safety – US-based AI safety research center
Future of Humanity Institute – Defunct Oxford interdisciplinary research centre
Future of Life Institute – International nonprofit research institute
Google DeepMind – Artificial intelligence division
Machine Intelligence Research Institute – Nonprofit researching AI safety

External links[edit]

Official website
OpenAI on X
OpenAI on Instagram
OpenAI on YouTube
"What OpenAI Really Wants" by Wired
"The Inside Story of Microsoft's Partnership with OpenAI" by The New Yorker