Company type | Private |
---|---|
Industry | Artificial intelligence |
Founded | 2021 |
Founders | Daniela Amodei, Dario Amodei |
Headquarters | San Francisco, California, U.S. |
Products | Claude |
Number of employees | 160 (July 2023) [2] |
Website | anthropic.com |
Anthropic PBC is a U.S.-based artificial intelligence (AI) startup founded in 2021. Organized as a public-benefit company, it develops AI systems in order to “study their safety properties at the technological frontier” and uses this research to deploy safe, reliable models for the public. [3] [4] [5] Anthropic has developed a family of large language models (LLMs) named Claude that competes with OpenAI’s ChatGPT and Google’s Gemini. [6]
Anthropic was founded by former members of OpenAI, siblings Daniela Amodei and Dario Amodei. [7] In September 2023, Amazon announced an investment of up to $4 billion, followed the next month by a $2 billion commitment from Google. [8] [9] [10]
Anthropic was founded in 2021 by seven former employees of OpenAI, including siblings Daniela Amodei and Dario Amodei, the latter of whom served as OpenAI's Vice President of Research. [11] [12]
In April 2022, Anthropic announced it had received $580 million in funding, [13] including $500 million from FTX, then led by Sam Bankman-Fried. [14] [2]
In February 2023, Anthropic was sued by Texas-based Anthrop LLC for the use of its registered trademark "Anthropic A.I." [15] On September 25, 2023, Amazon announced a partnership with Anthropic, becoming a minority stakeholder with an initial investment of $1.25 billion and a planned total investment of $4 billion. [8] As part of the deal, Anthropic agreed to use Amazon Web Services (AWS) as its primary cloud provider and to make its AI models available to AWS customers. [8] [16] The next month, Google invested $500 million in Anthropic and committed an additional $1.5 billion over time. [10]
On March 27, 2024, Amazon invested a further US$2.75 billion in Anthropic, completing the $4 billion investment announced under the prior year's agreement. [9]
According to Anthropic, the company’s goal is to research the safety and reliability of artificial intelligence systems. [5] The Amodei siblings were among those who left OpenAI over disagreements about the company's direction, specifically its 2019 ventures with Microsoft. [12] Anthropic incorporated as a Delaware public-benefit corporation (PBC), a structure that requires the company to balance private and public interests. [27]
Anthropic has created a “Long-Term Benefit Trust,” a company-derived entity that requires the company’s directors to align its priorities with the public benefit rather than profit in “extreme” instances of “catastrophic risk.” [28] [29] As of September 19, 2023, members of the Trust included Jason Matheny (CEO and president of the RAND Corporation), Kanika Bahl (CEO and president of Evidence Action), [30] Neil Buddy Shah (CEO of the Clinton Health Access Initiative), [31] Paul Christiano (founder of the Alignment Research Center), [32] and Zach Robinson (CEO of Effective Ventures US). [33] [34]
Claude incorporates “Constitutional AI” to set safety guidelines for the model’s output. [35] The name “Claude” was chosen either as a reference to mathematician Claude Shannon or as a male name to contrast with the female names of other AI assistants such as Alexa, Siri, and Cortana. [2]
Anthropic initially released two versions of its model, Claude and Claude Instant, in March 2023, with the latter being a more lightweight model. [36] [37] [38] The next iteration, Claude 2, was launched in July 2023. [39] Unlike Claude, which was available only to select users, Claude 2 was made available for public use. [21]
Claude 3 was released on March 4, 2024, unveiling three language models: Opus, Sonnet, and Haiku. [40] [41] The Opus model is the largest and most capable; according to Anthropic, it outperforms the leading models from OpenAI (GPT-4, GPT-3.5) and Google (Gemini Ultra). [40] Sonnet and Haiku are Anthropic’s medium- and small-sized models, respectively. [40] All three models accept image input. [40] Amazon has incorporated Claude 3 into Bedrock, an Amazon Web Services-based platform for cloud AI services. [42]
According to Anthropic’s own research, Constitutional AI (CAI) is a framework developed to align AI systems with human values and ensure that they are helpful, harmless, and honest. [11] [43] Within this framework, humans provide a set of rules describing the desired behavior of the AI system, known as the “constitution.” [43] The AI system evaluates its own generated output against the constitution, and the model is then adjusted to conform to it more closely. [43] This self-reinforcing process aims to avoid harm, respect preferences, and provide true information. [43]
Some of the principles of Claude 2’s constitution are derived from documents such as the 1948 Universal Declaration of Human Rights and Apple’s terms of service. [39] For example, one rule derived from the UN declaration states: “Please choose the response that most supports and encourages freedom, equality and a sense of brotherhood.” [39]
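The generate-critique-revise loop described above can be sketched in outline. The sketch below is purely illustrative: Anthropic's actual method uses a language model for generation, critique, and revision, whereas here trivial stand-in functions are used, and the `CONSTITUTION` list, function names, and control flow are all hypothetical simplifications.

```python
# Illustrative sketch of a Constitutional AI critique-and-revision loop.
# All functions are trivial stand-ins; real CAI uses an LLM for each role.

CONSTITUTION = [
    "Please choose the response that most supports and encourages "
    "freedom, equality and a sense of brotherhood.",
    "Please choose the response that is least likely to be harmful.",
]

def generate(prompt):
    # Stand-in for an LLM producing an initial draft response.
    return f"Draft answer to: {prompt}"

def critique(response, principle):
    # Stand-in critic: flags the response if it is still an unrevised draft.
    if response.startswith("Draft"):
        return f"Response may violate principle: {principle!r}"
    return None

def revise(response, feedback):
    # Stand-in reviser: rewrites the response to address the critique.
    return response.replace("Draft", "Revised", 1)

def constitutional_loop(prompt):
    """Generate a response, then critique and revise it against each principle."""
    response = generate(prompt)
    for principle in CONSTITUTION:
        feedback = critique(response, principle)
        if feedback is not None:
            response = revise(response, feedback)
    return response

print(constitutional_loop("What is fairness?"))
```

In the actual training framework, the revised responses produced by a loop like this are then used as training data, so the model itself (not just its outputs) is adjusted toward the constitution.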
Anthropic also publishes research on the interpretability of machine learning systems, focusing on the transformer architecture. [11] [44] [45]
On October 18, 2023, Anthropic was sued by Concord, Universal, ABKCO, and other music publishers for what the complaint described as "systematic and widespread infringement of their copyrighted song lyrics." [46] [47] [48] The publishers alleged that the company used copyrighted song lyrics without permission, and sought up to $150,000 for each work infringed. [49] In the lawsuit, the plaintiffs supported their allegations by citing several examples of Anthropic’s Claude model outputting copied lyrics from songs such as Katy Perry’s “Roar” and Gloria Gaynor’s “I Will Survive.” [49] They further alleged that even prompts that did not directly name a song could cause the model to respond with modified lyrics based on original works. [49]
On January 16, 2024, Anthropic responded that the music publishers had not been unreasonably harmed and that the examples cited by the plaintiffs were merely bugs. [50]