The New York Times Sues OpenAI and Microsoft Over the Use of Its Stories to Train Chatbots

In a landmark lawsuit filed in federal court in New York City, The New York Times is alleging that OpenAI and Microsoft Corporation have been illegally using its content to train their chatbots without the newspaper’s permission.

The lawsuit, filed on December 27, 2023, alleges that OpenAI’s large language models, including the GPT models that power ChatGPT, were trained on a massive dataset of text, including a substantial amount of content from The New York Times. OpenAI has licensed its models to Microsoft, which has integrated them into its Azure AI platform and its Copilot products.

The New York Times claims that this use of its content constitutes copyright infringement and a violation of its exclusive rights to distribute its material. The newspaper is seeking damages and an injunction to prevent OpenAI and Microsoft from further using its content without permission.

“The New York Times relies on its copyrights to protect its valuable content and to ensure that it is used fairly and lawfully,” said David McCraw, deputy general counsel of The New York Times. “OpenAI and Microsoft’s alleged unauthorized use of our content is a serious violation of our rights.”

OpenAI and Microsoft have not yet filed their responses to the lawsuit.

The lawsuit is the latest in a series of legal challenges facing OpenAI and other companies developing large language models. In July 2023, a group of writers, including comedian Sarah Silverman, sued OpenAI, alleging that their works were used to train its models without their permission.

The use of large language models raises complex legal and ethical questions. On the one hand, these models can be enormously useful: they can generate creative text in many formats, translate languages, and answer questions in an informative way. On the other, there is concern that they could be used to produce harmful or misleading content, and that they could threaten the livelihoods of writers and journalists.

The New York Times lawsuit is likely to have a significant impact on the development and use of large language models. If the newspaper prevails, the ruling could set a precedent making it harder for companies to use copyrighted content without permission, which could have a chilling effect on innovation in artificial intelligence.

However, the lawsuit could also have a positive impact. It could focus attention on the need for clear and fair guidelines on the use of copyrighted content in the development of AI systems. This could help to ensure that the benefits of AI are shared widely, while also protecting the rights of creators.

Only time will tell how the lawsuit will ultimately be resolved. What is already clear is that it raises important questions about the future of AI and the role of copyright law in the digital age.
