OpenAI, the company behind ChatGPT, has responded to accusations from The New York Times that its AI models replicate the newspaper's articles. OpenAI disputes the claims and sees the lawsuit as an opportunity to clarify its intentions and practices.
Key Points in OpenAI’s Response:
- Collaboration with News Outlets:
OpenAI stressed its efforts to work with news organizations. They aim to help journalists with tasks such as analyzing public records and translating between languages, and point to early partnerships with the Associated Press and Axel Springer as examples of this approach.
- Training and Fair Use:
OpenAI defended training its models on publicly available internet content as fair use. They have also offered a straightforward opt-out process that lets publishers, including The New York Times, block OpenAI's tools from accessing their sites.
- Addressing “Regurgitation”:
OpenAI acknowledged rare instances in which its models unintentionally repeat content verbatim. They said they are working to reduce this issue and have implemented measures to prevent deliberate misuse.
- Discrepancies in NYT’s Claims:
OpenAI expressed disappointment that The New York Times filed suit without first sharing examples of the problem. They suggested the examples cited in the complaint came from older articles that are already widely available elsewhere online, and that the prompts appear to have been manipulated to trigger those specific responses from the AI.
OpenAI’s Stance:
OpenAI maintains that the behavior alleged in the lawsuit is neither typical nor permitted use of its tools. The company remains hopeful about building a constructive partnership with The New York Times, citing its respect for the newspaper's long history in journalism and technology.
Amid the legal dispute, OpenAI emphasizes its commitment to collaborating with media outlets and to advancing AI in ways that contribute positively to the news industry.