On Thursday, music publishers got a small win in a copyright fight alleging that Anthropic's Claude chatbot regurgitates song lyrics without paying licensing fees to rights holders.
In an order, US District Judge Eumi Lee outlined the terms of a deal reached between Anthropic and publisher plaintiffs who license some of the most popular songs on the planet, which she said resolves one aspect of the dispute.
Through the deal, Anthropic admitted no wrongdoing and agreed to maintain its current strong guardrails on its AI models and products throughout the litigation. These guardrails, Anthropic has repeatedly claimed in court filings, effectively prevent outputs containing actual lyrics from hits like Beyoncé's "Halo," Spice Girls' "Wannabe," Bob Dylan's "Like a Rolling Stone," or any of the 500 songs at the center of the suit.
Perhaps more importantly, Anthropic also agreed to apply equally strong guardrails to any new products or offerings, granting the court authority to intervene should publishers discover more allegedly infringing outputs.
Before seeking such an intervention, publishers may notify Anthropic of any allegedly harmful outputs. That includes any output containing partial or complete song lyrics, as well as any derivative work the chatbot may produce that mimics the lyrical style of famous artists. After an expeditious review, Anthropic will provide a "detailed response" explaining any remedies or "clearly" stating "its intent not to address the issue."
Although the deal does not settle publishers' more substantial complaint alleging that Anthropic's training of its AI models on their works violates copyright law, it is likely a meaningful concession, as it potentially builds in more accountability.