BBC threatens AI firm Perplexity with legal action over unauthorised use of news content

The BBC has issued a legal warning to US-based artificial intelligence company Perplexity, accusing it of reproducing BBC content without permission and demanding that the company stop using its material, delete existing data, and propose financial compensation.

This marks the first time the BBC has threatened legal action against an AI company, as concerns escalate across the media industry over how generative AI tools use protected journalism.

In a letter sent directly to Perplexity CEO Aravind Srinivas, the broadcaster alleged that the firm’s AI-powered chatbot was presenting verbatim BBC content to users in breach of UK copyright law and the BBC’s terms of use. The corporation claims the activity is damaging its reputation, especially among UK licence fee payers, by producing inaccurate or misleading summaries of news stories.

“It is highly damaging to the BBC, injuring the BBC’s reputation with audiences… and undermining their trust in the BBC,” the letter states.

The legal move follows BBC research earlier this year which found that several major AI tools — including Perplexity’s — frequently misrepresented news stories, falling short of BBC editorial standards around impartiality and accuracy.

In a brief statement, Perplexity dismissed the claims, saying: “The BBC’s claims are just one more part of the overwhelming evidence that the BBC will do anything to preserve Google’s illegal monopoly.”

The company did not clarify how it believes Google relates to the BBC’s legal concerns and offered no further explanation.

At the heart of the dispute lies the practice of web scraping, where bots extract content from websites en masse — often without explicit permission — to train or feed AI models. While robots.txt files are commonly used to instruct bots not to access certain content, compliance is voluntary, and numerous reports suggest some AI firms ignore these restrictions.
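The voluntary nature of robots.txt is easy to see in code. The sketch below, using Python's standard-library parser and entirely illustrative bot names and URLs, shows how a directive blocks one crawler but says nothing about any other; honouring the answer is left to the bot's operator.

```python
# Illustrative only: "ExampleBot" and example.com are hypothetical,
# not the crawlers or sites involved in the BBC-Perplexity dispute.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Parse a hypothetical robots.txt that disallows one named crawler
# from the /news/ section of a site.
rp.parse([
    "User-agent: ExampleBot",
    "Disallow: /news/",
])

# The named crawler is told to stay out...
print(rp.can_fetch("ExampleBot", "https://example.com/news/story"))  # False
# ...but a crawler with a different name faces no restriction,
# and nothing technically stops either one from fetching anyway.
print(rp.can_fetch("OtherBot", "https://example.com/news/story"))    # True
```

The file is purely advisory: a server serves the page to any client that requests it, which is why publishers say blocking a crawler in robots.txt offers no guarantee against continued scraping.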

The BBC says it has explicitly disallowed two of Perplexity’s crawlers but alleges that the company has continued to scrape its content regardless.

Perplexity has previously denied breaching robots.txt rules. In a June 2024 interview with Fast Company, Srinivas said Perplexity's bots comply with such directives and that the company does not use content to train foundation models, describing the product instead as a "real-time answer engine".

The chatbot presents users with aggregated answers to queries, pulling in and synthesising live information from across the web — a process that, according to Perplexity, does not involve the same training processes used by large language model developers.

Still, the BBC and other media organisations argue that this real-time scraping and content repackaging represents a serious breach of intellectual property. The BBC’s stance is echoed by the Professional Publishers Association (PPA), which represents over 300 UK media brands.

In a statement, the PPA said it was “deeply concerned” by current AI practices, warning that the unauthorised use of publishers’ content to power AI tools poses a threat to the UK’s £4.4 billion publishing industry and the 55,000 people it employs.

“This practice directly threatens the UK’s publishing industry and the journalism it funds,” the PPA said, calling on the government to enforce stronger copyright protections for media content used by AI firms.

The BBC–Perplexity standoff comes amid mounting tension between news organisations and generative AI companies. While AI chatbots such as OpenAI’s ChatGPT, Google’s Gemini, and Perplexity’s own assistant continue to grow in popularity, they have been repeatedly criticised for presenting misleading summaries, failing to credit original sources, or diverting traffic away from the publishers who create the content.

In January, Apple suspended an AI-driven feature that generated misleading BBC headlines on iPhones, following complaints from the broadcaster.

Quentin Willson, founder of the FairCharge campaign and a former Top Gear presenter, said the unauthorised use of journalistic content poses existential risks for trusted media organisations.

“If AI is allowed to scrape and regurgitate verified journalism without consent or compensation, the business model for serious news collapses,” he said.

While many publishers have begun signing licensing deals with AI companies — including The Associated Press, Axel Springer and News Corp — others are taking legal action. The New York Times is currently suing OpenAI and Microsoft, and more lawsuits are expected as the technology advances.

For now, the BBC is demanding a halt to unauthorised use, full deletion of scraped data, and financial reparations. Whether it follows through with formal legal proceedings could set a major precedent in the global fight over AI and journalism.
