The NMA–Bria Deal: Who Controls the Math on Small Publishers' AI Revenue?
The News/Media Alliance's collective licensing agreement with Bria gives 2,200+ publishers a path to AI revenue. This analysis examines the 50/50 split, the attribution model behind it, and what signing actually commits publishers to.

TL;DR: The News/Media Alliance struck a collective AI licensing deal with Bria in March 2026, offering 2,200+ member publishers a 50/50 revenue split on enterprise retrieval-augmented generation (RAG) queries. It is the first structured mechanism for small and local newsrooms to opt into AI revenue rather than only opt out. Bria controls the attribution model that determines each publisher’s share.
Key Takeaways:

- On March 24, 2026, the News/Media Alliance announced a licensing agreement with Bria AI covering more than 2,200 of its member publishers. The revenue model is 50/50: half to publishers, half to Bria, applied to enterprise retrieval-augmented generation (RAG) queries that draw on member content.
- For local and mid-sized newsrooms, this is the first structured path to AI licensing revenue that does not require the traffic scale of the AP or the legal budget of the New York Times. Direct deals with OpenAI, Google, or Microsoft are not available at that level; for most of the 2,200 members, the NMA deal is the only option on the table.
- The uncomfortable question is not whether the deal should exist. It is who controls the math, and what that means for publishers who sign.
This is not a training data deal. Bria AI is not licensing publisher content to train its models in the traditional sense. The agreement covers RAG: a system where an AI model retrieves and synthesizes content from an external document library at query time, rather than encoding it into model weights during training.
Enterprise customers pay to run queries against Bria’s content network. When those queries draw on an NMA member’s content, a portion of the query revenue flows back to that publisher. Participation is opt-in — NMA membership does not automatically enroll a publisher.
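The retrieval-at-query-time pattern the deal covers can be sketched in a few lines. Everything below is a toy illustration: the publisher names, documents, and keyword scoring are invented for this example and have nothing to do with Bria's actual system.

```python
# Toy illustration of the RAG pattern: content lives in an external store and
# is fetched per query, rather than being encoded into model weights.
# Publisher names, documents, and the keyword scoring are invented examples.

DOCS = {
    "local-paper": "City council approves downtown transit budget",
    "biz-journal": "Regional manufacturers report supply chain delays",
}

def tokens(text: str) -> set[str]:
    return set(text.lower().split())

def retrieve(query: str) -> list[str]:
    """Return publishers whose content overlaps the query, best match first."""
    q = tokens(query)
    scored = sorted(
        ((len(q & tokens(text)), pub) for pub, text in DOCS.items()),
        reverse=True,
    )
    return [pub for score, pub in scored if score > 0]

def answer(query: str) -> str:
    hits = retrieve(query)
    context = " ".join(DOCS[pub] for pub in hits)
    # A real system would hand `context` to a language model here;
    # per-query billing and attribution would be keyed off `hits`.
    return f"(sources: {', '.join(hits)}) {context}"

print(answer("transit budget vote"))
```

The point of the sketch is the last comment: because retrieval happens per query, the system knows exactly which publishers' content each paying query touched, which is what makes usage-based revenue sharing possible at all.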
The distinction between RAG and training data licensing matters practically. Training data licensing is typically a flat or annual fee for historical archive access. RAG revenue is continuous and usage-based: publisher payouts depend on how often their content gets retrieved, and how much each retrieval is worth. Both variables are set by Bria.
Evaluating this deal requires understanding what the alternatives actually are. The starting point is not flattering.
Chartbeat data published in March 2026 shows that small publishers — those with 1,000 to 10,000 daily page views — lost 60% of their Google search referrals over two years. Large publishers lost 22%. AI chatbots currently account for under 1% of publisher referrals. The Reuters Institute’s 2026 Future of News report, drawing on 280 newsroom executives across 51 countries, found that publishers expect search referrals to fall another 40% by 2029.
Direct AI licensing is not realistic at the scale most NMA members operate. OpenAI's deal with the Associated Press, the Financial Times' partnership with OpenAI, and the New York Times copyright litigation were each shaped by publishers with significant traffic, archives, and legal resources. A regional business journal or local news outlet brings none of that leverage to the table.
The NMA–Bria deal is, in that context, better than nothing. The question is by how much — and on whose terms.
A 50/50 split sounds balanced. But a revenue-share percentage is only as meaningful as the denominator it applies to.
Bria controls the attribution model. That means Bria decides which queries count as drawing on publisher content, how much revenue a single query generates, and how that revenue is allocated when a query retrieves content from several publishers simultaneously. A query that draws on five publishers’ content does not necessarily produce five equal shares — the allocation depends on Bria’s methodology.
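To make the stakes concrete, here is a minimal sketch of how an attribution model turns a fixed 50/50 split into very different payouts. The weights and dollar amounts are hypothetical; Bria's actual methodology has not been published.

```python
# Hypothetical sketch: the headline 50/50 split stays fixed, but the
# attribution weights (set by the AI company) decide each publisher's share.
# All numbers and the weighting scheme are invented for illustration.

def allocate(query_revenue: float, weights: dict[str, float]) -> dict[str, float]:
    """Split the publisher half of one query's revenue by attribution weight."""
    publisher_pool = query_revenue * 0.5  # the 50/50 headline split
    total = sum(weights.values())
    return {pub: publisher_pool * w / total for pub, w in weights.items()}

# One query, $0.05 of revenue, five publishers retrieved.
equal = allocate(0.05, {"pub1": 1, "pub2": 1, "pub3": 1, "pub4": 1, "pub5": 1})
skewed = allocate(0.05, {"pub1": 6, "pub2": 1, "pub3": 1, "pub4": 1, "pub5": 1})

# Under equal weights, pub1 gets about $0.005; under the skewed weights,
# about $0.015. Same deal, same 50/50 split, three times the payout:
# the weighting methodology is the deal.
```

Nothing in a "50/50" headline constrains the `weights` argument, which is why an unverifiable attribution model matters more than the split itself.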
No independent auditor has been named. The Wisconsin Newspaper Association described the arrangement as “a 50/50 split based on Bria’s own attribution,” with no verification mechanism publicly disclosed.
This pattern is not new in publisher-platform revenue sharing. Google’s Showcase program faced sustained criticism from publishers who argued they could not independently verify the proprietary metrics Google used to calculate payouts. The News Media Bargaining Code in Australia forced greater transparency only after publishers escalated through regulatory channels. Buyer-controlled attribution, no third-party audit — that sequence produced years of disputes before structural changes arrived. This deal starts in the same position.
There are now at least four distinct structures for publishers seeking revenue from AI companies. They are not equivalent in terms of leverage, payout certainty, or who controls the terms.
| Model | How it works | Who controls payout | Ongoing? | Best for |
|---|---|---|---|---|
| Training data licensing | Flat or annual fee for archive access | Both parties negotiate | No | Large publishers with traffic scale and legal leverage |
| RAG monetization (NMA–Bria) | Usage-based revenue from enterprise queries | AI company controls attribution | Yes | Small publishers with no direct deal leverage |
| Opt-out registry | Exclusion from AI training — no payment | Publisher controls opt-out | N/A | Publishers who reject all AI licensing |
| Mandatory collective licensing (EU model) | Legislated fee pool distributed via collecting societies | Regulator or collecting society | Yes | Publishers in EU jurisdiction where law applies |
The NMA–Bria deal fills a real gap: ongoing, usage-based revenue for publishers too small for direct deals and unwilling to opt out entirely. That gap is genuine. The terms determining how much flows through it remain opaque.
No revenue projections have been disclosed for this agreement. That absence matters more than it might appear.
RAG enterprise pricing is typically structured at the query level or as a subscription over a defined content corpus. If Bria’s enterprise customers pay, say, $0.01 per retrieved content chunk — a plausible rate for enterprise search products — and each query retrieves content from multiple publishers, the per-publisher attribution per query could be fractions of a cent. Divided across 2,200 publishers, aggregate payouts depend entirely on Bria’s enterprise contract volume.
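That back-of-envelope arithmetic can be written out. Only the $0.01-per-chunk figure comes from the paragraph above, and it is itself illustrative; query volume, chunks per query, and the even per-query split are additional assumptions, not disclosed terms.

```python
# Back-of-envelope payout arithmetic. PRICE_PER_CHUNK is the article's
# illustrative figure; every other input is an assumption, not a disclosed one.

PRICE_PER_CHUNK = 0.01   # hypothetical enterprise rate per retrieved chunk
PUBLISHER_SHARE = 0.5    # the 50/50 headline split

def per_query_share(chunks_per_query: int, publishers_per_query: int) -> float:
    """One publisher's cut of a single query, assuming an even split."""
    revenue = chunks_per_query * PRICE_PER_CHUNK
    return revenue * PUBLISHER_SHARE / publishers_per_query

# 10 chunks retrieved, split evenly across 5 publishers: roughly one cent
# per query for each publisher whose content was used.
share = per_query_share(10, 5)

# A publisher retrieved in 5,000 enterprise queries a month would earn on the
# order of $50. The payout scales linearly with Bria's enterprise query
# volume, which is precisely the undisclosed variable.
monthly = 5_000 * share
```

Under these assumptions the per-publisher numbers stay small until enterprise volume is large, which is why the absence of any disclosed volume or revenue projection is the gap that matters.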
Bria is a visual AI company that recently expanded into enterprise content retrieval. Its RAG product is early-stage, with no public figures on its customer count or contract values. That is acceptable for publishers who understand the upside is speculative, but most press coverage of the deal did not frame it that way.
The deal could produce meaningful recurring revenue as enterprise RAG adoption grows. It could also produce very little if Bria’s enterprise contracts remain small. Publishers who sign have no disclosed mechanism to know in advance which outcome they are entering.
Not every NMA member should opt in without careful review. Based on the terms disclosed so far, these conditions warrant caution before signing:

- Attribution is calculated by Bria, with no independent auditor named and no verification mechanism publicly disclosed.
- No revenue projections have been released, so expected payouts cannot be estimated before signing.
- The legal boundary between RAG licensing and training-data use is untested and depends on the specific contract language.
- Bria's enterprise RAG product is early-stage, with no public customer or contract-value figures.
**What is RAG and why does it create a new revenue question for publishers?**
RAG (retrieval-augmented generation) retrieves external content at query time rather than relying solely on model training data. Enterprise RAG products synthesize publisher content on demand — a usage category separate from training data licensing, which creates both a new revenue opportunity and unanswered questions about attribution and substitution.
**How does the NMA–Bria deal differ from deals the AP or FT signed with OpenAI?**
The AP and FT deals are direct negotiations with individually set terms, pricing, and audit structures. The NMA–Bria deal is collective: 2,200 publishers, uniform terms, attribution calculated by Bria. Scale, leverage, and transparency all differ significantly.
**Can publishers audit their payouts under this arrangement?**
No independent audit mechanism has been publicly announced as part of the March 2026 agreement. Publishers receive payouts calculated by Bria’s own attribution model with no disclosed third-party verification process.
**Does joining this deal affect a publisher's ability to opt out of AI training data use?**
The agreement covers RAG queries, not training data. Whether the two categories overlap legally depends on the specific contract terms. Publishers should seek legal review before assuming they are entirely independent categories.
**Is this the only option for small publishers seeking AI revenue?**
Currently, it is the only structured collective mechanism available for small and mid-sized U.S. publishers. EU publishers may have access to collective licensing via national collecting societies, depending on jurisdiction and which frameworks are in force.
The NMA–Bria deal represents a structural shift: for the first time, small and local publishers have a formal mechanism to opt into AI revenue rather than only opt out of AI use. That matters.
But the terms of this first wave of small-publisher AI deals will set expectations for the next five years. Buyer-controlled attribution, no independent audit, and no disclosed revenue projections are not a foundation for a durable licensing model. They are the opening conditions of a negotiation that most small publishers may not realize has already begun.
If you publish content under an NMA membership, the practical next step is not to opt in or decline immediately — it is to request Bria’s attribution methodology before making either decision. That document will tell you more about this deal than any press release.
For policymakers watching this space: the EU mandatory licensing framework and Australia’s News Media Bargaining Code both emerged from exactly this dynamic — deals that sounded fair until publishers tried to verify the arithmetic. The question is whether English-speaking markets outside the EU will require the same structural pressure before transparency becomes a baseline condition.