Common Sense Media, a California-based nonprofit best known for rating kid-friendly films and apps, sent proposals in recent months to executives at OpenAI, Google, and Anthropic. According to several people familiar with the discussions, and to documents obtained by Politico, the pitch was straightforward: pay $10 million annually for ten years to help fund a new institute that would evaluate the risks AI technologies pose to children.
That amounts to $100 million per company over the full term. In exchange, contributing companies would receive a seat on a Technical Advisory Council, which would have a say in how the institute sets its safety standards and conducts its evaluations. According to a document written by Common Sense itself, the proposal also went to the Gates Foundation and the Bezos Family Foundation.
| Category | Details |
|---|---|
| Organization | Common Sense Media — California-based nonprofit focused on children’s advocacy and technology safety; CEO Jim Steyer |
| The Ask | $10 million per company annually over 10 years — totaling $100 million per participating tech giant over the full term |
| Target Companies Approached | OpenAI, Anthropic, and Google among AI companies contacted; also approached the Bezos Family Foundation and the Gates Foundation |
| What Donors Would Receive | Seat on a Technical Advisory Council — giving contributing companies input into how the new institute sets AI safety standards and evaluates AI models |
| Core Conflict of Interest | Companies being evaluated for AI safety risks would also have a role in shaping how those evaluations are conducted — raising independence questions |
| Common Sense’s Denial | Spokesperson stated: “Funding has never been associated with joining any kind of advisory council” — disputed by documents and multiple industry representatives cited by Politico |
| Why the Timing Matters | Multiple lawsuits allege AI chatbots encouraged teen suicides; California lawmakers mulling regulations requiring independent AI evaluations for children’s safety |
| Simultaneous Policy Push | Common Sense was simultaneously advocating for California legislation requiring AI evaluation services for kids’ safety — while pitching the same type of service to companies for funding |
| Company Resources | OpenAI annualized revenue: $25 billion+ (Feb 2026); Anthropic revenue run rate: $30 billion (April 2026) — raising questions about why safety funding is being sought from the industry being assessed |
| Public Mood | Polls show increasing public disapproval of AI; Guardian reported in April 2026 that major AI companies are aggressively working to reshape public narrative |
The architecture of that offer deserves to be named plainly: companies being assessed for risks to children’s safety would also help shape the standards used in those assessments. That is not a small procedural detail. In practically any other industry, such as pharmaceuticals, financial auditing, or environmental compliance, this kind of arrangement would invite regulatory scrutiny. A Common Sense spokesperson denied the accusation, telling Politico that “funding has never been associated with joining any kind of advisory council.” The trouble is that the documents and the industry representatives present a different picture, and the gap between the two accounts is wide enough that one of them has to be wrong.
It’s possible that Common Sense is sincerely trying to build something worthwhile and has simply structured the funding badly; on that reading, the advisory council offer was a way to get companies to the table rather than a deliberate corruption of the evaluation process. That reading is charitable, and it might even be true. The timing, however, raises other problems.
While lobbying California lawmakers to mandate independent AI assessments for children’s safety, Common Sense was pitching that same kind of evaluation service to the companies it would be evaluating. If the legislation passed and Common Sense became the default evaluator, companies that had paid $10 million a year for a decade would find themselves in a very interesting position relative to those that hadn’t. Watching this unfold, it is hard to write that dynamic off as coincidental.
The context strengthens the story. In recent months, numerous lawsuits have been filed alleging that AI chatbots, including some built by companies on Common Sense’s outreach list, encouraged teenage users to take their own lives. These are not hypothetical harms. They are the subject of ongoing litigation, and they have generated exactly the kind of public anxiety that justifies an independent children’s safety institute. AI companies know their public approval is sliding. According to an April 2026 report in The Guardian, major AI firms are actively working to reshape the public narrative around their products. One way to manage a narrative is to fund the nonprofit that grades your safety record while helping to write the grading rubric.

The sums involved make the charitable reading harder to accept. As of February 2026, OpenAI’s annualized revenue exceeded $25 billion. By early April, Anthropic’s revenue run rate had passed $30 billion. These are not companies for which $10 million a year is a strain. For them, the payment is less a contribution than a relationship, a way to buy into a process that might otherwise sit beyond their control. The question is not whether they can afford it. The question is what they expect in return, and whether the thing they are paying for is compromised by the payment.
This episode did not create the structural problem in AI safety that it illustrates, but it illustrates it well: the industry that most needs independent oversight is also the one with the greatest resources to shape what that oversight looks like. Nonprofits in this space are chronically underfunded relative to the companies they try to assess, and governments have been slow to close the gap. Whatever its motives, Common Sense is navigating that reality.
It remains unclear whether the companies approached will agree to fund the proposed institute, or whether it will launch in its current form at all. But the mere fact that this structure is on the table, safety evaluation funded by and adjacent to the industry it evaluates, says something about where AI governance stands today. Not where the press releases say it is. Where it actually is.
