News

The $100 Million AI Safety Pitch That Major Tech Giants Are Being Asked to Fund

By Melissa Hogan, April 21, 2026, 6 min read

Common Sense Media, a California-based nonprofit best known for rating films and apps for kid-friendliness, sent proposals to executives at OpenAI, Google, and Anthropic in recent months. According to several people familiar with the discussions, and documents obtained by Politico, the pitch was straightforward: pay $10 million annually for ten years to help fund a new institute that would evaluate the risks AI technologies pose to children.

Over the entire period, that amounts to $100 million per company. In exchange, contributing companies would receive a seat on a Technical Advisory Council with a say in how the institute sets its safety standards and conducts its assessments. According to a document written by Common Sense itself, the proposal was also sent to the Gates Foundation and the Bezos Family Foundation.

  • Organization: Common Sense Media, a California-based nonprofit focused on children's advocacy and technology safety; CEO Jim Steyer
  • The Ask: $10 million per company annually over 10 years, totaling $100 million per participating tech giant over the full term
  • Companies Approached: OpenAI, Anthropic, and Google among AI companies; also the Bezos Family Foundation and the Gates Foundation
  • What Donors Would Receive: A seat on a Technical Advisory Council, giving contributing companies input into how the new institute sets AI safety standards and evaluates AI models
  • Core Conflict of Interest: Companies being evaluated for AI safety risks would also help shape how those evaluations are conducted, raising independence questions
  • Common Sense's Denial: A spokesperson stated that "funding has never been associated with joining any kind of advisory council," a claim disputed by documents and multiple industry representatives cited by Politico
  • Why the Timing Matters: Multiple lawsuits allege AI chatbots encouraged teen suicides; California lawmakers are weighing regulations requiring independent AI evaluations for children's safety
  • Simultaneous Policy Push: Common Sense was advocating for California legislation requiring AI evaluation services for kids' safety while pitching the same type of service to companies for funding
  • Broader AI Safety Context: OpenAI annualized revenue of more than $25 billion (February 2026); Anthropic revenue run rate of $30 billion (April 2026), raising questions about why safety funding is sought from the industry being assessed
  • Public Mood: Polls show increasing public disapproval of AI; The Guardian reported in April 2026 that major AI companies are working aggressively to reshape the public narrative

The offer's architecture is worth naming plainly: businesses being assessed for risks to children's safety would also help develop the standards used in those assessments. That is not a small procedural detail. In practically any other industry, such as pharmaceuticals, financial auditing, or environmental compliance, an arrangement like this would prompt regulatory scrutiny. A representative for Common Sense disputed the characterization, telling Politico that "funding has never been associated with joining any kind of advisory council." The problem is that the industry representatives and the documents paint a different picture, and the gap between the two accounts is wide enough that someone must be mistaken.

It's possible that Common Sense is sincerely attempting to build something beneficial but has funded it badly; the advisory council offer may have been a way to draw companies in rather than a deliberate corruption of the evaluation process. That reading is charitable, and it might even be true. The timing, however, raises other problems.

While lobbying California lawmakers to mandate independent AI assessments for children's safety, Common Sense was pitching the same kind of evaluation service to the companies it would evaluate. If the legislation passed and Common Sense became the default evaluator, companies that had paid $10 million a year for a decade would be in a very interesting position compared to those that hadn't. Watching this develop, it is hard to write the dynamic off as coincidence.

The context in which all of this is unfolding sharpens the story. Recent months have seen numerous lawsuits alleging that AI chatbots, including some developed by companies on the Common Sense outreach list, encouraged teenage users to take their own lives. These are not hypothetical harms. They are the subject of ongoing litigation, and they have created precisely the kind of public anxiety that justifies the existence of an independent children's safety institute. AI companies are aware of the decline in public approval: The Guardian reported in April 2026 that major AI companies are actively working to change public perception of their products. One way to manage a story is to fund the nonprofit that evaluates your safety record while helping write the assessment rubric.


The amount of money involved makes the charitable reading harder to accept. As of February 2026, OpenAI's annualized revenue exceeded $25 billion. By early April, Anthropic's revenue run rate had surpassed $30 billion. These are not businesses strained by a $10 million annual commitment. For them, the total is less a contribution than a relationship: a way to participate in a process that might otherwise be beyond their control. Their ability to pay is not the question. The question is what they expect in return, and whether the thing they are paying for is compromised by the payment itself.

This episode did not create the structural problem with AI safety, but it highlights it. The sector that most needs independent oversight also has the greatest resources to influence what that oversight looks like. The nonprofits working in this field are chronically underfunded relative to the companies they are trying to assess, and governments have been slow to close the gap. Whatever its motivations, Common Sense is navigating that reality.

It's still unclear whether the companies approached will agree to fund the proposed institute, or whether it will ever launch in its current form. But the fact that this structure, industry-funded and industry-adjacent safety evaluation, is being proposed at all speaks to the current state of AI governance: not where the press releases say it is, but where it actually is.
