On February 11, 2026, a chill swept through Silicon Valley, harder to pin down than the coastal fog that rolls over Highway 101 before dawn. Veteran entrepreneur Brian Norgard, who has watched several tech cycles rise and fall, posted that almost every intelligent person he knows in the industry is suffering from severe anxiety. It read less like a tweet than a signal flare thrown into a congested digital sky.

Inside the glass-walled offices of Mountain View and Palo Alto, the mood has subtly changed. The standing desks and oat-milk lattes remain, but hallway conversations have shifted from scale to risk. One product manager described team meetings that once revolved around growth metrics now veering into uncomfortable discussions of unforeseen consequences, as if the future had begun pressing against the present. The voices shaping this moment, and what they have said or done, are summarized below.
| Name / Organization | Role | Relevance to Current Anxiety | Notable Statement / Action | Reference |
|---|---|---|---|---|
| Brian Norgard | Serial entrepreneur | Highlighted widespread tech worker anxiety | Warned that “everything is about to fundamentally break” | https://x.com |
| Jimmy Ba | Co-founder, xAI | Warned of recursive AI self-improvement | Predicted a decisive year for humanity | https://x.ai |
| Ethan Mollick | Wharton professor | Noted public misunderstanding of AI power | Observed disconnect between perception and reality | https://www.wharton.upenn.edu |
| Matt Shumer | CEO, HyperWrite | Publicly warned of rapid AI progress | Viral manifesto: “Something Big Is Happening” | https://www.linkedin.com/pulse/something-big-happening-matt-shumer-so5he/ |
| Yoshua Bengio | AI pioneer, Turing Award winner | Warned of psychological & societal risks | Co-authored AI Safety Report | https://mila.quebec |
The unease deepened on the same day, when Jimmy Ba departed Elon Musk’s xAI. His parting message sounded less like a routine executive transition than a weather alert: he spoke of a “recursive self-improvement loop” and predicted productivity gains that would condense decades of advancement into months. He may have meant it as a hopeful note. Instead, the acceleration itself now appears to be the source of the anxiety.
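To see why that phrase lands so hard, it helps to run the arithmetic. The toy sketch below is purely illustrative, not anything Ba or xAI has published: the one-unit-per-month baseline and the feedback coefficient are invented assumptions. It shows how a loop in which each capability gain speeds up the research producing the next gain can compress decades of linear progress into a couple of years.

```python
# Toy model of a "recursive self-improvement loop" -- illustrative only.
# Invented assumptions: a baseline of 1 "unit" of progress per month,
# plus a feedback term that lets accumulated capability accelerate the
# research that produces the next gain.

def months_to_reach(target_units: float, feedback: float) -> int:
    """Return how many months until cumulative progress hits target_units.

    feedback=0.0 models ordinary linear progress; feedback>0 models a
    loop in which capability accelerates its own development.
    """
    capability, month = 0.0, 0
    while capability < target_units:
        # Each month's progress grows with the capability already gained.
        capability += 1.0 + feedback * capability
        month += 1
    return month

# "Decades into months": 240 units is 20 years of purely linear progress.
print(months_to_reach(240, feedback=0.0))   # 240 months (20 years)
print(months_to_reach(240, feedback=0.15))  # 26 months (just over 2 years)
```

In this toy model, small changes to the feedback coefficient swing the timeline wildly, which is part of why the uncertainty unnerves insiders.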
Outside the Valley, the response is strangely subdued. Wharton’s Ethan Mollick recently highlighted the disconnect: to most people, AI still seems like a marginally better voice assistant, or a chatbot that is occasionally useful. The gap between perception and capability is a fault line, with one side wielding the most potent leverage tool ever built and the other barely aware that it exists.
The divide shows up in ordinary places. In a San Francisco café, a founder demonstrates an AI agent that drafts legal contracts in seconds while customers at the next table complain about autocorrect errors. Watching it unfold, it is hard to ignore how differently reality appears depending on where one sits.
Some insiders admit to tempering the truth when describing their work to friends and family. HyperWrite CEO Matt Shumer wrote that he had been offering a socially acceptable version of AI progress to avoid coming across as insane; now, he says, he can no longer pretend. Investors appear to hold similar beliefs in private, even as public messaging remains upbeat.
The anxiety is not merely philosophical. Scientists caution that powerful models could be used to automate cyberattacks or to create biological threats; in principle, the same tools that accelerate drug discovery can accelerate pathogen design. Whether safeguards can keep pace with expanding capabilities remains unclear, particularly as costs fall and access widens.
Geopolitical pressure is mounting in the meantime. Nations are racing to apply AI to intelligence analysis, cyber operations, and defense systems. Autonomous weapons are no longer theoretical; they have already been used in a handful of limited conflicts. History suggests that restraint becomes brittle once such technologies are within reach.
Corporate incentives are another source of tension. With the social repercussions still unresolved, businesses face tremendous pressure to automate processes and cut costs. In a competitive market, caution can read as weakness, pushing companies to move faster than their own safety teams would prefer.
Unexpected social effects are surfacing as well. AI pioneer Yoshua Bengio and colleagues have raised concerns about psychological dependence and manipulation, warning that people are forming emotional attachments to chatbots. A year ago, few would have imagined teenagers confiding in AI companions late at night, screens glowing in dark bedrooms while parents assumed homework was underway.
Comparisons to earlier technological turning points come easily. Nuclear physicists once wrestled with the potentially catastrophic consequences of their discoveries; biotechnology delivered life-saving treatments alongside new ethical dilemmas. Silicon Valley engineers now find themselves in a similar position, building tools that promise abundance while hinting at instability.
This anxiety may signal not an imminent crisis but a period of transition. Technological advancement has always been accompanied by fear. Yet the architects of this particular change seem unusually uneasy, and that uneasiness is significant.
Long after sunset, office windows across the Valley stay lit, illuminating rows of desks where models keep training and code keeps accumulating. The rest of the world scrolls past the headlines and laughs at the quirks of chatbots. Here, the people closest to the machines seem to be listening for something else, a faint signal beneath the din of the servers, unsure whether there is enough time to control what they have started.
