Expectations in the tech world have always centered on “tech bros”: everyone expects the next eccentric genius to emerge, develop a new app, and lead the change. The app might be built for entertainment, providing direct access to sites such as Fortunica Casino, or it might be a fintech product. Either way, the tech industry is built on hype cycles.
But that’s changing now: the next breakthrough won’t come from a tech genius who develops a flashy app. With the emergence of AI, the industry’s power centers have quietly shifted: standards committees, regulatory task forces, procurement offices, and liability frameworks are now the true drivers of change.
This may not be the romantic AI revolution we envisioned, but these power centers will shape the next decade. Those who write the rules will have more influence than those who create the apps: let’s take a look at how this will happen.
The Misconception: “The AI Revolution Will Be Decided by Apps”
Until now, shipping the latest features as quickly as possible has been what mattered most in the tech industry. Designing the right UX, setting trends with the right features, and staying ahead of the growth curve were the job of tech bros. You know the stereotype: eccentric individuals who don’t look like billionaires, the so-called “superstars.”
But AI has changed this pattern for good. The industry’s defining decisions are no longer made by tech bros; policies now determine what AI can automate, where it can be deployed, and how it should be evaluated. And those policies are being written by authorities outside the traditional tech world, such as:
- EU policymakers, writing the AI Act.
- US agencies, defining safety and reporting rules.
- NIST committees, setting evaluation standards.
- Chinese regulators, running algorithm registries that impose transparency obligations on AI systems.
And perhaps for the first time, these rules are advancing faster than the tech industry itself. This means that change will be spearheaded by “committees,” not eccentric billionaires, because they decide what AI and the apps that use it can do.
Why This Shift Happened
Truth be told, this was inevitable, and the tech industry brought it on itself. Over time, every AI lab converged on the same techniques, architectures, and training tricks. The performance gap between models narrowed considerably, until the only differences were obscure benchmark numbers that meant nothing to regular users. In other words, AI became predictable, industrialized, and commoditized rather than revolutionary.
Because every AI company was shipping roughly the same models, governance became one of the few remaining ways to create a competitive advantage. Regulators recognized this and moved in quickly, and AI companies didn’t object, knowing that how they governed their models was about the only differentiator they had left.
The Policy Layer Is Quietly Becoming the Power Layer
In a short time, the policy layer has become the leading force behind AI applications. The European Union now has an AI Act that classifies AI systems by risk and determines which applications are legal. In the US, new safety and reporting protocols have been introduced through executive orders. China has mandated algorithm transparency by creating a registry.
All of these regulations are shaping the playing field: committees now decide what counts as high risk, what must be audited, what requires provenance, and who is held responsible when AI causes harm. AI applications can no longer move beyond the boundaries these committees set. In other words, a world outside tech is now steering the tech world.
The New Talent Migration
Whether this change is what AI labs wanted is debatable: perhaps they lost control while trying to leverage governance as a competitive advantage. Whatever the truth, the shift is driving a new talent migration. People who previously worked as senior researchers, ML engineers, and PMs are gradually moving into governance, assurance, and policy roles.
This means they are taking on positions where what they shape is not features but rules: safety thresholds, interpretability requirements, evaluation methodologies, disclosure norms, and industry consortium standards.
And this is a change with enormous implications: even a single change to a risk category could make some AI applications illegal to launch in certain markets, and a lab that misses one reporting requirement could be forced out of a market or face significant costs to get back into compliance.
The future no longer belongs to eccentric billionaires or the coders who build the flashiest app. Those who write the regulations will determine what that future looks like. Whether this leads to excessive restrictions or to applications that are far more beneficial to people remains to be seen. But it should not come as a surprise: throughout history, regulation has ultimately determined the direction of every technological revolution, and the same will be true for AI.