Why We Should Not Fear AI
Corporate Control of AI and Where the Power Truly Lies

“If anyone should fear AI, it should not be AI itself. Instead, it should be the corporations that hold the keys to AI.”
These words from Chamara Somaratne, the founder of Anthosa, help us cut through the usual noise surrounding artificial intelligence. Instead of worrying about science fiction stories where AI takes over the world, we need to focus on a more immediate and grounded concern. Right now, a small group of corporations is gaining incredible influence over how AI is built, used and governed. That power shift affects all of us.
Every day, AI tools shape what we see, what we read and sometimes even what we believe. Yet behind these systems sit a few major companies deciding how they work, who benefits and who is left behind. The question we should be asking isn’t whether AI will turn against humanity, but who gets to decide what AI becomes.
This article explores how corporate control over AI is shaping our future. It looks at where power lies, why that matters and what we can do to ensure that AI works in the public interest, not just for private profit. Our challenge is not to fear the machines, but to question the hands that hold the levers.
The New Power Brokers
If we want to understand the real power behind AI, we need to look past the code and into the boardrooms. Today, the future of artificial intelligence is largely shaped by a few major players. These are not governments or grassroots communities. They are powerful corporations with the capital, talent, and infrastructure to dominate this space. From the West, we have names you already know—OpenAI, Google DeepMind, Meta, Microsoft, Amazon, Apple, and Anthropic. From the East, giants like Baidu, Alibaba, Tencent, ByteDance, Zhipu AI, and DeepSeek are quickly rising. Together, these firms form a global AI duopoly led by the United States and China [Stanford HAI, 2023; China Academy of Information and Communications Technology, 2024; Lee, 2021].
This small circle of companies is not just pushing technology forward. They are determining how AI is built, where it is used, and who gets to benefit. Their influence stretches into our daily lives. When Google’s algorithm updates, it changes how billions of us find information. When Meta adjusts its recommendation system, it can shift public opinion and political conversations. These are not small tweaks. They are large, opaque decisions that shape society, often without our knowledge or input. Legal scholar Frank Pasquale described this as the black box society—a world where decisions that deeply affect us are made by invisible systems [Pasquale, 2015].
Part of what makes this consolidation so strong is what economists call a natural monopoly. Training large AI models takes massive amounts of computing power, high-quality data, and expert talent. Only the wealthiest corporations can afford to play at that level. Once they take the lead, it becomes even harder for others to catch up. These early movers attract the best researchers, gather the most user data, and refine their models faster than any newcomer could hope to do. It’s a self-reinforcing cycle that locks power in place [Khan, 2017].
China’s tech firms are not lagging behind. Baidu’s Ernie Bot rivals GPT-4 in Chinese language processing [aibusiness.com, 2024]. Alibaba’s Tongyi Qianwen model supports enterprise applications across industries [Alibaba Cloud, 2023]. Tencent’s Hunyuan is integrated into its massive social media and gaming platforms [Zhou et al., 2023]. ByteDance’s Cloud Whale improves content generation and recommendations [ByteDance Research, 2023]. DeepSeek’s 67B model has shown strong performance in coding and mathematics [Joshi, 2025], while Zhipu AI’s ChatGLM models are among China’s most widely deployed open-source tools [Centre for Data & Innovation, 2024].
What ties all these developments together is state support. China has declared AI a strategic priority and continues to invest heavily through national policies like the New Generation Artificial Intelligence Development Plan [China State Council, 2017]. These programmes give Chinese firms a roadmap and financial backing to scale quickly and compete globally.
So, what does this all mean for us? It means that the real story of AI is not just about clever algorithms or machine intelligence; it is about how power is shifting into fewer hands. These companies are not just building tools; they are building the rules. And those rules are shaping our economies, our democracies, and our everyday choices—often without our awareness or consent. That is where the real risk lies: not in the AI itself, but in who owns and directs it.
The Ethics Gap in Corporate AI Governance
Most companies were never built to handle technologies that shape our elections, healthcare systems or job markets. Yet today, a handful of them have become the gatekeepers of artificial intelligence. The problem isn’t just that they have so much power. It’s that they answer to shareholders, not to the public.
This gap between what’s profitable and what’s ethical is what philosopher Evan Selinger calls the “ethics gap.” It’s the growing distance between the influence AI systems have over our lives and the limited accountability of those who build them [Selinger, 2018].
Many companies have responded with ethics boards or public principles. But these efforts often feel more like PR than real change. Take Google’s Advanced Technology External Advisory Council. It was set up to provide outside guidance on AI ethics but collapsed within weeks after public backlash [MIT Tech Review, 2019]. Meta’s Oversight Board is still running, but it reviews only a small slice of decisions and has no control over how algorithms are designed [Klonick, 2020].
The deeper issue is that all these bodies operate inside companies whose top priority is profit. That tension becomes clear when internal criticism is punished instead of valued. When Google fired Timnit Gebru and Margaret Mitchell, two of its leading AI ethics researchers, it sent a message. Ethics can be explored, but only when it doesn’t challenge the business model [Wired, 2021].
This isn’t just about one company or one event. It’s a structural problem. Inside most tech firms, ethical checks have very little power compared to product timelines, investor expectations or PR strategies. If AI is going to play a major role in how we live and work, we need more than corporate promises. We need checks and balances that put people before profit.
Democratising AI with Alternative Models of Development and Control
If the concentration of AI in corporate hands worries us, there’s good reason. But we’re not without options. Around the world, individuals, communities, and governments are building new models that show us how AI can be developed in more democratic, transparent, and inclusive ways. These alternatives offer practical ways to balance innovation with the public good.
One major shift comes from the open-source movement. Projects like BLOOM, coordinated by Hugging Face through the BigScience workshop, and the open models released by communities such as EleutherAI, show that it’s possible to build large language models without corporate gatekeeping. BLOOM wasn’t built in a Silicon Valley lab. Instead, it came from a global community of over 1,000 researchers across more than 70 countries [BigScience Workshop, 2022]. What makes this different is not just the open code or the absence of profit motives. It’s the structure. Decisions are made collectively. Data is shared openly. Models are designed to be transparent, not opaque. This shows us that collaboration at scale is not only possible, it’s powerful.
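To see what that openness means in practice, here is a minimal sketch, assuming the Hugging Face transformers library and the small, publicly hosted bigscience/bloom-560m checkpoint: because the weights are openly published, anyone can download, inspect, and run the model locally. It illustrates open release, not how BLOOM itself was trained.

```python
# A minimal sketch: BLOOM's weights are openly published, so anyone with the
# `transformers` library can download and run a checkpoint locally.
# Assumes the small public checkpoint "bigscience/bloom-560m"; larger variants
# work the same way but need far more memory.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Open, community-built AI means"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```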
Governments are also stepping up. The European Union’s AI Act stands as one of the most comprehensive attempts to regulate AI in line with public values. It focuses on risk, transparency, and human oversight [European Commission, 2023]. In practice, that means AI used in sensitive areas like healthcare or law enforcement would need to meet high standards for fairness and accountability. Taiwan, too, offers a hopeful example. Its Civic AI programme brings developers, citizens, and public servants together to shape how AI is used. It’s not just about rules; it’s about participation. People help decide which problems AI should solve and how it should behave [Friedrich Naumann Foundation for Freedom, 2024]. This reminds us that AI is not something done to us. We can be part of shaping it.
We’re also seeing progress through global partnerships. Organisations like the Partnership on AI bring together players from tech, civil society, and academia to shape best practices. What makes these efforts promising is that no single voice dominates. Decisions are debated and refined through dialogue. UNESCO’s global recommendation on AI ethics works the same way. It involved 193 member states and set out principles like fairness, safety, and human rights as shared priorities. While these frameworks are not binding, they create norms that shape how companies and governments behave [UNESCO, 2021].
What do these models have in common? First, they decentralise power. They open the door for more people to participate in how AI is built and used. Second, they value transparency. That means making data sources public, explaining how models work, and letting people audit outcomes. Third, they focus on real-world impact. These efforts care less about commercial speed and more about long-term safety, fairness, and inclusion.
The choice isn’t limited to either full corporate control or heavy government regulation. These hybrid approaches offer us a third path. They remind us that AI development doesn’t have to be a winner-takes-all race. It can be a shared project, built with care, accountability, and trust.
By supporting these models, we don’t just get better technology. We get technology that reflects our values and serves our communities. That’s the kind of AI worth building.
Global Dimensions of Digital Colonialism and AI Inequity
When we talk about who controls AI, we also need to talk about where that control lives. Today, the majority of advanced AI systems are being developed in just a few countries, mostly in North America and China. This creates a serious imbalance in which countries across the Global South are largely left out of the development process and reduced to passive consumers of technology. The term “digital colonialism” helps describe this. It’s the idea that powerful nations and corporations are creating a new kind of dominance, not through land or armies, but through data, algorithms, and digital infrastructure [Kwet, 2019].
The real-world consequences of this imbalance are significant. Most AI systems are trained on datasets drawn from Western contexts, often in English. As a result, they work poorly for non-Western languages, cultural norms, and social structures. For example, natural language models may fail to understand or respond accurately to African or Indigenous languages. These errors are not just technical flaws; they reflect who gets seen, heard, and understood in the AI age [Bender et al., 2021].
Biases in AI don’t stop at language. Facial recognition technologies, for instance, have repeatedly been shown to perform worse on darker-skinned individuals and women. This happens because the training datasets used often lack diverse representation. When corporations fail to address these biases, they risk embedding discrimination into systems that affect everything from airport security to policing [Buolamwini & Gebru, 2018].
The environmental side of AI is also uneven. Training massive models requires huge amounts of electricity and water. Often, the physical infrastructure to support this, like data centres, is built in areas where local communities may bear the environmental cost without sharing in the economic benefits. Once again, marginalised regions end up paying the price for systems they had little role in shaping [Crawford & Joler, 2018].
These issues have sparked efforts around what’s now called “AI decolonisation”. This doesn’t just mean fixing bias in datasets. It means rethinking who gets to shape the technology in the first place. Community-led projects like Masakhane are showing what this could look like. Masakhane is building natural language tools by and for African communities, led by researchers who live and work in those regions. Their work shows how local voices can take control of how AI is designed and applied [Nekoto et al., 2020].
Another initiative, the Data Nutrition Project, focuses on improving the quality and transparency of training datasets. Their tools help developers examine where their data comes from, who is represented, and who is missing. This kind of reflection is essential if we want AI that works for everyone, not just those in Silicon Valley or Beijing [Holland et al., 2020].
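As a simple illustration of the idea, here is a hypothetical sketch of what a dataset “label” might record. The field names are ours, not the Data Nutrition Project’s actual schema; the point is that documenting provenance, coverage, and known gaps can become a routine part of building AI systems.

```python
# Illustrative only: a simple dataset "label" in the spirit of dataset
# documentation efforts. Field names are hypothetical, not the Data Nutrition
# Project's actual schema.
from dataclasses import dataclass, field

@dataclass
class DatasetLabel:
    name: str
    source: str                   # where the data was collected
    collection_period: str        # when it was gathered
    languages: list[str]          # languages represented
    known_gaps: list[str] = field(default_factory=list)  # who or what is under-represented
    intended_use: str = ""        # what the data was meant for

faces_v1 = DatasetLabel(
    name="faces_v1",
    source="web-scraped news photographs",
    collection_period="2015-2018",
    languages=["en"],
    known_gaps=["few darker-skinned subjects", "little coverage of the Global South"],
    intended_use="research benchmarking, not deployment",
)
print(faces_v1)
```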
In short, the concentration of AI power is not just a market issue. It’s a global equity issue. When our systems fail to recognise the language we speak, the faces we wear, or the communities we come from, they fail us altogether. The push for fairer, more inclusive AI has already begun, but it needs our support, our voices, and our leadership to go further.
Beyond Fear: Constructive Paths Forward
If we’re serious about shaping an AI future that works for all of us, we must look beyond fear and focus on action. It’s not enough to point fingers at Big Tech. We need to build structures that empower people, protect rights, and make sure AI systems serve public needs, not just corporate profits.
One practical step is to demand transparency. Just as we expect environmental impact reports, we should require algorithmic impact assessments that examine how automated systems affect us in hiring, healthcare, or public services. New York City’s algorithmic accountability law is a promising example: Local Law 144 of 2021 requires independent bias audits of automated employment decision tools before employers can use them in hiring [New York City Council, 2021].
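To make the audit idea concrete, here is a minimal, hypothetical sketch of one check such an assessment might include: comparing selection rates across groups and reporting each group’s impact ratio. The group names and outcomes are invented for illustration; real audits work from actual decision data and far more detailed category breakdowns.

```python
# Hypothetical illustration of a bias-audit style check: compute each group's
# selection rate and its impact ratio relative to the highest-rate group.
# The outcomes below are made up for the example.
from collections import defaultdict

# (group, was_selected) pairs from a hypothetical screening tool
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: {"selected": 0, "total": 0})
for group, selected in outcomes:
    counts[group]["total"] += 1
    counts[group]["selected"] += int(selected)

rates = {group: c["selected"] / c["total"] for group, c in counts.items()}
highest = max(rates.values())
for group, rate in sorted(rates.items()):
    # An impact ratio well below 1.0 flags a disparity worth investigating.
    print(f"{group}: selection rate {rate:.2f}, impact ratio {rate / highest:.2f}")
```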
We also need stronger data rights. The European Union’s General Data Protection Regulation (GDPR) gives individuals the right to know how their data is used and to challenge automated decisions [European Union, 2016]. Expanding such protections globally would help rebalance the power held by corporations over our personal data.
Workers in AI are another key force. Organising efforts like those led by the Tech Workers Coalition show how employees can push for ethical accountability from within. Recent research highlights the growing wave of labour activism in the tech sector, where professionals, once considered unlikely organisers, are now taking collective action to influence corporate decisions and challenge unethical practices [Tan, Nedzhvetskaya, & Mazo, 2023]. When staff speak out, as seen in high-profile cases at Google and other major firms, they disrupt the unchecked influence corporations hold over AI development.
The Public Interest Technology movement offers a hopeful path, too. It equips experts to build tech that serves communities, not just shareholders [New America, 2022]. This shift is already underway in university programmes and nonprofit labs.
Finally, public understanding matters. Most people don’t realise how AI shapes their lives. Finland’s “Elements of AI” course has trained over 1% of the population, showing that algorithmic literacy can be scaled effectively [University of Helsinki, 2023]. With knowledge comes agency, and that’s one of our best tools to ensure AI works for everyone.
Conclusion
The real danger we face with AI isn’t a sci-fi future filled with rogue machines. It’s something happening right now: the growing control of AI by a small group of powerful corporations. When we look closely, it becomes clear that these companies hold immense influence over how AI is built, used, and governed. That should concern all of us.
This isn’t just about technology; it’s about who decides how that technology shapes our lives. It’s about how AI influences your business, your workforce, your access to healthcare, your information, and even your democracy. When those decisions are made behind closed doors by companies accountable only to shareholders, we all lose a measure of agency.
To build a better future with AI, we need more voices at the table. This includes public institutions, civil society, researchers, workers, and the communities most affected by these systems. We require stronger rules, greater transparency, and improved education so that everyone can participate in shaping the evolution of AI.
As business leaders and innovators, it is our responsibility to ensure AI serves the public good, not just private gain. It’s not about fearing AI but about recognising where the real power lies and choosing to distribute that power more fairly, wisely, and humanely.