Big Tech's Dangerous Grip on AI Regulation: Why We Should Be Worried | World Briefings

16 September, 2024 - 4:30PM

Over the past year, the regulation of AI has been a hot political topic in Europe and around the world. Politicians and regulators have attempted to respond to the new risks created by the rapid development of AI, from surveillance to AI’s impact on the media and on democracy. Some jurisdictions, such as the EU, have adopted a dedicated rulebook for AI, while others have taken a more cautious approach. But certain issues, like the proliferation of AI-powered disinformation, seem to be testing the limits of what the law can accomplish.

The threats posed by digital systems are complex and far-reaching. New technologies are dramatically widening global inequality, and tech giants have emerged as massive energy users, with serious implications for climate change and the environment. Perhaps most worrying are the near-constant violations of the right to privacy, owing to the lack of data security or protections against surveillance. It is standard industry practice for vast amounts of data to be collected and sold to the highest bidder. As a result, digital platforms seem to know us better than we know ourselves, and life online is awash in economic and political manipulation.

Moreover, algorithmic manipulation and disinformation have already been shown to threaten the proper functioning of democracy. Ahead of the 2016 presidential election in the United States, for example, the political consulting firm Cambridge Analytica harvested information from as many as 87 million Facebook users in an attempt to sway voters. The company and its affiliates had likewise misused data in an effort to influence the United Kingdom’s Brexit vote.

More recently, the rapid development of large language models such as OpenAI’s ChatGPT has opened up new avenues for fraud, including audio and visual deepfakes that can destroy reputations. LLMs have also facilitated the spread of fake news, a scourge that is most acutely felt in democracies, where a flood of AI-generated content threatens to drown out quality journalism and to destabilize entire countries within hours, as happened during the recent far-right riots in the UK. The same strategies can also be used to hoodwink consumers.

But that is not all: use of social media has been associated with significant mental-health harms for young people. And many in the field have expressed concern about the disruptive impact that AI-enabled cyberattacks and autonomous weapons could have on international peace and security, not to mention the existential risks such weapons pose.

Big Tech firms have consistently shown little concern about harming people and violating their rights. That is especially true for social-media companies, which generally earn more in advertising revenue the longer that users stay on their platforms. In 2021, a whistleblower provided documents showing that Facebook knew that its algorithms and platforms promoted damaging content but failed to deploy meaningful countermeasures. That should come as no surprise: studies have found that users spend more time online when expressing hate, anger, and rage.

Despite its unwillingness to police itself, Big Tech wants to help devise regulations for the digital sphere and AI. Giving these companies a seat at the table is both ironic and tragic. Governments and the international community are allowing these behemoths to dominate the process of establishing a new global regulatory framework and oversight mechanisms. But entrusting regulation to those who profit from the sector’s fundamental problems is a dangerous mistake.

The good news is that there are plenty of independent experts and academics who can provide valuable input about how best to regulate the development and use of AI and other digital technologies. Of course, the private sector must be involved in such policymaking processes, but not more than other stakeholders, including civil-society organizations.

AI Regulation: A Collaborative Effort?

Yet when it comes to regulating the digital transformation and artificial intelligence, both of which pose myriad risks, policymakers are doing the opposite. They are collaborating with Big Tech companies such as Meta (Facebook), Alphabet (Google), Amazon, Apple, and Microsoft, even though these companies’ executives have demonstrated a brazen willingness to create dangerous tools and harm users in the name of maximizing profits.

For example, national, regional, and international “working groups,” “expert groups,” and “advisory boards” that include representatives from Big Tech companies are preparing proposals to regulate the digital transformation and AI. Beyond that, some initiatives and conferences on this topic are funded by the very companies those endeavors aim to regulate.

AI's Impact on Democracy: A Looming Threat

The rapid development of AI, particularly of LLMs like those behind ChatGPT, has heightened concerns that AI-generated disinformation could further destabilize democracies. These systems can create convincing deepfakes and generate vast amounts of fake news, posing a significant threat to the credibility of information and the integrity of democratic processes.

While the potential benefits of AI are undeniable, it is crucial to address the risks associated with its unchecked development. AI has the potential to be a powerful force for good, but only if its development and deployment are carefully managed and regulated in a way that prioritizes human values and safeguards against potential harms.

The Future of AI: A Call to Action

Technological innovation should no longer serve only the interests of a few multinational corporations. To ensure a sustainable future in which everyone can lead dignified and prosperous lives, policymakers must not allow tech giants to steer the regulation of digital platforms and emerging AI applications. We need a global regulatory framework that is inclusive, transparent, and accountable, one that safeguards human rights, protects democracy, and ensures the responsible development and deployment of AI for the benefit of all.

Maria Garcia

Editor