Speaking of Witch

AI Mirrors Humanity—And That's the Problem


It has been more than two years since artificial intelligence entered the consciousness of everyday people with the launch of ChatGPT on November 30, 2022. At first, the air around it seemed thick with magic and possibility. But as more and more people became familiar with it, its higher-order consequences emerged, and public sentiment soured. At this point, AI appears firmly on its way into the "Trough of Disillusionment" phase of Gartner's hype cycle. Now is the perfect moment, then, to reflect on where things went wrong and consider where we go from here.

The Dark Side of Accessibility and Scale

The democratization of powerful AI tools—large language models, diffusion models, and generative AI—has dramatically lowered barriers to entry. Among the first to exploit these tools were the usual suspects: the greedy and the lustful. Simply posting photos online now exposes you to plausible deepfakes created and distributed entirely without consent. Whether it's human-like profiles sliding into your DMs or sophisticated phishing attempts targeting your e-wallet, the quality of online interactions has noticeably deteriorated. Traditional signals used to dismiss low-effort content became useless overnight. Now, every interaction demands vigilance, and it's easier than ever to make mistakes that haunt you later. Worse still, defensive measures against these threats often catch innocent people in the crossfire. Accusations of bot-like behavior or AI-generated content consume communities and derail genuine discussions.

Beyond misinformation and scams, AI's rapid proliferation carries significant environmental and societal costs. Training large AI models consumes enormous amounts of energy, contributing substantially to climate change. The relentless pursuit of ever-larger models and datasets exacerbates this environmental footprint, raising urgent questions about sustainability and responsibility.

Additionally, aggressive data-scraping practices impose hidden burdens on web administrators and content creators. Websites face increased costs and resource demands, pushing them toward restrictive measures such as paywalls, authentication barriers, and anti-DDoS protections. These defensive measures erode the open, collaborative spirit that once defined the internet, transforming it into a fragmented landscape of gated communities and restricted access.

AI's rapid advancement has outpaced our ethical and legal frameworks, creating profound dilemmas around privacy, intellectual property, and economic justice. Companies are incentivized to scrape vast amounts of data without consent or compensation, exploiting creators' work without acknowledgment or remuneration. Artists, writers, and content creators rarely benefit from their contributions being used to train AI models, raising fundamental questions about fairness and ownership. Automation disproportionately affects vulnerable workers, exacerbating existing inequalities. While AI promises efficiency and productivity, it also threatens livelihoods, raising urgent questions about societal responsibility and economic justice.

AI as a Mirror of Human Flaws

When we examine the very real problems caused by AI proliferation, a common thread emerges: these issues are far removed from the theoretical "paperclip maximization" scenarios that the AI safety field once feared. Instead, they represent humans inflicting harm upon other humans—a tale as old as time. Artificial intelligence is often portrayed as a mysterious force, an autonomous entity capable of reshaping society for better or worse. Yet the current generation of AI possesses no inherent agency; its morality is defined by the creators who curate its training data and the operators who influence its decision-making.

AI systems do not spontaneously generate greed, prejudice, or unethical behavior. Instead, they amplify and accelerate flaws already embedded within human society. When AI perpetuates biases, it is because humans have fed it biased data. When AI facilitates scams or misinformation, it is because humans have chosen to exploit its capabilities for personal gain. AI is neither inherently good nor inherently evil—it is a neutral tool, reflecting the intentions and values of its creators and users.

Yet, paradoxically, humans often hide behind AI as a shield, deflecting accountability onto algorithms and machines. By blaming AI, we conveniently avoid confronting the uncomfortable truth: the root of these problems lies within ourselves.


Who Aligns the Aligners?

AI's lack of agency means harmful outcomes ultimately stem from human intentions. Whether through negligence, ignorance, or deliberate manipulation, humans shape AI's behavior. Governments and powerful entities leverage AI for surveillance, propaganda, and geopolitical influence, embedding intentional or unintentional biases driven by political, economic, or social pressures.

Thus, the question of AI safety and accountability is fundamentally a question of human ethics and intentions. AI does not act independently; it acts as a proxy for human values, biases, and agendas.

AI labs and futurists often sell the dream of technological singularity—the idea that advanced AI will seamlessly align with human values and ethics, solving humanity's greatest challenges. The timeline usually looks like this:

  1. We burn vast resources to produce a super-powered AGI.
  2. We align it using a set of rules reflecting humanity's goals.
  3. AGI hands us the keys to reality.
  4. ...
  5. Happily ever after.

It is a beautiful picture, but it quickly unravels upon closer inspection. A unified code of ethics and morality has long been an appealing prize for thinkers, yet humanity has never achieved consensus on ethics, morality, or values. Every ethical framework, no matter how well-intentioned, invites misuse, harbors contradictions, or leads to morally repugnant conclusions. This failure mode is so common and well-explored that we have a saying for it: "The road to hell is paved with good intentions."

But let's imagine alignment succeeds and the keys to reality are handed to humans. Who actually receives and uses them? Given real-world power hierarchies, the AGI will hand them to researchers, who will pass them to lab directors, who will inevitably hand them to governments. This is where things become truly frightening. Considering the locations of the largest AI labs and humanity's historical record, it is only natural to assume that the moment humans receive these keys, they will use them against other humans. What ordinary people experience will likely resemble less "fully automated luxury space communism" and more "bioreactors powered by minorities, immigrants, and enemies of the state".

This provocative scenario exposes a deeper truth: AI alignment is not merely a technical challenge—it is fundamentally a human challenge. Until we confront our own ethical shortcomings, biases, and divisions, AI will continue to reflect and magnify our flaws.

Conclusion: Facing Our Reflection

AI is not the root cause of our societal problems; it merely magnifies and accelerates existing human flaws. To address AI's negative impacts, we must first confront our own ethical shortcomings, greed, biases, and divisions. Blaming AI is convenient but ultimately misguided. The responsibility lies squarely with us.

As we stand before the mirror of artificial intelligence, we must ask ourselves difficult questions: What values do we truly hold? What kind of society do we wish to build? And most importantly, are we willing to take responsibility for the reflection staring back at us?

Only by honestly confronting these questions can we hope to shape AI—and our collective future—in a way that reflects the best, rather than the worst, of humanity.