Where Are We Now? Ted Kaczynski and Technological Society
By HarryC
Ted Kaczynski, otherwise known as the 'Unabomber', died in June 2023, twenty-five years into the eight life sentences he received for his crimes. Between 1978 and 1995, Kaczynski murdered three people and injured twenty-three others in a US-wide bombing campaign, waged largely through the post, against those he believed to be advancing technology and thus hastening the destruction of the natural environment. In his 'manifesto', Industrial Society and Its Future, Kaczynski further argued that modern technology is eroding human freedom and fulfilment, and creating psychological as well as environmental crises. He believed that, because the system is self-perpetuating, reform is futile - and that the only way to save the world was therefore to dismantle industrial society altogether. In a word: destroy it.
As horrific as his crimes were, his manifesto (first published in The Washington Post in 1995) resonated with many people around the world. It's a rambling, sometimes incoherent document in which he mainly seems to blame 'leftists' for the continuance of mass-industrialisation. But whilst his 'solutions' (like many of his arguments) are extreme and ethically unconscionable, his diagnosis of the potential dangers of technological dependence and centralisation was prescient then, and remains so now - perhaps, it could be argued, even more so. His warnings raise concerns about human autonomy, vulnerability to failure, and the psychological and societal impact of living in an increasingly artificial and controlled world. The world that, arguably, we now have.
The internet, smartphones and social media have become essential for communication, commerce, education - even daily routines such as shopping, parking and navigation. Whilst these technologies offer immense and undeniable convenience, they also vastly increase society's vulnerability. If critical systems (such as the internet, power grids, or mobile networks) were disrupted, the consequences could be devastating. We recently witnessed an example of this with the July 2024 CrowdStrike security update, which crashed Windows systems around the world (the dreaded BSOD, or 'Blue Screen of Death'). The resulting IT outage grounded airline fleets in the US and stalled other essential services such as banking, healthcare and food supply chains. Businesses that no longer accept cash lost tens or hundreds of thousands in sales. And all of that was down to one small file, and one small mistake. So think what a dedicated and deliberate cyber-attack might achieve.

Vladimir Putin has recently threatened retaliation against NATO countries if the long-range missiles they've supplied to Ukraine are used to attack targets inside Russia. This needn't simply be more nuclear sabre-rattling: cyber-warfare is, if anything, now the predominant fear. With networks compromised or downed, societies could collapse. It's the scenario depicted in the film 'Leave the World Behind'. Science fiction, of course. But like the best science fiction, it offers a highly plausible vision of the future - and one that's perhaps not too far ahead of us. As the Ethan Hawke character says at one point: "Without my satnav and my cell phone, I'm lost. I'm a useless man." It's a funny moment, but it's poignant too. Is this what our dependence could actually lead us to?
Another problem is that human adaptation to technology isn't keeping up with the speed of development. For all the efficiency and connectivity technology provides, it also reshapes human behaviour and relationships, and can have huge impacts on mental health and social cohesion. The rise of social media, for instance, has been linked to anxiety, depression, bullying, social isolation and even suicide - especially among younger generations. This supports Kaczynski's argument that technology creates psychological distress by disconnecting humans from their natural environment and deeper needs. Moreover, the flood of information (and misinformation) in the digital age has made it harder for people to discern truth from falsehood, leaving them more susceptible to manipulation and control. Witness the violent street riots that followed the horrific Southport murders, fuelled by false claims spread online. Because social media algorithms shape user behaviour by prioritising engagement (encouraging the addiction we all see), sensationalism, propaganda, false information and public outrage are amplified - a minimal sketch of how such ranking works follows below. Again, this aligns with Kaczynski's warnings about how technology can undermine and erode human autonomy and agency.
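To make that mechanism concrete, here is a minimal, hypothetical sketch of an engagement-first feed ranker. Everything in it - the post fields, the weights, the example posts - is invented for illustration; real platforms use vastly more complex models, but the core objective of sorting by predicted engagement is the same in spirit.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float  # model's estimate of click-through rate
    predicted_shares: float  # model's estimate of share rate

def engagement_score(post: Post) -> float:
    # The objective rewards attention, not accuracy or calm; shares are
    # weighted more heavily because they spread content further.
    return 2.0 * post.predicted_shares + 1.0 * post.predicted_clicks

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sort the feed purely by predicted engagement, highest first.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Measured local council report", predicted_clicks=0.02, predicted_shares=0.01),
    Post("Outrageous rumour about a rival group", predicted_clicks=0.15, predicted_shares=0.12),
])
print([post.text for post in feed])  # the rumour ranks first
```

Nothing in that objective asks whether a post is true or humane; content that provokes a reaction simply wins the sort, and outrage is amplified as a side effect.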
As technology becomes more integrated into the fabric of society, power is increasingly centralised in the hands of a few large corporations and governments - and even individuals. Control over communication networks, data and artificial intelligence systems is highly concentrated, leading to power imbalances. In spite of regulations such as the GDPR, the ordinary individual has less and less control over how their data is used, how much privacy they can have, or even how their world-view is shaped. Many are even happy to surrender that privacy on social media. The familiar argument of 'I have nothing to hide, so I have nothing to fear' is wrong on both counts. Everyone has something to hide, and it needn't be a dirty secret - an affair, or a crime. Do we want the whole world to know our medical records? Financial details? Children's details? Passwords? This again echoes Kaczynski's concerns about the loss of individual autonomy. The decisions and directions of society are largely dictated by those who control the technology, whether corporations like Google and Meta, or state actors engaged in mass surveillance. And the more our addiction and dependence grow, the more powerful and influential those corporations and actors will become.
And now we have new concerns over the development of AI. AI can undoubtedly bring huge benefits to mankind: cures for diseases, automation, better efficiency and decision-making. But the other side of that coin is more daunting. It could also accelerate the centralisation of control, raising new risks. Again, the development of AI is driven by a small number of key players, meaning the technology could easily be used for mass surveillance, manipulation, or even warfare. And as AI takes on more roles in society, humans may find themselves further distanced from meaningful work and creative autonomy. Kaczynski would likely have seen this as another step in the dehumanising march of technological progress.
Let's consider, for a moment, our first real encounter with AI: so-called 'curation' AI - better known as social media. The problems this has created globally have already been alluded to, but here's a reminder: information overload, addiction, doomscrolling, influencer culture, online harassment, sexualisation of children, shortened attention spans, filter bubbles, grooming, polarisation, deepfakes, cult factories, breakdown of democracies, fake news, mental health impacts... and on and on. Are we anywhere close to solving those problems? It appears not. And this is just our first experience - far down the scale compared with the advanced AI now being developed and rolled out. Since AI is fundamentally a general-purpose technology, it can be adapted to a wide variety of uses, both constructive and destructive. If we haven't learned how to tame one beast, how are we going to fare when a far bigger and infinitely more powerful and sophisticated beast comes along? Kaczynski's point was that the system, driven by technological innovation, develops its own momentum, and once that momentum reaches a certain level, humans lose the ability to control it. AI and machine learning systems can already operate in ways even their creators don't fully anticipate. One reported example: an AI system that, using just 1% of Persian words, learned to speak and understand the language fluently - and its developers still do not know how it managed this. This raises ethical and existential concerns, as technology begins to influence decisions at societal levels (from governance to warfare) without sufficient human oversight or accountability.
The challenges we face now, with technological advances racing ahead of us, are daunting to contemplate. We can think back to dire crises we've faced in the past as a civilisation - the Cuban Missile Crisis of 1962, for example. For those thirteen days, the world stood on the brink of nuclear apocalypse, with no one knowing which way it would go. But then human agency - rational heads and diplomacy - won out. And as advanced as we were then, with the nascent space race that would soon send us to the moon, that era looks like ancient history compared to where we are now.
So... how do we begin to address these current and more pressing problems? Governmental regulation is an obvious starting point. However, with AI development being driven by the global race of a few corporations to dominate the field, regulation becomes trickier to enact. Nations and corporations are competing to be leaders in AI because of the enormous economic, military and geopolitical advantages it offers. This creates strong incentives to prioritise speed over caution, and innovation over safety. Some of the major challenges we face are the following:
- Regulatory Lag: Technological advancements in AI often outpace the ability of governments and regulatory bodies to keep up. The complexity of AI systems, combined with their rapid evolution, makes it difficult for regulators to effectively address emerging risks.
- Global Competition: Companies and countries may resist strict regulation because they fear it will slow down their progress and give rivals a competitive edge. This creates a race to be the first in fields like autonomous weapons, advanced AI-powered surveillance, or economic AI applications.
- Concentration of Power: As already mentioned, AI development is often concentrated in a few powerful corporations (like Google, OpenAI, or Tencent) or countries (such as the U.S. and China). These entities have significant influence over the direction of AI development, and their priorities may not always align with the broader public interest.
- Climate Change: AI consumes huge amounts of energy. This not only drives a massive increase in carbon emissions, but also strains crucial national grid infrastructures, which may simply be unable to keep up with the accelerating demands that AI and digital services place upon them (see comment below; a rough illustration follows this list).
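To get a feel for the scale, here is a deliberately crude back-of-envelope estimate. Every figure in it is an assumption chosen for easy arithmetic, not a measured statistic - real per-query costs, query volumes and household usage vary widely.

```python
# Illustrative estimate only: all inputs are assumptions, not measurements.
WH_PER_QUERY = 0.3               # assumed energy per AI chat query (watt-hours)
QUERIES_PER_DAY = 1_000_000_000  # assumed one billion queries per day worldwide
HOME_KWH_PER_DAY = 30            # assumed daily electricity use of one household

daily_kwh = WH_PER_QUERY * QUERIES_PER_DAY / 1000  # convert Wh to kWh
households = daily_kwh / HOME_KWH_PER_DAY

print(f"{daily_kwh:,.0f} kWh per day")               # 300,000 kWh per day
print(f"= {households:,.0f} households' daily use")  # 10,000 households
```

Even with these toy numbers, inference alone draws the daily electricity of a small town - and training runs and data-centre cooling come on top of that.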
Solutions, then, will require more than the cool heads and diplomacy that saw us through the Cold War. Some potential ways forward include:
- International cooperation: AI regulation is most effective when it's coordinated globally. This could involve agreements similar to nuclear non-proliferation treaties, where countries agree on certain restrictions or ethical principles around the development of AI. The EU's recently adopted AI Act, for example, already aims to address some of the technology's risks, including bias, transparency, and accountability.
- Transparent and explainable AI: Developing AI systems that are more transparent and explainable can help address some of the concerns around accountability. If AI decisions are more understandable, it becomes easier to regulate their use and prevent harmful outcomes like discrimination or misinformation.
- Incentives for ethical AI: Governments and corporations could work together to support the development of AI technologies that prioritise safety and ethics. This could involve public funding for AI research that meets ethical standards, or rewards for companies that prioritise responsible AI development.
- Independent oversight bodies: Establishing such bodies could help monitor the development and deployment of AI systems. They could ensure that AI is being used responsibly, audit algorithms for bias or unethical behaviour, and impose penalties for misuse.
- Public engagement: Since AI is reshaping every aspect of society, it’s crucial for the public to be part of the conversation about how it should be developed and regulated. This could lead to democratic processes that influence AI policy and regulation. Without this, decisions about the future of AI may be made by a small group of elites, further concentrating power and sidelining concerns about the broader societal impacts.
In summary, our over-dependence on technology, and our lack of proper adaptation to it, lay us open to many future problems which could easily undermine civilisation as we know it: the increasing centralisation of power and the loss of human autonomy, as already mentioned, but perhaps most importantly the fragility of the system - the way it leaves us all so vulnerable to cyber-terrorism, hackers, hostile governments, and so on. In this sense, Kaczynski's argument that technology could lead to societal collapse if it becomes too complex and uncontrollable seems increasingly relevant. If society cannot function without its technological infrastructure, and if that infrastructure is vulnerable to disruption, the scenario of collapse becomes ever more likely.
In the end, it won't be down to extremists like Ted Kaczynski to destroy technological society. We'll do it ourselves, by allowing it to take us to the point where we no longer have the power to defend ourselves against it. And the seeds of our destruction will be right there, in everyone's hands... waiting for our next swipe, input, post, comment or command.
This is why it's imperative that we act now, and quickly - in ways outlined above - to ensure that technology, and our futures, remain under our control.
Comments
I don't normally read this kind of information. But you make so many valid points that I just had to comment. It sounds like Ted Kaczynski had some notions of where technology is taking us that compare to my feelings.
Like you said there are many areas of technology that are used for good, as in the medical profession, and finding new cures for diseases. The problem is that it doesn't stop there and abuse comes at a price.
I don't know how long I can go on refusing to keep up with the times. I'm just glad I had a good younger life and didn't have to go through what young people do today, even though they've never known anything different.
That saying - 'we never had it so good' in the 60s, 70s, 80s and 90s - is so true... well, for me anyway.
I found your written piece most compelling reading.
Jenny.
An interesting and well written piece, thank you Harry
phone-centric is a good way of putting it. AI is a danger. One among many.
I am sorry, I had missed this. You have made the argument very clear. Like you, I see this as madness.
The one thing you don't mention is that powering AI will necessitate continuing reliance on fossil fuels, and will therefore speed up, and make irreversible, the mass destruction of climate change.
All this to make a very, very few people incredibly rich.
AI should stand for Absent Intelligence
You are so good at putting information clearly, in a non-ranting way!
This well-argued piece on the implications of AI is Pick of the Day! Please do share if you can.
Congratulations! This is Story of the Week!
I'm so glad to see this well-explained piece got Story of the Week. Let's hope it's read by many.
Big congrats from me.
Jenny.
When all of these things about climate change do happen - and they will, very soon (in the next twenty or thirty years; they've already begun) - it will be too late. Ironically, your argument is made online, to a writers' forum.