Resist AI - a handbook for concerned citizens
This website is a citizen's guide to opposing generative AI's harmful effects on our society.
Note that when we talk about generative AI (GenAI), we mean LLM-based text generation models like ChatGPT and Claude, image generation models like DALL-E and Stable Diffusion, and video generators like Runway.
Why resist AI?
Generative AI is an interesting technology with some use cases, but it is being pushed aggressively by tech companies with no regard for its harmful effects on society. Through boycotts, lawsuits and sabotage, we can slow generative AI's attack on our society and inspire a more thoughtful conversation about how to use these technologies to benefit rather than harm humanity.
Energy use
Generative AI requires a huge amount of electricity to train and run models. At a time when the world is facing a massive environmental crisis due to carbon emissions, it is beyond stupid to build more fossil-fuel-powered data centres. But that is exactly what is happening. Unfortunately, this is only expected to get worse as tech companies rush to push AI into every corner of our lives.
- Elon Musk’s xAI gets permit for methane gas generators
- Meta sponsoring construction of new gas generation in Ohio
- DOE Releases New Report Evaluating Increase in Electricity Demand from Data Centers
Disinformation and deepfakes
Disinformation is nothing new, but AI turbocharges it:
- AI-generated deepfake videos are used for harassment or political manipulation.
- AI-generated images flood social media, drowning out human connections.
- AI is being used to generate bogus research articles, hampering the scientific process.
Harming workers
AI systems rely on hidden human labour for content moderation and annotation. Content moderation in particular exacts a large toll on its workers, often leaving them traumatised and without access to sufficient psychological support.
- OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic
- The human cost of our AI driven future
- Who trains the data for European artificial intelligence?
Harming users
LLMs are increasingly being linked to severe mental health crises in their users.
- They do not have adequate safeguards for users at risk of mental health issues.
- They have been connected with multiple cases of psychotic breakdown.
- A lawsuit against OpenAI alleges that ChatGPT encouraged a young man to commit suicide.
- Another story describes a user's psychotic breakdown and violent death after heavy use of ChatGPT.
How to resist AI?
For everyone
- Don't believe the hype! AI companies routinely exaggerate the capabilities of their models in order to attract investment and drive growth. LLMs are good at writing text, but that is not a replacement for human intelligence. Moreover, outlandish predictions of future benefits from AI are designed to mask the real harm that AI systems are causing now.
- Oppose data centre construction: GenAI companies are building gigantic data centres across the world to power their systems. These consume massive amounts of electricity and water and provide little benefit to the communities affected. In countries such as Chile and the USA, activists have succeeded in blocking these developments through public information and protest. Support campaigns against data centres in your area.
- Digital rights organisation noyb has launched numerous lawsuits against AI providers. You can become a member to support their work.
- Tell your friends! The more people call out this nonsense, the faster the bubble deflates.
For social media users
- Don't boost AI-generated content by liking it, sharing it or commenting on it. You can learn to spot AI-generated images through this guide.
- Do not allow social media companies to use your content to train their models. This guide shows how to opt out across multiple platforms.
- If you can, move away from platforms that push GenAI on their users:
For online writers
If you write online, AI companies will harvest your writing to build their models. How to resist will depend on what platform you use to publish:
For visual artists
If you're a visual artist who publishes online, you can "poison" your images so that they break any AI model trained on them. The Nightshade tool will alter your image in a way that is invisible to humans but will damage any model that uses it.
For website owners
If you own a website, you can sabotage or block AI crawlers by using CDN (Content Delivery Network) level bot protection:
- Cloudflare allows all users to trap AI crawlers in an AI-generated hall of mirrors.
- Akamai provides a product to manage bots, including AI bots.
- Squarespace provides a guide for requesting that bots exclude your website.
More technically confident users can find potential solutions on this list of anti-AI tools.
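If you manage your own server rather than relying on a CDN, a robots.txt file is a simple first step: it asks known AI training crawlers not to fetch your pages. The sketch below is a minimal example; the user-agent names (GPTBot, ClaudeBot, Google-Extended, CCBot) are those the crawler operators currently publish, but the list changes over time, so treat it as illustrative rather than exhaustive.

```
# robots.txt served at the root of your site, e.g. https://example.com/robots.txt
# Asks common AI training crawlers not to fetch any page.
# Check each operator's documentation for current user-agent names.

User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```

Bear in mind that robots.txt is only a polite request: well-behaved crawlers honour it, but badly behaved ones ignore it, which is why the CDN-level protections listed above are more reliable.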
For workers
Many companies are mandating the use of GenAI by employees in a misguided attempt to increase productivity. If you are able to, you can push back against these mandates by pointing out that their purported productivity benefits are grossly overstated:
- A systematic study of software developers' use of AI tools found that the tools actually decreased productivity.
- Another study found that chatbot use had no impact on earnings across 7,000 workplaces in Denmark.
- AI "agents" fail on multi-step challenges, making them unsuitable for most real-world applications.
For educators
- Read the open letter from educators refusing to adopt GenAI, and sign and share it if you agree with its contents.