
The ethical issues with AI feel overwhelming

  • Writer: Rosie Oldham
  • 3 days ago
  • 8 min read

Updated: 3 days ago

AI, AI, AI – if you're not sick of hearing hot takes about it yet... In all the conversations I've seen and been part of about AI in the charity sector, there is often some rather vague talk about 'ethical issues'. But my overwhelming feeling is that we haven't gone far enough in naming these. So I started this as a LinkedIn post, realised it was getting too long, and have now re-written it into my first ever blog for your enjoyment (or not, maybe! Please let me know either way.)


To be clear, I’m talking here about generative AI, which includes Large Language Models (LLMs) such as ChatGPT, Gemini and Claude, rather than other forms. Machine learning and robotics are other subsets of AI, with histories stretching back as far as sending people to the moon. But it’s gen AI which is all over our screens and lives right now.


I do get that gen AI has a role in levelling the playing field in terms of access – and I’m not saying we shouldn’t use it for this. But one person’s better access may mean another person’s greater harm: the harm being done by the explosion in AI will, as ever, disproportionately affect the most marginalised communities, and there’s perhaps a question here about whether we are prioritising individual comfort over addressing wider problems. I don’t have a clear view on this; it just worries me.


I also want to acknowledge the points I’ve seen made recently about not wanting to shame people for using AI – I agree, and absolutely believe these issues are systemic. But I also believe that systemic issues are created and upheld by individuals when we don’t surface them, when we do things unthinkingly or without awareness of the wider implications.


I think my main call to action from writing this is for us to consider these implications and be aware of them when tempted to use AI frequently, simply for convenience. And I know this makes me sound like a dinosaur, or a limpet clinging to my Google-shaped rock (and yes Google is hardly a beacon of ethics). But I just don’t feel I’m seeing enough real chat about the ethical and environmental implications of gen AI for a sector which usually prides itself on taking other ethical issues incredibly seriously. It feels like we're collectively burying our heads in the sand.


I’ve compiled a non-exhaustive list of these implications below:


Energy use


Huge amounts of energy and water are used to run and cool data centres, and their number is multiplying – in the UK alone, an estimated 100 new data centres will be built in the next few years. By 2027, AI is projected to have the same energy demand as the Netherlands. Whilst renewable energy industries are growing, much of this new renewable energy is simply being absorbed by AI data centres (between 2017 and 2023, for example, all additional wind energy generation in Ireland was absorbed by data centres). Renewable energy can’t keep up with burgeoning AI use by billions of people, so this electricity demand will drive yet more greenhouse gas emissions. And Elon Musk's plan to send a million data centres into space is definitely not a solution driven by climate needs but, of course, by profit (I don't think we have the right to use space as our Earth litter pile, but that's for another day).

 

The point is often made that this is no different from the demands of other technology we have happily integrated into our daily lives – like Zoom or streaming services. The answer isn’t that those are fine and AI isn’t: we should look at all of our technology use and its environmental impact, and perhaps this is an opportunity for us to do just that? Things like not leaving Netflix on autoplay, not upgrading your smartphone every year… I also think we can strive to be careful about the AI habits we are forming whilst we are still in the early stages of forming them – in the same way we might have been decades ago, had we been fully aware of the ethical and environmental impacts of our other digital consumption platforms.

 

But also, the impact globally is just not the same. The rise of gen AI right now is like an explosion – the world’s electricity supply simply cannot keep up with AI’s current and predicted future energy demands. Between 2005 and 2017, plenty of new data centres were being built to support those other online platforms (Netflix, Instagram, and gaming – though I don’t know enough about gaming to name an example), yet the amount of electricity going to data centres stayed pretty steady, because energy efficiencies were being found all the time. That changed with AI: efficiency gains are no longer keeping up with the energy-intensive hardware needed for training models.

 

OpenAI’s Sam Altman has himself stated that the AI industry is heading for an energy crisis – a crisis investors are currently looking to solve through investment in more nuclear power (which environmental charities call ‘a slow and costly solution to the climate crisis’). Again, this is perhaps a systemic problem rather than an individual one, but I think it’s useful to know.

 

Some other stats that put gen AI energy use into context:

  • A ChatGPT request consumes 10 times the electricity of a Google Search

  • ChatGPT consumes more electricity in a year than each of the world’s 117 lowest-consumption countries

  • One global wave of AI-generated self-portraits was estimated to consume over 200 million litres of water in under a week – roughly the monthly water usage of a small city.


(Most of these are from this ChatGPT-focused Business Energy UK piece, which also includes a handy visual tool.)


‘But in the future AI will save the climate’

 

AI can be used for climate and biodiversity modelling, which is often the argument we hear about the huge climate benefits AI will ‘one day’ bring. But this is typically predictive AI (machine learning), not generative AI – the type I’m talking about here, and the type most people currently mean when they say AI. It also feels like a classic argument used by those who want to kick the climate-crisis can down the road, as has been happening for literal decades. We need serious climate action now, and of course the tech bros want us to be distracted by the shiny idea of some vague future gen AI model that will solve all our climate issues – how convenient.


Lucy Caldicott drew an excellent comparison with the industrial revolution in a recent Change Out newsletter: in the early 1800s, when the fossil fuel economy began, humans weren’t thinking about climate change but about economic progress. We, however, now know about the huge issues facing our planet in terms of the climate, so we do have a choice in how we approach this.

 

Harm to marginalised communities


Generative AI is currently a hugely extractive industry – of data, labour and materials – doing harm to vulnerable communities, particularly global majority and marginalised communities across the world; this is often referred to as environmental racism. The extractive cycles of history repeat themselves: just as oil and gas extraction have historically harmed marginalised and less powerful communities, so does mining for the copper, lithium and other materials required to build data centres.


Human labour is essential for gen AI to function properly, and the training of models is often done by people in the global South, where labour is cheaper: ‘invisible workers’. Much of this workforce is exploited, with people spending hours viewing and tagging hate speech, sexual content and graphic violence, with no proper employment rights or mental health support.


The climate crisis is inherently racist, with extraction and severe weather events impacting people in global South countries far more than people in the global North, which bears most of the responsibility for it.


In Quilicura, Chile, the local community recently ‘turned off’ AI for a day and answered prompts themselves to raise awareness of the environmental impacts of hugely increased water use on their small town. Organisers described this as ‘a moment of pause – an invitation to prompt responsibly, and consider how these systems should scale in regions already strained by drought’.


AI only amplifies existing injustices

 

AI amplifies injustice on a major scale, going well beyond just ‘being biased’ – just as automated systems have been shown to carry the prejudices of the societies that built them, large language models do the same. I keep sharing Elisa Lindinger’s blog for Superrr, in which she says 'AI systems can by design only learn from the past – a past which is shaped by structures of colonialism, white supremacy and patriarchy.' It’s difficult for us as individual end users to do much about this apart from being aware of it – and is that enough?


Mike Zywina says in Lime Green’s excellent recent blog about funders and AI that ‘capitalism doesn’t have a great track record of translating technological progress into social gains’ – exactly! The washing machine was hailed as a massive time-saver, a liberator of women (at least the Vatican thought so) – freeing them up for… more time on their other domestic chores, or preparing their husband’s dinner. Yes, I’m glad I don’t have to handwash all my clothes and wring them through a mangle, but all that really happened is that we now live in a constant cycle of washing and drying our clothes (which, news flash, is still mostly done by women). Washing machines arguably did nothing for household gender equality. I don’t imagine gen AI will have different results, and it can’t possibly be feminist.


(This also links to an interesting conversation I was part of in a webinar recently – some people are giving their LLM of choice a name, and I’d be fascinated to know anecdotally whether we are mostly choosing female names, like other ‘assistive’ technology, which is typically female-coded because God forbid a male chatbot would help write up meeting notes.)


Ethical conflicts for charities


Generative AI is directly causing or perpetuating some of the very problems charities exist to address – unemployment, climate breakdown, inequality, marginalisation. I was at a charity’s event recently where a learner from one of the organisation’s training programmes stood up to speak about her experience, and began by telling the audience that she had become unemployed in the first place because of AI. I just don’t feel this conflict is being addressed sufficiently in the sector-wide conversation about adopting AI tools.


I attended an excellent webinar by Bethany Helliwell-Smith and Phoebe Broad this month where they made the point that use of gen AI should be handled in the same way charities would handle other tools and approaches that come with ethical risk – fundraising, investments, etc. An environmental charity, for example, should consider use of gen AI with the same seriousness it would its corporate gift acceptance policy.


So anyway


Some others in the sector have written really well-balanced pieces about using gen AI and their reasons for deciding to use it. I respect this and think people should draw their own conclusions based on what their work involves and how their organisations or charities operate. I guess I have drawn a different conclusion, which is that I will avoid using LLMs wherever possible.


It’s obviously not as simple as ‘generative AI = bad’. It’s the people, companies, systems and structures behind AI which are causing the problems. But we do have a choice as to how responsibly we use it and how responsibly we talk about it.


I am now adding a clause to my contracts with future clients stating that I won’t use generative AI in my work with them. Because a lot of my fundraising work is around nature, climate and inclusion, and often supports charities representing marginalised communities, it would feel hypocritical of me not to do this. Maybe I’ll change this in future, but for now my concerns about generative AI are too big to ignore.


Thank you for reading!




I wrote this with the help of AI. Jokes! I did use Google though, and stats/views from these excellent resources linked below:


General/sector articles:



Environmental impact:


Impact on marginalised communities:


Inequality:




What did you think?

I'd love to hear your feedback! Get in touch to let me know your thoughts.
