💫 5 questions to ask yourself before putting in that AI prompt

The April Spark

This is the monthly newsletter from Bemari where we talk about how to not get lost in sustainability. This month we explore the biggest buzz of recent months - whether the use of AI can be consistent with sustainability principles.

Recently, social media, conversations, LinkedIn feeds and panels have been dominated by one thing - Artificial Intelligence (AI). There is no getting away from it: it is not only ubiquitous but embedded in many of the everyday tools we use. Even Google now offers AI summaries with any search.

We have increasingly been getting questions on the environmental impact of AI and how a business can use it while remaining true to its sustainability commitments. Even though AI has been around for a long time, its public use has exploded in the last year. Research into its impacts, and our understanding of them, is still nascent, although some AI experts, including some of the field’s founders, have issued various warnings. One thing is clear: as with many significant inventions and changes, whilst AI may bring a lot of benefits, we need to apply caution and put some guardrails around its use, for multiple reasons.

In this newsletter we want to offer a few considerations that may help us explore this nexus of AI and sustainability.

(This newsletter was written by a human with inputs from other humans!)

1. Am I conscious of the environmental costs of using AI? AI requires vast amounts of energy and water, as well as physical infrastructure, rare earth minerals and waste management, to name just a few. Cloud computing is not really in the cloud, and the physical locations of data-processing facilities may drive further inequalities.

Some estimates suggest that by 2026 the data centres supporting cryptocurrencies and AI will consume electricity roughly equivalent to Japan’s entire use, or 4% of global demand. In Ireland, 35% of overall energy demand is predicted to come from data centres by 2026, driven by the use of AI. A 100-word email generated by AI uses the equivalent of a bottle of water, according to figures published by the Pew Research Center in 2024.

The Economist offers an insight into Meta’s Llama 3.1, a large language model (LLM) that was trained on Nvidia chips which can each draw 700 watts of power (around half that of a kettle), running those chips for a cumulative 39.3m hours. The resulting energy use is enough to supply 7,500 homes with a year’s worth of power.
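As a rough sanity check on those figures, here is a back-of-envelope sketch. It takes the 700 W per-chip draw at face value for every hour of training and assumes a typical household uses around 3,700 kWh of electricity per year; both simplifications are ours, not the article’s:

```python
# Back-of-envelope check of the Llama 3.1 figures quoted above.
# Assumptions (ours, not The Economist's): full 700 W draw per chip for
# every hour, and ~3,700 kWh of household electricity per year.
chip_power_kw = 0.7            # 700 W per Nvidia chip
chip_hours = 39.3e6            # cumulative chip-hours of training
household_kwh_per_year = 3_700

training_kwh = chip_power_kw * chip_hours            # ~27.5 million kWh
homes_for_a_year = training_kwh / household_kwh_per_year

print(f"{training_kwh / 1e6:.1f} GWh ≈ {homes_for_a_year:,.0f} homes for a year")
# -> 27.5 GWh ≈ 7,435 homes for a year (in line with the ~7,500 quoted)
```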

Most people are not aware of the environmental impact of digital activities, which is rapidly increasing (estimated at between 1% and 3.5% of global GHG emissions in 2020, surpassing aviation’s share of around 2.5% in 2024). It does indeed look small at an individual level - but with ever-increasing digitisation, and the amount of time we spend on the internet, the numbers start to add up.

And yet, AI is used for all sorts of trivial stuff - AI doll trend, anyone?

2. What am I trying to achieve? Am I trying to create something that will support people, communities, nature or systems, or is this likely to amplify consumption culture and feed the algorithms?

AI, like computers and the industrial revolution, accelerates and amplifies what people already do - it just does it faster and more efficiently. There are many applications where AI has already delivered benefits to people and nature that would otherwise take years: biodiversity research and wildlife monitoring, food waste reduction, greening of energy and optimisation of its distribution, health diagnostics, disaster recovery and early warning systems, and fighting hate speech and greenwashing.

At the same time, AI can also help flood the internet with low-quality, SEO-driven, lifeless content produced purely to rank, draw attention and sell - the marketing equivalent of an ecological monoculture.

You can now find AI in everything - standalone tools, WhatsApp messaging, social media apps, chatbots, search engines. More and more AI start-ups are popping up, and it is one of the growing investment sectors. Does AI really need to be applied to every part of our lives? Social media has altered so much about how our society operates and engages, all the way to altering children’s brains in ways that affect their reasoning and critical thinking. If we now amplify those effects with the power of AI, what are the chances it will help our societies thrive?

3. Am I contributing to the replacement and/or devaluing of human creativity? AI can create images, websites and other visuals in seconds. The Institute for AI Ethics argues that AI is an indirect threat to creativity, even though it cannot eliminate human creativity completely. From a business perspective the time and cost savings are hard to argue against, but what about instances where this “efficiency” and “ease” cross the line on IP and copyright and replace the very creators, as with the Ghibli trend? The Harvard Business Review argues that our existing IP and copyright laws were not created to account for the complexities that AI poses and do not offer sufficient protections - and, more importantly, that AI’s ability to be creative might take away the incentive for us to strive to be creative too. Do we run the risk of these skills being deprioritised?

Some might argue that jobs change, technologies move on and we just need to adapt. In previous eras this would have been easier to accept because the pace of change was slower, so people had some time to adapt, retrain, get new jobs and pivot. The pace of change in our society today is unprecedented, and it is only accelerating. We do not yet have the ability to navigate this pace, and business leaders by and large struggle to manage this change - but it is often the most vulnerable who end up impacted the most, and in the current globalised, technology- and competition-driven world, there is very limited support for someone in a forced career transition. The societal impacts of this cause not only economic and mental health hardship, but potentially significant social unrest.

Leaving jobs aside - creativity is what makes us human; it is what allows us to develop problem-solving skills, express ourselves and relate to one another. If we outsource creativity, or limit it to directing AI to create our vision, what is our role?

(Our newsletter image was created by a human, Rory Lawrence from WayMoby).

4. How might I make sure that AI outputs represent a diversity of voices, perspectives and cultures, and amplify the value they can bring? Most AI models have been trained on Western-centric datasets by people with little diversity. Some of these issues were identified and addressed when the models were initially released, but training AI is an ongoing process. Let’s not forget that the AI tools most of us use are very powerful language models that essentially predict the most likely next word in a sequence. So the outputs are only as good as the training input - biases ingrained in that material will pass on to the model, and even small biases can be significantly amplified and lead to widespread discriminatory outcomes.

For example: healthcare outcomes where the model has been trained on one ethnic group; hiring and recruitment decisions that perpetuate workplace biases learned from historical training data; and image recognition and generation with biases ingrained in them. Some more examples are provided here. This might also show up in the language, accent, communication style and other cultural signifiers used by AI - normalising certain dominant cultures for the rest of the world that uses AI.
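To make the “most likely next word” point concrete, here is a deliberately tiny toy sketch of our own (not how any production model is actually built) showing how a mild skew in training text becomes an absolute rule once the system always picks the most frequent continuation:

```python
from collections import Counter

# A toy, deliberately skewed "training set" pairing job words with pronouns.
# Purely illustrative - not real training data and not how LLMs are trained.
sentences = [
    "the nurse said she was tired",
    "the nurse said she was busy",
    "the nurse said he was tired",
    "the engineer said he was late",
    "the engineer said he was busy",
    "the engineer said she was late",
]

# Count which pronoun follows "said" for each job word.
counts = {"nurse": Counter(), "engineer": Counter()}
for sentence in sentences:
    words = sentence.split()
    job, pronoun = words[1], words[3]
    counts[job][pronoun] += 1

# A "pick the most likely next word" rule turns a mild 2:1 skew in the data
# into a 100% skew in the output - the amplification described above.
for job, pronoun_counts in counts.items():
    print(job, "->", pronoun_counts.most_common(1)[0][0], dict(pronoun_counts))
# nurse -> she {'she': 2, 'he': 1}
# engineer -> he {'he': 2, 'she': 1}
```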

Whilst it may be more obvious to users right now, over time, as reliance on AI becomes more widespread and we question it less, these biases, if unchecked, will become accepted as normal - just as Wikipedia, “the Internet” and TikTok have become sources of information deemed reliable by many.

5. What can I do to minimise the environmental impact of AI?

Use of AI by corporates can threaten their Net Zero goals. Integrating AI into day-to-day processes will inevitably increase a company’s overall carbon footprint, at least in the initial period - before efficiency savings in the AI models start to show up, and while AI is still used for unnecessary purposes as its real potential is worked out. A recent McKinsey report indicates that 65% of surveyed companies use generative AI, and use is scaling up.

The challenge is that most companies do not quantify their digital footprint, so their AI footprint is likely to go unaccounted for until the various standards catch up - there is currently no recognised LCA or accounting standard for AI, and data is hard to obtain.

It is estimated that a ChatGPT query takes 10 times more electricity than a Google search. Other research suggests that this might no longer be the case for more efficient models, and it might also depend on the type of search, given that a Google search can sometimes involve a lot of scrolling through unhelpful but SEO-ranked or sponsored results. There will, of course, be instances where ChatGPT saves time, computing power, energy and resources - but it still needs to be a conscious cost-benefit assessment, especially if ease of access means that the volume of queries will increase.
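To see how that per-query difference scales with volume, here is a small illustrative calculation. The per-query energy figures are assumptions based on commonly cited estimates (roughly 0.3 Wh for a Google search and 3 Wh for a ChatGPT query, the “10 times” above), not measurements, and newer models may differ:

```python
# Illustrative arithmetic only - the per-query figures are assumed estimates.
queries_per_day = 50

for name, wh_per_query in [("Google search", 0.3), ("ChatGPT query", 3.0)]:
    kwh_per_year = wh_per_query * queries_per_day * 365 / 1000
    print(f"{name}: ~{kwh_per_year:.1f} kWh per year at {queries_per_day} queries/day")
# Google search: ~5.5 kWh per year at 50 queries/day
# ChatGPT query: ~54.8 kWh per year at 50 queries/day
```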

 

There are still ways that the impact can be minimised - AI governance systems need to be established to start embedding some principles for use, transparency and reporting mechanisms:

  • choosing low-energy AI models and renewables-powered data centres

  • monitoring use of AI to minimise unnecessary application where conventional methods can deliver the same output.

  • writing prompts that get you optimal results, so you have to send fewer queries

  • training your teams so they understand what AI is, what it is not and how best to engage with it

  • avoiding using and training AI unnecessarily

  • switching off AI results where not needed - switching it off in the browser, using privacy-focused browsers, or adding “-AI” or “&udm=14” at the end of your search query (see the sketch after this list). Some people suggest inserting a curse word in your query also removes the Google AI summary!

  • tracking use of AI and quantifying its impact, even if as an approximation. ScaleDown offers
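As a concrete illustration of the search-query tip above, here is a minimal sketch of our own. The udm=14 parameter requests Google’s plain “Web” results view, which at the time of writing omits the AI Overview; this is Google’s behaviour and may change:

```python
from urllib.parse import urlencode

# Minimal sketch: build a Google search URL that requests the plain "Web"
# results view (udm=14), which currently skips the AI Overview.
def web_only_search_url(query: str) -> str:
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

print(web_only_search_url("data centre energy use"))
# -> https://www.google.com/search?q=data+centre+energy+use&udm=14
```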

As with any technology - if the benefits of its use (to people’s and the planet’s wellbeing) outweigh the costs, it is worth a go. Otherwise, consider alternatives.

Our collective understanding of AI, its opportunities and risks is still evolving. AI is here to stay and we need to work out how to live with it in the context of the current environmental and social crises. We do not have all the answers - but we are looking to learn.

What is your take on how AI can be used in a way that is aligned with regenerative principles and support us with the transition to sustainability?

For your toolkit

Bemari offering update

We’re excited to launch seven brand-new workshops designed to help businesses not just adapt to the future but shape it. Our new workshops equip leaders with the tools, mindsets, and strategies needed to drive real, lasting impact.

Whether you're just starting your sustainability journey or ready to go deeper, there’s something here to power your next steps:
✨ Climate Resilience Builder
✨ Navigating Nature Positive
✨ Ripple Effect: Water Stewardship
✨ Introduction to Regenerative Business
✨ Goodbye Greenwashing
✨ Futures Thinking Factory
✨ Regenerative Redesign

Ready to lead the change? Book your workshop here.

Meme of the month

Recap of Good News this month

💫 That’s it for this month. We hope it sparks a change for you and your organisation - we would love to hear what change it is. Let us know!


Is there a piece of advice that has been very helpful to you and that you think others would find helpful too? Please share it by emailing [email protected].
