AI, social justice & the planet: a beginner’s guide to the 4th Industrial Revolution
By Tim Thorlby
7 min read
This is a blog about Artificial Intelligence for people who don’t really care about AI.
It answers seven questions about the 4th Industrial Revolution, which is now in full swing.
Whether the acronym ‘AI’ makes you groan or salivate, there are issues in here we should all ponder. Will this new wave of technology serve society or will we serve it? Or can we just leave it to the tech bros?
Q1: What is Artificial Intelligence?
Artificial Intelligence (AI) is a broad umbrella term used to describe an area of science and engineering which is making ‘intelligent machines’, often in the form of highly sophisticated computer programs. People have been working on AI since the 1950s but the technology has now reached a point where it is beginning to make an impact in the real world.
According to the Encyclopaedia Britannica, artificial intelligence is:
“…the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.”
It is sometimes described as part of the ‘Fourth Industrial Revolution’. So, significant.
“Your move, human”
One of the earliest examples of an AI computer program to hit the headlines was a computer which had been 'trained' to play chess and was then tested against the world's chess grandmasters. The program had been taught the rules of the game, given lots of information on possible moves, and had a lot of processing power and speed.
In 1997, after several attempts, IBM’s computer ‘Deep Blue’ finally beat Garry Kasparov, a Russian chess grandmaster; the first time that a computer had convincingly beaten a (human) world expert in chess. Cue lots of headlines about robots getting ready to take over the world.
Since then, the very best software programs have consistently beaten the best human chess players, not least because they can evaluate millions of positions every second.
Was this the end of chess? No. People play it for fun. Chess has carried on. Gukesh Dommaraju from India is the current World Chess Champion, winning the title in 2024 at the age of 18. (What was I doing when I was 18?)
The game of chess is a highly specific and rather narrow ‘task’ for a computer to perform, so it demonstrated how powerful a computer processing program can be, but the bots are still not much closer to taking over the world. They still can’t pop down to the corner shop to get me a pint of milk. I mean, that would be useful sometimes.
Interestingly, and relevant to this blog, when I used the Google search engine to find out who the current World Chess Champion was, its new Generative AI feature at the top of the search results page informed me that it is the 35 year-old Norwegian, Magnus Carlsen. Ah, but this is not correct. The Generative AI bot has failed to spot the difference between the ongoing World Chess Rankings (updated weekly), which have Magnus at the No. 1 spot, and the World Chess Championship (held every two years), of which the young Gukesh is currently the undisputed title holder. So, yes, Generative AI gave me the wrong answer. We will return to this later.
Machine learning
A significant step forward came in the 1980s with the growth of 'machine learning'. This is a very different way of building a computer's capability to perform tasks. Instead of coding up a software program with lots of rules on how to do something, a software 'model' is built which contains algorithms (complex sets of rules). It is 'trained' by feeding it relevant data to read and 'learn' from, so that it can recognise patterns and subsequently make decisions when it sees new data for the first time. After testing to check how effective it is, it can be deployed in the real world. The difference is that the model is not being 'programmed' in the traditional sense; it is 'learning' from past data so that it can work out what to do with new data. Machine learning was a whole new way of building software.
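The idea can be sketched in a few lines of code. This toy example, with entirely invented fruit data, 'learns' from labelled examples rather than following hand-written rules; real machine learning models work on the same principle at vastly greater scale.

```python
# A toy illustration of 'machine learning' vs hand-written rules:
# instead of coding rules for telling fruit apart, we let a simple
# 1-nearest-neighbour model 'learn' from labelled examples.
# All data here is invented for illustration.

# Training data: (weight in grams, diameter in cm) -> label
training_data = [
    ((150, 7.0), "apple"),
    ((170, 7.5), "apple"),
    ((120, 6.0), "orange"),
    ((130, 6.5), "orange"),
]

def predict(sample):
    """Label a new sample by copying the label of the closest training example."""
    def distance(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    closest = min(training_data, key=lambda pair: distance(pair[0], sample))
    return closest[1]

print(predict((160, 7.2)))  # closest to the apple examples -> "apple"
print(predict((125, 6.2)))  # closest to the orange examples -> "orange"
```

Nobody told the model what an apple weighs; it simply copies the label of whichever past example a new fruit most resembles. That, in miniature, is 'learning from data'.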
Applied AI
AI is a broad field, with scientists working towards different goals. The original hope (or fear), the creation of 'conscious machines', is clearly not imminent, and many scientists are sceptical it will ever happen. The most impactful field so far has been 'Applied AI', which seeks to use AI for commercially viable tasks, supported by ever faster processing power and ever larger datasets. AI is now finding its way into our lives, with a growing number of applications.
Applied AI is increasingly being used for solving specific problems, handling large volumes of data, automating repetitive tasks and personalising user experiences, amongst other tasks.
Generative AI is one step further on: software models that can use what they have 'learnt' in order to create ('generate') new content or attempt to answer questions. It is still early days for this field.
By way of illustration, some of the common examples of applied AI include:
Virtual assistants – Amazon’s Alexa (a little computer that sits in your house) uses AI to allow you to give it verbal instructions to find information or play music, etc. It uses voice recognition, natural language processing and other forms of AI to try to answer your questions.
Chatbots – Lots of websites have simple chatbots on them to perform a limited set of tasks, especially shops and banks ("Do you need any help today?"). The most famous chatbot is ChatGPT (which stands for Chat Generative Pre-Trained Transformer). This is a generative AI tool created by OpenAI and launched in 2022. It is free to use; you can try it today. It is actually several software models with a single front door. You can type a request into it in plain English ('natural language processing') and it will choose which tool to use and generate an answer or output for you, using the vast amount of data it has 'learnt' from. At its heart is a Large Language Model, which has basically spent several years scraping the internet for published information, processed it, and now awaits your instructions. It's like an enormous, fast search engine, except that it can do more than point you to websites (like Google): it can also create images, write poems, synthesise information and generate answers for you. It's a powerful beast, but also – as we discovered with our chess question – not nearly as intelligent or well informed as its founders might have you believe. It has been 'trained' on the internet, but the world wide web is not the sum total of human knowledge, nor is everything you can read there actually true. More on this later too.
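To give a flavour of the 'generative' idea, here is a toy sketch. It 'learns' which word tends to follow which in a scrap of text, then generates new text from those learned patterns. Real large language models are vastly more sophisticated, but the principle, learn patterns from past text and then generate, is the same.

```python
# A toy 'generative' model: learn word-to-word transitions from a tiny
# corpus, then generate new text by following those transitions.
# The corpus is invented; real LLMs train on billions of documents.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# 'Training': record which words follow which in the corpus.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

# 'Generation': start from a word and repeatedly pick a plausible next word.
random.seed(0)  # fixed seed so the output is repeatable
word, output = "the", ["the"]
for _ in range(6):
    word = random.choice(follows[word])
    output.append(word)
print(" ".join(output))
```

Every word pair in the output was seen somewhere in the training text, yet the sentence as a whole may never have existed. That is the generative trick, and also the weakness: the model recombines what it has seen, with no notion of whether the result is true.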
Q2: Are robots going to destroy all of our jobs and take over the world?
No.
Q3: So, is AI just a lot of silly hype dreamed up by the marketing department of a rich Tech Bro?
Also, no.
There is indeed a lot of breathless, and rather silly, hype about AI. But AI is beginning to have an impact on society, our economy and the planet, so it’s good to know what’s going on. It is a real technological step forward and it is already beginning to change the way that some industries work and how some services are designed. It has much potential. It also comes with some significant issues. The better informed that we are about this technology, the better we can regulate it to deliver good socially useful outcomes and, hopefully, minimise the bad ones.
I apologise for using the words ‘tech bro’ by the way. It’s an annoying phrase isn’t it? But then, I think it’s meant to be annoying. According to the Cambridge Dictionary ‘tech bro’ means:
someone, usually a man, who works in the digital technology industry, especially in the United States, and is sometimes thought to not have good social skills and to be too confident about their own ability
Ouch, a bit disparaging, eh.
Also, why not ‘tech sis’?
Well, according to recent research, less than one third of jobs in the tech industry in the UK are held by women[1] so, yes, it is still very much a man’s world. More on this later too.
Q4: Should I believe what an AI generative bot tells me?
For the time being, I would say that we should be fairly sceptical of outputs from generative AI and check the answer out with more traditional sources. I wouldn’t bet my pension on an answer from AI.
We need to remember that the AI industry is still at a relatively early stage of development and so moderate our expectations.
There are also some inherent risks and challenges with AI. I have picked out three here.
Issue 1 - A lack of transparency
Because these models are sucking up and processing so much data, it is impossible to really know what is going on in the ‘black box’. So, when AI gives an answer, it may often not be possible to know where it came from or why.
Issue 2 - Bias
I think this is a big issue and one not sufficiently acknowledged by the more commercial end of the industry. Because AI is often 'trained' on data that is publicly available on the internet, or is fed such huge amounts of data that no single human can comprehend it all, the risk of bias is enormous. The data on the internet isn't all true (some is 'fake news' or propaganda), some is simply incorrect and much of it is highly partial – ie it doesn't tell the full story of human experience – but the AI doesn't know what's missing. Put simply: 'rubbish in, rubbish out'. The algorithms used by AI may have biases within them too, just to complicate things. This matters because AI may unwittingly perpetuate and amplify unhelpful stereotypes and social injustices, and do so in a way that lacks transparency and is hard to spot.
For example:
The US court system used a new AI tool to help predict which offenders might be most likely to reoffend, and then treated them differently as a result. The AI was much more likely to point to black offenders than white offenders, and generated twice as many 'false positives' for black offenders, because it was trained on historic data that gave a misleading impression.
Also in the US, an AI tool used in the healthcare sector and affecting 200 million Americans incorrectly predicted that white people would need significantly more healthcare than black people and recommended tailoring services to meet this ‘need’. Its analysis had unwisely used past healthcare expenditure as a guide to future healthcare needs, not grasping that a lack of expenditure can also reflect a lack of income as much as the absence of health issues.
Amazon used AI to help it sift job applications and shortlist the best candidates. It favoured men. They eventually realised that this was because it had been ‘trained’ on past job applications which were disproportionately from men and did not really know how to analyse applications from women properly.
These examples of bias have now been identified and at least partly addressed. The challenge is that the bias was inadvertently designed in from the start and had to be spotted. Which biases have we yet to identify?
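A toy sketch shows how easily 'rubbish in, rubbish out' happens. Here a hypothetical recruitment 'model', echoing the Amazon example above with entirely invented data, simply scores CV keywords by how often past applicants with that keyword were hired. Because the historic hires were skewed, the scores come out skewed too.

```python
# A toy sketch of how skewed training data produces a skewed model.
# The 'model' learns, for each CV keyword, how often past applicants
# with that keyword were hired. All data below is invented.

# Historic applications: (keyword on CV, hired?). Past hires were
# overwhelmingly men, so the men's-club keyword looks like a strong signal.
history = [
    ("men's chess club", True),
    ("men's chess club", True),
    ("men's chess club", False),
    ("women's chess club", False),  # few past examples, no past hires
]

def hire_score(keyword):
    """Fraction of past applicants with this keyword who were hired."""
    outcomes = [hired for kw, hired in history if kw == keyword]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

# Identical activity, different wording, very different score:
print(round(hire_score("men's chess club"), 2))   # prints 0.67
print(round(hire_score("women's chess club"), 2)) # prints 0.0
```

Nothing in the code mentions gender, and no programmer intended any bias; the model simply reproduces the skew in its history. That is exactly why such biases are hard to spot from the outside.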
Issue 3 - Invasion of privacy
As the bots graze the extensive undergrowth of the internet for relevant data to use, including images, words and documents, I wonder how much of your own data is in there? You may yet find your name, face or life story featuring in an AI-generated output one day. Will you have any rights or scope to influence this? At the moment, unlikely.
Q5: How might AI impact on our jobs?
Companies which are largely or entirely focused on developing or using AI are growing in number and scale in the UK, although they still account for a fairly small part of our economy[2].
How might AI impact upon the UK economy, and jobs, more generally as the technology diffuses across more sectors and organisations?
The short answer is that we don't really know. But we can look for some straws in the wind and have a go at estimating it, hey why not?
A 2021 report by consultancy firm PwC suggested that the overall impact on jobs in the UK over the next 20 years might be "broadly neutral". They thought that manufacturing jobs were most at risk of being lost to AI ('automated'), offset by job gains perhaps in health and social care and some professional services sectors. They thought that lower paid, process-oriented jobs were most at risk.
A 2023 report by OpenAI itself (creator of ChatGPT) reckons that it is the high-paid jobs which are most at risk of being automated instead, contradicting dear old PwC.
Perhaps more constructively, a 2023 report from the House of Commons Business Select Committee[3] identified the potential productivity gains that might be made across many sectors of the economy in future years by adopting AI.
So, we don’t really know. What we do know is that AI will certainly have an impact as a new technology and that our economy will continue to evolve, with sectors rising and falling. It has potential to help us improve our productivity (in the private sector and in public services), accelerate some decision-making and, as we have seen, it brings new risks and challenges with it, as all technologies do.
What is perhaps most concerning to me is that the rapidly emerging AI industry is very strongly focused in and around London, with 75% of all AI companies in London, the South East and the East of England. If we are to rebalance our national economy and get the rest of the UK firing on all cylinders, this new technology needs to be diffusing at least as fast in the North as in the South. This is clearly not happening yet. Let’s hope that a lot of Sir Keir Starmer’s new ‘AI Growth Zones’ (announced in January 2025) are going to be north of Watford.
Q6: Where, physically, does AI actually happen? Is it all in the cloud? Where is the cloud anyway? Is magic involved?
AI software models are large and they run on hardware machines which are also large. It turns out that the ‘cloud’ is a lot of very large buildings in surprising places using vast amounts of water and energy. They are very real indeed.
AI (and much other computing) now takes place in giant ‘data centres’ all around the world. They require a lot of electricity to power them and cool them, together with lots of water to assist with that cooling.
At the last count, the UK had 512 data centres, one of the largest tallies in the world after the 5,000+ in the USA. Hundreds more are planned in the UK. And, oh look, 80% are located in Greater London[4]. Europe’s largest cluster of data centres can be found on a very dull-looking business park in Slough, on the M4, where some 35 data centres are located.
They vary in size but are becoming larger. A typical data centre is the KAO Data Centre in Harlow, which is 15,000 square metres of ‘technical space’; that’s about two football pitches full of computer racks. It requires 40MW of power to run it, roughly the output of two small onshore windfarms. Such centres do not create many jobs on site, but the jobs are high skill and high paid. My own back-of-the-envelope sums suggest perhaps 2 full-time jobs are created per MW, so this KAO data centre might employ roughly 80 people on site. They do seem to bring wider economic impacts with them too.
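For what it's worth, the back-of-the-envelope sum above is just the assumed jobs-per-MW figure multiplied by the site's power:

```python
# The author's rough rule of thumb, not an industry figure:
jobs_per_mw = 2        # assumed full-time jobs per MW of capacity
site_power_mw = 40     # the Harlow data centre described above

print(jobs_per_mw * site_power_mw)  # prints 80
```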
MIT[5] estimates that data centres already account for up to 2% of global electricity demand. The International Energy Agency expects this demand to double in just four years, from 2022 to 2026. It is rocketing as AI grows. Dealing with an AI query takes ten times more computing power (and electricity) than a simple Google search request. So, energy consumption is a massive issue for these facilities. The industry is alert to this and a lot of work is going into sourcing renewable energy for data centres, although success will ultimately depend on the UK's wider ability to make this happen.
Water consumption is also a huge issue for these buildings, as they generate a lot of heat and need 24/7 cooling. In terms of planning for new data centres, this is a key consideration, particularly in areas of the South East of England which suffer from water shortages. This is not a bad reason to speed up plans for these behemoths to be based in the North, where rainwater does not seem to be in short supply.
Q7: Is AI being regulated by Government? Should we just let Mark Zuckerberg do what he likes? He seems like such a nice boy
If we have learnt anything about digital technology in the last twenty years it is that the industry cannot be safely left to its own devices. It is quite happy to trample on people’s rights, dignities and pay packets to make an ever-growing profit for its small number of uber-wealthy owners. See Messrs Musk and Zuckerberg for details.
As ever with new technology, Government is playing catch-up on regulation. There is currently no AI-specific legislation or regulation in the UK. There does, however, seem to be growing recognition that AI needs regulation to manage its worst excesses and protect the rights of ordinary citizens, and so the Government is beginning to negotiate with the industry on this. Whether the power of these huge (largely American) corporations can be managed in the UK remains to be seen. The tussles over social media regulation show how hard it is to regulate powerful global industries.
The challenge for us, collectively, is to ensure that new technology serves the purposes of society and the common good. AI has the potential to improve our lives and create new opportunities. Left entirely to the market (or, in practice, a small number of rich business owners) it has the potential to cause significant harm and exacerbate global trends in inequality.
In particular, it seems to me that there are some important aspects of our common life which may need protecting as we engage with AI:
The truth still matters – AI is presently and unhelpfully blurring the boundaries between facts and made-up-non-facts, and it is increasingly hard to tell the difference online. This matters. Some things, even in the 21st Century, are still true or not true. We need to be able to tell the difference and navigate a world full of accelerating and increasingly complicated information. We may need to become better equipped to tell the difference, and we also need the producers of AI to be clear about what their businesses are producing.
Doing harm accidentally is still doing harm – The online world can sometimes feel weightless, free and airy. What does it matter what happens online? But if an AI app is – even unwittingly – discriminating against certain people or groups or causing harm to individuals in the real world, then this obviously still matters. I personally think this is a pressing reason for robust regulation of these technologies in the UK.
The environmental impact of AI is large – AI absorbs a lot of energy and water. Those AI interactions are not ‘free’, they come at significant cost. There are resource limits to how far we can go in energy-intensive industries.
When we put aside the hype, AI is still just a bit of technology which is owned by somebody who is (increasingly) making a profit somehow and causing all sorts of social and environmental impacts along the way. There is sometimes a sense in which the online world gets a ‘free pass’ and is absolved of taking responsibility for its impacts and harms because we perhaps don’t quite understand what is going on, or because regulators have been slow to catch up.
I think we need to be clear that all industries and all technologies, no matter how clever, are still tools which must serve society – not the other way round. They must therefore be subject to democratic discussion and governance to ensure that they work for us in making the world a better place, not perpetuating patterns of harm and injustice. We also need to make sure that the people working in AI and our tech sectors reflect wider society. We need to move on from the tech bro.
So, even if AI is not really ‘our thing’, it’s in all of our interests to show up when the role of AI is being discussed, because, whether we like it or not, it will one day affect ‘our thing’.
Your move, human.
This blog was written by Tim Thorlby. Please sign up for the email alert if you’d like to know about future blogs, usually published once a month.
Notes
[1] 29% of jobs in the tech sector in the UK are held by women. Data drawn from: Ville, L (2023) System Update: Addressing the gender gap in tech, The Fawcett Society| Access: https://www.fawcettsociety.org.uk/Handlers/Download.ashx?IDMF=1f3cb985-8552-464b-b374-42084def22bb
[2] A good readable update can be found in a 2023 House of Lords report on AI in the UK | Access: https://lordslibrary.parliament.uk/artificial-intelligence-development-risks-and-regulation/
[3] House of Commons Business, Energy and Industrial Strategy Committee, Post-pandemic economic growth: UK labour markets - Tenth Report of Session 2022–23 | Access: https://committees.parliament.uk/work/6729/postpandemic-economic-growth-uk-labour-markets/publications/
[4] From: TechUK (2024) How datacentres can supercharge UK economic growth
[5] Source: https://mitsloan.mit.edu/ideas-made-to-matter/ai-has-high-data-center-energy-costs-there-are-solutions