
The Ethics of AI
Do you remember when seeing was believing? Or the expression “the camera never lies”? Well, neither of those things can be taken as true anymore.
Our trust in the camera has been eroded over many years. With the development and increased use of Photoshop in everyday photography, we have been able to amend and alter images for decades. At first this took great skill and came with a steep learning curve, but as time has gone on and the program has become more intuitive, it's now used in the production of most photography.
We have also witnessed the growth of computer-generated imagery (CGI) over the last thirty-odd years, from its infancy in movies like Terminator 2, through Jurassic Park, and into modern-day cinema. CGI has changed the way movies and special effects are made. Now tools like Blender have opened these abilities up to anybody with the time and the hardware, enabling the freedom to create whatever our imagination allows in photorealistic beauty.
The last few years, and even the last 18 months, have seen that accelerate at mind-blowing speed with the introduction of modern AI. With these tools able to create near-photorealistic imagery from a few commands, it has almost become second nature to second-guess any image we see online. And this is just the beginning: AI is going to change our perception of reality, for better or worse, over the next decade or so.

Photo by Rishabh Dharmani on Unsplash
It’s not going to be just photo and video either; audio will also be conquered by AI. We are already seeing AI-powered tools mimic and effectively portray real human speech. The simple and annoying automated phone answering system has existed for years, and most are still as terrible today as when they first started rolling out. But add modern AI, with realistic voice synthesis and natural language abilities, and it will get harder and harder to tell if you are speaking to a real person. Even if these systems have easy tells to start with, as they learn and improve it will become near impossible to tell the difference.
When you couple this with photorealistic deepfake technology, video calls will not save you from a conversation with an AI either. Even over video it will become very difficult to tell if the person you are talking to is real. Their mannerisms, their speech, their friendly smile will appear as real as the person sitting next to you.
To some this may sound terrifying, but think about it for a second and the benefits could be huge. Imagine everyone in the world with an internet connection having access to a doctor: a doctor who doesn’t sleep, who doesn’t leave you on hold, who can answer questions and identify and triage basic illnesses and injuries for anyone using just the camera on a phone, removing barriers to healthcare for the poorest whilst doing it all with a smile.
In a simpler example, imagine never being put on hold again when you ring any company. There is always someone to answer your call, and that person is always happy to speak to you. Not only do they never have to put you on hold or hand you over to another department, they can answer all your questions and pull up all your details without any assistance.
For a lot of people this suddenly starts to sound great, but for the millions of people worldwide who work on help desks, in call centres, and even in telesales, it may be time to start rethinking your career choice.

Photo by Chris Montgomery on Unsplash
One thing that isn’t often talked about with the advancement of AI is manipulation. As it improves, and as it learns how we talk and how we carry ourselves, AI will be able to manipulate us with ease. It will recognise changes in the tempo and patterns of our voices, subtle facial expressions, breathing rate, pupil dilation: all things that most of us don’t even notice. Using these tells, it will be able to convince and influence us, applying just the right level of persuasion, the right tone of voice, the right facial expressions.
When this happens, we will start applying the same level of scepticism to all digital communication that we now apply to images and videos. But how many people will fall victim before it’s too late? And how much will it affect the world we live in if we cannot trust who we talk to unless it’s face to face?
This voice, though, the AI: what does it sound like? Well, with enough audio data and generative AI technologies, it could sound like anyone, dead or alive. Couple that with video data and you could have a virtual avatar that looks, sounds, and has all the body language of someone recently deceased, giving people the ability to continue a relationship with this virtual avatar in an attempt to fill the void left by tragedy.
To me, this sounds crazy, and while grief can drive people to look for hope anywhere, is it a healthy choice? Because no matter how much it sounds and looks like a lost loved one, it’s still just 1s and 0s. You must also question the ethics of the company running the platform hosting this avatar. What if this trusted voice starts to convince you of a new political point of view, or to start investing money in specific companies? What if this happens to millions of people? Governments could fall; the wrong companies could grow in wealth and power. Who would truly be in control?

'Control' by Tell Tall Tales 2019
When we talk of power and governments, we can’t help but think of the war machine funded by every country, and the weaponry built for the single purpose of killing and destroying. You would think that one thing Hollywood and science fiction have taught us is that we should not let AI near our military. Unfortunately, the powers that be were either not listening or are ignoring this warning, because AI is increasingly being invested in and incorporated into weaponry of mass destruction.
In the current conflict in Gaza, the Israeli military used an AI program called ‘Lavender’ to target low-level Hamas soldiers. This AI used a database and a set of chosen parameters to help decide who was and wasn’t a target. The parameters used to identify targets could be changed or moved depending on that day’s definition of what a Hamas militant was, which meant the accuracy could swing from strict to very broad with just a few changes. At one point 37,000 potential targets were identified. Through early testing Lavender was considered to be up to 90% accurate, and because of this more and more trust was put into the system. In the beginning, before the initial Hamas attack, confirming a human target was a long process, with final sign-off required by a judge. After the attack, the decision to condemn a human to death dropped to around 20 seconds.
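To make that “strict to very broad” point concrete, here is a deliberately simplified, entirely hypothetical sketch of how any threshold-based scoring system behaves. None of this is Lavender’s actual code (which has never been published); it only shows how moving a single confidence cutoff silently redraws the line around huge numbers of people.

```python
import random

random.seed(42)

# Hypothetical illustration only: assign each person in a population a
# model-generated "suspicion" score between 0 and 1, skewed towards low
# values because most people should score as uninteresting.
population = [random.betavariate(2, 8) for _ in range(1_000_000)]

def flagged(scores, threshold):
    """Count how many scores meet or exceed the decision threshold."""
    return sum(score >= threshold for score in scores)

# Loosening one parameter turns a narrow watchlist into a vast one.
for threshold in (0.9, 0.7, 0.5, 0.3):
    print(f"threshold {threshold:.1f}: {flagged(population, threshold):,} flagged")
```

Every number above is made up, but the structure is the point: the working definition of a “target” lives in one tunable parameter, and a small change to it can move thousands of people across the line without anyone writing a new rule.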
One Lavender user questioned whether humans’ role in the selection process was meaningful: “I would invest 20 seconds for each target at this stage and do dozens of them every day. I had zero added value as a human, apart from being a stamp of approval. It saved a lot of time.” Scarily, this view was shared by others as the system came to be implicitly trusted. “This is unparalleled, in my memory,” said one intelligence officer who used Lavender, adding that they had more faith in a “statistical mechanism” than a grieving soldier. “Everyone there, including me, lost people on October 7. The machine did it coldly. And that made it easier.”
You can read the full story in the Guardian.
What if we removed humans from the decision process completely and handed all control over to a ‘cold’ AI to make life-and-death decisions, automatically launching AI-controlled drones or aircraft to eliminate targets without any human involvement? Well, this isn’t as far away as you think. The US Air Force is currently testing AI-controlled F-16 fighter jets, pitting them against human pilots in test dogfights, with a plan to spend billions building 1,000 similar aircraft. The argument is that they can be smaller and cheaper to build, and there will be no risk to human pilots. Oh, and it enables them to keep up with China, who are also developing AI-powered fighter jets. What could possibly go wrong!

Did you know that AI can hallucinate? A hallucination is when an AI model generates incorrect or misleading results. These errors can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train it. This is why there is a disclaimer at the bottom of the screen telling you to “check for mistakes” when you use ChatGPT or any other LLM. Now, this is fine when you are using a text-based tool in your browser, but what about when that AI is carrying munitions with the single aim of ending lives? Or, on a more personal level, when you are consulting with what you think is a real person about your savings and investment options?
Sam Altman, the OpenAI chief, doesn’t think there is any problem with AI hallucinations; in fact, he thinks they are a fundamental part of the magic that makes ChatGPT what it is. Others have also argued that AI hallucinations are a good thing, enabling the AI to think more creatively and tackle the problems of the future where outside-the-box thinking is needed. It’s also claimed that the systems could be tuned to never make a mistake and provide only 100% correct information. If so, it’s hard to see why they wouldn’t want them to be accurate, when some AI, like Google’s, is constantly getting things wrong, even telling people to eat rocks!
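As a small aside on that “tuning”: most LLM APIs expose a temperature parameter that controls how much randomness goes into each generated word. Lowering it makes answers more repeatable, which reduces, but does not eliminate, the confident improvisation we call hallucination. Here is a minimal sketch using the openai Python package; the model name is just an example, not a recommendation.

```python
# Minimal sketch: assumes the `openai` package is installed and an
# OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()

def ask(question: str, temperature: float) -> str:
    """Ask the model a question at a given sampling temperature."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name; substitute your own
        messages=[{"role": "user", "content": question}],
        temperature=temperature,  # 0 = near-deterministic, higher = more varied
    )
    return response.choices[0].message.content

question = "In which year was the first Jurassic Park film released?"
print(ask(question, temperature=0.0))  # stable, repeatable answer
print(ask(question, temperature=1.5))  # looser sampling, more likely to drift
```

Note that temperature is not a truth dial: at temperature 0 a model will still state its most probable wrong answer with complete confidence, which is exactly why that “check for mistakes” disclaimer exists.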

From this blog, and the other related blogs, you may get the impression that I am against the new invasion of AI into everyday life. The truth is actually the opposite. The concept of AI and the things it can do fascinate me, and I believe the future could be very bright with the assistance of AI. No, my problem isn’t with AI, it’s with humans and how we are choosing to use it. With all the possible benefits that AI could bring to all of humankind, it appears that the main things we are using it for are shiny new products to please the greedy and devastating war machines to appease the warmongers. I truly believe that if the world ends at the hands of AI, it’s because we made it that way.
AI is doing some great things in the health sector through the early identification of diseases like cancer and Alzheimer’s, which in the longer term will enable early treatment and possibly cures. The issue is that these efforts are not getting enough funding. It would appear that until someone figures out a way to turn them into money-making schemes, the highest investment will go into tools and weapons that can make some people seriously rich(er). It could be that with AI, greed is really our undoing, because not enough is being done to control it or protect us. Governments around the world have created AI think tanks and governing bodies, but so far none of these have delivered controls of any real value. It is also debatable what power they would have in the face of an arms race between two of the world’s biggest superpowers.
This is the last I will write about this for now, but as someone who loves technology, I do hope that we wake up and realise that we are on the wrong path. I hope that we start to put true governance around AI and its uses. I also hope that governments do something to help the people who will inevitably be put out of work. We could ultimately live in a shiny, exciting, illness-free world with equal opportunities for all, or we could live in the ashes of greed and hate. As the famous quote goes, “with great power comes great responsibility.” I just worry that we are not ready for that responsibility.
Cover photo by Alex Knight on Unsplash