Artificial Intelligence… the next Beyoncé?

Louise Fenn 15th September 2021


Hello and welcome back to the hottest topic of the year… Artificial Intelligence!

In our previous blog, AI – the stuff of movies, now a reality, we took a stab at describing something famously hard to understand – what AI actually is. We also walked through a brief timeline of AI dating all the way back to WW2, looked at where we use AI in everyday life, and highlighted AI-powered tech that is saving lives!

So, what do we have in store for part two?

Today, we will focus on the potential drawbacks of AI itself – aside from the theoretical murderous robots taking over the world! We’ll then get out our crystal balls and discuss what the future of AI looks like, getting an opinion from our very own data expert, Sean Robertson.

 

AI… what’s the catch?

We previously discussed AI being used to combat climate change and save lives across the globe… all the power of a comic book superhero! But, as with everything, there could be some potential drawbacks.

Let’s take a look at what three of these could be:

 

1. Lack of regulation

Organisations of all shapes and sizes have accelerated their use of AI to improve business efficiency, but without an AI rulebook, those same organisations are at risk of breaking any number of future AI regulations. The problem is, since AI is the not-so-new-but-still-new-enough kid on the block, it’s been notoriously hard to regulate. How do you put rules in place for something that you don’t even understand yourself?

Back in 2017, Elon Musk tried to pressure governments to regulate AI before it’s ‘too late’…

“I have exposure to the very cutting-edge AI, and I think people should be really concerned about it… I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal”.

It’s uncharted territory and governments just aren’t quite sure what to do with it yet. This has led some to abuse the current ‘relaxed’ rules around AI, such as the Artificial Intelligence X-Ray App that generates realistic nude images of women simply by feeding the program a picture of the intended target wearing clothes… yep, you read that correctly. This app has sparked a parliamentary debate (prompted by MP Maria Miller) on whether digitally generated nude images need to be banned altogether, because – you guessed it – the law doesn’t yet exist.

Sundar Pichai, Chief Executive Officer of Alphabet, agrees “that artificial intelligence needs to be regulated. It is too important not to. The only question is how to approach it… companies cannot just build promising new technology and let market forces decide how it will be used” (Financial Times).

So, the issue is not the will to regulate, but knowing what to regulate, and keeping pace with the industry.

 

2. Trust in the digital age

Back in 2020, Channel 4 delivered an alternative message from the Queen, to warn us of the upcoming battle between misinformation and truth. William Bartlett, the film’s director, said…

“This is an interesting spin-off from the recent advances made in ML and AI and while it is a powerful new technique for image makers everywhere, it is also a tool that can be used to misrepresent and deceive… we wanted to create a sequence that is hopefully entertaining enough that it will be seen by a lot of people and thereby spreads the very real message that images cannot always be trusted”.

The technique in question was the deepfake – a use of AI and deep learning to trick people into thinking that what they see is real. This is causing concern that the tech could be used to exacerbate fake news and spread misinformation.

These deepfakes are not just limited to visual media. A realistic audio clone of a CEO’s voice was created in 2020 in an attempt to commit fraud and convince employees to send cash to their ‘boss’ for an ‘urgent business deal’ (Vice).

If that’s not terrifying enough, check out 14 other examples of deepfakes here.

 

3. AI-powered machines lack empathy

Empathy. Defined by the Oxford Dictionary as ‘the ability to understand another person’s feelings, experience, etc.’ Seems simple enough. Easy enough to train a piece of tech to understand emotions, perhaps?


It’s long been thought that facial expressions are the number one indicator of how a person is feeling. This has led scientists to use facial expressions to teach their AI-powered tech to ‘detect emotions’. Named ‘emotion recognition technology’ (ERT), this is today a multi-billion-dollar industry.
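To give a flavour of what sits underneath ERT, here’s a minimal, purely illustrative sketch in Python – the ‘facial features’ and labels below are made up, and real systems use deep networks trained on millions of images – showing the kind of supervised learning involved:

```python
# Illustrative only: a toy 'emotion classifier' trained on made-up
# facial-feature vectors. Real ERT systems learn from millions of
# labelled face images, but the principle is the same: map features
# of a face to an emotion label.
from sklearn.linear_model import LogisticRegression

# Pretend features: [mouth_curvature, brow_raise, eye_openness]
training_faces = [
    [0.9, 0.2, 0.6],    # broad smile
    [0.8, 0.3, 0.7],    # gentle smile
    [-0.7, -0.5, 0.3],  # frown, furrowed brow
    [-0.8, -0.4, 0.2],  # deep frown
]
training_labels = ["happy", "happy", "sad", "sad"]

model = LogisticRegression().fit(training_faces, training_labels)

# A new face is assigned whichever label its geometry most resembles --
# the model knows nothing about context or culture, only geometry.
print(model.predict([[0.85, 0.1, 0.5]]))  # -> ['happy']
```

The catch, as we’re about to see, is that the mapping from facial geometry to emotion is nowhere near as universal as a model like this assumes.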

ERT is used across lots of industries, from recruitment – with Unilever claiming it saved 100,000 hours of human time – through to education, where it’s used to monitor which students are paying attention in class (BBC).

The truth is, this age-old theory that facial expressions convey emotions is losing support. Observations made in the 1960s and 1970s by psychologist Paul Ekman are being revisited and their validity questioned. He suggested that emotional expressions are universal – meaning that the way you or I might convey happiness would be exactly the same. However, many researchers now think facial expressions vary widely between contexts and cultures, e.g. a study found that whilst Westerners and East Asians had similar ways to display pain, they had different ideas about expressions of pleasure (Nature)… making it a whole lot harder for AI algorithms to understand what we’re feeling.

Could this lead to AI programmes misinterpreting our emotions? Perhaps leading to bias across different cultures?

A little in-browser web game built by researchers from the University of Cambridge aims to show why judging emotions from facial expressions alone might be flawed – give it a go!

With researchers still toing and froing over whether AI can, or even should, be able to understand and interpret emotions, some are insisting that it isn’t used at all. The AI Now Institute, a research centre at New York University, has even called for a ban on uses of emotion-recognition technology in sensitive situations, such as recruitment or law enforcement.

 

Drawbacks aside, where can we see AI going and how will it be a part of our future?

Back in 2018, AI ‘oracle’ and venture capitalist Dr. Kai-Fu Lee said…

“AI is going to change the world more than anything in the history of mankind. More than electricity”.

Just like the common lightbulb, Alexa has quickly become a household name.

Alexa is just one example of how AI has crept into our daily routine – what’s the weather looking like? Ask Alexa. Need directions? Ask Alexa. How long do you boil an egg for? Ask Alexa. This AI-powered tool has been so successful, it’s given the name ‘Alexa’ a whole new meaning – pretty powerful stuff!

Household necessities aside, AI is also used heavily in the customer service industry. Picture this… you’re having issues with your gym membership. You go online, a little box pops up, you chit-chat with an agent, problem resolved. But this isn’t an agent at all. It’s an AI programme – populated with a predetermined workflow that covers a long list of queries a customer might have.
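As a rough, hypothetical illustration (the queries and responses below are entirely invented), the predetermined workflow behind a basic support bot can be as simple as matching keywords in a customer’s message to canned responses:

```python
# A toy rule-based customer service 'agent': match keywords in the
# customer's message against a predetermined workflow of responses.
# Real chatbots layer natural language processing on top of this idea.

WORKFLOW = {
    ("cancel", "membership"): "I can help with your membership. Could you confirm your account email?",
    ("payment", "card", "billing"): "That sounds like a billing issue. Let me check your last payment.",
    ("hours", "open", "opening"): "Our gyms are open 6am to 10pm, seven days a week.",
}
FALLBACK = "Sorry, I didn't catch that. Let me put you through to a human agent."

def reply(message: str) -> str:
    words = set(message.lower().split())
    for keywords, response in WORKFLOW.items():
        if words & set(keywords):  # any keyword present in the message?
            return response
    return FALLBACK

print(reply("Why was my card charged twice?"))
# -> "That sounds like a billing issue. Let me check your last payment."
```

Anything the workflow doesn’t cover falls through to a human – which is exactly why these bots feel helpful for common queries and hopeless for unusual ones.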

AI has even begun to compose songs. It’s not going to give Beyoncé a run for her money just yet, but just as sound developed from a low-quality, fuzzy noise and TV from a pixelated grey mass, AI is set to do the same – perhaps, one day, making it indistinguishable from the real artists.

AI has started to write software as well, and many believe it will have the potential to write more complex AI – writing itself! Diffblue, an Oxford University start-up, already has an AI system that automates the writing of software tests. Some researchers envision a time when anyone can create software simply by telling AI what they want the software to do (New York Times).

What’s interesting is that the AI that everyone imagined – humanoid robots collecting groceries, sweeping your floor, keeping you company – is also progressing, it just has a little way to go before it’s adopted into our homes as quickly as we invited Alexa into the kitchen.

In fact, most AI works predominantly behind the scenes – silently monitoring, collecting data and solving complex problems like it’s not even there. And the prevalence of this tech is no doubt going to increase over the next few years.

Google, Apple, Microsoft and Amazon are reportedly spending billions to create AI-powered products and services, and universities are also making AI a more prominent part of the curriculum, with MIT alone dropping $1 billion on a new college devoted solely to computing with an AI focus (Built In). 2021 UCAS figures also show a whopping 400% increase in students enrolled on AI courses in the UK (UCAS).

Big things are on the horizon. It’s just a matter of time.

 

Augmented intelligence

If you’d asked someone what the future of AI was 10 years ago, they might have said, “AI-powered tech will put millions of workers out of jobs”. But many researchers seem to have changed their tune – stating that augmented intelligence is now the way forward.

Gartner defines augmented intelligence as ‘a design pattern for a human-centred partnership model of people and AI working together to enhance cognitive performance, including learning, decision making and new experiences’. In other words, combining AI with human efforts to get the best results and bringing a different set of skills to the table.

Take Grammarly. A piece of tech that monitors us as we type, spellchecking and suggesting new words along the way. This is a great example of AI augmentation – it hasn’t replaced the need for a writer, it simply tries to steer them in the right direction for some killer copy. It also helps the writer understand whether a sentence makes sense and reduces spelling mistakes, making the proofreading process more efficient.
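As a minimal sketch of that augmentation idea (the word list is invented, and the real Grammarly is vastly more sophisticated), a spellchecker can suggest corrections without ever writing a word for you:

```python
# A toy spellchecker in the spirit of augmented intelligence: it never
# writes for the author, it only suggests. The word list is purely
# illustrative; real tools use full language models.
from difflib import get_close_matches

DICTIONARY = ["artificial", "intelligence", "regulation", "empathy",
              "augmented", "algorithm", "recognition"]

def suggest(word: str) -> list[str]:
    # Return dictionary words that closely resemble the input.
    return get_close_matches(word.lower(), DICTIONARY, n=3, cutoff=0.6)

print(suggest("inteligence"))  # -> ['intelligence']
print(suggest("algoritm"))     # -> ['algorithm']
```

The human stays in charge of the copy; the machine just narrows down the options – which is the whole point of augmentation.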

Another example of augmentation today is Tesla’s self-driving cars. These cars have autopilot, yes, but they still need human hands on the wheel to operate. Filled to the brim with ultrasonic sensors, forward-facing radar and 360-degree cameras, these vehicles help us get from A to B without crashing into C.

As anyone who has ever used Grammarly will know, there’s still some way to go before we get the perfect tool. Everybody writes differently, using different languages, so it’s no surprise that Grammarly suggests the wrong word, or picks up a spelling mistake that isn’t there, from time to time. The same goes for self-driving cars – anyone who has watched the news will know that these cars aren’t perfect.

With every trial, error, update and new product release, we move closer to this kind of augmented tech maturing enough to become part and parcel of everyday life – enabling everyday people and organisations to be more efficient, save costs and focus on the more interesting projects in life.

 

Regulation that builds trust

Nobody wants to put a lid on innovation, but when that innovation impacts the very fabric of society, a lack of regulation becomes a troublesome affair.

The EU has described regulation as essential to the development of AI tools that consumers can trust, stating that the opportunities and challenges of AI need to be addressed in order to effectively promote its development and deployment (European Commission).

Then there’s the fact that AI relies on big data, impacting privacy in a major way. You need only take one look at the Cambridge Analytica scandal, in which millions of people’s Facebook profiles were harvested to build powerful software programs that could predict and influence choices at the ballot box in the Presidential Election (Guardian).

“Advancing AI by collecting huge personal profiles is laziness, not efficiency. For artificial intelligence to be truly smart, it must respect human values, including privacy. If we get this wrong, the dangers are profound” – Tim Cook, Apple CEO

Recently, regulators and lawmakers around the world have made it clear that new AI laws are inbound. Financial regulators in the US released a request for information on how banks use AI, signalling that new guidance is coming for the finance sector. The US Federal Trade Commission released a set of guidelines on ‘truth, fairness, and equity’ in AI (Harvard Business Review). And, last but not least, the European Commission has released its own proposal for the regulation of AI, including fines of up to 6% of a company’s annual revenues for violations.

With new AI regulations on the horizon, companies could do worse than to look at existing guidance to prepare their business for innovation with boundaries. The likes of Google and Microsoft are already developing formal AI policies with commitments to safety, fairness, diversity, and privacy. Other companies have even appointed ethics staff to monitor the introduction and enforcement of such policies (HBR).

Could the new rules damage innovation?

Many believe that tighter regulations could damage the rate at which organisations innovate. One example the Financial Times reported on involves a company called Duolingo. To cut a long story short, Duolingo has developed a piece of software named ‘The English Test’, which allows people to demonstrate language proficiency to educational institutions online, anywhere in the world. New regulations could class this software as a ‘high-risk’ AI system – resulting in expensive and time-consuming requirements for Duolingo, and any other AI system like it, to meet. Could this be a potential deterrent for tech entrepreneurs?

The reality is, people don’t like change until they can see and understand the advantage it can bring them. Take the General Data Protection Regulation (GDPR), for example. For years and years there had been little regulation around how a person’s data was held, how it was used, and for how long… and now, just three years after the GDPR came into force, we see GDPR policies as standard. It was dropped into society with a bang, disrupting many business-as-usual processes, but eventually it slotted in and became the norm.

Regulation has the potential to see human rights upheld, discrimination and bias reduced, and the misuse of AI curbed altogether – perhaps leading to higher levels of trust in AI.

 

Final thoughts from our resident AI expert, Sean Robertson – Principal Consultant in Data and Analytics

I think there are three main areas where we will see significant uptake and value in AI in the next three years.

Firstly, in drug design and discovery. Driven by the pandemic, the development of vaccines has been reduced from decades to years. We will see a similar reduction in scientists’ drug design and discovery timelines as deep learning is applied to augment the workflow.


Estimates so far suggest these processes will be reduced from years to months – which will have an exponential effect on new drug generation and people’s health.

Secondly, in IT operations. The pandemic has driven a huge growth in digital uptake and experiences, spanning customer experience platforms, communication platforms, data platforms, applications, infrastructure and networks, both in the cloud and in organisations’ data centres.

This is a really big, complex ecosystem to monitor, tune and fault-find – and difficult for humans to do. AI will provide the capabilities to collect system data in real time, integrate it, apply advanced models to it, and then either recommend corrections to humans or self-correct independently for low-risk interventions.
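To make that concrete, here’s a minimal, hypothetical sketch of the loop Sean describes – the metric, thresholds and actions below are all invented for illustration:

```python
# Toy AIOps check: flag a metric reading that strays too far from its
# recent norm, then either self-correct (low risk) or escalate to a
# human. Metric, thresholds and actions are invented for illustration.
from statistics import mean, stdev

def check(history: list[float], latest: float, z_threshold: float = 3.0) -> str:
    mu, sigma = mean(history), stdev(history)
    if sigma == 0 or abs(latest - mu) / sigma < z_threshold:
        return "ok"
    # Anomaly detected: decide between self-healing and escalation.
    if latest < mu:  # e.g. load quietly dropped -> low-risk restart
        return "self-correct: restart the affected service"
    return "escalate: recommend a fix to the on-call engineer"

cpu_history = [42.0, 40.5, 43.1, 41.2, 39.8, 42.7]
print(check(cpu_history, 41.0))  # -> ok
print(check(cpu_history, 95.0))  # -> escalate: recommend a fix...
```

Real AIOps platforms apply far richer models across many signals at once, but the recommend-or-self-correct split is the core pattern.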

Thirdly, the massive growth in data fusion, which is applicable to self-driving cars, warships and crime prevention!

The underlying principles are the same: collect data from a range of sensors – whether that’s cameras, radar, sound, motion or satellites – fuse this data together, and then look for areas of interest or concern. There are billions being invested in self-driving cars and in the military, so this will spill over into less well-funded areas like crime prevention and emergency response.
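As a rough sketch of that underlying principle (the sensor names and noise figures below are made up), fusing data often boils down to weighting each sensor’s reading by how much you trust it – here via inverse-variance weighting:

```python
# Toy data fusion: combine noisy position estimates from several
# sensors via inverse-variance weighting, so the most reliable sensor
# counts the most. Sensor names and variances are illustrative.

def fuse(readings: dict[str, tuple[float, float]]) -> float:
    # readings maps sensor name -> (estimate, variance)
    weights = {name: 1.0 / var for name, (_, var) in readings.items()}
    total = sum(weights.values())
    return sum(w * readings[name][0] for name, w in weights.items()) / total

position = fuse({
    "camera":    (102.0, 4.0),  # noisier, trusted less
    "radar":     (100.0, 1.0),  # most trusted
    "satellite": (101.0, 2.0),
})
print(round(position, 2))  # -> 100.57, pulled toward the radar's estimate
```

Production systems use techniques like Kalman filtering over streaming data, but the idea is the same: no single sensor is trusted outright, and the fused picture is better than any one source.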

 

Closing remarks

AI isn’t perfect. Perhaps it never will be. But there is more we could do to help guide its integration into society in a way that’s mutually beneficial to the businesses that want to profit from it and the consumers that wish to reap the ideas it sows and the problems it solves.

AI is still in its infancy, and we’ve only just scratched the surface of its potential. What we can be sure of is some really exciting advances in the coming years, and we can thank the major investments and the increase in AI education for that.

What will be interesting to see is how imminent AI regulations will play out and the impact this will have on innovation.

Let us know your thoughts on the ECS LinkedIn page or Twitter.

If you missed part one of the AI series, AI – the stuff of movies, now a reality, make sure to check it out here.

****

More about the authors:

Louise Fenn is from sunny Yorkshire! Before making the jump into the technology sector, she spent the early parts of her career working in healthcare. She is now the Content Marketing Executive at ECS, with a keen interest in writing and design.

Sean Robertson has spent his entire career attempting to make sense out of data. He started off in hands-on roles building predictive machine learning models in the energy and banking industries. He then progressed into leadership roles, in both client and consultancy settings, imagining, shaping and leading large data transformation programmes. His current data interests include data modernisation and industrial IoT.
