9 days ago by Next Generation

What We’re Getting Wrong About AI


Artificial Intelligence (AI) has a serious branding problem, and most of us are making it worse.

The bulk of conversations taking place around AI are dystopian in nature, proclaiming that robots are coming to take all the jobs. Or, even worse, that AI will render humans redundant and then kill us off.

In late 2018, a different perspective was added to the ongoing conversation about AI.

However, it was one no less polarising. Andrew Moore, VP of AI for Google Cloud, spoke at a Google AI event in November last year and proclaimed that “AI is very, very stupid.”

Well, that's blunt, but it's no more helpful than the machines-stealing-our-jobs story. Where's the nuance? And what exactly is the real story with AI?

AI has been around for a while

Now, it’s highly unlikely that Andrew Moore really believes that AI is stupid. Instead his comment (which has been retweeted extensively) alerts us to the fact that AI, in its current form, has limitations.

In this regard, the Google VP’s comment was actually helpful. It draws us away from an unquestioned narrative that robots and machines are going to replace all of us and takes a more honest look at AI’s capabilities.

It’s a little-known fact that AI has been around for nearly 80 years, and the concept of artificial beings that can think and perform human tasks has featured prominently in storytelling through the ages; Mary Shelley’s Frankenstein is a famous example.

The earliest work today recognised as AI was based on a theory put forward by Alan Turing, the English mathematician and computer scientist. Turing posited that “if a human could not distinguish between responses from a machine and a human, the machine could be considered ‘intelligent’”. In 1943, two American computer scientists, Warren McCulloch and Walter Pitts, built on this line of thinking to create a formal design of data manipulation rules for “artificial neurons”. This work is considered the start of AI, making AI, as a field, 76 years old in 2019.

In those 76 years a lot of technological advances have taken place, and various AI innovations are already working for us. Stock picking in financial services, spotting suspicious transactions on our credit cards, image recognition on our mobile devices, chatbots helping us on ecommerce sites: these give a good idea of the AI we interact with every day. Examples like these are called Artificial Narrow Intelligence (ANI).

The AI that people fear, the “robots-will-steal-my-job-and-kill-me” AI, is called Artificial General Intelligence (AGI). A (most often poorly portrayed) version of this AI is the one that dominates mainstream conversations around AI’s potential, and gives rise to the belief that machines will act and have the appearance of humans. But AGI looks nowhere near becoming a reality anytime soon, not least because of the AI effect.

The AI effect

The AI effect, in part, explains why such an alarming narrative has come to dominate the AI story in recent years.

As mentioned earlier in our post, AI developments are all around us. Many are helpful, such as the ability to spot and halt fraudulent financial transactions as cited above. But what happens with these types of AI programs, when they’re implemented and become part of our day-to-day life, is that they seem too tame and are not considered real intelligence.

Pamela McCorduck is an American author who writes extensively on the philosophical aspects of artificial intelligence. She says, “It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there (has been a) chorus of critics to say, 'that's not thinking'”.

Rodney Allan Brooks, an Australian roboticist and former director of the MIT Computer Science and Artificial Intelligence Laboratory concurs. He says, “Every time we figure out a piece of it (AI), it stops being magical; we say, ‘Oh, that’s just a computation.’”

In light of this human behaviour, it’s easy to see why an alarming misunderstanding about AI has developed and come to dominate the prevailing narrative.

On the one hand, as Pamela McCorduck and Rodney Brooks highlight, once an AI program becomes commonplace our natural inclination is to dismiss it. On the other hand, we also have a fear of the unknown, and the future is a big unknown. When we speak about AI and the possibilities it might hold, the conversation most often defaults to a frightening future where AI overthrows humans.

Sabine Hauert, Assistant Professor in the Bristol Robotics Laboratory at the University of Bristol brings a sense check to things. “I think there’s this idea that AI is going to happen all of a sudden,” she says, “(but) the reality is that we’ve been working on AI for 50 years now, with incremental improvements.”

Humans give AI too much (human) credit

Let no one say that as humans we aren’t a funny bunch!

Along with claiming AI hasn’t really arrived yet, while decrying that AI is going to make humans extinct, we also project human qualities onto the AI we think is coming down the pipeline. This projection is called anthropomorphisation. We believe that AI will look and act like us, with our egos, consciousness and instinct for self-preservation.

We’d be wrong, but it’s easy to see why we anthropomorphise AI in our own image. AI intelligence won’t look anything like human intelligence, but how could we grasp that when the highest intelligence we have ever witnessed is our own?

What AI research is showing us is that there are many different forms of intelligence, and many of them are unlike human intelligence. An AI program built to look for patterns in data and respond to them in a specific, programmed way simply doesn’t look like the AI we imagine.

But we need to expand our understanding of what intelligence is.

Thomas G. Dietterich, President of the Association for the Advancement of Artificial Intelligence from 2014 to 2016, says that “intelligence” as one word covers many different types. "We measure intelligence by how well a person or computer can perform a task, including tasks of learning. By this measure, computers are already more intelligent than humans on many tasks, including remembering things, doing arithmetic, doing calculus, trading stocks, landing aircraft."

Anyone interested in AI is not being served well by the current media coverage of the topic.

In June 2017, Facebook’s AI unit published research on how bots could conduct a negotiation-like conversation. Most of the conversation was coherent, but at some points the bots would mutter sentences such as “Balls have zero to me to me to me to me to me to me to me to”.

When these results were investigated, the scientists at Facebook realised that they had not included a constraint to limit the bots to standard English, and as a result the machines invented a machine-English lingo of their own. For experts in the field, this was an interesting finding, but neither groundbreaking nor even surprising.

Not so according to Fast Company, which immediately sounded the alarm in an article entitled “AI Is Inventing Languages Humans Can’t Understand. Should We Stop It?”. The report claimed that once the researchers realised the bots were using a new language they pulled the plug, creating the perception that the bots were out of control, and possibly dangerous.

The article quickly went viral and was picked up by other publishers. The zenith of panic was reached by The Sun, which suggested that the experiment “closely resembled the plot of The Terminator in which a robot becomes self-aware and starts waging a war on humans”.

This was in no way what the experiment resembled, nor what the research paper had highlighted, but it does provide a recent example of how much incorrect hype surrounds AI.

For a deeper and more nuanced look at AI developments, as well as the role it can play in solving (some or all) humankind’s problems, we’ll need to look further than the scandalous headlines. This resource is a useful list of real thought leaders in the AI space. It includes courses, YouTube videos and blogs worth checking out too.

AI is already here and has been part of our lives for many decades. Rigorous thinking and debate needs to accompany the developments we are seeing in technology, and having the right conversations about AI is a good start to understanding it better.