AI AND HUMANITY

Angelo Fernando wonders whether we should prepare for or panic about the advent of artificial intelligence

Elon Musk wants to keep tabs on Google. Musk is a heavyweight among technology entrepreneurs by virtue of owning and operating a space company (SpaceX), an electric car maker (Tesla) and a high-speed transportation firm few have even heard of. Musk also founded the company that became PayPal. And there’s his solar company too.

But the reason some take Musk seriously on this subject is not his academic pedigree; it’s that he invested in an outfit called DeepMind precisely to keep an eye on AI. In 2014, DeepMind was acquired by Google – whose parent company is now Alphabet.

ACCIDENT WAITING TO HAPPEN Others are equally concerned. OpenAI is a nonprofit AI research company cofounded by Musk and backed by Amazon, Microsoft and PayPal cofounder Peter Thiel. It aims to promote and develop ‘friendly AI’ in such a way as to benefit humanity as a whole.

And it believes that artificial intelligence should be an extension of individual human wills, and be broadly and evenly distributed – in other words, AI that cannot be controlled by one entity.

The excessive power we vest in machines could be dangerous. If those building these systems are not held accountable, says Musk, we could be putting humanity at risk. Intriguingly, one of his other companies has plans to implant brain-computer interfaces. In other words, he is preparing for that future while having panic attacks about it.

Dozens of companies – even Chinese firm Alibaba and several others flying under the radar – view AI as the next opportunity with wide-ranging business advantages. Alphabet has acquired a Belarusian outfit called AIMatter, which created a neural network-based platform, and another called Dark Blue Labs. And of course, Apple’s pedestrian version of AI – Siri – is widely adopted, as is Amazon’s equivalent, Alexa.

Stephen Hawking, who died earlier this year, also spoke out about the downside of AI, warning that “the real risk with AI isn’t malice but competence.” In other words, we could accidentally unleash an AI whose goals are not aligned with ours – and which could prove hostile to us humans. Talk of an ‘intelligence explosion’ has triggered a race to create and prime intelligent machines. But should these machines surpass us, we could be in trouble, Hawking warned.

It’s fascinating to play ‘what if?’ and wonder if we humans might ‘outsource’ our brains to AI, which is tipped to be a superior intelligence.

In 2016, IBM’s Watson – which in the AI world is considered a superior ‘cognitive computing platform’ – came up with a brain cancer treatment plan for a 76-year-old patient in 10 minutes. A team of humans took six days.

The march is on to do more of this – the kind of thing that was the stuff of science fiction in the past. In Isaac Asimov’s short story The Bicentennial Man, a robot named Andrew is at a meeting with the head of the robotics corporation that created it. Andrew comments that “in the end, the corporation will produce one vast brain controlling several billion robotic bodies. All the eggs will be in one basket.”

He adds: “Dangerous. Not proper at all.”

Interestingly, it is the robot – not the human – that considers an outsourced brain a dangerous trend!

BOON TO HUMANITY And then there’s Tim Berners-Lee, whom we ought to take seriously if only because he invented the platform we all use – the World Wide Web. He has conceded that giving machines the ability to do the chores we’d rather not do is not necessarily a terrible thing.

“Many humans are placeholders doing work just until the robots are ready to take over for us,” he said, wryly alluding to the way we act as if we were replaceable.

But are we ‘replaceable placeholders’?

Some jobs are certainly going to be obliterated by machines invested with AI. One study says robots and AI could replace 38 percent of jobs, especially in service sectors. We are not talking of humanoid robots or ‘androids’ moving into the workforce; rather, workers at service kiosks and those who provide intermediary services (think call centre staff, insurance adjusters or tax consultants) are most at risk, since intelligent agents housed in some offsite server could do their work.

Some envision that AI ‘scientists’ could solve some of humankind’s as yet unsolved challenges such as global warming or terminal diseases. Already, some hedge funds use AI to do the job of humans when playing the stock market (for example, in ‘quantitative trading’) by monitoring and analysing large swathes of data, and making trading decisions.
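
As a rough illustration of what such a trading agent does at its core, here is a minimal sketch in Python. The moving-average windows and the simulated price feed are illustrative assumptions, not any fund’s actual strategy:

```python
# A toy moving-average crossover strategy: buy when the short-term
# average rises above the long-term average, sell when it falls below.
# Real quantitative-trading systems add risk limits, transaction costs
# and far richer features; this only shows the basic decision loop.
import numpy as np

def generate_signal(prices, short_window=5, long_window=20):
    """Return 'buy', 'sell' or 'hold' for the latest price in the series."""
    if len(prices) < long_window + 1:
        return "hold"  # not enough history to compute both averages
    short_now = np.mean(prices[-short_window:])
    long_now = np.mean(prices[-long_window:])
    short_prev = np.mean(prices[-short_window - 1:-1])
    long_prev = np.mean(prices[-long_window - 1:-1])
    if short_prev <= long_prev and short_now > long_now:
        return "buy"   # short average just crossed above the long one
    if short_prev >= long_prev and short_now < long_now:
        return "sell"  # short average just crossed below
    return "hold"

# Simulated price history standing in for a live market data feed.
rng = np.random.default_rng(42)
prices = 100 + np.cumsum(rng.normal(0, 1, 60))
print(generate_signal(prices))
```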

Bill Gates is more optimistic…

He views AI as helping the world do what other technologies have done – i.e. improve our lifestyles: “We used to all have to go out and farm. We barely got enough food; when the weather was bad people would starve. Now, through better seeds, fertiliser, lots of things, most people are not farmers. And so AI will bring us immense new productivity.”


PEOPLE-MACHINE HYBRIDS PricewaterhouseCoopers (PwC) has a different perspective on labour and AI.

“Everyone has seen the headlines: Robots and AI will destroy jobs. But we don’t see it that way. We see a more complex picture coming into focus with AI encouraging a gradual evolution in the job market that – with the right preparation – will be positive. New jobs will offset those lost. People will still work but they’ll work more efficiently with the help of AI,” it says.

The idea is that a hybrid workforce of humans and AI can achieve what humans alone could not. PwC goes on to state that “a human engineer defines a part’s materials, desired features and various constraints, and inputs it into an AI system, which generates a number of simulations. Engineers then either choose one of the options or refine their inputs and ask the AI [system] to try again.”
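
A minimal sketch of that loop might look as follows – the Constraints fields, the candidate generator and the acceptance threshold are all invented for illustration, and do not represent PwC’s actual system:

```python
# Schematic human-in-the-loop generative design: the engineer supplies
# constraints, a stand-in "AI system" proposes candidate designs, and the
# engineer either accepts one or tightens the brief and asks again.
import random
from dataclasses import dataclass

@dataclass
class Constraints:
    material: str
    max_weight_kg: float
    min_strength_mpa: float

def generate_candidates(constraints, n=5):
    """Stand-in for the AI system: propose n designs within the constraints."""
    candidates = []
    for i in range(n):
        weight = random.uniform(0.5, constraints.max_weight_kg)
        strength = random.uniform(constraints.min_strength_mpa,
                                  constraints.min_strength_mpa * 2)
        candidates.append({"id": i, "weight_kg": round(weight, 2),
                           "strength_mpa": round(strength, 1)})
    return candidates

random.seed(7)  # reproducible illustration
constraints = Constraints(material="aluminium", max_weight_kg=2.0,
                          min_strength_mpa=200.0)
for round_no in range(3):
    options = generate_candidates(constraints)
    best = min(options, key=lambda c: c["weight_kg"])  # engineer's pick
    if best["weight_kg"] < 1.0:        # good enough: accept the design
        print(f"Round {round_no}: accepted {best}")
        break
    constraints.max_weight_kg *= 0.8   # otherwise refine inputs, try again
    print(f"Round {round_no}: refining, new max weight "
          f"{constraints.max_weight_kg:.2f} kg")
```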

Hawking would have scorned such optimism.

“We stand on the threshold of a brave new world,” he said, evoking author Aldous Huxley, whose Brave New World warned of a technology-driven dystopia.

Hawking remarked: “We all have a role to play in ensuring that we and the next generation have the determination to engage with science … and create a better world for the whole human race.”

So while many cheer on autonomous vehicles and smart tractors that run on AI, we could be entering an era where we also rely on AI to make legal, medical or editorial decisions once made by humans with morals, ethics and a sense of accountability – at least to their communities.

ALGORITHMS AND BIG DATA Algorithms are being trained – with limited success, for the time being – to spot and delete fake news. Yet even when the problem is as serious as AI-powered Russian bots interfering in elections, laypeople like us assume that these information wars are not ours to worry about.
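
For a feel of how such a classifier is put together, here’s a minimal sketch using scikit-learn. The tiny labelled dataset is invented, and real systems train on vastly more data – and still misfire, as noted above:

```python
# Sketch of how a fake-news classifier is typically trained: turn
# headlines into word-weight features, fit a simple model on labelled
# examples, then score unseen text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A toy labelled dataset; production systems learn from millions of examples.
headlines = [
    "Scientists publish peer-reviewed study on vaccine safety",
    "Central bank announces quarterly interest rate decision",
    "SHOCKING miracle cure doctors don't want you to know about",
    "You won't BELIEVE what this one weird trick does",
]
labels = ["real", "real", "fake", "fake"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

print(model.predict(["One weird miracle trick cures everything"]))
```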

To many bystanders, AI is just ‘Absolutely Inconceivable.’ At the entry level are intelligent algorithms such as Halo, which PwC uses for risk assessment; the firm also has a fraud detection bot that uses machine learning.
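
PwC’s tools are proprietary, but the underlying technique can be sketched: train an anomaly detector on ordinary transactions and flag the outliers. A minimal example with scikit-learn’s IsolationForest, on invented data:

```python
# Machine-learning fraud detection in miniature: fit an anomaly detector
# on past transactions, then flag unusual new ones for human review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Invented history of ordinary transactions: [amount in dollars, hour of day].
normal = np.column_stack([rng.normal(50, 15, 500),   # typical amounts
                          rng.normal(14, 3, 500)])   # mostly daytime hours
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score two new transactions; a prediction of -1 marks an outlier.
new_transactions = np.array([[52.0, 13.0],     # ordinary purchase
                             [4800.0, 3.0]])   # large sum at 3 a.m.
for tx, label in zip(new_transactions, model.predict(new_transactions)):
    status = "FLAG for review" if label == -1 else "ok"
    print(f"amount=${tx[0]:.2f} hour={tx[1]:.0f} -> {status}")
```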

Which raises the question…

If mathematical models or algorithms make AI-based decisions that are fair to some at the expense of others, what’s there to complain about?

Berners-Lee identified three sticky issues that could affect us.

On the 28th anniversary of the web (12 March 2017), he published a letter expressing concern about the collection and sharing of personal data, which could be misused by states and corporations; the wildfire-like spread of information both real and untrue, which can be gamed by bad actors and bots; and algorithm-driven online political advertising that can be micro-targeted and unbalance the democratic process.

He is referring to big data being used by AI for micro-targeting before elections.

These campaigns infiltrate the online world as swarms of bots that simulate human accounts.

So does this mean that despite all the optimism about AI and machine learning, we should proceed with caution?

Pattern recognition, machines talking to and learning from other machines, and even the word ‘artificial’ are overhyped. Most experts (Musk, Gates and Berners-Lee aside) agree that we are only at the tadpole stage of AI’s lifecycle, and that it is not about to unleash mayhem or sentient robots bent on destroying us.

Accenture predicts growth through three channels: intelligent automation, labour and capital augmentation, and innovation diffusion.

To unpack what these mean, take a look at the healthcare sector (which leads retail and telecom).

According to Accenture, about 15 percent of healthcare companies are applying machine learning to their processes, people and data. They look to machine learning-enabled processes to reduce costs.

Countries such as Sweden, Japan and the US could see their economies boosted by upwards of 30 percent.

I found one reference in the Accenture report intriguing: healthcare companies in the survey said they hope machine learning will help ‘invent’ new jobs.

On the human resources (HR) front, they envision that it will help them emphasise “distinctively human capabilities when hiring.”

So does that mean that machine learning could help those in HR better recognise human potential?

The implication seems to be that humans are not exactly savvy in spotting human potential. If humans are that incompetent, no wonder machines are winning!

STRAWBERRY FIELDS FOREVER

Elon Musk (listed by Forbes as the 54th richest person in the world): “Let’s say you create a self-improving AI to pick strawberries, and it gets better and better at picking strawberries, and picks more and more; and it is self-improving so all it really wants to do is pick strawberries. So then it would have all the world be strawberry fields. Strawberry fields forever [no room for human beings].”

THE BIRTH OF AI

If AI were a person, he or she would be in his or her early 60s. Depending on your point of view, that’s not exactly old. When the first programme to mimic a human’s problem-solving capabilities was designed, it was not yet known as AI.

Nobel Prize (in Economics) winner Herbert A. Simon partnered with Allen Newell, a scientist who had been studying logistics and organisation theory; the two hit it off and began working on teaching machines to think. It was all mathematical. Their first project was a ‘thinking machine’ called the Logic Theorist.

There was also Alan Turing, the young British polymath who in 1950 wrote a paper discussing the ‘imitation game.’ Until then, computers were seen simply as machines that executed commands and could not process logic. Like Simon and Newell, Turing wondered whether machines could do what humans did – i.e. solve problems.

AI’s IMPACT ON LABOUR

Surprisingly, most of the jobs impacted will be unskilled (although proportionally, skilled jobs will be more positively impacted). Two-thirds of the jobs that will depend on AI in 2030 will involve unskilled work – although this should be interpreted in the context of unskilled labour accounting for 69 percent of jobs in the baseline scenario.

Estimated number of jobs impacted by AI in 2030 – over 326 million

A QUESTION OF ETHICS

ROBOT ETHICS ‘Robot rights’ refers to the moral obligations that humans should have towards the machines they build and operate. Do humans ‘give’ robots a right to exist? Should robots be invested with intelligence that limits them to serving humanity?

MACHINE ETHICS The idea behind ‘machine ethics’ is that machines may one day have to make moral decisions without human oversight – and should be designed accordingly. The three ‘laws of robotics’ that Isaac Asimov proposed throughout his science fiction stories are an early template for how we might assign ethics to machines.

ISAAC ASIMOV

Before AI came into the picture, writer Isaac Asimov brought up the uncomfortable issue of machines that ‘lived’ alongside (and behaved like) humans. His subjects were robots. Asimov invested them with the advanced thinking capacity that made robots useful even though they lacked innate morality and ethics.

In one of his short stories, Robbie – a robot companion for a child named Gloria, whose father worked at a major US robot manufacturing company – has to be given away because the neighbours are nervous. The mother worries that her daughter has more empathy for a robot than for her human friends.

When Robbie mysteriously disappears, Gloria screams: “He was no machine! He was a person just like you and me; and he was my friend. I want him back. Oh Mamma, I want him back.”

What the girl does not know is that Robbie was invested with the ‘three laws’ of robotics that Asimov came up with as the basis for much of his fiction – so ‘he’ was programmed to be intensely protective of her.

To Asimov, robots were not “blasphemous imitations of life” but machines that made for good story material.

His three laws were as follows: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given to it by human beings, except where such orders would conflict with the first law; and a robot must protect its own existence as long as such protection does not conflict with the first or second law.
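
Expressed as code, the three laws amount to a strict precedence check. A hypothetical sketch – the Action fields are invented purely for illustration:

```python
# The three laws as a precedence hierarchy: an action is vetted against
# each law in order, and a higher law always overrides a lower one.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool          # would this action injure a human?
    prevents_human_harm: bool  # does it stop a human coming to harm?
    ordered_by_human: bool     # was it commanded by a human?
    endangers_self: bool       # does it put the robot itself at risk?

def permitted(action: Action) -> bool:
    # First law: never harm a human, or allow harm through inaction.
    if action.harms_human:
        return False
    if action.prevents_human_harm:
        return True   # the first law overrides the second and third
    # Second law: obey human orders (already known to harm no one).
    if action.ordered_by_human:
        return True
    # Third law: self-preservation, only where the higher laws are silent.
    return not action.endangers_self

# Robbie rushing to save Gloria: risky to himself, but the first law wins.
rescue = Action(harms_human=False, prevents_human_harm=True,
                ordered_by_human=False, endangers_self=True)
print(permitted(rescue))  # True
```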