While AI is making a positive impact across the world, several prominent business leaders have raised the alarm about the dangers of uncontrolled advances in this sphere

BY Rehan Fernando

One may ponder how advances in AI could challenge organisations and individuals in the years ahead. Here are some thoughts.

CONTROL For decades, computers have followed commands given by humans. The advent of artificial intelligence blurs this divide and places greater control and autonomy with computer systems.

Considering advances in machine learning, where systems can improve on their own, it’s possible that AI could reach a point where reliance on human expertise or intervention is no longer required. In theory, and as depicted in some science fiction films, computer systems could at some point seize control.

JOBS Just as automation displaced much of the factory workforce in recent decades, and as robotic process automation (RPA) has begun to do for routine office tasks, AI could bring significant substitution to the professional workspace. This would have a major impact on labour markets and potentially make hundreds of thousands of professionals across a broad gamut of industries redundant.

The speed of substitution is also likely to be much faster. Those directly impacted would likely struggle to sustain their livelihoods on the one hand and realign their skill sets on the other. However, AI could also create exciting new employment opportunities, particularly in developing such systems.

PRIVACY The inherent nature of artificial intelligence requires data for learning and adaptation. Increasing use of AI therefore raises crucial privacy concerns, including how personal data will be used and distributed by these self-reliant systems.

The reduction in human involvement may also dilute responsibility and accountability. Could a computer system be held responsible for its actions? Or would an organisation or individual retain responsibility even though they may have limited control over the artificially intelligent system and its behaviour?

SAFETY Enabling AI systems to control key defence and surveillance functions raises concerns about safety. Could a system turn rogue, be misguided, be compromised by a computer virus or fail to take corrective action due to an inability to exercise gut instinct?

These weaknesses could lead to potentially disastrous consequences. There’s likely to be more cautious use of AI in such areas, with a substantial proportion of control retained by designated persons instead of being delegated to an artificially intelligent computer system.

DEPENDENCE With increased use of AI, humans risk becoming more complacent. The reduced need to exercise judgement could diminish cognitive function over time and make humans more dependent on artificially intelligent computer systems.

In a basic sense, this is akin to how many individuals have come to rely on a calculator instead of applying their own maths skills to perform calculations. Professionals across a broad range of industries are already considerably reliant on systems to help them with daily job functions; reliance on AI could extend this dependence.

OWNERSHIP Typically, ownership of content remains with the content creator, the organisation for which he or she works, or a third party according to defined contractual agreements.

Questions begin to arise as to who would own content created through artificial intelligence. The complexity of the many algorithms involved, their reliance on multiple data sources and the self-sufficiency attained through machine learning make it very difficult to pinpoint ownership.

One may wonder whether the future could bring a flood of content without clear ownership, and how ownership would be defined in such situations.

ADAPTABILITY Although AI is far more advanced than in the past, challenges in adaptability remain. These systems perform well in familiar circumstances but struggle when confronted with situations that vary significantly from a defined norm.

This contrasts with a human expert, whose gut instinct sets in to compensate for a shortfall in information or familiarity. With advances in machine learning, it’s likely that this gap will narrow as artificially intelligent systems become far more adaptable.

INTUITION Humans can understand certain situations instinctively without the need for conscious reasoning. AI, on the other hand, relies on algorithms where underlying reasoning is fundamental to understanding and interpreting various situations.

This limits the scope of artificial intelligence and is an area that’s likely to demand much more effort to address, the challenge being that intuition is extremely difficult to define and emulate.

Overall, while further advances in AI and its applications are inevitable, controlled adoption with due precautions can help minimise some of the adverse impacts.