BUILDING ETHICAL SYSTEMS

Universal standards for AI are a sine qua non – Sanjeewaka Kulathunga

AI is now indispensable, driving innovation across sectors such as healthcare, finance and transport. As artificial intelligence continues to evolve, the need for universal standards to ensure its ethical and responsible development becomes increasingly critical.

These standards are essential not only to protect individual rights and privacy but also to mitigate potential risks associated with AI systems.

Without universal standards, artificial intelligence can perpetuate existing biases and discrimination. In 2018 for example, Amazon scrapped its AI recruiting tool after discovering that it was biased against women. The tool, trained on resumes submitted over a decade, had learned to favour male candidates because those resumes came overwhelmingly from men – a reflection of the male dominated tech industry.

Universal standards would require rigorous testing for biases and the implementation of measures to ensure fairness and non-discrimination in AI systems.

And the importance of privacy can’t be overstated, especially with AI systems capable of processing vast amounts of personal data.

The Cambridge Analytica scandal in 2018, where the personal data of millions of Facebook users was harvested without consent for political advertising, highlighted the severe consequences of inadequate data protection.

Universal standards must enforce stringent data protection regulations so that AI systems collect, store and use personal data both responsibly and transparently.

The lack of transparency was a significant issue in the case of COMPAS, a proprietary algorithm used in US courts to predict recidivism. A 2016 ProPublica investigation revealed that the system disproportionately labelled African-American defendants as high risk, raising concerns about transparency and accountability.

Such standards would require AI developers to provide detailed information on how their algorithms work and the data they use, to foster trust and promote accountability.

AI systems must be developed with safety and security as paramount concerns. The 2016 incident involving Microsoft’s chatbot Tay, which began tweeting offensive and inappropriate content after being manipulated by users, underscores the potential risks of inadequate safety measures.

It is imperative to mandate comprehensive testing and robust cybersecurity protocols to prevent unauthorised access and the malicious use of AI systems.

These standards must ensure that artificial intelligence systems are designed to be fair and unbiased. Developers should eliminate any discriminatory practices or biases that could result from the data used to train AI models. This involves using diverse and representative datasets, along with continuous monitoring and adjustment to mitigate bias.
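
To make the idea concrete, here is a minimal sketch in Python of the kind of automated fairness check such a standard might mandate. The demographic parity metric, the hiring scenario and the ten percent tolerance are illustrative assumptions, not values prescribed by any existing standard.

    # A minimal sketch of an automated fairness check, assuming a binary
    # classifier whose predictions and the applicants' group labels are
    # already available. All names and the 0.1 tolerance are illustrative.

    from collections import defaultdict

    def demographic_parity_gap(predictions, groups):
        """Return the largest difference in positive-prediction rates
        between any two groups (0.0 means perfectly equal rates)."""
        totals = defaultdict(int)
        positives = defaultdict(int)
        for pred, group in zip(predictions, groups):
            totals[group] += 1
            positives[group] += pred  # pred is 1 (favourable) or 0
        rates = {g: positives[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    # Example: a hiring model's decisions for two applicant groups.
    preds  = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
    groups = ["men", "men", "men", "men", "men",
              "women", "women", "women", "women", "women"]

    gap, rates = demographic_parity_gap(preds, groups)
    print(rates)            # {'men': 0.6, 'women': 0.2}
    if gap > 0.1:           # illustrative tolerance a standard might set
        print(f"Bias alert: selection-rate gap of {gap:.0%} exceeds tolerance")

A check like this would run continuously against production decisions, not just once at launch, so that drift in the underlying data is caught as it emerges.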

Users should have access to information on how AI algorithms work and the data they use. The EU’s General Data Protection Regulation (GDPR) requires organisations to provide meaningful information about automated decisions that significantly affect individuals.
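
What might such an explanation look like in practice? The following is a hedged sketch only; the credit-scoring weights, threshold and feature names are invented for illustration and are not drawn from any real system.

    # A minimal sketch of the kind of decision explanation GDPR-style rules
    # point toward. The weights and threshold are invented for illustration;
    # a real system would derive them from a trained and audited model.

    WEIGHTS = {"income": 0.4, "years_employed": 0.3, "missed_payments": -0.6}
    THRESHOLD = 1.0

    def explain_decision(applicant):
        contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
        score = sum(contributions.values())
        decision = "approved" if score >= THRESHOLD else "declined"
        # Rank factors by how strongly they pushed the decision either way.
        ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
        lines = [f"Application {decision} (score {score:.2f}, threshold {THRESHOLD})."]
        for factor, value in ranked:
            direction = "raised" if value > 0 else "lowered"
            lines.append(f"- {factor} {direction} the score by {abs(value):.2f}")
        return "\n".join(lines)

    print(explain_decision({"income": 3.0, "years_employed": 2.0, "missed_payments": 2.0}))

The point of the sketch is that every automated decision can be accompanied by a plain-language account of which factors drove it, rather than an unexplained yes or no.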

Mechanisms must be established to allow individuals to have control over their data and awareness of its utilisation, similar to provisions in the California Consumer Privacy Act (CCPA).
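
One possible shape for such mechanisms, sketched here under the assumption of a simple in-memory store, is a service that lets users export and delete their records on demand. The class and method names are hypothetical; real systems would back this with durable storage and authentication.

    # A minimal sketch of data-subject controls of the kind the CCPA
    # envisages: users can see what is held about them and request deletion.
    # The in-memory store and audit log are stand-ins for real infrastructure.

    import json
    from datetime import datetime, timezone

    class PersonalDataStore:
        def __init__(self):
            self._records = {}   # user_id -> personal data
            self._audit = []     # who exercised which right, and when

        def collect(self, user_id, data):
            self._records.setdefault(user_id, {}).update(data)

        def export(self, user_id):
            """Right to know: return a portable copy of the user's data."""
            self._audit.append((user_id, "export", datetime.now(timezone.utc)))
            return json.dumps(self._records.get(user_id, {}), indent=2)

        def delete(self, user_id):
            """Right to delete: erase the user's data and record the request."""
            self._audit.append((user_id, "delete", datetime.now(timezone.utc)))
            return self._records.pop(user_id, None) is not None

    store = PersonalDataStore()
    store.collect("u42", {"email": "user@example.com", "ad_profile": ["travel"]})
    print(store.export("u42"))   # the user sees exactly what is held
    print(store.delete("u42"))   # True: data erased on request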

Universal standards should require rigorous testing and evaluation to identify and mitigate potential risks. Proper cybersecurity measures must be implemented to safeguard AI systems against unauthorised access or malicious use, akin to the practices recommended by the National Institute of Standards and Technology (NIST) in its AI Risk Management Framework.
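
As an illustration only – the NIST framework prescribes processes, not code – a toy pre-deployment robustness test might look like the following. The stand-in model, noise level and tolerance are all assumptions made for the sketch.

    # A toy robustness test in the spirit of the pre-deployment evaluation
    # that risk management frameworks encourage: small input perturbations
    # should cause only small output shifts.

    import random

    def model(features):
        """Stand-in for a deployed model: a fixed linear score."""
        return sum(f * w for f, w in zip(features, [0.5, -0.2, 0.3]))

    def robustness_test(inputs, noise=0.01, trials=100, tolerance=0.05):
        rng = random.Random(0)  # seeded so audits are reproducible
        worst = 0.0
        for features in inputs:
            base = model(features)
            for _ in range(trials):
                perturbed = [f + rng.uniform(-noise, noise) for f in features]
                worst = max(worst, abs(model(perturbed) - base))
        return worst <= tolerance, worst

    passed, worst = robustness_test([[1.0, 2.0, 3.0], [0.5, 0.1, 0.9]])
    print("robustness test:", "PASS" if passed else "FAIL", f"(worst shift {worst:.4f})")

A standard could require evidence that tests of this kind – alongside penetration testing and access controls – were run and passed before an AI system goes live.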

Such standards must prioritise human wellbeing and ensure that artificial intelligence systems augment human capabilities rather than replace them. AI should benefit individuals and society, with ethical issues and societal impact considered at every stage. For instance, AI-driven healthcare applications such as IBM Watson for Oncology aim to assist doctors in diagnosing and treating cancer, enhancing their capabilities rather than replacing their judgement.

Standards should embody fundamental principles such as fairness, transparency, privacy, safety and human-centric design. By adhering to them, we can unlock the true potential of AI while ensuring that individuals’ rights are protected and potential risks minimised.

While unethical artificial intelligence practices recently reported in healthcare, autonomous vehicles and social media underscore the pressing need for standards, it is essential to recognise that their implementation goes beyond specific incidents.

It’s the collective responsibility of governments, industries, researchers and individuals to collaboratively establish and adhere to these standards. By doing so, we can foster an environment where AI technology is used to build a better future for all – and promote innovation while upholding ethical values and societal wellbeing.