Technology must have inherent ethical frameworks in place – Gloria Spittel

Technology isn’t neutral. It is developed, used and managed for specific purposes, although the gap between what is intended and what materialises manifests abundantly in misuse and repurposing. Discussions about who is accountable for the misuse of technology are rife, with positions ranging from placing the blame on end users to holding manufacturers responsible.

But in a world of connected devices, the IoT and AI, and the rampant use of social media platforms – especially in places where such platforms serve as primary sources of news – it is increasingly evident that technology needs inbuilt ethical frameworks.

For instance, consider Twitter’s decision to flag tweets from US President Donald Trump in May. Tweets from the president were either hidden behind warnings about their content or accompanied by a link to fact-checking resources so that readers could verify the information.

Twitter’s policy of hiding rather than deleting rule-breaking content from public figures was announced in mid-2019. Prior to this, it appears that such figures had their tweets and accounts treated in the same manner as those of ordinary citizens.

If content is seen as violating the platform’s policies, it is usually deleted and the account may also be suspended, depending on the severity of the violation. This continues to be the practice for ordinary users of the platform.

Following the ‘censorship’ by Twitter, attention moved to Facebook, where the same content from the president remained unflagged and untouched.

But Twitter’s actions lend weight to discussions on the limits of freedom of expression and the right of social media companies to moderate content.

A topic of discussion that has famously involved Facebook in the context of violence is the spread of misinformation, or fake news. In May, Facebook apologised for its role in Sri Lanka’s anti-Muslim violence of 2018. The company acknowledged its lack of response to posts that seemingly violated its policies, and which may have helped incite and spread violence.

Beyond discussions of misinformation and the propagation of dangerous content on social media, technology faces other ethical dilemmas too. Some of these arise simply because of the unconscious biases that humans code into software or build into machinery, making it unusable for some people.

In her book Algorithms of Oppression, Safiya Noble tackles an uncommon subject: how search engines reinforce racism and sexism. This may seem implausible since search engines operate ‘automatically’ on available data. The problem lies in that very data and in the algorithms written to favour some of it through their selection and ranking processes.

The prevalence of this skewed data is fuelled by private and capitalist interests, and the monopolisation of services (such as search engines) on the internet by a few big players.
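One way this favouring of skewed data can work is through simple popularity feedback: results that were clicked before are ranked higher, which earns them more clicks. The toy sketch below (all names and numbers invented for illustration; real search ranking is far more complex) shows how a small early lead for one result compounds into permanent dominance.

```python
# Hypothetical sketch: how a popularity-weighted ranker can entrench bias.
# All page names and click counts here are invented for illustration.

def rank(results, click_history):
    """Order results by past click counts -- more clicks, higher rank."""
    return sorted(results, key=lambda r: click_history.get(r, 0), reverse=True)

def record_click(click_history, result):
    """Register one more click for a result."""
    click_history[result] = click_history.get(result, 0) + 1

# Two pages answer the same query; one starts with slightly more clicks.
history = {"stereotyped-page": 6, "accurate-page": 5}
results = ["accurate-page", "stereotyped-page"]

# Users tend to click the top result, so the early lead compounds.
for _ in range(10):
    top = rank(results, history)[0]
    record_click(history, top)

print(rank(results, history))
# → ['stereotyped-page', 'accurate-page'] -- the early leader stays on top
```

Nothing in this loop asks whether a page is accurate or prejudicial; the ranker merely amplifies whatever users clicked before, which is one mechanism by which existing biases get reinforced rather than corrected.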

Furthermore, the perpetuation of prejudicial markers as attributes of the identity of individuals from any oppressed group remains a certainty under the status quo, because people oppressed on the basis of religion, race, gender or sexual orientation remain so in real life as well.

To break this cycle, governments or tech manufacturers need to be held accountable and must revise the ways in which data is presented as information, while admitting that something as innocuous as a search engine has the power to frame beliefs and thoughts, perpetuate prejudices and biases, and validate dangerous viewpoints.

You don’t believe it?

Try some searches on various search engines and evaluate the results. Using a subject you’re familiar with is a good starting point.

But the effects of algorithms do not stop with search engines; they can have far-reaching consequences, from the commercial and political advertisements that are seen to how job applications are screened (where a résumé screening tool is used, as is the case at most large companies).
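A résumé screener can inherit bias from its training data even without any intent to discriminate. The toy sketch below (the résumés, keywords and scoring scheme are all invented; real screening tools are far more sophisticated) learns keyword weights from past hires, so any incidental pattern in who was hired before, such as a shared hobby, leaks into the scores of new applicants.

```python
# Hypothetical sketch of a keyword-based resume screener.
# The screener weights words by how often they appear in past hires'
# resumes; if past hiring skewed toward one group, the learned weights
# inherit that skew. Training data below is invented for illustration.

from collections import Counter

def learn_weights(hired_resumes):
    """Count how often each word appears across past hires' resumes."""
    return Counter(word for resume in hired_resumes for word in resume.split())

def score(resume, weights):
    """Sum the learned weights of the words in a candidate's resume."""
    return sum(weights.get(word, 0) for word in resume.split())

# Invented history: past hires all happened to mention a rugby club.
past_hires = [
    "python engineer rugby club",
    "java engineer rugby club",
]
weights = learn_weights(past_hires)

a = score("python engineer rugby club", weights)  # matches the old pattern
b = score("python engineer chess club", weights)  # equally qualified
print(a > b)
# → True: an irrelevant hobby changes the ranking
```

The two candidates have identical qualifications; only a hobby differs. Yet the learned weights rank one above the other, which is the mechanism behind the concern that screening tools trained on historical hiring data replicate historical prejudice.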

A quick fix might be to build better and more inclusive data sets, but this cannot be done blindly; it would require expertise to define, operationalise and train for inclusivity and the end goals of an intended programme.

With increasing impetus to incorporate AI systems that mine and process data to provide better results, artificial intelligence is not untouched by ethical issues either.

Far from AI systems taking over the world from humans, they’re mimicking the world as it was lived yesterday. While humans can endeavour to work and live in a better world of equality, equity and justice, AI systems will only hold up a mirror to our worst selves and policies.

For example, amid the continued Black Lives Matter protests in the US (and across the world) in June, IBM announced that it would stop offering facial recognition software for ‘mass surveillance or racial profiling’, citing the need to test such systems for bias.

Think about that. Tech systems supposedly created to secure populations actively work against a portion of the very populations they are meant to serve. Technology isn’t neutral – and thus, it needs schooling in ethics.