TECH IN AN ETHICAL LIGHT

Why innovative technologies must be scrutinised – Sanjeewaka Kulathunga

With the increased use of AI, it’s more important than ever to evaluate the ethical implications of emerging innovative technologies. This is especially true for language models such as ChatGPT, which can generate human-like text and converse with users.

ChatGPT is a revolutionary artificial intelligence chatbot launched by OpenAI in November 2022. It is built on large language models (LLMs), and has been fine-tuned with supervised and reinforcement learning approaches. The chatbot relies on a technique known as reinforcement learning from human feedback (RLHF).

When it comes to fine-tuning through supervised and reinforcement learning techniques, human trainers help improve the performance of the model. In the supervised stage, the model is fed example conversations in which the trainers play both the user and the AI assistant. In the reinforcement learning phase, trainers rank responses the model generated in earlier conversations; these rankings are used to train a reward model that guides further fine-tuning.
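To make the idea concrete, here is a minimal, illustrative sketch in Python of the reward-model step in RLHF. The random vectors standing in for response embeddings, the tiny network and the variable names are all assumptions made for illustration; production systems score the outputs of a full transformer rather than toy features.

```python
# A minimal sketch of the reward-model step in RLHF, assuming PyTorch is
# installed. Random vectors stand in for response embeddings purely to show
# the pairwise preference loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

EMBED_DIM = 16
chosen = torch.randn(8, EMBED_DIM)    # embeddings of responses trainers preferred
rejected = torch.randn(8, EMBED_DIM)  # embeddings of responses trainers ranked lower

# A tiny reward model: maps a response embedding to a scalar reward.
reward_model = nn.Sequential(nn.Linear(EMBED_DIM, 32), nn.ReLU(), nn.Linear(32, 1))
optimiser = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

for step in range(100):
    r_chosen = reward_model(chosen)      # reward for preferred responses
    r_rejected = reward_model(rejected)  # reward for rejected responses
    # Pairwise preference loss: push the preferred response's reward above
    # the rejected one's.
    loss = -F.logsigmoid(r_chosen - r_rejected).mean()
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

print(f"final preference loss: {loss.item():.4f}")
```

The pairwise loss pushes the reward for trainer-preferred responses above that of rejected ones; the learned reward model then steers the subsequent reinforcement learning stage.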

Due to the proliferation of AI, chatbots and language models such as GPT (Generative Pre-trained Transformer) are increasingly being integrated into our daily lives. While ChatGPT has the potential to offer several benefits, it also raises ethical concerns that should be addressed.

One of the main ethical concerns is the possibility of the chatbot being used to mislead users about the authenticity of data and information. Since ChatGPT is designed to produce responses that appear human, it can be difficult for people to distinguish between text generated by a machine and that written by a real person.

This can lead to the spread of false or misleading information, which can have disastrous consequences. If a chatbot has been trained on biased or outdated data, it may produce responses that reflect those flaws and contribute to the spread of disinformation.

Another concern is the possibility of it being used to impersonate someone. A model can be trained on the writing style and speech patterns of a particular person, so that someone could construct a chatbot that sounds exactly like that individual. This could be used to deceive or otherwise influence people.

For instance, a chatbot could be used to imitate a trustworthy person to gain access to sensitive information or trick people into making decisions they would not otherwise make.

Apart from these considerations, there’s concern that chatbots may spread prejudice. If a chatbot has been trained on biased data, it could give answers that reflect that bias. This is particularly problematic if the chatbot is used for customer service or support, as it could generate responses that are discriminatory or disrespectful towards certain groups of people.

To reduce the possibility of bias, it’s important to carefully consider the data used to train chatbots.
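As a simple illustration of what considering the data can mean in practice, the Python sketch below counts how often gendered pronouns co-occur with professions in a handful of hypothetical training sentences. The example sentences, the keyword list and the assumption that the profession is the second word of each sentence are contrivances for illustration; a genuine audit would be far more thorough.

```python
# A minimal sketch of one kind of training-data audit, assuming the examples
# are available as plain text. Keyword counting is a crude proxy; real audits
# use much richer demographic and topical checks.
from collections import Counter

training_examples = [
    "The nurse said she would call back tomorrow.",
    "The engineer said he fixed the server.",
    "The doctor said he would review the results.",
    "The teacher said she prepared the lesson.",
]

pronouns = {"he": "male", "she": "female"}
counts = Counter()

for text in training_examples:
    words = text.lower().replace(".", "").split()
    profession = words[1]  # toy assumption: the profession is the second word
    for word in words:
        if word in pronouns:
            counts[(profession, pronouns[word])] += 1

for (profession, gender), n in sorted(counts.items()):
    print(f"{profession}: {gender} pronoun count = {n}")
```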

One approach to addressing these ethical issues is for ChatGPT to be open and transparent about its capabilities and limitations. Chatbots could carry a disclaimer stating that they’re not human and can’t always provide objective responses. This would reduce the risk of users being deceived or misled by ChatGPT.
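Such a measure could be as simple as attaching a standing notice to every reply. The sketch below assumes a hypothetical generate_reply() function standing in for the real model call, purely to show where a disclaimer would sit.

```python
# A minimal sketch of the transparency measure described above. The
# generate_reply() function is a hypothetical stand-in for a real model call.
DISCLAIMER = (
    "Note: this reply was generated by an AI chatbot. It is not a human "
    "and may not always be accurate or objective."
)

def generate_reply(user_message: str) -> str:
    # Stand-in for a real model call; returns a canned response for the sketch.
    return f"Here is some information about: {user_message}"

def reply_with_disclaimer(user_message: str) -> str:
    """Return the chatbot's reply with the transparency disclaimer attached."""
    return f"{generate_reply(user_message)}\n\n{DISCLAIMER}"

print(reply_with_disclaimer("renewable energy in Sri Lanka"))
```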

Notwithstanding these ethical considerations, it’s important to recognise that ChatGPT can improve communication. It can assist with language learning by providing personalised feedback, and help students practise their speaking and writing skills.

And it can also improve customer service by responding quickly and accurately to customer queries, enabling human staff to focus on more difficult tasks.

In addition, ChatGPT has the potential to help corporate professionals with a variety of other tasks such as summarising long articles and transcribing audio recordings. It can also increase productivity and minimise workloads for people by automating certain organisational processes.
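As one example of the summarisation use case, the sketch below uses the official OpenAI Python client (version 1 or later); it assumes an OPENAI_API_KEY environment variable is set, and the model name and prompts are illustrative rather than prescriptive.

```python
# A minimal sketch of summarising a long article, assuming the OpenAI Python
# client (v1+) is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

long_article = """(paste the full text of a long article here)"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any chat-capable model would do
    messages=[
        {"role": "system", "content": "You summarise articles for busy executives."},
        {"role": "user", "content": f"Summarise this in five bullet points:\n\n{long_article}"},
    ],
)

print(response.choices[0].message.content)
```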

Given the possible benefits and ethical issues of ChatGPT, it is important to think about how it’s used and regulated. It should be used responsibly and ethically, and steps must be taken to mitigate the potential dangers.

Finally, ChatGPT has the potential to offer great benefits across a wide range of sectors and applications. However, its ethical implications must be studied and precautions taken to mitigate the attendant risks and complexities.

Corporate entities must ensure that it is used responsibly and ethically by disclosing its capabilities and limitations, and by regulating its use in sensitive areas.

It is also important to continue monitoring and exploring the ethical implications of ChatGPT as it evolves and becomes more integrated into our lives. Accordingly, companies must ensure that their deployments of ChatGPT are as free of bias and systematic errors as possible, to ensure accurate and reliable results for the benefit of the world.