Tribune News Network
Doha
A new study on Sustainable Development published by Dr Jon Truby, associate professor of Law and director of the Centre for Law and Development at the College of Law, Qatar University (QU), explains how Artificial Intelligence (AI), if not regulated, can be a threat to sustainable development.
The study talks about how unregulated AI is a threat to the Sustainable Development Goals (SDGs) -- a set of guidelines created by the United Nations (UN) for the sustainable development of all countries.
Dr Truby pointed out that this threat is especially prevalent in developing countries, which often relax AI regulations to attract investment from Big Tech.
Dr Truby said, “In this study, I propose the need for proactive regulatory measures in AI development, which would help to ensure that AI operates to benefit sustainable development.”
In his study, Dr Truby discussed three examples to show how unregulated AI can be detrimental to SDGs. To begin with, he focused on SDG 16, a goal that was developed to tackle corruption, organised crime and terrorism.
He said because AI is commonly used in national security databases, it can be misused by criminals to launder money or organise crime. This is especially relevant in developing countries, where input data may be easily accessible because of poor protective measures.
Dr Truby suggested that, to prevent this, there should be a risk assessment at each stage of AI development. Moreover, AI software should be designed to become inaccessible when there is a threat of it being hacked. Such restrictions can minimise the risk of hackers gaining access to the software.
Then, Dr Truby took the example of SDG 8, a goal that seeks to increase public access to financial services. AI is regularly used in financial institutions to make banking simpler and more efficient. But while learning, AI might inadvertently develop certain biases, such as reducing financial opportunities for certain minorities.
Dr Truby suggested that, to avoid such biases, there is a need for transparency in AI-driven processes. Human review and intervention at each step can ensure that such discrimination does not go unnoticed. Moreover, it is necessary to train software developers to recognise the harmful implications of biases, so that they can be mitigated more efficiently.
Finally, Dr Truby explained how AI is a threat to SDG 10, a goal that focuses on equal opportunity.
He explained that AI can be used by big firms to generate employment opportunities in developing countries, and that this might threaten smaller businesses and local companies. However, if designed with sustainable development in mind, AI can create better job opportunities and increase productivity by automating labour-intensive tasks.
AI is a powerful technology that needs to be used carefully and efficiently. Although Dr Truby is optimistic about the future implications of AI, he believes that developers and legislators should exercise caution through effective governance.
He said, “The risks of AI to society and the possible detriments to sustainable development can be severe if not managed correctly. On the flip side, regulating AI can be immensely beneficial to development, leading to people being more productive and more satisfied with their employment and opportunities.”
Artificial Intelligence (AI) has generated considerable interest over the past few decades, owing to its promising applications across a wide range of fields. But it has also sparked an ongoing debate on whether the risks of using AI outweigh its benefits. The biggest concern with AI is a lack of governance, which gives large companies, popularly called “Big Tech”, unlimited access to private data.
Multiple scandals in the recent past have confirmed these threats -- such as the infamous Cambridge Analytica scandal of 2018, a major privacy breach in which confidential Facebook user information was leaked to a data-mining company.
Moreover, although AI should be developed in a socially responsible way, governments often do not impose strict legislation on AI development, which may be detrimental -- rather than beneficial -- to society.
13/08/2020