April 08, 2024

From Code to Conscience: Nurturing Ethics in AI

The concept of AI ethics has been discussed for decades. Isaac Asimov introduced the Three Laws of Robotics in his 1942 short story “Runaround”. Before that, AI ethics was purely fiction. Today, however, it is the need of the hour, as AI affects nearly nine out of ten businesses.

Before discussing the ethical issues raised by AI, let us first discuss what AI ethics actually means and who its stakeholders are.

Artificial Intelligence and Ethics

In layman’s terms, artificial intelligence is the process of building intelligence into machines with the objective of simulating human intelligence and decision-making. Initially it seemed purely helpful, but over time it has also introduced risks to human safety. We can therefore say that as AI advances, ethical issues advance with it.

AI ethics is basically a set of rules, guidelines and principles that stakeholders should follow while developing or using artificial intelligence. Every organization working on AI should establish “AI ethics policies” based on these guidelines and train its workforce to develop systems according to those policies. This not only minimizes the risks involved in developing AI but also works towards its actual objective: improving human life.

Who are the stakeholders?

Developing ethical principles for responsible AI requires collaboration across many fields. Government agencies, academic researchers, international bodies such as the UN, NGOs and private companies can all depute representatives to discuss and develop AI ethics. Together, these groups constitute the stakeholders of AI ethics. It is their fundamental responsibility to examine how artificially intelligent machines and humans can coexist in harmony.

Key Principles of AI Ethics

There are five key principles to consider when developing a strong AI ethics policy: transparency, accountability, impartiality, reliability, and security and privacy.

Transparency: AI needs to be transparent in terms of its algorithms and decisions. An ordinary person should be able to understand how an algorithm works and why AI made a particular decision. For example, suppose a person is denied a loan by a bank’s online system. The system should be transparent enough that the person knows why the algorithm denied the loan and what they can do to get it sanctioned in the future.

Accountability: As decisions are made by AI algorithms, the question arises of who should be held responsible and accountable when those decisions are wrong. The duty lies with the people and teams developing AI systems. They should ensure that the algorithms are built properly and take responsibility for monitoring that high-quality data is fed into the system. In case of any ambiguity, they should be held accountable.

Impartiality: AI should not be biased, and all human beings should be treated equally by it. Unbiased, high-quality data should be used to train AI systems so that their decisions are not biased at any stage of development.
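One way teams check for impartiality in practice is a demographic parity test: comparing the rate of favourable decisions between groups. The sketch below is illustrative only; the toy decision data and the 0.8 “four-fifths” threshold are assumptions for the example, not values taken from any system described in this article.

```python
# A minimal sketch of a demographic parity check between two groups.
# Decisions are encoded as 1 (favourable, e.g. approved) or 0 (unfavourable).

def selection_rate(decisions):
    """Fraction of favourable decisions in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    low, high = min(rate_a, rate_b), max(rate_a, rate_b)
    return low / high if high > 0 else 1.0

# Hypothetical loan decisions for two demographic groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # selection rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 0, 1]   # selection rate 0.375

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
# A common rule of thumb flags ratios below 0.8 for review.
print("Potential bias - review needed" if ratio < 0.8 else "Within threshold")
```

Here the ratio is 0.50, well below the 0.8 rule of thumb, so the toy system would be flagged for review. Real audits use more sophisticated metrics, but the principle is the same: measure outcomes per group, not just overall accuracy.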

Reliability: AI systems should be reliable; their results should match the outcomes for which the system was designed. This is especially important when AI is used in healthcare and financial services.

Security and Privacy: Protecting sensitive data should be the topmost concern while developing AI systems. There should be clear security policies governing data security and privacy.

Real-Life Ethical Concerns of AI

Case Study 1: Healthcare

Many healthcare providers use AI algorithms to help make judgments about patient care, including which patients need special attention. Researchers Obermeyer et al. found evidence of racial bias in one such algorithm: black patients were assigned lower risk scores than white patients despite being sicker. As a result, white patients were chosen for additional care because of their higher risk scores.

Case Study 2: Banking and Finance

The algorithm behind the Apple Card, Apple’s credit card, was found to be biased against women. The credit limits it offered varied dramatically between genders, with men being granted larger limits than women. With conventional “black box” AI systems, it is difficult for a bank to examine and determine the source of such bias.

Case Study 3: Hiring Process

Amazon attempted to use an AI hiring and recruiting tool that could select the top five resumes from thousands of submissions, a setup every company desires.
However, in 2015 it was discovered that the system’s evaluation of applicants for technical jobs, such as software development, was biased against women.
The cause of this bias was the data used to train the system. It had been trained on applications submitted over the previous ten years so that it could recognize resume patterns, and the majority of those resumes came from men, reflecting a male-dominated IT sector.


From the above case studies, we can conclude that ethical AI hinges on high-quality data. Ultimately, putting AI ethics into practice is all about data: biased, poor-quality data will produce poor outcomes. According to a report published in Forbes, misuse of AI by unethical actors makes it a major challenge to achieve the expected levels of security, reliability and accountability in ethical AI. Ethical issues in any system can only be recognized if its data and algorithms are understood thoroughly. Explainable AI is the core component for understanding both: it transforms a black-box AI model into a white-box model, bringing transparency to AI systems. A unified approach across the entire lifespan of an AI system is needed to protect us from its harmful implications.
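To make the white-box idea concrete, the sketch below uses a simple linear scoring model whose decision can be decomposed into per-feature contributions, so a denied applicant can be told exactly which factors drove the outcome. The feature names, weights, threshold and applicant data are all illustrative assumptions, not a real bank’s model.

```python
# A minimal sketch of an interpretable (white-box) loan-scoring model.
# Each feature contributes weight * value to the score, so the decision
# can be fully explained by listing those contributions.

WEIGHTS = {"income": 0.5, "credit_history": 0.3, "existing_debt": -0.4}
THRESHOLD = 0.6  # minimum score for approval (illustrative)

def explain_decision(applicant):
    """Return the decision, the total score and each feature's contribution."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    decision = "approved" if score >= THRESHOLD else "denied"
    return decision, score, contributions

# Hypothetical applicant with features scaled to the 0-1 range.
applicant = {"income": 0.8, "credit_history": 0.9, "existing_debt": 0.7}
decision, score, contributions = explain_decision(applicant)

print(f"Decision: {decision} (score {score:.2f})")
for feature, c in sorted(contributions.items(), key=lambda item: item[1]):
    print(f"  {feature}: {c:+.2f}")
```

For this applicant the score is 0.39, below the 0.6 threshold, and the breakdown shows that existing debt is the factor pulling the score down. This is exactly the kind of explanation the transparency principle asks for, and the kind a black-box model cannot provide directly.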

As one of the stakeholders in AI ethics, we at CU Online follow AI ethics in all our research and publications. Workshops are conducted to make faculty and students aware of how to use AI while adhering to AI ethics. Thus, we strive to fulfil our social responsibilities while progressing towards an AI-driven world.

Author: Shuchi Sharma, Assistant Professor
            Computer Applications (CDOE)