Artificial intelligence (AI) has recently found its way into our lives through modern devices and internet services. Many smartphones use biometric technology, such as fingerprint and facial recognition, to let us log into our devices quickly and easily. Algorithms have also made it easier for people to obtain loans and for companies to hire employees faster and more efficiently.
While AI has made our lives easier, it is also rife with issues. For starters, bias and discrimination in AI are growing concerns. AI, designed to automate processes and reduce decision fatigue in humans, runs the risk of perpetuating and amplifying existing societal biases and discrimination.
In its explainer on the Artificial Intelligence Bill of Rights in the U.S., cybersecurity company and virtual private network provider ExpressVPN noted that while bias in AI is not deliberate, it is very much a real problem.
Before we can understand how bias and discrimination in AI occur, it is worth understanding how AI systems are created in the first place. AI systems are generally built using machine learning, natural language processing, computer vision, and robotics. Because humans design these systems, they are likely to carry their creators' biases and preferences into the finished product.
Before an AI system can work, its creators must gather and process large amounts of data to teach the system what to do, and this is where biases can creep in. For example, if a facial recognition system is trained on data that mainly includes images of lighter-skinned individuals, the system may have difficulty recognizing individuals with darker skin tones. These biases can lead to racial profiling and discrimination.
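To make the mechanism concrete, here is a minimal sketch in Python, using entirely synthetic data rather than any real facial recognition system, of how a training set dominated by one group can yield unequal error rates. The groups, features, and numbers are all illustrative assumptions.

```python
# Minimal sketch: a classifier trained on data dominated by one group
# performs worse on the under-represented group. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate synthetic 2-D features whose relationship to the label
    differs between demographic groups (controlled by `shift`)."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > shift).astype(int)
    return X, y

# The majority group supplies 90% of the training data.
X_major, y_major = make_group(9000, shift=0.0)
X_minor, y_minor = make_group(1000, shift=1.5)
X = np.vstack([X_major, X_minor])
y = np.concatenate([y_major, y_minor])

model = LogisticRegression().fit(X, y)

# Evaluate on fresh samples from each group: the model fits the
# majority's pattern and makes more errors on the minority group.
for name, shift in [("majority", 0.0), ("minority", 1.5)]:
    X_test, y_test = make_group(2000, shift)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name} group accuracy: {acc:.2f}")
```

With these made-up numbers, the single model learns the majority group's pattern almost perfectly while scoring noticeably worse on the minority group, mirroring the facial recognition failure mode described above.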
As Physics World reports, biased facial recognition systems could flag individuals with darker skin tones at airports and train stations as suspicious, even when they are doing nothing wrong.
Biased AI systems might deny fundamental rights and opportunities, such as employment, housing, and education, to specific groups of people. Suppose a machine learning model predicts which job candidates best fit a particular role, and the model is biased against certain groups. In that case, those individuals may be unfairly excluded from job opportunities. Similarly, if an AI system is used to determine which individuals are eligible for loans or credit, a biased model could result in unfair lending practices that discriminate against certain applicants. At the same time, a Harvard Business Review study suggests that AI can also make bank lending fairer.
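One common way such exclusion is quantified is the "four-fifths rule" used in U.S. employment guidance: if one group's selection rate falls below 80% of the highest group's rate, disparate impact is suspected. The sketch below applies that check to made-up screening decisions; the group names and numbers are purely illustrative.

```python
# Hypothetical disparate-impact check on a screening model's decisions
# (1 = candidate advanced, 0 = rejected). All figures are made up.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6 of 8 advanced
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2 of 8 advanced
}

# Selection rate per group, compared against the highest group's rate.
rates = {group: sum(d) / len(d) for group, d in decisions.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    verdict = "potential disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {verdict}")
```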
To prevent biased AI systems from negatively impacting people's lives, AI creators will have to build systems on solid ethical frameworks and consider how representative their training data is when designing a system. Having an AI system audited by impartial third parties can also help AI creators design fairer and more inclusive systems, since an outside auditor can highlight issues a company may have otherwise failed to see.
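As one example of what such an audit can involve, the sketch below treats the model as a black box and compares false-positive rates across groups, an "equalized odds"-style test; the audit data and group labels here are hypothetical stand-ins.

```python
# Black-box audit sketch: compare per-group false-positive rates,
# i.e. P(predicted positive | actually negative, group).
from collections import defaultdict

def false_positive_rates(y_true, y_pred, groups):
    """Return each group's false-positive rate on an audit set."""
    false_pos = defaultdict(int)
    negatives = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 0:
            negatives[group] += 1
            false_pos[group] += int(pred == 1)
    return {g: false_pos[g] / negatives[g] for g in negatives}

# Hypothetical audit set: ground truth, the model's predictions, groups.
y_true = [0, 0, 0, 0, 1, 1, 0, 0, 0, 0]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1, 0, 1]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

print(false_positive_rates(y_true, y_pred, groups))
# A large gap between groups (here 0.25 vs 0.75) is a red flag.
```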
Ultimately, bias and discrimination remain serious problems in AI and the wider tech industry, and fixing them will take sustained work and commitment. Addressing the issue means prioritizing diversity and inclusivity in the development of AI systems and conducting thorough audits and evaluations to identify and mitigate biases in the data, algorithms, and deployment processes.