Artificial intelligence: Made in the image of its makers
“It is the end of winter for AI. After more than 60 years of research, we are discovering the real-life applications of AI.”
Axelle Lemaire – Global Head at Terra Numerata, a tech hub focused on boosting Europe’s presence in the digital economy – made this declaration at an event in Paris hosted by Advantage Austria and the Global Drucker Forum on the modern implications of artificial intelligence. The statement suggests that the dawn of artificial intelligence is upon us. But does AI even exist in the mainstream today?
It is widely held that true AI means a machine that makes decisions objectively and reasons logically on its own. Until we reach that point, we aren’t really talking about AI. Instead, we’re talking about the data given to machines so they can work through problems and find solutions to the goals we set. In other words, all we really have is machine learning: technology that adapts to the thought processes of its makers. This technology can learn, but it cannot truly think independently – at least not yet, and certainly not in any mainstream application. It is therefore reasonable to assume that a machine’s behavior is largely driven by the data fed into it, data that carries humanity’s subliminal cognitive biases.
Machines do not have opinions or agendas. Their purpose is to interpret data. The problem is that cognitive bias enters machine learning at every stage, from the inception of an algorithm to the way the machine processes data. As Rahaf Harfoush put it, “Humanity’s vision for the world is baked into the AI it creates.” The result is a system that feeds users the content its creators deem relevant. We see this in targeted advertisements – browse a clothing brand online and ads for that brand follow you to other sites – as well as in apps like Apple News and Google News, which serve you the media they think you want when, in reality, they may be giving you the exact opposite.
Cultural adaptation
Researchers have found that AI can pick up racial, gender, and social biases from the wording of written material and the data it is given. A system that has absorbed these biases will then make decisions about those groups accordingly. The system is therefore not an objective decision maker, problem solver, or provider of information. It is made in the image of its programmer.
Let’s take a look at a couple of examples. Joy Buolamwini of the MIT Media Lab released a paper detailing a study she performed to test the facial recognition systems made by IBM, Microsoft, and Megvii. Buolamwini ran a set of 1,270 faces through these systems, and the results were less than favorable: gender identification accuracy varied sharply with skin color. The error rate for lighter-skinned men was 0.8%, while it reached as high as 34.7% for darker-skinned women. We can clearly see that adaptive learning has produced systems that stigmatize individuals based on skin color. This is not just an issue for gender identification but also for law enforcement. After studying 100 police departments, Buolamwini discovered that law enforcement is more inclined to stop African-Americans as suspects and run them through a facial recognition system, which in turn makes computers more likely to suggest African-Americans as criminals.
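To make that kind of audit concrete, here is a minimal Python sketch in the spirit of Buolamwini’s methodology: counting misclassifications per demographic group rather than overall. The records and group labels below are invented for illustration, not drawn from the actual benchmark.

```python
from collections import defaultdict

# Each record: (group, true label, predicted label) -- all values invented.
predictions = [
    ("lighter-skinned male",  "male",   "male"),
    ("lighter-skinned male",  "male",   "male"),
    ("darker-skinned female", "female", "male"),    # a misclassification
    ("darker-skinned female", "female", "female"),
]

errors, totals = defaultdict(int), defaultdict(int)
for group, truth, predicted in predictions:
    totals[group] += 1
    if predicted != truth:
        errors[group] += 1

# Large gaps between per-group error rates are the signal of bias:
# an overall accuracy number would hide them entirely.
for group, total in totals.items():
    print(f"{group}: {errors[group] / total:.1%} error rate ({total} samples)")
```

The point of disaggregating is that a system can look excellent on average while failing badly for a specific group, which is exactly what the study exposed.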
We have also seen bias riddling Google Search results. In the past, searching the term “black girls” returned some very vulgar, crude, and inappropriate results. Google has since fixed that issue, but new ones keep appearing. One user noticed Google Photos labeling his black friends as gorillas – another serious problem that Google has since addressed. Clearly, the people who programmed Google’s search and image recognition tools are either blatantly biased or hold subliminal opinions that surface in the data, which the machine dutifully picks up on. Technology is visibly adopting the opinions and prejudices of what has come before – good and bad.
Educational implications
Such adoption could, of course, be a complicated issue for the rising number of AI-based education platforms. Technology that grades written assignments in place of teachers is currently in development, and bias is a real danger here: a system could grade an assignment poorly or highly based on what it has been programmed to believe about the writer’s profile or the correctness of the content. Yes, this will reduce the workload of human administrators, yet it could harm students by grading their work according to what a programmer hundreds of miles away believes to be correct, rather than what is actually true or relevant to the course.
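As a minimal sketch of that failure mode, consider a toy grader that scores essays by overlap with a hard-coded answer key. The key, essays, and scoring rule are entirely hypothetical, but they show how “correctness” collapses into whatever the key’s author happened to write down.

```python
# A hard-coded notion of "correct", as a programmer might encode it.
answer_key = {"photosynthesis", "chlorophyll", "sunlight", "glucose"}

def grade(essay: str) -> float:
    """Score an essay by its word overlap with the answer key (0.0 to 1.0)."""
    words = set(essay.lower().split())
    return len(words & answer_key) / len(answer_key)

# An accurate answer in unexpected vocabulary is punished, because
# "correct" here means "matches what the key's author wrote down".
print(grade("chlorophyll absorbs sunlight and produces glucose"))  # 0.75
print(grade("plants convert light energy into chemical energy"))   # 0.0
```

Real grading systems are far more sophisticated, but the underlying risk is the same: the standard of correctness is authored by someone, and the student inherits that author’s blind spots.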
Adaptive learning systems pose a similar threat. If a system is programmed to recognize certain personas, it may serve you courses it believes you will find relevant simply because they resemble courses you have already taken. Yet this may not actually help you. Your needs might be the complete opposite, but if you do not match what the machine is programmed to recognize, you could receive an education contrary to what would benefit you most. It is like a nutritionist who recommends the same basic diet to everyone because health professionals have deemed it best: such a diet will not suit everyone, as every individual is different and responds differently to the same foods.
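Here is a minimal sketch of that “more of the same” dynamic, assuming a recommender that ranks courses purely by tag overlap with a learner’s history. All course names and tags are hypothetical.

```python
# Courses the learner has already taken, with hypothetical topic tags.
taken = {"intro-statistics", "data-visualization"}
course_tags = {
    "intro-statistics":    {"math", "statistics"},
    "data-visualization":  {"charts", "statistics"},
    "advanced-statistics": {"math", "statistics"},
    "public-speaking":     {"communication"},
    "creative-writing":    {"writing", "communication"},
}

# The learner's profile is just the union of tags they have seen before.
profile = set().union(*(course_tags[c] for c in taken))

def similarity(course: str) -> int:
    return len(course_tags[course] & profile)

# Rank unseen courses by overlap with the profile.
candidates = [c for c in course_tags if c not in taken]
for course in sorted(candidates, key=similarity, reverse=True):
    print(course, similarity(course))
# "public-speaking" and "creative-writing" score zero, so the learner
# who might benefit most from them never sees them recommended.
```

Nothing in this loop is malicious; the narrowing is a structural consequence of optimizing for similarity to the past.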
The same goes for curriculum creation. Technology already exists that can take a basic course outline and assemble learning plans to fit it. The risk here is significant: a system that has picked up biases from its data may collect content that shares its programmed opinion, ultimately handing students a very specific worldview that is anything but objective.
What can be done?
Currently, there is no definitive solution – and no legislation per se – for the cognitive bias woven into artificial intelligence and machine learning platforms. So far we have only seen bias removed after it has been discovered. There surely needs to be wider oversight and governance of this technology. We need to be more transparent about the data used to develop intelligent machines, sharing it with the organizations that use the system or even with the customers we serve. Performing due diligence to search out and weed out hidden biases is also key. There are some hypotheses on how to reduce cognitive bias; for example, changing the output of algorithms to favor natural numbers over ratios might reduce the misinterpretation of data. Even this, however, is only hypothetical.
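As a minimal illustration of that natural-numbers hypothesis, here is a short Python sketch reporting the same screening statistics as a ratio and as natural frequencies. All the rates are invented for the example; the point is only the difference in framing.

```python
# Hypothetical screening numbers, chosen only to illustrate the framing.
population     = 1000
prevalence     = 0.01  # 1% of people have the condition
sensitivity    = 0.90  # 90% of true cases test positive
false_positive = 0.09  # 9% of healthy people also test positive

sick      = round(population * prevalence)                # 10 people
true_pos  = round(sick * sensitivity)                     # 9 people
false_pos = round((population - sick) * false_positive)   # 89 people

# The same fact, stated two ways:
print(f"As a ratio: {true_pos / (true_pos + false_pos):.1%} of positives are real.")
print(f"As natural numbers: of {population} people, {true_pos + false_pos} test "
      f"positive, and only {true_pos} of them actually have the condition.")
```

“9 out of 98 positives” is the kind of statement people tend to read correctly, whereas a bare “9.2%” invites the base-rate confusion the hypothesis is trying to avoid.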
To prevent bias from infiltrating AI-based systems, we must consider both how we program our systems and the culture of our organizations. We need company cultures that look for objective solutions, significantly reducing the risk of creating stigmatized systems and content that could shape the worldview of our customers – although, for some organizations, that is precisely the plan. In a world full of people with cognitive biases, agendas, and opinions, we must put in the work to build a community of diverse thinkers who can solve complex problems together from multiple angles. Our systems need to provide users with data that reflects the users’ needs rather than the coder’s perspective. Given that everyone is biased in some way, this may be impossible, but we need to try.
A connected ethical issue is job loss. Intelligent technology is expected to replace a great many jobs, but it will only replace functions that are automatable, repeatable, or dependent on analyzing large data sets. Contrary to popular belief, that is a good thing, because, as we have seen, machines are not yet capable of reasoning, judgment, inference, or creativity. They cannot perform the analog functions of the human brain and can only approximate them through the training humanity provides. Once trained, machines can help us perform those very human tasks, but they cannot replace us. For this reason, we need to pay special attention to what we put into our technology in order to safeguard the future of work and education. As technology becomes ever more intelligent, we must become ever more intuitive in how we manage it.