As one of the most powerful technologies ever developed, artificial intelligence (AI) is already influencing human life in multiple ways and promises to do so even more in the future. AI is now used in a variety of business communication applications, from message testing to employee recruiting and evaluation.
Although many of these developments are positive, AI shares the two-sided nature of every major technology: The power that enables it to be a positive force can also give it the potential to become a negative force. Moreover, even with good intentions, it is impossible to foresee and control all the consequences that AI could unleash.
Two issues of particular concern from an ethical perspective are embedded biases and a lack of transparency and accountability.
Human Biases Embedded in AI Systems
Like all human creations, AI reflects the intentions and beliefs of its creators—sometimes consciously and sometimes subconsciously. Simplifying greatly, AI systems incorporate algorithms, or instructions, and data that those instructions operate upon. If either the algorithms or the data reflect human biases, the AI system will likely exhibit those same biases.
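To make that mechanism concrete, here is a minimal sketch in Python. The data and the "model" are both invented (two-feature synthetic "faces" and a deliberately simple nearest-neighbor classifier, not any real system); the point is only to show how an unbalanced training set, by itself, can produce unbalanced error rates:

```python
# A toy illustration, not a real recognition system: both the data and the
# "model" are invented to show how training-set imbalance skews error rates.
import random

random.seed(42)

def make_face(group):
    # Each synthetic "face" is two numeric features; the two groups occupy
    # overlapping but distinct regions of feature space.
    base = 0.0 if group == "A" else 1.0
    return (base + random.gauss(0, 0.5), base + random.gauss(0, 0.5))

# Training set mirrors an unbalanced photo collection: 95% group A, 5% group B.
train = [(make_face("A"), "A") for _ in range(950)] + \
        [(make_face("B"), "B") for _ in range(50)]

def classify(face):
    # Nearest-neighbor rule: label a new face like its closest training example.
    nearest = min(train, key=lambda ex: (ex[0][0] - face[0]) ** 2
                                        + (ex[0][1] - face[1]) ** 2)
    return nearest[1]

# Evaluate on balanced test sets: the underrepresented group fares worse.
for group in ("A", "B"):
    test = [make_face(group) for _ in range(500)]
    errors = sum(classify(face) != group for face in test)
    print(f"group {group}: error rate {errors / len(test):.1%}")
```

Nothing in the algorithm singles out group B; the skew comes entirely from the data. This is why curating representative training sets matters as much as writing unbiased instructions.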
For instance, facial recognition systems, which are increasingly being used for security and identification purposes, are “trained” using large collections of photographs. What they learn depends to a large degree on the photos in those collections. When African American AI researcher Joy Buolamwini discovered that some of the most widely used facial recognition systems had much higher error rates on female and nonwhite faces, she traced the problem to the photo sets they were trained on, which were composed mostly of white male faces. The only way she could get some of the systems to recognize her face as a face at all was to wear a white mask.
The developers of these systems are making improvements, but the fact that the problems existed in the first place could reflect a lack of diversity in AI research. As Buolamwini put it, “You can’t have ethical AI that’s not inclusive. And whoever is creating the technology is setting the standards.”
Another area in which AI systems can exhibit bias is language processing, because they learn from human language usage, which can carry patterns of bias ranging from overt to deeply buried. For instance, in a test where employers received otherwise identical résumés, some displaying a European American name and others an African American name, the résumés with European American names drew 50 percent more interview invitations. If AI systems take on biased behaviors from language usage, their ability to automate decision-making at lightning speed can propagate biases throughout business and society as a whole.
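A sketch of how such an audit works in practice follows; the counts are hypothetical, chosen only so the arithmetic reproduces the 50 percent gap the test reported:

```python
# Audit-style fairness test: send matched résumés that differ only in the
# name, then compare callback rates. All counts below are hypothetical.
from collections import Counter

outcomes = ([("european_american", True)] * 150
            + [("european_american", False)] * 850
            + [("african_american", True)] * 100
            + [("african_american", False)] * 900)

sent = Counter(group for group, _ in outcomes)
called = Counter(group for group, invited in outcomes if invited)

rates = {group: called[group] / sent[group] for group in sent}
for group, rate in rates.items():
    print(f"{group}: callback rate {rate:.1%}")

# 0.15 / 0.10 - 1 = 0.50, i.e., 50 percent more interview invitations.
gap = rates["european_american"] / rates["african_american"] - 1
print(f"relative gap: {gap:.0%}")
```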
Other areas where AI systems can potentially exhibit bias include risk-assessment systems that purport to predict an individual’s likelihood of committing a crime and automated applicant-evaluation systems used to make lending and hiring decisions. However, these automated approaches have the potential to be less biased than human decision makers if they are programmed to focus on objective factors. In an important sense, we don’t want AI that can think like humans; we want AI that can think better than humans do.
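One way to read “programmed to focus on objective factors” is as a whitelist: the model is only ever shown fields that have been vetted as job-relevant. A minimal sketch, with hypothetical field names:

```python
# Hypothetical field names; deciding what counts as "objective" still
# requires human judgment and domain review.
OBJECTIVE_FIELDS = {"years_experience", "certifications", "skills_test_score"}

def screenable_view(applicant: dict) -> dict:
    # Keep only whitelisted fields; everything else never reaches the model.
    return {k: v for k, v in applicant.items() if k in OBJECTIVE_FIELDS}

applicant = {"name": "J. Doe", "zip_code": "60601",
             "years_experience": 6, "skills_test_score": 88}
print(screenable_view(applicant))
# {'years_experience': 6, 'skills_test_score': 88}
```

Whitelisting is a starting point, not a guarantee: even vetted fields can correlate with protected attributes, so audits like the résumé test above remain necessary.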
Lack of Transparency and Accountability
One of the most unnerving aspects of some advanced decision-making systems is the inability of even their creators—much less the general public—to understand why the systems make some of the decisions they do. For example, an AI system called Deep Patient is uncannily effective at predicting diseases by studying patients’ medical data. In some instances, doctors don’t know how it reached its decisions, and the system can’t tell them, either.
This lack of insight has troubling implications for law enforcement, medicine, hiring, and just about any other field where AI might be used. For instance, if a risk-assessment system says that a prisoner is likely to reoffend and therefore shouldn’t be paroled, should the prisoner’s lawyers be able to cross-examine the AI? What if even the AI system can’t explain how it reached that decision?
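To see what is being asked for, contrast an opaque system with a transparent one. The point-based scorer below uses invented, simplified rules (not any real risk-assessment instrument) and can itemize exactly why it produced its score; a deep-learning system like Deep Patient offers no equivalent list of reasons:

```python
# Hypothetical, simplified scoring rules -- shown only to illustrate what
# an "explainable" decision looks like, not to endorse these factors.
def transparent_risk_score(record: dict):
    score, reasons = 0, []
    if record.get("prior_offenses", 0) >= 3:
        score += 2
        reasons.append("3+ prior offenses (+2)")
    if record.get("age_at_first_offense", 99) < 20:
        score += 1
        reasons.append("first offense before age 20 (+1)")
    return score, reasons  # every point in the score is traceable to a rule

score, reasons = transparent_risk_score(
    {"prior_offenses": 4, "age_at_first_offense": 18})
print(score, reasons)
# 3 ['3+ prior offenses (+2)', 'first offense before age 20 (+1)']
```

A lawyer can cross-examine that list of reasons; there is nothing comparable to cross-examine in a model whose “reasons” are millions of learned weights.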
Efforts to Make AI a Force for Good
AI unquestionably has the potential to benefit humankind in many ways, but only if it is directed toward beneficial applications and applied in ethical ways. How can society make sure that the decisions made and the actions taken by AI systems reflect the values and priorities of the people who are affected? How can we ensure that people retain individual dignity and autonomy even as intelligent systems take over many tasks and decisions? And how can we make sure that the benefits of AI aren’t limited to those who have access to the science and technology behind it? For example, a high percentage of the available AI talent is currently concentrated in a handful of huge tech companies that have the money necessary to buy up promising AI start-ups. While this benefits Google, Amazon, and Facebook in their business pursuits, potential applications in agriculture, medicine, and other fields might lag behind for want of talent.
Recognizing how important it is to get out in front of these questions before the technology outpaces our ability to control it, a number of organizations are wrestling with these issues. One of the largest is the Partnership on AI, whose membership includes many of the major corporate players in AI and dozens of smaller companies, research centers, and advocacy organizations. Its areas of focus include ensuring the integrity of safety-critical AI in transportation and health care; making AI fair, transparent, and accountable; minimizing the disruptive effect of AI on the workforce; and collaborating with a wide range of organizations to maximize the social benefits of AI.
Individual companies are also helping in significant ways. Microsoft, for instance, is directing millions of dollars and some of its considerable AI talent to AI for Earth, a program that uses AI to improve outcomes in agriculture, water resources, education, and other important areas.
The spread of AI throughout business highlights the importance of ethical awareness and ethical decision-making. Only by building ethical principles into these systems can we expect them to generate ethically acceptable outputs.
Adapted from Courtland L. Bovée and John V. Thill, Business in Action, 9th ed. (Pearson, 2020), 110–112.