Machine learning and AI have become two of the most disruptive technological developments of our time. With their capabilities increasingly amplified by the Big Data revolution, companies are scrambling to extract value from them for their own business processes and offerings. Machine learning applications are being integrated into business models across the board, from IBM's Watson rummaging through troves of data to make highly accurate treatment recommendations for certain types of cancer, to financial institutions such as Goldman Sachs applying big data and advanced quantitative algorithms to improve the performance of their trading desks. As companies seek to reap the benefits of machine learning, however, the dangers of algorithmic bias and its often overlooked impacts can trigger costly errors and, left unchecked, pull projects and organizations in entirely wrong directions.

If machine learning–based decision making is to be trusted, it needs to be free of bias, and a machine will be biased if the data we train it with is biased. Consider ProPublica's "Machine Bias" essay, which examined software used across the United States to predict future criminals and found that its risk assessment algorithms contained racial bias. When an 18-year-old black woman was caught stealing a bike and scooter, she was arrested and charged with burglary. While she was being booked into jail, a computer program produced a score predicting the likelihood that she would commit a future crime. The woman, who had a few juvenile misdemeanors, was rated high risk, a score that would later inform sentencing decisions. In contrast, a 41-year-old white man arrested for shoplifting $80 worth of goods from Home Depot, despite a previous armed robbery conviction for which he had served five years in prison, was rated low risk. The source data on which the model was trained reflected higher arrest and conviction rates for black Americans, so a risk assessment that relied on this racially compromised criminal history unfairly rated the black defendant as riskier than the white defendant.
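ProPublica's core finding was a gap in error rates: black defendants who did not reoffend were labeled high risk far more often than white defendants who did not reoffend. A minimal sketch of that kind of audit, written in Python with fabricated illustrative numbers (the column names and figures here are hypothetical, not ProPublica's data), might look like this:

```python
import pandas as pd

# Hypothetical audit table: each row is a defendant, with the model's risk
# label and whether the person actually reoffended within two years.
df = pd.DataFrame({
    "group":      ["black", "black", "black", "white", "white", "white"] * 50,
    "high_risk":  [1, 1, 0, 0, 0, 1] * 50,
    "reoffended": [0, 1, 0, 0, 1, 1] * 50,
})

# False positive rate per group: the share of people who did NOT reoffend
# but were still labeled high risk. A large gap between groups is exactly
# the kind of disparity ProPublica reported.
non_reoffenders = df[df["reoffended"] == 0]
print(non_reoffenders.groupby("group")["high_risk"].mean())
```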

While that is just one example of how bias in the criminal justice system can increase the likelihood of unfair and incorrect predictions, combinations of cultural, educational, gender, racial, and other biases have surfaced in domains as varied as hiring, medical diagnosis, object description, and grading, to name a few. On LinkedIn, for instance, high-paying job listings were found to be displayed less frequently to women than to men. The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas such as medicine and law, and as more people without deep technical understanding are tasked with deploying it.

As a result, it will become increasingly important that companies be transparent about the training data they use and that they consistently look for hidden biases within it. To root out potential machine learning bias, the first step is to question what preconceptions exist within an organization's processes and how those biases can manifest themselves in data. For instance, a credit card company that builds a loan default model on data that includes ZIP codes and first names may be feeding in features that are highly correlated with race or gender, skewing the results.
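One practical way to surface such proxy features is to measure the statistical association between each candidate feature and a protected attribute before training. The sketch below uses Cramér's V, a standard association measure for categorical variables; the loan records, column names, and values are all hypothetical:

```python
import numpy as np
import pandas as pd

def cramers_v(x: pd.Series, y: pd.Series) -> float:
    """Cramér's V: association strength (0 to 1) between two categorical columns."""
    table = pd.crosstab(x, y).to_numpy()
    n = table.sum()
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row @ col / n                      # counts expected under independence
    chi2 = ((table - expected) ** 2 / expected).sum()
    k = min(table.shape) - 1
    return float(np.sqrt(chi2 / (n * k))) if k else 0.0

# Hypothetical loan data: does ZIP code act as a proxy for race?
loans = pd.DataFrame({
    "zip_code": ["10001", "10001", "60601", "60601", "94110", "94110"] * 40,
    "race":     ["white", "white", "black", "black", "white", "black"] * 40,
})
print(cramers_v(loans["zip_code"], loans["race"]))
```

A value near 0 suggests little association; a value approaching 1 indicates the feature could stand in for the protected attribute and deserves scrutiny before it goes into a model.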

Questioning whether the data being used for prediction is directly relevant to the task at hand is paramount, so that data carrying bias can be removed from the model. Exercising healthy skepticism about the data and not taking data sets at face value are further steps toward reducing the effect of machine learning bias. Furthermore, to save time, energy, and resources, companies should take proactive measures to avoid bias in the first place: finding comprehensive data, experimenting with different data sets and metrics, increasing representation in the technical workforce, and performing external validity testing and audits, as sketched below.
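One simple form of such an audit is to keep a protected attribute out of the model's inputs but use it afterward to break a chosen metric down by group. A minimal sketch with scikit-learn on synthetic data (the feature set, group labels, and choice of accuracy as the metric are assumptions made purely for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: two features, a binary label, and a group attribute that
# is NOT a model input but IS used for the audit afterward.
X = rng.normal(size=(1000, 2))
group = rng.integers(0, 2, size=1000)          # e.g., two demographic groups
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

# Audit: does the chosen metric diverge between groups?
for g in (0, 1):
    mask = g_te == g
    print(f"group {g}: accuracy = {(pred[mask] == y_te[mask]).mean():.2f}")
```

The same breakdown can be run for precision, recall, or false positive rate, whichever metric the application's potential harms make most relevant.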

Lastly, a machine learning model requires oversight even after it has been trained. The environment in which a model operates is constantly changing, and data scientists need to retrain their models on new data periodically to ensure they adapt. The technical capabilities of machine learning and AI will continue to offer great business value, and as big data technology and cloud computing capabilities improve, their use in models will only grow. With careful planning, healthy skepticism, and consistent oversight, companies can reduce the risk that biased models undermine their decision making.
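Detecting when the environment has shifted enough to warrant retraining can itself be automated. A minimal monitoring sketch, assuming SciPy is available and using a Kolmogorov–Smirnov test on a single feature (the data is synthetic and the 0.01 threshold is an arbitrary illustration, not a recommended standard):

```python
import numpy as np
from scipy.stats import ks_2samp

# Compare a feature's distribution at training time against recent
# production data; a significant shift suggests it is time to retrain
# (and to re-audit for bias on the new data).
train_feature = np.random.default_rng(1).normal(loc=0.0, size=5000)
live_feature = np.random.default_rng(2).normal(loc=0.4, size=5000)  # drifted

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:
    print(f"Drift detected (KS statistic {stat:.3f}); schedule retraining.")
```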

References

  1. https://www.techrepublic.com/article/bias-in-machine-learning-and-how-to-stop-it/
  2. https://sloanreview.mit.edu/article/the-risk-of-machine-learning-bias-and-how-to-prevent-it/