Sector News

Why businesses need explainable AI—and how to deliver it

October 8, 2022
Borderless Future

Businesses increasingly rely on artificial intelligence (AI) systems to make decisions that can significantly affect individual rights, human safety, and critical business operations. But how do these models derive their conclusions? What data do they use? And can we trust the results?

Addressing these questions is the essence of “explainability,” and getting it right is becoming essential. While many companies have begun adopting basic tools to understand how and why AI models render their insights, unlocking the full value of AI requires a comprehensive strategy. Our research finds that companies seeing the biggest bottom-line returns from AI—those that attribute at least 20 percent of EBIT to their use of AI—are more likely than others to follow best practices that enable explainability.1 Further, organizations that establish digital trust among consumers through practices such as making AI explainable are more likely to see their annual revenue and EBIT grow at rates of 10 percent or more.2

Even as explainability gains importance, it is becoming significantly harder. Modeling techniques that today power many AI applications, such as deep learning and neural networks, are inherently more difficult for humans to understand. For all the predictive insights AI can deliver, advanced machine learning engines often remain a black box. The solution isn’t simply finding better ways to convey how a system works; rather, it’s about creating tools and processes that help even deep experts understand the outcome and then explain it to others.

To shed light on these systems and meet the needs of customers, employees, and regulators, organizations need to master the fundamentals of explainability. Gaining that mastery requires establishing a governance framework, putting in place the right practices, and investing in the right set of tools.

What makes explainability challenging
Explainability is the capacity to express why an AI system reached a particular decision, recommendation, or prediction. Developing this capability requires understanding how the AI model operates and the types of data used to train it. That sounds simple enough, but the more sophisticated an AI system becomes, the harder it is to pinpoint exactly how it derived a particular insight. AI engines get “smarter” over time by continually ingesting data, gauging the predictive power of different algorithmic combinations, and updating the resulting model. They do all this at blazing speeds, sometimes delivering outputs within fractions of a second.

Disentangling a first-order insight and explaining how the AI went from A to B might be relatively easy. But as AI engines interpolate and reinterpolate data, the insight audit trail becomes harder to follow.

Complicating matters, different consumers of the AI system’s data have different explainability needs. A bank that uses an AI engine to support credit decisions will need to provide consumers who are denied a loan with a reason for that outcome. Loan officers and AI practitioners might need even more granular information to help them understand the risk factors and weightings used in rendering the decision to ensure the model is tuned optimally. And the risk function or diversity office may need to confirm that the data used in the AI engine are not biased against certain applicants. Regulators and other stakeholders also will have specific needs and interests.
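
To make the banking example concrete, here is a minimal sketch of one common attribution technique: for a linear credit model, each feature’s contribution (its coefficient times the applicant’s standardized value) can be ranked to generate the kind of reason codes a denied applicant might receive. The feature names, data, and model below are hypothetical, not any bank’s actual system, and real deployments typically rely on vetted attribution methods such as SHAP rather than raw coefficients.

```python
# A minimal sketch, assuming a scikit-learn logistic-regression credit model.
# All feature names and data below are hypothetical illustrations; a production
# system would use audited data and a vetted attribution method such as SHAP.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["credit_score", "debt_to_income", "years_employed"]  # hypothetical

# Toy applicants: label 1 = approved, 0 = denied.
X = np.array([[720, 0.25, 8], [580, 0.55, 1], [690, 0.30, 4], [540, 0.60, 0]])
y = np.array([1, 0, 1, 0])

# Standardize so coefficient-times-value contributions are comparable.
mu, sigma = X.mean(axis=0), X.std(axis=0)
model = LogisticRegression().fit((X - mu) / sigma, y)

def reason_codes(applicant):
    """Rank features by how strongly each pushed the score toward denial."""
    contributions = model.coef_[0] * ((applicant - mu) / sigma)
    order = np.argsort(contributions)  # most denial-driving (most negative) first
    return [f"{FEATURES[i]}: {contributions[i]:+.2f}" for i in order]

# Explain a likely denial by listing the features that drove the score down.
print(reason_codes(np.array([560, 0.58, 2])))
```

The same ranked contributions can then serve each audience differently: the top item becomes the plain-language reason given to the applicant, while loan officers and model validators inspect the full list alongside the underlying weights.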

By Liz Grennan, Andreas Kremer, Alex Singla, and Peter Zipparo

Source: mckinsey.com

Related News

November 27, 2022

A green hydrogen bet, GM expects EV profits by 2025 and the world’s population hits 8 billion

Borderless Future

The United Nations estimated that the world’s population hit 8 billion people. That’s just 11 years after the global population hit 7 billion. The U.N. estimates that the rate of growth has started to slow down, and that the population is expected to reach only about 10.4 billion people by the end of the century.

November 19, 2022

COP27: It will take visionary pragmatism to accelerate decarbonization in the downturn

Borderless Future

At a time when they are plotting their downturn strategy, many corporations that set ambitious decarbonization targets are wrestling with what they can now afford to do to accelerate decarbonization and monetize it with customers. Those that get ahead of peers will be the ones that embrace visionary pragmatism and follow through during the downturn.

November 13, 2022

Solar panels must cover large parking lots, rules French Senate

Borderless Future

French senators have approved a requirement that parking lots with spaces for 80 or more cars have at least half of those spaces covered with solar panels. The decision, passed on November 4, still has to gain assent in the Assemblée Nationale, the lower house of the French Parliament.