Are you overestimating your responsible AI maturity?

April 17, 2021
Sustainability

A new BCG survey of large organizations found that almost half of those that believe they have a mature implementation of a responsible artificial intelligence (RAI) program are, in reality, lagging behind. Even organizations that reported rolling out AI at scale overestimated their RAI progress: less than half have a fully mature RAI program. This finding is particularly important because an organization cannot achieve true AI at scale without ensuring that it is developing AI systems responsibly.

The Four Stages of RAI Maturity
To assess organizations’ progress in implementing RAI programs—the structures, processes, and tools that help organizations ensure their AI systems work in the service of good while transforming their businesses—we collected and analyzed data from senior executives at more than 1,000 large organizations. (See the sidebar “Our Survey Methodology.”) We then categorized these organizations into four distinct stages of RAI maturity: lagging (14%), developing (34%), advanced (31%), and leading (21%). An organization’s stage reflects its progress toward maturity across seven generally accepted dimensions of RAI. These dimensions include fairness and equity, data and privacy governance, and human plus AI; the last of these ensures that AI systems are designed to empower people, preserve human authority over those systems, and safeguard people’s well-being.

Organizations in the leading stage have reached maturity across all the dimensions: they have defined RAI principles and achieved enterprise-wide adoption of RAI policies and processes. These organizations are clearly making the most of their relationship with AI.

As organizations progress from lagging to leading, each stage is marked by substantial accomplishments, particularly in the areas of fairness and equity as well as human plus AI. This finding is important because organizations’ RAI programs don’t tend to initially focus on these dimensions, and they are the most difficult to address. Accomplishments in these areas are therefore highly indicative of broader maturation in RAI, and they signal that an organization is ready to transition to the next stage of maturity. Meanwhile, organizations consistently focus first on the area of data and privacy governance. This is a logical result, given that regulations and policies often mandate this focus.

Looking across industries and regions, we found that an organization’s region is a better predictor of its maturity than its industry: Europe has the highest average RAI maturity, followed by North America. In contrast, we found few significant differences in maturity across industries, although a higher concentration of RAI leaders can be found in the technology, media, and telecommunications industry and in industrial goods.

Organizations’ Perceptions Often Do Not Match Reality
The survey reveals that many organizations overestimate their RAI progress. We asked the executives how they would define their organization’s progress on its RAI journey: whether it had made no progress (2% of respondents), had defined RAI principles (11%), had partially implemented RAI (52%), or had fully implemented RAI (35%). We then compared each executive’s response with our assessment of the organization’s maturity. Our evaluation was based on respondents’ answers to 21 questions about their implementation across the seven dimensions.
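
To illustrate how such a comparison could work, here is a minimal sketch in Python. It is not BCG’s actual methodology, which is not published: the three-questions-per-dimension split, the 0–3 score scale, the stage cut-offs, the mapping from self-reported progress to a stage, and the four dimension names not given in the article are all assumptions for illustration only.

# Illustrative sketch only: BCG's scoring rubric is not published. The article
# says maturity was assessed from 21 questions across seven dimensions; the
# three-questions-per-dimension split, the 0-3 score scale, the stage cut-offs,
# and the four unnamed dimensions below are hypothetical.

STAGES = ["lagging", "developing", "advanced", "leading"]

# Self-reported progress categories from the survey, mapped (hypothetically)
# to the maturity stage each would imply if the self-assessment were accurate.
SELF_REPORT_TO_STAGE = {
    "no progress": "lagging",
    "defined RAI principles": "developing",
    "partially implemented RAI": "advanced",
    "fully implemented RAI": "leading",
}

def assessed_stage(scores_by_dimension: dict[str, list[int]]) -> str:
    """Aggregate per-question scores (assumed 0-3) into one of the four stages."""
    dimension_means = [sum(s) / len(s) for s in scores_by_dimension.values()]
    overall = sum(dimension_means) / len(dimension_means)
    if overall < 1.0:       # hypothetical cut-offs on the 0-3 scale
        return "lagging"
    if overall < 1.75:
        return "developing"
    if overall < 2.5:
        return "advanced"
    return "leading"

def overestimates(self_reported: str, scores_by_dimension: dict[str, list[int]]) -> bool:
    """True when self-reported progress implies a later stage than the assessment supports."""
    implied = STAGES.index(SELF_REPORT_TO_STAGE[self_reported])
    assessed = STAGES.index(assessed_stage(scores_by_dimension))
    return implied > assessed

# Example: strong data governance but middling fairness and human-plus-AI scores
# leave this organization in the "advanced" stage, so a claim of full
# implementation would be flagged as an overestimate.
example_scores = {
    "fairness and equity": [1, 2, 1],
    "data and privacy governance": [3, 3, 3],
    "human plus AI": [1, 1, 2],
    "transparency and explainability": [2, 2, 2],   # dimension name assumed
    "accountability": [2, 2, 3],                    # dimension name assumed
    "safety and robustness": [2, 3, 2],             # dimension name assumed
    "social and environmental impact": [2, 2, 2],   # dimension name assumed
}
print(overestimates("fully implemented RAI", example_scores))  # True

Under these assumed thresholds, the example organization is assessed as advanced rather than leading, so its claim of full implementation is flagged as an overestimate, mirroring the gap between perception and reality described below.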

The results are surprising. We found that about 55% of all organizations—from laggards to leaders—are less advanced than they believe. Importantly, more than half (54%) of those that believe they have fully implemented RAI programs overestimated their progress. This group, in particular, is concerning. Because these organizations believe they have fully implemented RAI programs, they are unlikely to make further investments, although gaps clearly remain.

We also found that many organizations with advanced AI capabilities are behind in implementing RAI programs. Of the organizations that reported they have developed and implemented AI at scale, less than half have RAI capabilities on a par with that deployment. Achieving AI at scale requires not only building robust technical and human-enabling capabilities but also fully implementing an RAI program. For these organizations, falling short of full maturity across all RAI dimensions means that they have not yet achieved the at-scale AI deployment they believe they have.

RAI Is Much More Than Risk Mitigation
Although C-suite executives and boards of directors are concerned with the organizational risks posed by a lapse in an AI system, we have argued that businesses should not pursue RAI simply to mitigate risk. Instead, organizations should view RAI as an opportunity to strengthen relationships with stakeholders and realize significant business benefits.

It seems that most organizations agree. When asked to select the primary reason for pursuing RAI, more than 40% chose its potential business benefits—more than twice the percentage that selected risk mitigation. Moreover, we found that as organizations’ RAI maturity grows, so does their motivation to capture business benefits through RAI. Simultaneously, the focus on risk mitigation decreases.

Best Practices for Reaching RAI Maturity
RAI leaders consistently have policies and processes, covering all seven RAI dimensions, that are fully deployed across their organizations. At these leading organizations, we found several key markers that are indicative of broader RAI maturity.

  • Both the individuals responsible for AI systems and the business processes that use these systems adhere to their organization’s principles of RAI.
  • The requirements and documentation of AI systems’ design and development are managed according to industry best practices.
  • Biases in historical data are systematically tracked, and mitigating actions are proactively deployed when issues are detected.
  • Security vulnerabilities in AI systems are evaluated and monitored in a rigorous manner.
  • The privacy of users and other people is systematically preserved in accordance with data use agreements.
  • The environmental impact of AI systems is regularly assessed and minimized.
  • All AI systems are designed to foster collaboration between humans and machines while minimizing the risk of adverse impact.

Organizations that do not follow these practices, or have not fully deployed them, are most likely not leading in RAI and should dig more deeply into their RAI efforts. Even those that do follow them should keep examining their efforts to find further opportunities to improve.

By Boston Consulting Group

Source: bcg.com
