CEOs often live by the numbers—profit, earnings before interest and taxes, shareholder returns. These data often serve as hard evidence of CEO success or failure, but they’re certainly not the only measures.
Among the softer, but equally important, success factors: making sound decisions that not only lead to the creation of value but also “do no harm.”
Artificial intelligence (AI) is quickly becoming a new tool in the CEO tool belt for driving revenues and profitability. But it has also become clear that deploying AI requires careful management to prevent unintentional but significant damage—not only to brand reputation but, more important, to workers, individuals, and society as a whole.
Legions of businesses, governments, and nonprofits are starting to cash in on the value AI can deliver. Between 2017 and 2018, McKinsey research found that the percentage of companies embedding at least one AI capability in their business processes more than doubled, and nearly all companies using AI reported achieving some level of value.
Not surprisingly, though, as AI supercharges business and society, CEOs are under the spotlight to ensure their company’s responsible use of AI systems beyond complying with the spirit and letter of applicable laws. Ethical debates are well underway about what’s “right” and “wrong” when it comes to high-stakes AI applications such as autonomous weapons and surveillance systems. And there’s an outpouring of concern and skepticism regarding how we can imbue AI systems with human ethical judgment, when moral values frequently vary by culture and can be difficult to code in software.
While these big moral questions touch a select number of organizations, nearly all companies must grapple with another stratum of ethical considerations, because even seemingly innocuous uses of AI can have grave implications. Numerous instances of AI bias, discrimination, and privacy violations have already littered the news, leaving leaders rightly concerned about how to ensure that nothing bad happens as they deploy their AI systems.
The best solution is almost certainly not to avoid the use of AI altogether—the value at stake can be too significant, and there are advantages to being early to the AI game. Organizations can instead ensure the responsible building and application of AI by taking care to confirm that AI outputs are fair, that new levels of personalization do not translate into discrimination, that data acquisition and use do not occur at the expense of consumer privacy, and that their organizations balance system performance with transparency into how AI systems make their predictions.
It may seem logical to delegate these concerns to data-science leaders and teams, since they are the experts when it comes to understanding how AI works. However, we are finding through our work that the CEO’s role is vital to the consistent delivery of responsible AI systems and that the CEO needs to have at least a strong working knowledge of AI development to ensure he or she is asking the right questions to prevent potential ethical issues. In this article, we’ll provide this knowledge and a pragmatic approach for CEOs to ensure their teams are building AI that the organization can be proud of.
By Roger Burkhardt, Nicolas Hohn, and Chris Wigley