New artificial intelligence regulations are imminent, but organizations can get a head start by implementing responsible artificial intelligence (RAI) today. As companies deploy transformative AI tools, they must ensure that they introduce these solutions in a responsible way to mitigate potential risks to their business and to protect consumers. Neglecting to address RAI concerns exposes businesses to reputational damage, legal action, and a loss of consumer trust that is hard to regain. But RAI should not be viewed purely as a defensive gambit—it is also a source of positive value for organizations. The business potential of RAI is substantial: it can lead to better products, increased profitability, improved recruitment and retention of staff, sharper decision making, and a more durable culture of innovation.
Despite the promise of RAI and the risks of inaction, many organizations have struggled to put RAI principles into practice in a coherent and comprehensive way. And there is a gap between where organizations think they are in their RAI journey and where they really are: 35% of organizations believe they have fully implemented an RAI program, but, in fact, only 16% have reached maturity. (See Exhibit 1.) With the imminent arrival of the European Union’s AI Act, one of the first broad-ranging regulatory frameworks on AI, the failure to successfully implement RAI will have more serious implications.1
It is not just organizations based in the EU that need to pay attention. The regulation will apply to any provider that develops or deploys AI systems in the EU or whose AI systems produce outputs that are used in the EU’s jurisdiction, so it will affect many organizations based elsewhere. Moreover, the regulation, which is expected to come into force in 2023, is likely to bear similarities to rules currently being drawn up by other government authorities throughout the world.2
Given the heightened focus that new regulations will bring, as well as the potential financial and reputational damage resulting from noncompliance, organizations urgently need to adopt measures that enable them to comply with the requirements of the emerging EU regulation. A comprehensive RAI program, based on BCG’s Responsible AI Leader Blueprint, will allow them to act in accordance with and adapt to the proposed EU AI Act and the other regulations that will inevitably follow (such as the Algorithmic Accountability Act of 2022 in the US).3 An RAI program will also position them to mitigate nonregulatory risks and capture the associated benefits from AI. BCG’s pragmatic and comprehensive framework comprises a number of integrated components: RAI strategy, governance, processes, technology and tools, and culture.
Putting in place a comprehensive program to implement and operationalize RAI throughout an organization takes time, but significant progress can be made with a few basic steps to secure early wins and build confidence in the organization. To position themselves to begin building this framework, organizations should take four key actions: (1) establish responsible AI as a strategic priority with senior-leadership support, (2) set up and empower RAI leadership, (3) foster RAI awareness and culture throughout the organization, and (4) conduct an AI risk assessment.
By Tatiana Theoto, Sabrina Küspert, Katharina Hefter, Steven Mills, Jeanne Kwong Bickford, Paras Malik, Abhishek Gupta, Hanjo Seibert, Sean Singer, Grigor Acenov, and Tad Roselund
Source: bcg.com