
Seven very simple principles for designing more ethical AI

August 8, 2019

No matter how powerful, all technology is neutral. Electricity can be designed to kill (the electric chair) or to save lives (powering a home in an inhospitable climate).

The same is true for artificial intelligence (AI), which is an enabling layer of technology much like electricity.

AI systems have already been designed to help or hurt humans. A group at UCSF recently built an algorithm to save lives through improved suicide prevention, while China has deployed facial recognition AI systems to subjugate ethnic minorities and political dissenters. Therefore, it’s impossible to assign valence to AI broadly. It depends entirely on how it’s designed, and to date, that design has too often been careless.

AI blossomed at companies like Google and Facebook, which give their products away free and therefore had to find other ways for their AI to make money. They did this by selling ads. Advertising has long been in the business of manipulating human emotions. Big data and AI merely allowed this to be done far more effectively and insidiously than before.

AI disasters, such as Facebook’s algorithms being co-opted by foreign political actors to influence elections, could and should have been predicted from this careless use of AI. They have highlighted the need for more careful design, a need recognized even by AI pioneers like Stuart Russell (co-author of the standard AI textbook), who now advocates that “standard model AI” be replaced with beneficial AI.

Organizations ranging from the World Economic Forum to Stanford to the New York Times are convening groups of experts to develop design principles for beneficial AI. As a contributor to these initiatives, I believe the following principles are key.

MAKE IT EASY FOR USERS TO UNDERSTAND DATA COLLECTION

Users must know when data is being collected and what it will be used for. Technologists must ensure informed consent around data. Too many platforms, across a whole host of applications, rely on surreptitious data collection or repurpose data that was collected for something else. Initiatives to stop this are cropping up everywhere, such as the Illinois law requiring video hiring platforms to tell candidates that AI may be used to analyze their video recordings and how the resulting data will be used.

GIVE USERS DATA PRIVACY AND OWNERSHIP

Users must own and control their data. This runs counter to the prevailing modus operandi of many tech companies, whose terms of service are designed to exploit user data for the benefit of the company. For example, a tool called FaceApp has collected millions of user photos without disclosing what data is collected or for what purpose. More alarming, the user interface obscures the fact that photos leave the user’s local storage. Users must be empowered, not overpowered, by technology. Users should always know what data is collected, for what purpose, and where it’s collected from.

USE UNBIASED TRAINING DATA

AI must use unbiased data. Any bias in the data used to train algorithms will be multiplied and amplified by AI’s power. AI developers have a responsibility to examine the data they feed into their algorithms and to validate that it does not encode any known bias.
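As a minimal sketch of what that examination can look like (in Python with pandas; the columns gender and hired are hypothetical stand-ins for a protected attribute and a historical label), two quick checks catch common red flags: group representation and historical outcome rates.

```python
# Minimal pre-training data check; column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M", "M", "M"],
    "hired":  [0, 1, 0, 1, 1, 0, 1, 1],
})

# Red flag 1: is any group badly under-represented in the sample?
print(df["gender"].value_counts(normalize=True))

# Red flag 2: do historical outcome rates differ sharply by group?
# A model trained on this label will learn to reproduce the gap.
print(df.groupby("gender")["hired"].mean())
```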

For example, it’s been well established that data gleaned from résumés is biased against women and minority groups, so let’s use other types of data in hiring algorithms. The San Francisco DA’s office and Stanford created a “blind sentencing” AI tool, which removes ethnic info from data used in criminal-justice sentencing. This is just one example of using AI to eliminate, rather than double down on, bias.
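A rough sketch of that “blinding” idea, assuming a pandas DataFrame and a hypothetical list of protected columns (this is not the actual tool’s code), might look like this:

```python
# A sketch of "blind" feature preparation in the spirit of the tool
# described above. PROTECTED is a hypothetical list; real systems must
# also hunt for proxies (e.g., zip code can stand in for race).
import pandas as pd

PROTECTED = ["race", "ethnicity", "gender"]

def blind(df: pd.DataFrame) -> pd.DataFrame:
    """Drop protected attributes before the data reaches the model."""
    return df.drop(columns=[c for c in PROTECTED if c in df.columns])

cases = pd.DataFrame({"ethnicity": ["X", "Y"], "prior_counts": [0, 2]})
print(blind(cases))  # only "prior_counts" remains
```

Note the caveat in the comment: dropping a column does not remove its proxies, which is one reason auditing the output still matters.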

AUDIT ALGORITHMS

It’s not enough to use unbiased data. A statistical quirk known as Simpson’s paradox shows how unbiased inputs can still yield biased results: a pattern visible in the aggregate can reverse within every subgroup. So it is also critical to check your algorithms themselves for bias. Don’t let skeptics misinform you: it is possible to audit an algorithm’s results for unequal outcomes across gender, race, age, or any other axis where discrimination could occur. An external AI audit serves the same purpose as safety-testing a vehicle to confirm it meets safety regulations. If the audit fails, the design flaw causing the failure must be found and removed.
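As an illustration (not the procedure any particular auditor uses), the sketch below computes per-group selection rates on invented data, along with the “four-fifths” disparate-impact ratio from US employment guidelines; the names group, dept, and selected are hypothetical. It also shows Simpson’s paradox at work: group A trails group B overall, yet leads B inside every department.

```python
# A sketch of an outcome audit on invented data; all names hypothetical.
import pandas as pd

rows = (
    [("A", "eng",   1)] * 1 +                            # A in eng: 1/1
    [("B", "eng",   1)] * 7 + [("B", "eng",   0)] * 1 +  # B in eng: 7/8
    [("A", "sales", 1)] * 3 + [("A", "sales", 0)] * 6 +  # A in sales: 3/9
    [("B", "sales", 0)] * 2                              # B in sales: 0/2
)
df = pd.DataFrame(rows, columns=["group", "dept", "selected"])

# Overall selection rate per group, plus the disparate-impact ratio.
# US employment guidelines flag a ratio below 0.8 ("four-fifths rule").
rates = df.groupby("group")["selected"].mean()
print(rates)
print(f"disparate-impact ratio: {rates.min() / rates.max():.2f}")

# Stratify to guard against Simpson's paradox: here A trails B overall
# (0.40 vs 0.70) yet leads B inside every department.
print(df.groupby(["dept", "group"])["selected"].mean())
```

An audit that stops at the aggregate numbers would miss this reversal, which is exactly why the check must be run both overall and within strata.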

AIM FOR FULL TRANSPARENCY

White-box AI means full transparency about the data that goes into an algorithm and the outcomes that come out. Only a white-box system can be audited and, where the audit finds bias, reconfigured. There can be a trade-off between explainability and performance, but in fields like human resources, criminal sentencing, and healthcare, explainability should win over raw performance, because transparency is essential when technology affects people’s lives. Even if your model isn’t fully transparent, open-source methods exist to at least partially explain its decisions.
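One widely available example of such a method is permutation importance from the open-source scikit-learn library; the sketch below applies it to a synthetic model, purely for illustration. Shuffling a feature and measuring the drop in score reveals how much the model leans on that feature; a high score for a proxy of a protected attribute is a red flag worth auditing.

```python
# Sketch: permutation importance (scikit-learn) on a synthetic model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature in turn; the bigger the accuracy drop, the more
# the model depends on that feature for its decisions.
result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {imp:.3f}")
```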

USE OPEN-SOURCE METHODS

Open-source methods should be used, either by releasing key parts of the code as open source or by building on well-established, peer-reviewed existing code. The visibility this offers allows for quality assurance. In the case of algorithm auditing, it is essential to understand the process by which companies audit (i.e., safety-test) their algorithms. Initiatives to open-source this auditing technology are already underway.

INVOLVE EXTERNAL COUNCILS TO CREATE GUARDRAILS

An active community of industry leaders and subject-matter experts should be involved in cementing the rules of engagement for building new AI ethically and responsibly. An open discussion should offer a full accounting of the different implications of AI technology as well as specific standards to follow.

As history has shown, innovation invites fear and early failures. But with the right design and guardrails, innovation can be harnessed for a positive impact on society. And so it is with AI. With careful forethought and deliberate efforts to push back on human bias, AI can be a powerful tool not just to mitigate bias but to remove it in a way that humans alone cannot. Imagine life without electricity: a world of darkness. Let’s not deprive ourselves of the positive impact of ethical AI.

By: Frida Polli

Source: Fast Company
