
Seven very simple principles for designing more ethical AI

August 8, 2019
Borderless Future

No matter how powerful, all technology is neutral. Electricity can be designed to kill (the electric chair) or to save lives (a home on the grid in an inhospitable climate).

The same is true for artificial intelligence (AI), which is an enabling layer of technology much like electricity.

AI systems have already been designed to help or hurt humans. A group at UCSF recently built an algorithm to save lives through improved suicide prevention, while China has deployed facial-recognition AI systems to subjugate ethnic minorities and political dissenters. It's therefore impossible to assign a valence to AI broadly; it depends entirely on how it is designed. And to date, that design has too often been careless.

AI blossomed at companies like Google and Facebook, which give their products away for free and therefore had to find other ways for their AI to make money. They did this by selling ads. Advertising has long been in the business of manipulating human emotions; big data and AI merely allowed it to be done far more effectively, and more insidiously, than before.

AI disasters, such as Facebook's algorithms being co-opted by foreign political actors to influence elections, could and should have been predicted from this careless use of AI. They have highlighted the need for more careful design, a case now made by AI pioneers like Stuart Russell (co-author of the standard AI textbook), who advocates that "standard model AI" be replaced with beneficial AI.

Organizations ranging from the World Economic Forum to Stanford to the New York Times are convening groups of experts to develop design principles for beneficial AI. As a contributor to these initiatives, I believe the following principles are key.


1. The user must know data is being collected and what it will be used for. Technologists must ensure informed consent around data. Too many platforms, across a whole host of applications, rely on surreptitious data collection or use data that was collected for other purposes. Initiatives to stop this are cropping up everywhere, such as the Illinois law requiring that video hiring platforms tell candidates that AI may be used to analyze their video recordings and how the resulting data will be used.


2. Users must own and control their data. This runs counter to the prevailing modus operandi of many tech companies, whose terms of service are designed to exploit user data for the company's benefit. For example, the tool FaceApp has collected millions of user photos without disclosing what data is collected or for what purpose. More alarming, its user interface blurs the fact that photos leave the user's local storage. Users must be empowered, not overpowered, by technology: they should always know what data is collected, for what purpose, and from where.


3. AI must use unbiased data. Any bias in the data used to train an algorithm will be multiplied and amplified by AI's power. AI developers have a responsibility to examine the data they feed into their algorithms and to validate that it contains no known bias.

For example, it's been well established that data gleaned from résumés is biased against women and minority groups, so hiring algorithms should use other types of data. The San Francisco DA's office and Stanford created a "blind sentencing" AI tool, which removes racial and ethnic information from the data used in criminal-justice sentencing. This is just one example of using AI to eliminate, rather than double down on, bias.
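As a sketch of how such "blind" preprocessing can work: strip protected attributes from each record before a model ever sees it. The field names below are hypothetical, not the actual schema of the SF/Stanford tool.

```python
# Sketch: remove protected attributes from records before they reach a model.
# Field names are hypothetical; any real tool would also consider proxy fields
# (e.g., zip code can correlate with race).
PROTECTED_FIELDS = {"race", "ethnicity", "name", "zip_code"}

def redact(record: dict) -> dict:
    """Return a copy of the record with protected fields removed."""
    return {k: v for k, v in record.items() if k not in PROTECTED_FIELDS}

case = {
    "case_id": 1042,
    "offense": "burglary",
    "prior_convictions": 2,
    "name": "J. Doe",
    "race": "example_value",
    "zip_code": "94103",
}

print(redact(case))
# {'case_id': 1042, 'offense': 'burglary', 'prior_convictions': 2}
```

Note that removing fields does not remove proxies hidden in the remaining data, which is why auditing an algorithm's outcomes remains necessary.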


4. It's not enough to use unbiased data; it is also critical to check your algorithms for bias. A mathematical quirk known as Simpson's paradox shows how unbiased inputs can still yield biased results. Don't let skeptics misinform you: it is possible to audit an algorithm's results for unequal outcomes across gender, race, age, or any other axis along which discrimination could occur. An external AI audit serves the same purpose as crash-testing a vehicle against safety regulations. If the audit fails, the design flaw causing the failure must be found and removed.
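To make the paradox concrete, here is a toy Python sketch with invented numbers: within each department, group B is hired at the higher rate, yet the aggregate rate favors group A. The same script then computes one simple audit metric, the "four-fifths" disparate-impact ratio, on the aggregate outcomes.

```python
# Toy illustration of Simpson's paradox in hiring outcomes.
# All numbers are invented for demonstration.

# (applicants, hires) per group, per department
data = {
    "dept_X": {"group_A": (100, 80), "group_B": (20, 18)},
    "dept_Y": {"group_A": (20, 2),   "group_B": (100, 20)},
}

def rate(applicants, hires):
    return hires / applicants

# Within each department, group_B has the HIGHER hire rate...
for dept, groups in data.items():
    for group, (n, k) in groups.items():
        print(f"{dept} {group}: {rate(n, k):.0%}")

# ...yet in the aggregate, group_A comes out ahead.
totals = {}
for groups in data.values():
    for group, (n, k) in groups.items():
        a, h = totals.get(group, (0, 0))
        totals[group] = (a + n, h + k)

agg = {g: rate(n, k) for g, (n, k) in totals.items()}
print(agg)  # group_A ≈ 0.683, group_B ≈ 0.317

# A simple audit metric: the "four-fifths" disparate-impact ratio.
impact_ratio = min(agg.values()) / max(agg.values())
print(f"impact ratio: {impact_ratio:.2f}")  # values below 0.80 flag potential bias
```

An auditor who only checked the per-department rates would miss the aggregate disparity, and vice versa, which is why audits should slice outcomes several ways.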


5. AI should be white-box, meaning full transparency about the data that goes into an algorithm and the outcomes that come out. You can only audit an algorithm, and reconfigure its biased output, if it is white-box. There can be a trade-off between explainability and performance, but in fields like human resources, criminal sentencing, and healthcare, explainability should always win over pure performance, because transparency is key when technology affects people's lives. Even if your model isn't fully transparent, open-source methods exist to help partially explain its decisions.
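One widely used, model-agnostic family of such methods is permutation importance: shuffle one input feature and measure how much the model's accuracy drops. A minimal pure-Python sketch, using an invented model and dataset:

```python
import random

# Minimal permutation-importance sketch (model and data are invented).
# The feature whose shuffling hurts accuracy most is the one the model relies on.
random.seed(0)

# Toy "model": predicts 1 when feature 0 exceeds 0.5; ignores feature 1 entirely.
def model(x):
    return 1 if x[0] > 0.5 else 0

# Toy dataset: labels depend only on feature 0, so the model is perfect on it.
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]

def accuracy(features, labels):
    return sum(model(row) == label for row, label in zip(features, labels)) / len(labels)

baseline = accuracy(X, y)  # 1.0 by construction

def permutation_importance(features, labels, feature):
    """Accuracy drop when one feature column is randomly shuffled."""
    shuffled_col = [row[feature] for row in features]
    random.shuffle(shuffled_col)
    permuted = [row[:feature] + [v] + row[feature + 1:]
                for row, v in zip(features, shuffled_col)]
    return baseline - accuracy(permuted, labels)

print(permutation_importance(X, y, 0))  # large drop: the model depends on it
print(permutation_importance(X, y, 1))  # zero drop: the model ignores it
```

Reporting per-feature importances like this gives affected users at least a partial account of what drove a decision, even when the full model is not disclosed.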


6. Open-source methods should be utilized, either by releasing key aspects of the code as open source or by building on well-established, peer-tested existing code. The visibility this offers allows for quality assurance. In the case of algorithm auditing, it is essential to understand the process by which companies audit (i.e., safety-test) their algorithms. Initiatives to open-source this auditing technology are already underway.


7. An active community of industry leaders and subject-matter experts should help cement the rules of engagement for building new AI ethically and responsibly. An open discussion should offer a full accounting of the implications of AI technology as well as specific standards to follow.

As history has shown, innovation invites fear and early failures. With the right design and guardrails, however, it can be harnessed for a positive impact on society. And so it is with AI. With careful forethought and deliberate efforts to push back on human bias, AI can be a powerful tool not just to mitigate bias, but to remove it in a way that is not possible with humans alone. Imagine life without electricity: a world of darkness. Let's not deprive ourselves of the positive impact of ethical AI.

By: Frida Polli

Source: Fast Company

