The most pressing moral, political and social issues of our time converge in and at Google. As the largest and most successful “big data” company, Google has the unfettered power not just to shape – but to declare (in the words of Shoshana Zuboff) – our futures. The data that the company harvests, sells and uses to create automation systems literally determine our life chances: how we are interpreted and understood by police and military technologies, our credit scores, our access to healthcare, our professional reputations, how much we pay for goods and services, and our ability to obtain work.
Unlike in the 20th century, when competitive capitalism both spawned and responded to the power of labor unions, social movements and business regulation, in today’s political world, Google – and its ilk, Amazon and Facebook – operate in a seemingly impenetrable vortex of power. Government regulators, despite clarion calls by scientists and social scientists, have repeatedly failed to even properly investigate what the company’s immense and fast-growing power may mean – much less to rein it in. The inscrutable “black box” within which Google operates shows little sign of cracking in the face of new and old privacy and anti-trust laws.
Who, then, will represent the public interest as new technologies are developed and deployed? In this brave new digital world, the most effective friction and oversight have come from an unlikely and surprising source: white-collar tech workers within Google. In the past few months, researchers, engineers, scientists and policy and communication specialists (among others) at the tech firm have protested and objected (at great personal cost) to protect us from the dystopian effects of unregulated AI. Undermining the stereotype of self-absorbed, Silicon Valley computer scientists, Google workers have acted both individually as conscientious objectors and collectively, banding together in concerted activity and mutual aid.
These organized tech workers, through courageous acts of protest, have made a revolutionary leap: they have explicitly connected Google’s workplace practices to the broader public interest. They have argued that practices like forced arbitration and the use of temporary workers (more than 50% of the company’s workforce) make Google a difficult and sometimes harrowing place for women and people of color. And they have insisted that if the people most likely to be hurt by new technologies are not present and empowered to shape them, then biased and harmful outcomes are likely for us all.
The business management and “pipeline” literature overwhelmingly supports this positive connection between workplace diversity and corporate ethics. But the Google worker-organizers have taken the traditional argument a step further. In a recently released, seminal report from the AI Now Institute at NYU, Discriminating Systems: Gender, Race, and Power in AI, the artificial intelligence ethics expert and Google walkout organizer Meredith Whittaker argues (with scholars Sarah Myers West and Kate Crawford) that when diversity is “stripped of the histories and lived experiences of systemic discrimination”, it becomes an “empty signifier”. They emphasize that the diversity crisis in tech is being addressed through a “worker-driven movement” engaged in fighting bias and inequity both in the workplace and in the technology it produces. Such a movement, the report maintains, is one of the primary forces working to rein in the dystopian impacts of widespread, unregulated automation.
Whittaker and the Google worker-organizers (including those involved with the Tech Workers Coalition) have proven both their prescience and their effectiveness. For example, shortly after Google announced the creation of an AI ethics panel called the “Advanced Technology External Advisory Council”, which included the Heritage Foundation president, Kay Coles James (widely criticized for her transphobia and climate denial), thousands of Googlers and external supporters banded together in a protest petition. How, they asked, could this council address pressing AI governance issues with members who would exclude those historically marginalized, and thus most at risk of harm from AI? In a rapid response to the public petition and the controversies it engendered, Google disbanded the panel in days.
To those of us on the outside, Google appeared to be serving its former motto – doing no evil and taking the concerns of its workforce seriously. But for those on the inside of the firm, something else was brewing. Taking a cue from the age-old, anti-worker corporate playbook, Google individually targeted the leading worker-organizers. Whittaker – a prominent AI ethics researcher – was told that in order to keep her job, she must abandon her work at the AI Now Institute at NYU, which she co-founded three years ago and from which the ground-breaking Discriminating Systems report was launched. She was also told that her job at Google would change dramatically – moving away from her expertise on AI ethics. Claire Stapleton – another one of the seven leading Google walkout organizers and a longtime Googler – was both demoted and told to take “sick leave”. She was not sick, and her demotion was rolled back only after she hired an attorney.
For the past few months, in my capacity as a law professor and researcher, I have interviewed Google worker-organizers and discussed their experiences during and after protests. My research reveals the extent to which the acts of retaliation faced by these women are not isolated. While Whittaker and Stapleton are among the most well-known Google worker-organizers – with their names included in many of the public Google walkout statements – many less-well-known Googlers have been retaliated against and even fired when they, in the words of one former manager whom I interviewed, “push back at all – even when it is our job to push back”. A pattern, I have been told by insiders, is that when a worker raises either ethical issues or labor issues, she is “managed out”. This sometimes means being made so unhappy through reassignments and everyday mistreatment that the worker feels compelled to leave.
What Google has done to Whittaker and Stapleton flies in the face of US work laws. Retaliating against a worker who is engaged in concerted activity is illegal under the National Labor Relations Act. Additionally, since both worker-organizers have been fighting gender and racial discrimination in the workplace, their demotions and reassignments may also be unlawful under Title VII of the Civil Rights Act. But perhaps even more promising is the tremendous support that these women – and others like them – have been receiving from their co-workers, who pledge to stand alongside them as they fight retaliation. In a rapidly organized, unprecedented “sit-in” just this week, Googlers in 15 offices across the world shared their own stories of harassment and retaliation at the company – ensuring that we understood that Whittaker and Stapleton were not alone.
How should those of us who do not work at Google or in the tech industry view Google’s actions and its overall workplace culture? How should we as Google users and consumers respond to the company’s clear attempt to thwart collective organizing in the workplace? How much should we care?
A whole lot, in my view. In the absence of much-needed government regulation of this powerful tech company, workers like Whittaker and Stapleton are not just standing up for their own livelihoods, but for our collective futures. Without the tremendous pushback of these organizers – and others in the tech world – we would have very little leverage to make sense of emerging data practices and their impacts on our everyday lives. We owe these workers not just our sympathies, but also our solidarities.
By Veena Dubal, associate professor of law at the University of California, Hastings
Source: The Guardian