Google made AI language the centerpiece of I/O while ignoring its troubled past at the company


Yesterday at Google’s I/O developer conference, the company outlined ambitious plans for its future built on a foundation of advanced language AI. These systems, said Google CEO Sundar Pichai, will let users find information and organize their lives by having natural conversations with computers. All you need to do is speak, and the machine will answer.
But for many in the AI community, there was a notable absence in this conversation: Google’s response to its own research examining the dangers of such systems.
In December 2020 and February 2021, Google first fired Timnit Gebru and then Margaret Mitchell, co-leads of its Ethical AI team. The story of their departure is complex but was triggered by a paper the pair co-authored (with researchers outside Google) examining risks associated with the language models Google now presents as key to its future. As the paper and other critiques note, these AI systems are prone to a number of faults, including the generation of abusive and racist language; the encoding of racial and gender bias through speech; and a general inability to sort fact from fiction. For many in the AI world, Google’s firing of Gebru and Mitchell amounted to censorship of their work.
For some viewers, as Pichai outlined how Google’s AI models would always be designed with “fairness, accuracy, safety, and privacy” at heart, the disparity between the company’s words and actions raised questions about its ability to safeguard this technology.
“Google just featured LaMDA a new large language model at I/O,” tweeted Meredith Whittaker, an AI fairness researcher and co-founder of the AI Now Institute. “This is an indicator of its strategic importance to the Co. Teams spend months preping these announcements. Tl;dr this plan was in place when Google fired Timnit + tried to stifle her+ research critiquing this approach.”
Gebru herself tweeted, “This is what is called ethics washing” — referring to the tech industry’s tendency to trumpet ethical concerns while ignoring findings that hinder companies’ ability to make a profit.
Speaking to The Verge, Emily Bender, a professor at the University of Washington who co-authored the paper with Gebru and Mitchell, said Google’s presentation didn’t in any way assuage her concerns about the company’s ability to make such technology safe.
“From the blog post [discussing LaMDA] and given the history, I do not have confidence that Google is actually being careful about any of the risks we raised in the paper,” said Bender. “For one thing, they fired two of the authors of that paper, nominally over the paper. If the issues we raise were ones they were facing head on, then they deliberately deprived themselves of highly relevant expertise towards that task.”
In its blog post on LaMDA, Google highlights a number of these issues and stresses that its work needs more development. “Language might be one of humanity’s greatest tools, but like all tools it can be misused,” write senior research director Zoubin Ghahramani and product management VP Eli Collins. “Models trained on language can propagate that misuse — for instance, by internalizing biases, mirroring hateful speech, or replicating misleading information.”
But Bender says the company is obfuscating the problems and needs to be clearer about how it’s tackling them. For example, she notes that Google refers to vetting the language used to train models like LaMDA but doesn’t give any detail about what this process looks like. “I’d very much like to know about the vetting process (or lack thereof),” says Bender.
It was only after the presentation that Google made any reference to its AI ethics unit at all, in a CNET interview with Google AI chief Jeff Dean. Dean noted that Google had suffered a real “reputational hit” from the firings — something The Verge has previously reported — but that the company had to “move past” these events. “We are not shy of criticism of our own products,” Dean told CNET. “As long as it’s done with a lens towards facts and appropriate treatment of the broad set of work we’re doing in this space, but also to address some of these issues.”
For critics of the company, though, the conversation needs to be much more open than this.