This week in AI, Eric Schmidt, former CEO of Google, announced plans to invest $125 million in AI research through his philanthropic venture, Schmidt Futures. The funding, part of an initiative dubbed "AI2050," will aim to support academics working on "hard problems" in AI, including how to prepare for the unintended consequences of highly capable AI systems, Schmidt said in a statement.
“AI will cause us to rethink what it means to be human,” Schmidt continued. “Artificial intelligence can be a massive force for good in society, but now is the time to ensure that the AI we build has human interests at its core.”
AI2050, which will be co-chaired by Schmidt and Google head of technology and society James Manyika, will award grants to individual academics over the next five years. The effort has generated some controversy; one of the first named recipients, Berkeley professor Rediet Abebe, said she hadn't received an award and asked for her name to be removed from consideration. Still, Schmidt's initiative points to a growing concern within the AI community that too little funding is being set aside for AI research outside of wealthy corporations.
The academic "brain drain" in AI over the past decade has been thoroughly documented. One paper found that the share of AI research with corporate ties, whether funding or author affiliation, doubled between 2008-2009 and 2018-2019, reaching 79%. Another study showed that Google parent company Alphabet, along with Amazon and Microsoft, hired 52 tenure-track AI professors between 2004 and 2018. And from 2006 to 2014, the proportion of AI publications with a corporate-affiliated author rose from about 0% to 40%, reflecting the steady movement of researchers from academia into industry.
The academic process isn't without its flaws. There's a concentration of compute power at elite universities, for example, and research has found that just a dozen universities are responsible for creating the datasets most frequently used in AI. But it's also been shown that commercial projects tend, unsurprisingly, to underemphasize values such as beneficence, justice, and inclusion.
In a recent, stark example, Google fired leading AI researcher Timnit Gebru in what she claimed was retaliation for sending colleagues an email critical of the company’s managerial practices. The flashpoint was reportedly a paper Gebru coauthored that questioned the wisdom of building large AI-powered language models and examined who benefits from them — and who is disadvantaged. Google AI lead Jeff Dean wrote in an email to employees following Gebru’s departure that the paper didn’t meet Google’s criteria for publication because it lacked reference to recent research. But from all appearances, Gebru’s work simply spotlighted well-understood problems with models like those deployed by Google, OpenAI, Facebook, Microsoft, and others. A draft of the paper discusses risks including the impact of models’ carbon footprint on marginalized communities and their tendency to perpetuate abusive language, hate speech, microaggressions, stereotypes, and other dehumanizing language aimed at specific groups of people.
Google — which infamously dissolved an AI advisory board in 2019 just one week after forming it — has attempted to limit other internal research that might portray its technologies in a bad light. Reports have described many other AI ethics teams at large corporations, like Meta (formerly Facebook), as largely toothless and ineffective.
A number of experts, speaking to Wired for a 2020 piece, pointed out that corporate AI research can lead to an "unscientific fixation" on projects only possible for, or relevant to, people with access to powerful datacenters. Work on enormous, costly language models, for example, has arguably limited scientific value because the resources involved make it nearly impossible for anyone but a large corporation to replicate the results.
Beyond Schmidt's AI2050, the U.S. government recently took steps toward increased investment in fundamental AI research with the National Science Foundation's (NSF) financing of 11 new National Artificial Intelligence Research Institutes. The NSF will set aside upwards of $220 million for initiatives including the AI Institute for Foundations of Machine Learning, which will investigate theoretical challenges like neural architecture optimization, and the AI Institute for Artificial Intelligence and Fundamental Interactions, which aims to develop AI that incorporates the laws of physics. The institutes will also run workforce development, digital learning, outreach, and knowledge transfer programs.
Gebru's global nonprofit for AI research, Distributed AI Research (DAIR), and projects like Hugging Face's BigScience and EleutherAI are strong examples of what can be achieved in AI beyond the bounds of corporate support. But their resources pale in comparison with those of the enterprise: while DAIR has raised millions of dollars, OpenAI alone secured a $1 billion investment from Microsoft several years ago.
While funds like AI2050 are a start, it'll take a lot more to put academic AI research on a level playing field with corporate-backed work.
For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.
Thank you for reading,
Kyle Wiggers
AI Senior Staff Writer