Google: Want to publish AI papers? Make sure they strike a positive tone


Google has long been a benchmark for combining industry and academic research in artificial intelligence, but recently the relationship between the company and its scientists has begun to crack.
A recent Reuters report, drawing on Google's internal communications and interviews with researchers involved, said that Google launched a "sensitive topics" review this year to tighten control over the papers its scientists publish. On at least three occasions, Google has asked authors to avoid casting its technology in a negative light.
This suggests that while Google's website claims its scientists enjoy "substantial" freedom, the opposite may be true.
The report mentions Timnit Gebru, a former Google employee who, together with Margaret Mitchell, led a 12-person team conducting ethics research on AI software. Earlier this month she abruptly left Google, sparking debate about the company's relationship with its researchers.
According to Gebru, Google dismissed her after she questioned an order barring her from publishing research on the potential harms that language-mimicking AI poses to marginalized groups. Google, for its part, claims that Gebru resigned of her own accord and that the company simply accepted and expedited her resignation. For now, it is difficult to judge whether Gebru's paper was ever censored under the "sensitive topics" review.
Google executives have also responded to the incident. In a statement this month, Jeff Dean, Google's senior vice president, said that Gebru's paper discussed only potential harms, not the efforts under way to address them. Dean added that Google supports AI ethics scholarship and is "actively improving our paper review process, because we know that too many checks and balances can make it cumbersome."
But this is not an isolated case of Google interfering in academic research. According to the report, the author of another paper by a Google researcher was asked to "strike a positive tone." That paper examined recommendation technology of the kind YouTube uses to serve personalized content to users.
A draft of the paper seen by Reuters raised "concerns" about the technology, warning that it could promote "disinformation, discriminatory or otherwise unfair results" and "insufficient diversity of content," and could lead to "political polarization." The published version, by contrast, says recommender systems can promote "accurate information, fairness, and diversity of content."
The paper was ultimately published under the title "What are you optimizing for? Aligning recommender systems with human values," without crediting the Google researchers' contribution. It is hard to explain why this happened.
AI research and development has expanded rapidly across the technology industry in recent years, prompting regulators in the United States and elsewhere to propose rules governing its use. Some studies have shown that facial-analysis software and other AI systems can perpetuate bias or invade privacy. Google, meanwhile, has woven AI throughout its services, using it to interpret complex search queries, decide which videos YouTube recommends, and auto-complete sentences in Gmail.
Google is surely well aware of these outside concerns and doubts, and so hopes to project as positive an impression of its AI systems as it can. Dean said that Google researchers published more than 200 papers on responsible AI development last year, out of more than 1,000 related projects in total.
On the other hand, studying bias in Google's own services is among the subjects restricted by the company's "sensitive topics" policy.
Meanwhile, the report said, Google's new review process requires researchers to consult the company's legal, policy, and public relations teams before taking on topics such as face and sentiment analysis, or the categorization of race, gender, and political affiliation.
"Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly innocuous projects raise ethical, reputational, regulatory or legal issues," an internal document explains, justifying the new rules. Exactly when the document was circulated is unclear, but three Google employees said the "sensitive topics" policy began in June of this year.
Under these various restrictions, publishing papers is no longer easy for Google employees. According to internal correspondence, one Google researcher who published a paper last week exchanged more than 100 emails with reviewers along the way, a process he likened to a "long march." How all of this ends remains to be seen.