Google told its scientists to ‘strike a positive tone’ in AI research, documents show

Oakland – Alphabet’s Google moved this year to tighten control over its scientists’ papers by launching a review of “sensitive topics”, and in at least three cases requested that authors refrain from casting its technology in a negative light, according to internal communications and interviews with researchers involved in the work.

Google’s new review process asks that researchers consult with legal, policy, and public relations teams before pursuing topics such as face and emotion analysis and categorizations of race, gender, or political affiliation, according to internal webpages that explain the policy.

One of the pages for research staff states, “Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory, or legal issues.” Reuters could not determine the date of the post, though three current employees said the policy began in June.

Google declined to comment for this story.

The “sensitive topics” process adds a round of scrutiny to Google’s standard review of papers for pitfalls such as the disclosure of trade secrets, eight current and former employees said.

For some projects, Google officials have intervened at later stages. Shortly before publication this summer, a senior Google manager reviewing a study on content recommendation technology told the authors to “take great care to strike a positive tone”, according to internal correspondence read to Reuters.

The manager added, “This does not mean we should hide from the real challenges” posed by the software.

Later correspondence from a researcher to reviewers shows the authors “updated to remove all references to Google products”. A draft seen by Reuters had mentioned Google-owned YouTube.

Four staff researchers, including senior scientist Margaret Mitchell, said they believe Google is starting to interfere with crucial studies of the technology’s potential harms.

“If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we are getting into a serious problem of censorship,” Mitchell said.

Google says on its public-facing website that its scientists have “substantial” freedom.

Tensions between Google and some of its employees broke into view this month following the abrupt exit of scientist Timnit Gebru, who led a 12-person team with Mitchell focused on ethics in artificial intelligence (AI) software.

Gebru says Google fired her after she questioned an order not to publish research claiming that AI which mimics speech could harm marginalized populations. Google said it accepted and expedited her resignation. It could not be determined whether Gebru’s paper underwent a “sensitive topics” review.

Google Senior Vice President Jeff Dean said in a statement this month that Gebru’s paper dwelled on potential harms without discussing efforts underway to address them.

Dean said that Google supports AI ethics scholarship and is “actively working on improving our paper review processes, because we know that too many checks and balances can become cumbersome.”

‘Sensitive topics’

The explosion in AI research and development across the tech industry has prompted authorities in the United States and elsewhere to propose regulations for its use. Some have cited scientific studies showing that facial analysis software and other AI can perpetuate bias or erode privacy.

In recent years Google has incorporated AI throughout its services, using the technology to interpret complex search queries, decide recommendations on YouTube, and autocomplete sentences in Gmail. Dean said its researchers published more than 200 papers in the past year on developing AI responsibly, among more than 1,000 projects in total.

According to an internal webpage, studying Google services for bias is among the “sensitive topics” under the company’s new policy. Dozens of other “sensitive topics” listed include the oil industry, China, Iran, Israel, COVID-19, home security, insurance, location data, religion, self-driving vehicles, telecommunications, and systems that recommend or personalize web content.

The Google paper for which the authors were asked to strike a positive tone discusses recommendation AI, which services such as YouTube employ to personalize users’ content feeds. A draft reviewed by Reuters included “concerns” that the technology could promote “disinformation, discriminatory or otherwise unfair results” and “insufficient diversity of content”, as well as lead to “political polarization”.

The final publication instead states that the systems can promote “accurate information, fairness, and diversity of content”. The published version, titled “What are you optimizing for? Aligning recommender systems with human values”, omitted credit to Google researchers. Reuters could not determine why.

A paper this month on AI for understanding a foreign language softened a reference to how the Google Translate product was making mistakes, following a request from company reviewers, a source said. The published version states that the authors used Google Translate, and a separate sentence says part of the research method was to “review and fix inaccurate translations”.

For a paper published last week, a Google employee described the process as a “long haul” involving more than 100 email exchanges between researchers and reviewers, according to internal correspondence.

The researchers found that the AI could regurgitate personal data and copyrighted material – including a page from a “Harry Potter” novel – that had been pulled from the internet to develop the system.

A draft described how such disclosures could infringe copyright or violate European privacy law, a person familiar with the matter said. Following company reviews, the authors removed the legal risks, and Google published the paper.
