In Focus

Google Unveils AI Principles Updates


Earlier in June this year, Google unveiled a set of AI Principles that codified how artificial intelligence would be used in its research and products going forward. The company is now sharing a progress update on its efforts to implement those guidelines.
 
 
Source: Google 
 
Over the past six months, explained Kent Walker, SVP of global affairs at Google, the company has encouraged teams throughout the organization to "consider how and whether our AI Principles affect their projects." A new training program, aimed at both technical and non-technical employees, addresses the multifaceted ethical issues that arise in their work. It is based on the "Ethics in Technology Practice" framework developed at Santa Clara University and further tailored to the AI Principles. More than a hundred employees from around the globe have taken the course so far, and Google hopes to make it more widely accessible in the future.
 
The company has also welcomed external experts as part of an AI Ethics Speaker Series covering topics such as bias in natural language processing and the use of AI in criminal justice. More broadly, the Machine Learning Crash Course this year added a technical module on fairness that focuses on identifying and mitigating bias in training data.
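To give a flavor of the kind of check such a fairness module covers, here is a minimal sketch, not taken from the Crash Course itself, with hypothetical data and function names, that surfaces representation and label imbalance across a sensitive attribute in a training set:

```python
# Illustrative sketch only: inspect how a sensitive attribute is represented
# in training data and whether label rates differ sharply across groups.
from collections import Counter

# Hypothetical training examples: (feature dict, binary label)
examples = [
    ({"group": "A", "age": 34}, 1),
    ({"group": "A", "age": 29}, 0),
    ({"group": "B", "age": 41}, 1),
    ({"group": "B", "age": 52}, 1),
    ({"group": "B", "age": 23}, 1),
]

def group_stats(examples, attribute="group"):
    """Print each group's share of the data and its positive-label rate."""
    counts = Counter()
    positives = Counter()
    for features, label in examples:
        group = features[attribute]
        counts[group] += 1
        positives[group] += label
    total = sum(counts.values())
    for group, n in counts.items():
        share = n / total
        pos_rate = positives[group] / n
        print(f"{group}: {share:.0%} of examples, positive-label rate {pos_rate:.0%}")

group_stats(examples)
# Large gaps in either column are a signal to rebalance, reweight,
# or collect more data before training a model on this set.
```

This is only a first-pass audit; in practice such checks are followed by mitigation steps like resampling, reweighting, or adjusting evaluation metrics per group.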
 
According to Walker, a formal review structure to assess new projects, products, and deals has been established. He revealed that more than 100 reviews have been completed so far, leading to decisions such as modifying research in visual speech recognition and holding off on commercial offerings of technology like general-purpose facial recognition.
 
"Thoughtful decisions require careful and nuanced consideration of how the AI Principles … should apply, how to make tradeoffs when principles come into conflict, and how to mitigate risks for a given circumstance," he wrote, adding that "most of these cases … have aligned with the principles."
 
The review structure comprises three core groups.
 
The first is a responsible innovation team of user researchers, social scientists, ethicists, human rights specialists, and policy and privacy advisors that handles day-to-day operations and initial assessments.
 
The second is a group of senior experts from a "range of disciplines" across Alphabet, Google's parent company, who provide technological, functional, and application expertise.
 
The third is a council of senior executives that handles the most complex and difficult issues, including decisions that affect multiple products and technologies. The company is also planning to constitute an external advisory group of experts from multiple fields to complement its internal governance and processes.
 
To learn more, you can read the official blog post here.