“We piloted the course with more than 2,000 Googlers, and it is now available as an online self-study course to all Googlers across the company,” the company said on Thursday.
Google recently released a version of this training as a mandatory course for customer-facing Cloud teams, and more than 5,000 Cloud employees have already taken it.
“Our goal is for Google to be a helpful partner not only to researchers and developers who are building AI applications, but also to the billions of people who use them in everyday products,” said the tech giant.
The company said it has released 14 new tools to help explain how responsible AI works, ranging from simple data visualisations on algorithmic bias for general audiences to ‘Explainable AI’ dashboards and tool suites for enterprise users.
Google said its global efforts this year also included new programmes to help non-technical audiences — whether policymakers, first-time machine learning (ML) practitioners or domain experts — understand and participate in the creation of responsible AI systems.
“We know no system, whether human or AI powered, will ever be perfect, so we don’t consider the task of improving it to ever be finished. We continue to identify emerging trends and challenges that surface in our AI Principles reviews,” said Google.