Microsoft Launches New Tools For Responsible AI

Microsoft AI introduces the new Error Analysis toolkit and synthetic data generation in SmartNoise.

Recently, Microsoft added new capabilities to its responsible AI (RAI) toolkits: you can now debug inaccuracies in your model with the new Error Analysis toolkit and boost privacy using synthetic data in SmartNoise. Last year, at the Microsoft Build conference, the company launched three RAI toolkits: InterpretML, Fairlearn, and SmartNoise.

Error Analysis, the newest addition to the responsible AI open-source toolkits, uses ML to partition model errors along meaningful dimensions, helping data engineers better understand the patterns behind those errors.


Error Analysis lets you quickly identify subgroups with higher inaccuracy and visually diagnose the root causes behind errors. It is already an essential tool in AI development at Microsoft. The project started in 2018 as a research initiative in collaboration with Microsoft Research and the AI, Ethics, and Effects in Engineering and Research (AETHER) Committee.
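The core technique behind this kind of error partitioning can be sketched with standard tooling. The following is a conceptual illustration only, not the Error Analysis toolkit's actual API: it fits a shallow decision tree over a held-out model's binary error signal, so that tree leaves with a high error rate correspond to candidate high-inaccuracy subgroups. It assumes scikit-learn and uses its built-in breast cancer dataset for demonstration.

```python
# Conceptual sketch (not the Error Analysis API): surfacing high-error
# cohorts by training a shallow surrogate tree on a model's error signal.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train any base model whose errors we want to analyze.
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# Binary error signal: 1 where the model is wrong on held-out data.
errors = (model.predict(X_test) != y_test).astype(int)

# A shallow tree over the input features partitions the test set into
# cohorts; leaves dominated by errors are candidate problem subgroups.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_test, errors)

# Inspect the learned partitions as human-readable rules.
print(export_text(surrogate))
```

Reading the printed rules shows which feature ranges concentrate the model's mistakes, which is the same question the Error Analysis dashboard answers interactively.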

Error Analysis, along with the other RAI toolkits, is also scheduled to come under a larger model assessment dashboard, available in both open source and Azure Machine Learning by mid-2021.

Microsoft said that differential privacy has emerged as the gold-standard technology for protecting personal data while still allowing researchers to extract useful statistics and insights from a dataset. With the latest release of SmartNoise, data scientists can use differential privacy to protect not only individuals' data but also the full dataset, via the new synthetic data capability. Here, a synthetic dataset is an artificially generated sample derived from the original dataset that retains most of its statistical characteristics.
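To make the differential privacy idea concrete, here is a minimal sketch of the classic Laplace mechanism, the building block behind such systems. This is an illustration of the concept only, not the SmartNoise API: noise calibrated to the query's sensitivity and a privacy budget epsilon is added to a count, so any single individual's presence barely shifts the output distribution. The function name and toy data are hypothetical.

```python
# Minimal illustration of differential privacy (not the SmartNoise API):
# the Laplace mechanism applied to a simple counting query.
import numpy as np

def private_count(data, epsilon, rng):
    # A counting query has sensitivity 1: adding or removing one person
    # changes the true count by at most 1, so Laplace noise with scale
    # 1/epsilon satisfies epsilon-differential privacy.
    true_count = len(data)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

rng = np.random.default_rng(0)
records = list(range(1000))  # toy dataset of 1000 individuals
estimate = private_count(records, epsilon=0.5, rng=rng)
print(f"true count: {len(records)}, private estimate: {estimate:.1f}")
```

Smaller epsilon means stronger privacy but noisier answers; SmartNoise's synthetic data capability extends this trade-off from single statistics to whole datasets.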



Microsoft said that it continues to expand the capabilities of Fairlearn, InterpretML, Error Analysis, and SmartNoise. To learn more, you can visit the official announcement.