Recently, IBM contributed a series of AI toolkits to Linux Foundation AI (LF AI) to help build “fair, secure, and trustworthy” AI-based applications. This time IBM contributed three tools: the AI Fairness 360 Toolkit, the Adversarial Robustness 360 Toolbox, and the AI Explainability 360 Toolkit.
The AI Fairness 360 Toolkit helps detect and mitigate bias in machine learning models throughout the AI application lifecycle;
The Adversarial Robustness 360 Toolbox (ART) is a Python library designed to secure machine learning models and defend neural networks against adversarial attacks;
The AI Explainability 360 Toolkit provides a comprehensive set of algorithms covering different dimensions of explanation, along with proxy explainability metrics, to support the interpretability of machine learning models.
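To make the fairness idea concrete, here is a minimal sketch, in plain Python, of the kind of group-fairness metrics such a toolkit computes. This is not the AI Fairness 360 API; the function and variable names (`group_fairness`, `favorable`, `privileged`) and the toy data are hypothetical, chosen only to illustrate the metrics.

```python
# Illustrative sketch (not the AIF360 API): two common group-fairness metrics,
# statistical parity difference and disparate impact, computed by hand.
# "privileged" marks membership in the privileged group; "favorable" marks a
# positive model outcome (e.g. a loan approval).

def rate(outcomes):
    """Fraction of favorable (truthy) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def group_fairness(favorable, privileged):
    priv = [y for y, p in zip(favorable, privileged) if p]
    unpriv = [y for y, p in zip(favorable, privileged) if not p]
    p_priv, p_unpriv = rate(priv), rate(unpriv)
    return {
        # 0 means both groups receive favorable outcomes at the same rate.
        "statistical_parity_difference": p_unpriv - p_priv,
        # 1 means parity; values well below 1 signal adverse impact.
        "disparate_impact": p_unpriv / p_priv,
    }

# Toy data: six privileged and six unprivileged applicants.
favorable  = [1, 1, 1, 0, 1, 1,  1, 0, 0, 1, 0, 0]
privileged = [1, 1, 1, 1, 1, 1,  0, 0, 0, 0, 0, 0]
metrics = group_fairness(favorable, privileged)
```

A detection-and-mitigation toolkit would compute metrics like these before and after applying a bias-mitigation algorithm, to verify the intervention worked.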
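On the robustness side, a minimal sketch of the fast gradient sign method (FGSM), one of the classic evasion attacks that libraries in this space implement, shows what “protecting neural networks from attack” defends against. This is not the ART API; the tiny logistic-regression “model”, its weights, and the inputs are all made up for the example.

```python
import numpy as np

# Illustrative sketch (not the ART API): the fast gradient sign method (FGSM).
# We perturb an input by a small step eps in the direction that increases the
# model's loss, which typically lowers its confidence in the true label.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Perturb x by eps in the loss-increasing direction (binary cross-entropy)."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w            # d(loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0])           # hypothetical model weights
b = 0.0
x = np.array([0.5, -0.5])           # clean input, true label 1
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.3)
p_clean = sigmoid(x @ w + b)        # model confidence on the clean input
p_adv = sigmoid(x_adv @ w + b)      # confidence drops after the attack
```

A defense-oriented toolbox bundles many such attacks alongside defenses (adversarial training, input preprocessing, detection), so developers can measure and harden a model's robustness.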
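For explainability, a simple sketch of permutation importance illustrates one basic way to attribute a model's behavior to its input features. This is not the AI Explainability 360 API; the `model` function and data here are hypothetical stand-ins.

```python
import numpy as np

# Illustrative sketch (not the AIX360 API): permutation importance.
# Shuffling one feature's values breaks its relationship with the target;
# the resulting increase in error indicates how much the model relies on it.

rng = np.random.default_rng(0)

def model(X):
    # Hypothetical "model": feature 0 matters a lot, feature 1 barely at all.
    return 3.0 * X[:, 0] + 0.1 * X[:, 1]

def permutation_importance(model, X, y):
    base = np.mean((model(X) - y) ** 2)       # baseline mean squared error
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # destroy feature j's signal
        scores.append(np.mean((model(Xp) - y) ** 2) - base)
    return np.array(scores)

X = rng.normal(size=(200, 2))
y = model(X)
importance = permutation_importance(model, X, y)  # feature 0 >> feature 1
```

Full explainability toolkits go well beyond this, offering both local explanations (why this prediction?) and global ones (how does the model behave overall?), plus metrics for judging explanation quality.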
The contribution, which was voted on by the LF AI Technical Advisory Committee earlier this month, has been formally transferred to the Foundation for hosting and incubation.
IBM joined LF AI last year and then helped establish the Trusted AI Committee, which works to define and implement principles of trust in AI deployments.
“Technology is only part of the equation when building trusted and fair AI,” says IBM.
In addition to contributing the open source toolkits, IBM also provides funding to encourage responsible technology deployment practices. A recent donation from its open source community went to a non-profit organization in Colombia that helps local women learn to code and provides educational and career development support.