In its Responsible AI Transparency Report, which mainly covers 2023, Microsoft touts its achievements in safely deploying AI products. The annual AI transparency report is one of the commitments the company made after signing a voluntary agreement with the White House in July last year. Microsoft and other companies promised to establish responsible AI systems and commit to safety.
Microsoft says in the report that it created 30 responsible AI tools in the past year, grew its responsible AI team, and required teams building generative AI applications to measure and map risks throughout the development cycle. The company notes that it has added content credentials to its image-generation platforms, which place a watermark on a photo, marking it as made by an AI model.
The company says it has given Azure AI customers access to tools that detect problematic content such as hate speech, sexual content, and self-harm, as well as tools to evaluate security risks. This includes new jailbreak detection methods, which were expanded in March of this year to cover indirect prompt injections, where malicious instructions are embedded in the data ingested by the AI model.
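For a concrete sense of what these detection tools look like from a developer's side, below is a minimal sketch of calling Azure AI Content Safety's text analysis from Python. The client and option names come from Microsoft's azure-ai-contentsafety SDK, but the endpoint, key, and sample text are placeholders, and this only covers the harm-category screening mentioned above; jailbreak and indirect prompt injection detection sits behind a separate Prompt Shields endpoint not shown here.

```python
# Minimal sketch: screening text with the Azure AI Content Safety Python SDK
# (pip install azure-ai-contentsafety). Endpoint, key, and input text are placeholders.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# An Azure Content Safety resource provides these values; both are placeholders here.
client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# Analyze a piece of user-supplied text against the service's built-in harm
# categories (hate, sexual, self-harm, violence).
response = client.analyze_text(AnalyzeTextOptions(text="Text to screen goes here."))

# Each category comes back with a severity score; the application decides
# what severity threshold should trigger blocking or review.
for item in response.categories_analysis:
    print(f"{item.category}: severity {item.severity}")
```

In this setup the service only reports severity per category; deciding which scores warrant filtering remains up to the application built on top of it.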
It is also expanding its red-teaming efforts, including in-house red teams that deliberately try to bypass the safety features of its AI models, as well as red-teaming applications that allow third-party testing before new models are released.
However, its red teams have their work cut out for them. The company's AI rollouts have not been immune to controversy.
Natasha Crampton, chief responsible AI officer at Microsoft, said in an email to The Verge that the company understands that AI is still a work in progress, and that responsible AI is as well.
“Responsible AI has no finish line, so we will never consider our work under the voluntary AI commitments done. But we have made great progress since signing them, and we look forward to building on our momentum this year,” says Crampton.