AI Watch: Five Strategies for Building Responsible AI at Scale
This week I'm attending the Microsoft Global MVP Summit. The event provides MVP awardees with an annual opportunity to connect with each other and with Microsoft Product Leadership to talk about the future of technology and developer experience at the company.
While much of what was discussed is still under NDA (non-disclosure), I can share some themes that I found interesting as the race to responsibly use AI at scale continues. I have summarized five key messages from my week with like-minded experts and innovators leading product development.
Year of the Agent
2023 was the year of the Chatbot (renewed) and 2024 is the year of the Agent. An Agent is a purpose-built, autonomous AI solution that operates within the bounds we give it. Last month, Microsoft announced the public preview of the Assistants API, which enables the creation of high-quality copilot/generative experiences inside applications. This is interesting because it's the first of a series of capabilities aimed at making it easier for developers to integrate generative AI into any application. Imagine building an insurance application with this new API, and then being able to offer customers a real-time conversational interface that can call and execute actions autonomously. You could set rules and thresholds that allow certain types of transactions to be facilitated by an agent that can call other agents and APIs and orchestrate an entire workflow with natural language.
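The rules-and-thresholds idea above can be sketched in a few lines. This is a minimal illustration, not the Assistants API itself; all names here (AgentPolicy, route_claim, the action strings) are hypothetical stand-ins for the guardrail layer you would put around an agent's tool calls.

```python
# Hypothetical guardrail layer for an autonomous insurance agent:
# the agent may only execute actions that fall within the bounds we give it.
from dataclasses import dataclass

@dataclass
class AgentPolicy:
    """The bounds the agent must operate within."""
    auto_approve_limit: float   # claims at or below this amount run autonomously
    allowed_actions: set[str]   # actions the agent may take without a human

def route_claim(action: str, amount: float, policy: AgentPolicy) -> str:
    """Decide whether the agent may execute an action or must escalate."""
    if action not in policy.allowed_actions:
        return "escalate:unsupported-action"
    if amount > policy.auto_approve_limit:
        return "escalate:over-threshold"
    return "execute"

policy = AgentPolicy(auto_approve_limit=500.0,
                     allowed_actions={"approve_claim", "check_status"})
print(route_claim("approve_claim", 250.0, policy))    # small claim: agent acts
print(route_claim("approve_claim", 5000.0, policy))   # large claim: human review
```

In a real system, the gate would sit between the model's proposed tool call and its execution, so the agent can orchestrate a workflow while every action still passes through your policy.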
Multi-task Orchestration
Over the past six months I've been working with executive teams to evolve their thinking around responsible AI at scale. Rather than focusing on building that one killer app or single use case, we need to shift our thinking toward orchestrating models together. I call this multi-task orchestration. In a recent executive workshop, a client shared that they have more than 900 unique use cases to consider and prioritize. I led them through an exercise that helped them pick the right project at the right time with the right model. We also focused on selecting use cases that would create opportunities to unlock more value over time, rather than a single use case that couldn't be built upon. The next year will bring more sophistication in how we use both large and small language models to solve the right problem at the right time for the right customer as cost effectively as possible.
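One way to picture the "right model at the right time" part of multi-task orchestration is a router that sends each task to the cheapest model rated for it. This is an illustrative sketch only; the model names, costs, and the 1-10 complexity heuristic are assumptions, not any specific vendor's catalog.

```python
# Illustrative multi-task orchestration: route each task to the smallest
# model that can handle it, balancing quality against cost.
MODELS = [
    # (name, relative cost, max task complexity it handles well) -- assumed values
    ("small-slm", 1, 3),
    ("mid-llm",   5, 7),
    ("large-llm", 25, 10),
]

def pick_model(task_complexity: int) -> str:
    """Return the cheapest model rated for the task's complexity (1-10)."""
    for name, _cost, max_complexity in MODELS:
        if task_complexity <= max_complexity:
            return name
    return MODELS[-1][0]  # fall back to the most capable model

print(pick_model(2))   # simple extraction task -> small language model
print(pick_model(9))   # complex multi-step reasoning -> large language model
```

The same routing idea extends upward: an orchestrator can chain several such calls, each handled by the model best matched to that step.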
Responsible AI
Another key theme of the summit was advancements in the tooling offered by the Responsible AI team at Microsoft. There are several ways to think about your strategy to build responsible AI solutions at scale. Here are some principles to consider: Transparency, Fairness, Trust, Reliability, and Explainability. I'll have more to say about these in future posts. If you can’t wait to dive in, you can check out this article from Microsoft on the ABCs of Responsible AI.
Safe Deployments are All About Risk
In the world of generative AI, the quickest way to fail is to blindly launch your solutions without weighing the risks involved. The list of risks you should consider includes grounded output errors (hallucinations), jailbreaking attempts, prompt injection, copyright infringement, and manipulation of human behavior (deepfakes, for example). With these risks in mind, you'll want to establish a red team for your AI efforts, either by engaging one as a service (something my company offers) or by creating your own. This can help you identify risks, measure their potential impact (for prioritization), and mitigate risks in the system. You can expect to see more and more tools designed to help these teams become more effective this year.
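The identify-measure-prioritize loop above can be captured in a simple risk register: score each risk by likelihood times impact, then work the list from the top. The risks come from the paragraph above, but the scores here are illustrative assumptions, not measured values.

```python
# Sketch of a red-team risk register: score each risk by likelihood x impact
# so mitigation work can be prioritized. Scores are illustrative only.
risks = [
    # (risk, likelihood 1-5, impact 1-5)
    ("grounded output errors (hallucinations)", 4, 3),
    ("prompt injection",                        3, 4),
    ("jailbreaking attempts",                   2, 4),
    ("copyright infringement",                  2, 3),
    ("manipulation of human behavior",          1, 5),
]

def prioritize(register):
    """Order risks by severity score (likelihood x impact), highest first."""
    return sorted(register, key=lambda r: r[1] * r[2], reverse=True)

for name, likelihood, impact in prioritize(risks):
    print(f"{likelihood * impact:>2}  {name}")
```

In practice, a red team would refresh these scores after each exercise, so mitigation effort keeps tracking the risks that currently matter most.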
It's All About the Builders
It was very exciting to see the tools and resources that are being released for AI builders. I've had to navigate so many Microsoft studio products over the years that I keep a dedicated bookmarks folder just to keep them all straight. Thankfully, Microsoft felt our collective pain and launched a single AI studio this year. Azure AI Studio, currently in beta, combines the development tools needed to build generative AI applications with a unified studio for all the other Azure AI services, such as speech and computer vision.
Another tool that's available now is Microsoft's Human-AI Experience (HAX) Toolkit. Developed through a collaboration between Microsoft Research and Aether, Microsoft's advisory body on AI Ethics and Effects in Engineering and Research, the Toolkit was designed for teams building user-facing AI products. It was developed specifically to help developers conceptualize what an AI system will do and how it will behave. I recommend using it as early in your design process as you can.
In this fast-moving world of generative AI, it's critical to be mindful of our approach. Remember the three keys to any responsible AI solution: build it to be Helpful, Honest, and Harmless.
It's been a fun week in Redmond, and I can’t wait for Microsoft BUILD in May!
Posted by Noelle Russell on 03/14/2024