AI Watch: Leading Safe AI System Development--Research, Policy, and Engineering

This past week I had the opportunity to witness the progression of AI excitement in three distinct settings.

First, at the ACHE 2024 Congress on Healthcare Leadership in Chicago, where I delivered one of the opening keynotes, I shared with executives some of the exciting advancements that are making the lives of healthcare practitioners better and more efficient. As many of these healthcare organizations are beginning to experiment with AI solutions, I reminded them that going from the playground to production will require not only new technology but also a new mindset. During the Q&A I received two great questions, and I want to share my answers with you:

Question: What is the most exciting thing you've experienced in healthcare and AI?

Answer: I've seen rapid advancement in the use of AI across the healthcare industry, and one of my favorite use cases is developing personalized medicine and advancing patient-centered care. Imagine a world where we combine what we're learning from companies like 23andMe with predictive analytics to craft highly tailored, personalized treatment plans. We have more data than ever about our biological and physiological makeup, and we can now use it to create better treatment options personalized by demographic and sociographic factors.

Question: What are you most worried about?

Answer: I often get this question across industries like finance and healthcare. As a mom of a child with special needs and a caregiver to my dad, who suffered a traumatic brain injury, I worry that technologists are enthusiastically building healthcare solutions that might unintentionally perpetuate and amplify bias, creating systems that generate unfair outcomes. I'm so worried about this that I now offer services that safeguard generative AI applications as we move from the playground to production. Microsoft recently launched a set of tools that will make this accessible to every organization. I'll share more about this in a moment.

Second, Adobe held its annual Adobe Summit 2024 in Las Vegas this week. As an Adobe AI Insider, I had a chance not only to get front-row seats to the amazing keynotes, but also to have direct one-on-one Q&A time with some of the Adobe digital executives. Here are a couple of announcements I'm most excited about:

Adobe announced GenStudio for creating marketing assets at scale with the help of AI. They revealed new, specialized tooling to help creatives spend more time creating and less time on the routine tasks that fill their days as they roll their work out to the world.

Microsoft and Adobe announced the integration of Adobe Experience Cloud workflows and insights with Microsoft Copilot for Microsoft 365 to help marketers overcome application and data silos and manage everyday workflows more efficiently. Imagine asking generative AI to help create the brief for a new campaign, and then having that brief drive consistency across executive reviews, assets, reports, and messages. Insight integration across Microsoft Teams, Word, and PowerPoint means less hopping between tools to get the work done, allowing marketers and creatives to do more in less time. Let's hope the time they save lets them think and be more creative, and that we don't fill those newfound hours with meetings instead.

Finally, I had the unique opportunity to meet with Microsoft's Responsible AI team last week in New York to test-drive their new tooling for helping organizations safeguard their AI systems. I was asked to be part of this exclusive workshop because I'm a Microsoft AI MVP and I offer AI Red Teaming services through my company, the AI Leadership Institute. One of my key takeaways from the event was how the team is structured, and I think every company can learn something from it.

Three Microsoft executives participated in the workshop: Sarah Bird, Chief Product Officer of Responsible AI; Natasha Crampton, Chief Responsible AI Officer; and Hanna Wallach, Partner Research Manager. Together, these leaders form a kind of three-legged stool for responsible AI at scale: someone who leads engineering, someone who owns policy and governance, and someone who spearheads research. The team also announced updates to its tooling to help organizations build more robust safety systems for their generative AI applications.
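To make the idea of a safety system a little more concrete before we dig in next time: these safeguards typically sit between your users and your model, screening prompts and responses before anything is shown. Below is a minimal sketch using the publicly available Azure AI Content Safety Python SDK, which is one example of the kind of tooling Microsoft offers. The environment variable names, the severity threshold, and the overall wiring are my own illustrative assumptions, not the specific tooling announced at the workshop.

# Minimal sketch (illustrative, not Microsoft's announced tooling):
# screen a generated response with Azure AI Content Safety before showing it.
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Hypothetical environment variables holding your resource endpoint and key.
client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

def is_safe(text: str, max_severity: int = 2) -> bool:
    """Return True if no harm category exceeds the chosen severity threshold.

    The threshold of 2 is an assumption; tune it to your own risk tolerance.
    """
    result = client.analyze_text(AnalyzeTextOptions(text=text))
    return all((item.severity or 0) <= max_severity for item in result.categories_analysis)

draft_response = "..."  # output from your generative model
if is_safe(draft_response):
    print(draft_response)
else:
    print("Response withheld by the safety system.")

In practice you would run a check like this on both the user's prompt and the model's response, and log the results so your AI Red Team can review what the system is catching and what it is missing.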

As you go from AI playgrounds to production, remember to consider not only the excitement of AI, but also how you are safeguarding these solutions to deliver safe and equitable outcomes for everyone involved. AI Red Teaming can help you go from enthusiasm to execution responsibly. Next time we will dive into how to build a safety system for your generative AI applications. As always, connect with me on LinkedIn to learn more about what's happening in responsible AI around the globe.

Posted by Noelle Russell on 04/02/2024

