
Session IV: Tools and Strategies to Operationalize Responsible AI Governance
Guiding Questions
How can companies make credible and impactful commitments to responsible AI?
What are the organizational, technical, or incentive challenges to implementing responsible AI practices in your company?
What strategic goals does your company aim to achieve through operationalizing responsible AI best practices?
📖 Session IV Materials
An Insider’s Guide to Designing and Operationalizing a Responsible AI Governance Framework (EqualAI, 2023) (be on the lookout for our whitepaper from our last AI Summit, co-authored with RAND, due out 2/20)
How companies can take a global approach to AI ethics (HBR, 2024)
Managing RAI requires a central team (MIT, 2023)
Why most AI benchmarks tell us so little (TechCrunch, 2024)
Foundation models: opportunities, risks, and mitigations (IBM, 2024)
Does A.I. Have an Inherent Governance Problem? (The New York Times, 2023)
Responsible Use of Technology: The Microsoft Case Study (WEF, 2021)
BSA's Framework to Build Trust in AI (BSA, 2021) [Read pp. 8-13 only]
Responsible AI Guidelines
Responsible AI Progress Report (Google, 2025)
The ethics of advanced AI assistants (Google DeepMind, 2024)
Introducing the Frontier Safety Framework (Google DeepMind, 2025)
AWS Responsible AI Policy (AWS, 2025)
Advancing AI trust with new responsible AI tools, capabilities, and resources (AWS, 2024)
AWS achieves ISO/IEC 42001:2023 Artificial Intelligence Management System accredited certification (AWS, 2024)
💭 Food For Thought
How can companies create a commitment to responsible AI within their organization? What are some challenges to operationalizing responsible AI practices? How might companies overcome structural challenges or departmental reluctance in order to implement and operationalize responsible AI?
🔗 Let’s connect here
We would love to hear and incorporate your feedback.