
Session II:
Creating an Ethical Matrix for AI
Guiding Questions
When should you use an ethical matrix to test an AI system?
Who should participate in building and testing an AI system at various stages of the process?
What might be the business trade-offs of introducing an ethical matrix, or a similar analytical framework, for developing responsible AI?
In the context of your company, what are some key components of an ethical matrix that would minimize AI harm and bias for your consumers and clients while maximizing benefits for all? (A simplified sketch of such a matrix follows these questions.)
What are the best methods for evaluating the question: for whom could this technology fail?
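As a concrete reference point for these questions, below is one simplified, hypothetical sketch of an ethical matrix for a consumer credit-scoring model. Stakeholders form the rows, ethical concerns form the columns, and each cell notes how a concern could play out for a given stakeholder; the stakeholders, concerns, and entries here are illustrative assumptions rather than a recommended template.

Stakeholder        | Fairness                               | Accuracy                            | Privacy                            | Transparency
Applicants         | Disparate denial rates across groups   | Wrongly denied or mispriced credit  | Sensitive personal data collected  | No explanation of decisions
Underwriting team  | Pressure to override or defer to model | Misplaced trust in model scores     | Broad access to applicant data     | Difficulty justifying outcomes
Regulators         | Fair-lending compliance                | Error rates and model drift         | Data-handling obligations          | Auditability of the model
The company        | Reputational and legal exposure        | Credit losses from poor scoring     | Liability for breaches or misuse   | Ability to contest audit findings

In practice, the rows and columns would typically be drawn from the stakeholders identified in the second guiding question, and the cells judged highest-risk become the concrete test cases for "For whom could this technology fail?"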
📖 Session II Materials
Ethical Frameworks for AI Aren't Enough (HBR, 2020)
Could Algorithm Audits Curb AI Bias? (LexisNexis, 2022)
Understanding AI Harms: An Overview (CSET, 2023)
AI is Biased. The White House is Working with Hackers to Try to Fix that (NPR, 2023)
A Framework to Mitigate Bias and Improve Outcomes in the New Age of AI (AWS, 2023)
Resolving Ethics Trade-offs in Implementing Responsible AI (arXiv, 2024)
Implementing Responsible AI: Tensions and Trade-offs Between Ethics Aspects (Montreal AI Ethics Institute, 2023)
EqualAI AI Impact Assessment (EqualAI)
Race and Gender
When the Robot Doesn't See Dark Skin (NYT, 2019)
Biased AI Perpetuates Racial Injustice (TechCrunch, Vogel, 2020)
Facebook Apologizes after Its AI Labels Black Men as “Primates” (NPR, 2021)
The Apple Card Didn't "See" Gender – And That's the Problem (Wired, 2019)
Firing Decisions and Economic Injustice
Fired by Bot at Amazon: "It's You Against the Machine" (Bloomberg, 2021)
Couriers Say Uber's "Racist" Facial Identification Tech Got Them Fired (Wired UK, 2021)
Additional Resources
Additional useful resources are available here
📝 To Do Before Next Session
Think about the different types of potential AI bias you face at work. These can arise at any point in the AI lifecycle, from data collection to monitoring the performance of an AI model.
Are there any common denominators across these different biases? If so, what are they? How can you and your team assess these biases and harms to best position yourselves to mitigate them?
💭 Food For Thought
What does responsible AI mean within the context of your company? How might you identify weak points in the development and deployment of AI and related technologies?
Share your thoughts on the session here.
We would love to hear and incorporate your feedback.