Session II:

Creating an Ethical Matrix for AI

Guiding Questions

  1. When should you use an ethical matrix to test an AI system?

  2. Who should participate in building and testing an AI system at various stages of the process?

  3. What might be the business trade-offs of introducing an ethical matrix, or a similar analytical framework, for developing responsible AI?

  4. In the context of your company, what are some key components of an ethical matrix that minimizes AI harm and bias for your consumers/clients while maximizing benefits for all?

  5. What are the best methods for evaluating the question: for whom could this technology fail?
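At its core, an ethical matrix is a grid that crosses stakeholders with the concerns that matter to them, so the team can flag which cells carry the most risk before deployment. The sketch below is one minimal, hypothetical way to represent such a grid in code; the stakeholder names, concerns, and ratings are illustrative assumptions, not a prescribed template.

```python
# Minimal sketch of an ethical matrix as a stakeholders-by-concerns grid.
# All stakeholder names, concerns, and ratings are hypothetical examples.

def build_ethical_matrix(stakeholders, concerns):
    """Return an empty matrix: every (stakeholder, concern) cell starts unrated."""
    return {s: {c: None for c in concerns} for s in stakeholders}

stakeholders = ["loan applicants", "underwriters", "regulators"]
concerns = ["accuracy", "fairness", "transparency"]

matrix = build_ethical_matrix(stakeholders, concerns)

# Cells are rated during a stakeholder workshop; "high" marks a concern
# the team judges to carry significant risk for that group.
matrix["loan applicants"]["fairness"] = "high"
matrix["loan applicants"]["transparency"] = "high"
matrix["underwriters"]["accuracy"] = "high"

def high_risk_cells(matrix):
    """List (stakeholder, concern) pairs flagged as high risk."""
    return [(s, c) for s, row in matrix.items()
            for c, rating in row.items() if rating == "high"]

for stakeholder, concern in high_risk_cells(matrix):
    print(f"Review before launch: {stakeholder} / {concern}")
```

The flagged cells then become the agenda for the "for whom could this fail?" discussion: each high-risk pair names a group and a specific way the system could harm it.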



📝 To Do Before Next Session

  • Think about the different types of potential AI bias you face at work. Consider every stage of the AI lifecycle, from data collection to monitoring the performance of a deployed model.

  • Are there any common denominators across these different biases? If so, what are they? How can you and your team assess these biases and harms to best position yourselves to mitigate them?

💭 Food For Thought

  • What does responsible AI mean within the context of your company? How might we identify weak points in the development and deployment of AI and related technologies?


Share your thoughts on the session here

We would love to hear and incorporate your feedback.