Session I: How Bias in Our World Translates into AI


Guiding Questions

  1. Why is responsible AI important to my role and company overall? 

  2. What does it mean to develop and deploy AI responsibly? 

  3. How and why do algorithms reflect real-world biases? 

  4. What new biases and risks has generative AI introduced in recent years?

  5. What does due diligence look like when working with AI tools, especially as the technology rapidly evolves and advances? 

  6. How can I, as an individual, and my company as a whole combat AI-enabled discrimination and other related harms?


📖 Introductory Materials


📖 Session I Materials (select at least 3 from the following):

Racial Bias


📝 To Do Before Next Session

  • Begin to outline how to improve AI use within your organization: identify key stakeholders and note examples of how you use AI, including the benefits and drawbacks it has had

  • What are 3 main takeaways from these readings?

  • Why do some algorithms reflect real-world biases? Why are communities of stakeholders important for combating them?

💭 Food For Thought

  • What are 3 goals you hope to achieve by taking this course?

  • What is responsible AI? Why does it matter?


Share your thoughts on the Badge Program here

We would love to hear your feedback and incorporate it.