
Harms

Tech, Culture and AI Design

What impact can value disparities between designers/developers and other stakeholders have on tech?

Learning Objective: At the end of this session, students should be keenly aware of the value disparity that exists between developers/AI practitioners and other stakeholder groups. Using AI systems as a case of radical design, students should also understand how decisions around ethical issues in design are assumed to be made.

  • Maurice Jakesch, Zana Buçinca, Saleema Amershi, and Alexandra Olteanu. 2022. How Different Groups Prioritize Ethical Values for Responsible AI. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22). Association for Computing Machinery, New York, NY, USA, 310–323. https://doi.org/10.1145/3531146.3533097

  • Van Gorp, A., & Van de Poel, I. (2008). Deciding on Ethical Issues in Engineering Design. In: Philosophy and Design. Springer, Dordrecht. https://doi.org/10.1007/978-1-4020-6591-0_6

What kind of culture is driving the tech industry?

Learning Objective: Taking Silicon Valley as an example, students should be aware of the cultural and social context within which one of the most powerful stakeholder groups in AI systems -- developers/corporations -- operates, and how it relates to the values and decisions that underpin AI products/services.

How does tech affect our workplace environment/culture?

Learning Objective: At the end of the session, students should have a better understanding of the impact the deployment of a tech system can have on society -- especially in shaping our workplace environment/culture.

  • K. E. C. Levy, ‘Digital surveillance in the hypermasculine workplace’, Feminist Media Studies, vol. 16, no. 2, pp. 361–365, 2016, https://doi.org/10.1080/14680777.2016.1138607

  • A. Rosenblat and T. Hwang, ‘Regional Diversity in Autonomy and Work: A Case Study from Uber and Lyft Drivers’, Data & Society, Oct. 2016. https://datasociety.net/pubs/ia/Rosenblat-Hwang_Regional_Diversity-10-13.pdf

Transparency and Explainable AI

Understanding the different ways we articulate the need for transparency

Learning Objective: At the end of the session, students should have a technical understanding of the problems of interpretability and explainability as they pertain to ML systems. The next speaker will build on this presentation toward a landscape discussion of solutions.

  • Up to and including Section 4 of R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, and D. Pedreschi, ‘A Survey of Methods for Explaining Black Box Models’, ACM Comput. Surv., vol. 51, no. 5, pp. 1–42, Sep. 2019. https://doi.org/10.1145/3236009

How we try to solve the problem of explainability

Learning Objective: At the end of the session, students should have a high-level understanding of the types of technical approaches to addressing model explanation, outcome explanation, and black box inspection problems.

  • Sections 5 to 8 of R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, and D. Pedreschi, ‘A Survey of Methods for Explaining Black Box Models’, ACM Comput. Surv., vol. 51, no. 5, pp. 1–42, Sep. 2019. https://doi.org/10.1145/3236009

Reflecting on ML transparency approaches to human explanations

Learning Objective: Previous speakers will have already discussed explainable AI approaches. Building on this, the session should demonstrate what a systematic assessment of explainable systems looks like, and how to use the Explainability Fact Sheet.

  • Kacper Sokol and Peter Flach. 2020. Explainability fact sheets: a framework for systematic assessment of explainable approaches. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20). Association for Computing Machinery, New York, NY, USA, 56–67. https://doi.org/10.1145/3351095.3372870

Are there tradeoffs to building explainable systems?

Learning Objective: At the end of the session, students should have a more nuanced understanding of the trade-off relationship explainable models have with other values.

  • Andrew Bell, Ian Solano-Kamaiko, Oded Nov, and Julia Stoyanovich. 2022. It’s Just Not That Simple: An Empirical Study of the Accuracy-Explainability Trade-off in Machine Learning for Public Policy. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22). Association for Computing Machinery, New York, NY, USA, 248–266. https://doi.org/10.1145/3531146.3533090

  • Aparna Balagopalan, Haoran Zhang, Kimia Hamidieh, Thomas Hartvigsen, Frank Rudzicz, and Marzyeh Ghassemi. 2022. The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22). Association for Computing Machinery, New York, NY, USA, 1194–1206. https://doi.org/10.1145/3531146.3533179

Transparency as documentation practice

Learning Objective: At the end of the session, students should know what datasheets for datasets and model cards are, and how to create them for existing systems (a minimal model card sketch follows the readings below).

  • Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé, H., & Crawford, K. (2018). Datasheets for Datasets. http://arxiv.org/abs/1803.09010

  • Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019). Model Cards for Model Reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency, 220–229. https://doi.org/10.1145/3287560.3287596
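To make the documentation practice concrete, here is a minimal, hypothetical sketch of the nine model card sections proposed in Mitchell et al. (2019), written as a Python dictionary. The field contents describe an imaginary toxicity classifier and are placeholders, not an official template.

```python
# Minimal sketch (assumption: an imaginary toxicity classifier) of the nine
# model card sections from Mitchell et al. (2019). All values are placeholders.
model_card = {
    "Model Details": "Toxicity classifier v0.1, trained by the course team (hypothetical).",
    "Intended Use": "Flag comments for human moderator review; not for automated removal.",
    "Factors": "Evaluation disaggregated by identity groups referenced in comments.",
    "Metrics": "False positive / false negative rates per group at a fixed threshold.",
    "Evaluation Data": "Held-out sample of a public comments corpus (placeholder).",
    "Training Data": "Public comments corpus with crowd-sourced toxicity labels (placeholder).",
    "Quantitative Analyses": "Per-group and intersectional error rates.",
    "Ethical Considerations": "Risk of disproportionately flagging identity-related speech.",
    "Caveats and Recommendations": "Re-evaluate before deploying to new domains or languages.",
}

# Render the card as simple section-by-section text.
for section, content in model_card.items():
    print(f"## {section}\n{content}\n")
```

In class, a useful exercise is to fill in such a skeleton for an existing, deployed system and note which sections cannot be completed from publicly available information.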

Model/Domain-Specific Issues

Images

Learning Objective: At the end of the session, students should have a broad understanding of the risks and harms associated with image-related ML tasks.

  • Angelina Wang, Solon Barocas, Kristen Laird, and Hanna Wallach. 2022. Measuring Representational Harms in Image Captioning. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22). Association for Computing Machinery, New York, NY, USA, 324–335. https://doi.org/10.1145/3531146.3533099

  • Charlotte Bird, Eddie Ungless, and Atoosa Kasirzadeh. 2023. Typology of Risks of Generative Text-to-Image Models. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’23), 396–410. https://doi.org/10.1145/3600211.3604722

Audio

Learning Objective: At the end of the session, students should have an understanding of what kinds of ethical risks are associated with generative audio models, and be able to discuss risks/harms not covered by the author.

  • Julia Barnett. 2023. The Ethical Implications of Generative Audio Models: A Systematic Literature Review. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (AIES '23). Association for Computing Machinery, New York, NY, USA, 146–161. https://doi.org/10.1145/3600211.3604686

Large Language Models

Learning Objective: At the end of the session, students should have an understanding of what kinds of risks/harms have been discussed in the literature on LLMs, and how the community is grappling with the need to better evaluate these systems.

  • Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922

  • Ethan Perez, Saffron Huang, Francis Song, Trevor Cai, Roman Ring, John Aslanides, Amelia Glaese, Nat McAleese, and Geoffrey Irving. 2022. Red Teaming Language Models with Language Models. https://doi.org/10.48550/arXiv.2202.03286