Human-AI Interaction

User burdens

  • Park, H., Ahn, D., Hosanagar, K., & Lee, J. (2021). Human-AI Interaction in Human Resource Management: Understanding Why Employees Resist Algorithmic Evaluation at Workplaces and How to Mitigate Burdens. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–15. https://doi.org/10.1145/3411764.3445304

  • Suh, H., Shahriaree, N., Hekler, E. B., & Kientz, J. A. (2016). Developing and Validating the User Burden Scale: A Tool for Assessing User Burden in Computing Systems. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 3988–3999. https://doi.org/10.1145/2858036.2858448

Impact of writing assistants on users

  • Jakesch, M., Bhat, A., Buschek, D., Zalmanson, L., & Naaman, M. (2023). Co-Writing with Opinionated Language Models Affects Users’ Views. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–15. https://doi.org/10.1145/3544548.3581196

  • Bhat, A., Agashe, S., Mohile, N., Oberoi, P., Jangir, R., & Joshi, A. (2023). Interacting with Next-Phrase Suggestions: How Suggestion Systems Aid and Influence the Cognitive Processes of Writing. Proceedings of the 28th International Conference on Intelligent User Interfaces. https://doi.org/10.1145/3581641.3584060

Evaluating language model interaction

  • Lee, M., Srivastava, M., Hardy, A., Thickstun, J., Durmus, E., Paranjape, A., Gerard-Ursin, I., Li, X. L., Ladhak, F., Rong, F., Wang, R. E., Kwon, M., Park, J. S., Cao, H., Lee, T., Bommasani, R., Bernstein, M., & Liang, P. (2022). Evaluating Human-Language Model Interaction (arXiv:2212.09746). arXiv. https://doi.org/10.48550/arXiv.2212.09746

HAI Guidelines

  • Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S., Bennett, P. N., Inkpen, K., Teevan, J., Kikin-Gil, R., & Horvitz, E. (2019). Guidelines for Human-AI Interaction. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–13. https://doi.org/10.1145/3290605.3300233

Do the HAI Guidelines Change Anything?

  • Li, T., Vorvoreanu, M., Debellis, D., & Amershi, S. (2023). Assessing Human-AI Interaction Early through Factorial Surveys: A Study on the Guidelines for Human-AI Interaction. ACM Transactions on Computer-Human Interaction, 30(5), 69:1–69:45. https://doi.org/10.1145/3511605

Trauma-informed computing framework

  • Chen, J. X., McDonald, A., Zou, Y., Tseng, E., Roundy, K. A., Tamersoy, A., Schaub, F., Ristenpart, T., & Dell, N. (2022). Trauma-Informed Computing: Towards Safer Technology Experiences for All. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 1–20. https://doi.org/10.1145/3491102.3517475

Privacy perception

  • Chapter 4 from Hidalgo, C. A., Orghian, D., Albo-Canals, J., Almeida, F. de, & Martín Cantero, N. (2021). How humans judge machines. The MIT Press. https://www.judgingmachines.com/

  • Willems, J., Schmid, M. J., Vanderelst, D., Vogel, D., & Ebinger, F. (2022). AI-driven public services and the privacy paradox: Do citizens really care about their privacy? Public Management Review, 0(0), 1–19. https://doi.org/10.1080/14719037.2022.2063934

Sociotechnical evaluation of generative AI

  • Weidinger, L., Rauh, M., Marchal, N., Manzini, A., Hendricks, L. A., Mateos-Garcia, J., Bergman, S., Kay, J., Griffin, C., Bariach, B., Gabriel, I., Rieser, V., & Isaac, W. (2023). Sociotechnical Safety Evaluation of Generative AI Systems (arXiv:2310.11986). arXiv. https://doi.org/10.48550/arXiv.2310.11986