Research Article | Open Access

Seamful XAI: Operationalizing Seamful Design in Explainable AI

Published: 26 April 2024

Abstract

Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps. While black-boxing AI systems can make the user experience seamless, hiding the seams risks disempowering users from mitigating the fallout of AI mistakes. Instead of hiding these AI imperfections, can we leverage them to help the user? While Explainable AI (XAI) has predominantly tackled algorithmic opaqueness, we propose that seamful design can foster AI explainability by revealing and leveraging sociotechnical and infrastructural mismatches. We introduce the concept of Seamful XAI by (1) conceptually transferring "seams" to the AI context and (2) developing a design process that helps stakeholders anticipate and design with seams. We explore this process with 43 AI practitioners and real end-users, using a scenario-based co-design activity informed by real-world use cases. We found that the Seamful XAI design process helped users foresee AI harms, identify underlying reasons (seams), locate them in the AI's lifecycle, and learn how to leverage seamful information to improve XAI and user agency. We share empirical insights, implications, and reflections on how this process can help practitioners anticipate and craft seams in AI, and on how seamfulness can improve explainability, empower end-users, and facilitate Responsible AI.



Published in

Proceedings of the ACM on Human-Computer Interaction, Volume 8, Issue CSCW1 (April 2024), 6294 pages
EISSN: 2573-0142
DOI: 10.1145/3661497

Copyright © 2024 Owner/Author

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States
