Abstract
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps. While black-boxing AI systems can make the user experience seamless, hiding the seams risks leaving users powerless to mitigate the fallout of AI mistakes. Instead of hiding these imperfections, can we leverage them to help the user? While Explainable AI (XAI) has predominantly tackled algorithmic opaqueness, we propose that seamful design can foster AI explainability by revealing and leveraging sociotechnical and infrastructural mismatches. We introduce the concept of Seamful XAI by (1) conceptually transferring "seams" to the AI context and (2) developing a design process that helps stakeholders anticipate and design with seams. We explore this process with 43 AI practitioners and real end-users, using a scenario-based co-design activity informed by real-world use cases. We found that the Seamful XAI design process helped users foresee AI harms, identify their underlying reasons (seams), locate those seams in the AI's lifecycle, and learn how to leverage seamful information to improve XAI and user agency. We share empirical insights, implications, and reflections on how this process can help practitioners anticipate and craft seams in AI, and on how seamfulness can improve explainability, empower end-users, and facilitate Responsible AI.
Index Terms
- Seamful XAI: Operationalizing Seamful Design in Explainable AI