ABSTRACT
In the scope of explainable artificial intelligence, explanation techniques are heavily studied to increase trust in recommender systems. However, studies on explaining recommendations typically target adults in e-commerce or media contexts; e-learning has received less research attention. To address these gaps, we investigated how explanations affect adolescents’ initial trust in an e-learning platform that recommends mathematics exercises with collaborative filtering. In a randomized controlled experiment with 37 adolescents, we compared real explanations with placebo explanations and no explanations. Our results show that real explanations significantly increased initial trust when trust was measured as a multidimensional construct of competence, benevolence, integrity, intention to return, and perceived transparency. Yet, this result did not hold when trust was measured one-dimensionally. Furthermore, not all adolescents attached equal importance to explanations, and trust scores were high overall. These findings underline the need to tailor explanations and suggest that dynamically learned factors may be more important than explanations for building initial trust. We therefore conclude by reflecting upon the need for explanations and recommendations in e-learning in both low-stakes and high-stakes situations.
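The abstract names collaborative filtering but does not detail the recommendation algorithm, so the sketch below is only a minimal, hypothetical illustration of user-based collaborative filtering over a learner-exercise correctness matrix. The matrix `R`, the function `recommend_exercises`, and the parameters `k` and `top_n` are assumptions for illustration, not the platform's actual implementation.

```python
# Minimal sketch (not the authors' implementation) of user-based collaborative
# filtering: rank unattempted exercises for one learner based on how the most
# similar peers performed on them.
import numpy as np

def recommend_exercises(ratings: np.ndarray, learner: int, k: int = 3, top_n: int = 2):
    """Rank unseen exercises for `learner` from the k most similar peers.

    ratings: (n_learners, n_exercises) matrix; 1 = solved correctly,
             0 = attempted but failed, np.nan = not attempted yet.
    """
    filled = np.nan_to_num(ratings, nan=0.0)               # treat unattempted as 0 for similarity
    norms = np.linalg.norm(filled, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    sims = (filled / norms) @ (filled[learner] / norms[learner])  # cosine similarity to target learner
    sims[learner] = -np.inf                                 # exclude the learner themself
    peers = np.argsort(sims)[-k:]                           # indices of the k nearest peers

    unseen = np.isnan(ratings[learner])                     # only recommend unattempted exercises
    scores = filled[peers].mean(axis=0)                     # average peer success per exercise
    scores[~unseen] = -np.inf
    return np.argsort(scores)[::-1][:top_n]                 # highest-scored unseen exercises first

# Toy usage: 4 learners x 5 exercises.
R = np.array([
    [1, 0, 1, np.nan, np.nan],
    [1, 1, 1, 0, np.nan],
    [0, 0, np.nan, 1, 1],
    [1, np.nan, 1, np.nan, np.nan],
])
print(recommend_exercises(R, learner=3))
```

In practice, an e-learning recommender would also weigh exercise difficulty against the learner's estimated ability; this sketch deliberately keeps only the collaborative-filtering step named in the abstract.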
Supplemental Material
Available for download: datasets, images, recommender system, statistical analysis.
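The downloadable statistical analysis is not reproduced here. As a rough, hypothetical sketch of how the two trust measures described in the abstract could be derived and compared across the three conditions (real, placebo, no explanation), the snippet below averages five assumed subscale columns into a multidimensional trust score and runs a Kruskal-Wallis test on synthetic Likert-style data. All column names, the generated data, and the choice of test are illustrative assumptions, not the released analysis script.

```python
# Illustrative analysis sketch on synthetic data (assumed column names):
# compose a multidimensional trust score from subscale means and compare
# the three explanation conditions on both trust measures.
import numpy as np
import pandas as pd
from scipy.stats import kruskal

rng = np.random.default_rng(0)
subscales = ["competence", "benevolence", "integrity",
             "intention_to_return", "perceived_transparency"]

# Hypothetical 5-point Likert responses for 12 participants per condition.
rows = []
for condition, shift in [("real", 0.4), ("placebo", 0.1), ("none", 0.0)]:
    for _ in range(12):
        answers = np.clip(rng.normal(3.8 + shift, 0.5, size=len(subscales)), 1, 5)
        rows.append({"condition": condition,
                     **dict(zip(subscales, answers)),
                     "trust_1d": float(np.clip(rng.normal(3.9 + shift, 0.6), 1, 5))})
df = pd.DataFrame(rows)

# Multidimensional trust: mean of the five subscale scores per participant.
df["trust_multi"] = df[subscales].mean(axis=1)

# Compare the three explanation conditions on both trust measures.
for dv in ["trust_multi", "trust_1d"]:
    groups = [g[dv].to_numpy() for _, g in df.groupby("condition")]
    stat, p = kruskal(*groups)
    print(f"{dv}: H = {stat:.2f}, p = {p:.3f}")
```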