DOI: 10.1145/3490099.3511140 · IUI Conference Proceedings · research-article

Explaining Recommendations in E-Learning: Effects on Adolescents' Trust

Published: 22 March 2022

ABSTRACT

In the scope of explainable artificial intelligence, explanation techniques are heavily studied as a way to increase trust in recommender systems. However, studies on explaining recommendations typically target adults in e-commerce or media contexts; e-learning has received far less research attention. To address this gap, we investigated how explanations affect adolescents' initial trust in an e-learning platform that recommends mathematics exercises with collaborative filtering. In a randomized controlled experiment with 37 adolescents, we compared real explanations with placebo and no explanations. Our results show that real explanations significantly increased initial trust when trust was measured as a multidimensional construct of competence, benevolence, integrity, intention to return, and perceived transparency. Yet, this result did not hold when trust was measured one-dimensionally. Furthermore, not all adolescents attached equal importance to explanations, and trust scores were high overall. These findings underline the need to tailor explanations and suggest that dynamically learned factors may be more important than explanations for building initial trust. To conclude, we reflect upon the need for explanations and recommendations in e-learning in both low-stakes and high-stakes situations.
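The abstract mentions that the platform recommends exercises with collaborative filtering. As an illustration only (not the authors' implementation, and all names here are hypothetical), a minimal user-based variant on a student-by-exercise score matrix might look like this sketch:

```python
import numpy as np

def recommend_exercises(interactions, student, k=2, n=3):
    """Minimal user-based collaborative filtering sketch.

    interactions: 2D array (students x exercises) of scores in [0, 1]
    (e.g. fraction of correct attempts); 0 also stands for "not tried".
    Returns indices of up to `n` untried exercises for `student`,
    ranked by the average performance of the `k` most similar students.
    """
    X = np.asarray(interactions, dtype=float)
    target = X[student]
    # Cosine similarity between the target student and every student.
    norms = np.linalg.norm(X, axis=1) * np.linalg.norm(target) + 1e-9
    sims = X @ target / norms
    sims[student] = -np.inf                 # exclude the student themself
    neighbours = np.argsort(sims)[-k:]      # k most similar students
    # Score exercises by the neighbours' average performance.
    scores = X[neighbours].mean(axis=0)
    scores[target > 0] = -np.inf            # skip already-tried exercises
    ranked = np.argsort(scores)[::-1][:n]
    return [i for i in ranked if scores[i] > 0]
```

A natural-language explanation of the kind studied in the paper could then be phrased over the output, e.g. "students who solved similar exercises also practiced these".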


            • Published in

              cover image ACM Conferences
              IUI '22: Proceedings of the 27th International Conference on Intelligent User Interfaces
              March 2022
              888 pages
              ISBN:9781450391443
              DOI:10.1145/3490099

              Copyright © 2022 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Qualifiers: research-article · Refereed limited

Overall acceptance rate: 746 of 2,811 submissions (27%)
