Abstract
Knowledge of sign language among hearing people is not widespread, and this can create unpleasant communication barriers with deaf people. One of the biggest challenges is to raise awareness of the importance of sign language while providing support for learning it. Our research aims to offer sign language learners an engaging interactive experience. In this paper, we analyze the engagement of users learning through our intelligent interactive system and show that it achieves higher motivation.
Funding
This research was partially funded by MIUR, PRIN 2017 grant number 2017JMHK4F_004.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Battistoni, P., Di Gregorio, M., Romano, M., Sebillo, M., Vitiello, G., Solimando, G. (2020). Sign Language Interactive Learning - Measuring the User Engagement. In: Zaphiris, P., Ioannou, A. (eds) Learning and Collaboration Technologies. Human and Technology Ecosystems. HCII 2020. Lecture Notes in Computer Science(), vol 12206. Springer, Cham. https://doi.org/10.1007/978-3-030-50506-6_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-50505-9
Online ISBN: 978-3-030-50506-6
eBook Packages: Computer Science (R0)