Sign Language Interactive Learning - Measuring the User Engagement

  • Conference paper
  • Part of the proceedings: Learning and Collaboration Technologies. Human and Technology Ecosystems (HCII 2020)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12206)

Included in the conference series: HCII, International Conference on Human-Computer Interaction

Abstract

Knowledge of sign language among hearing people is not widespread, and this can create unpleasant communication barriers with deaf people. One of the biggest challenges is to raise awareness of the importance of sign language while also providing support for learning it. Our research aims to offer sign language learners an engaging interactive experience. In this paper we analyze the engagement of users learning through our intelligent interactive system and show that it leads to higher motivation.


Funding

This research was partially funded by MIUR, PRIN 2017, grant number 2017JMHK4F_004.

Author information

Corresponding author

Correspondence to Marco Romano.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Battistoni, P., Di Gregorio, M., Romano, M., Sebillo, M., Vitiello, G., Solimando, G. (2020). Sign Language Interactive Learning - Measuring the User Engagement. In: Zaphiris, P., Ioannou, A. (eds.) Learning and Collaboration Technologies. Human and Technology Ecosystems. HCII 2020. Lecture Notes in Computer Science, vol 12206. Springer, Cham. https://doi.org/10.1007/978-3-030-50506-6_1

  • DOI: https://doi.org/10.1007/978-3-030-50506-6_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-50505-9

  • Online ISBN: 978-3-030-50506-6

  • eBook Packages: Computer Science, Computer Science (R0)
