Conference paper, 2014

Towards improving the e-learning experience for deaf students: e-LUX

Vers l'amélioration de l'expérience d'apprentissage électronique pour les étudiants sourds: e-LUX

Verso il miglioramento dell'approccio all'e-learning per gli studenti sordi: e-LUX

Abstract

Just like any other software, e-learning applications need to be understood by their users in order to be effectively exploited. This particular class of software requires an even more careful design, during which many issues must be addressed. In most cases, attention merely focuses on features available on the container, i.e. the software platform used to deploy the e-learning content [1]. However, content, i.e. the learning material, is even more important when addressing users with special needs [2, 6]. This calls for devising ways to transmit information through the sensorial channels best suited to each category of users. Besides usability, accessibility becomes a main requirement.

Deaf people are heavily affected by the digital divide. Most accessibility guidelines addressing their needs deal only with captioning and audio-content transcription. Only a few organizations, like W3C [11], have produced guidelines dealing with a most distinctive feature of deaf people: Sign Language (SL). SL is, in fact, the visual-gestural language used by many deaf people to communicate among themselves. The present work aims at e-learning accessibility for deaf people. In particular, we propose preliminary solutions to tailor activities which can be more fruitful when performed in one's own "native" language. A condition to achieve this goal for deaf learners is integrating SL resources and tools within e-learning applications, since the benefits of this methodology cannot be matched by any other accessibility solution [7].

Videos are a powerful resource in deaf-oriented accessibility; nonetheless, written language cannot be completely replaced by video resources. Some functions cannot be effectively supported by video resources alone; they range from general ones, such as searching and browsing, to more content-specific ones, like annotation and tagging. The communication channel employed and the simultaneity of the spatial-temporal patterns by which information is transmitted make it impossible to represent SL using the same systems developed for spoken languages. Many writing systems have been devised for SL (see [5] for a comparison), but none has been widely accepted by the deaf communities. We focused on SignWriting (SW), a writing system using visual symbols to represent the handshapes, movements, and facial expressions of any SL [10] (see Fig. 01 for an example).

Figure 01: LIS sign for "FUN", written in SW.

SW has already been successfully exploited to build entire deaf-oriented websites, such as the ASL Wikipedia Project [9], which employs deaf ASL signers to produce ASL Wikipedia articles. While SW can simply be written with pencil and paper, various digital editors have been developed to ensure its diffusion. These editors essentially make it possible to write signs and save them in different formats; our proposal is to introduce SL integration in e-learning platforms through the use of SW editors. Our team has devised SWift, a web-based SW editor [3], whose features have been conceived together with its main target users, deaf people, to ensure a high degree of usability and accessibility [2]. Since many e-learning environments are implemented as web-based applications, they can easily embed SWift to achieve prompt SL support for e-learning, allowing didactic experts to design Learning Objects (LOs) written in SL and deaf students to learn using their own language.
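To make the embedding scenario concrete, the following is a minimal sketch of how a web-based e-learning authoring page might host a SignWriting editor such as SWift and collect the composed sign. The URL, message format, and field names are illustrative assumptions, not part of SWift's documented interface.

```typescript
// Sketch: embedding a hypothetical web-based SignWriting editor inside an
// e-learning authoring page. All URLs, message types, and field names are
// illustrative assumptions, not SWift's actual API.

interface SignWritingResult {
  // Machine-encoded representation of the composed sign (e.g. the editor's
  // own serialization format).
  encodedSign: string;
  // Optional rendered preview (data URL) for immediate display in the LO.
  previewImage?: string;
}

function embedSignWritingEditor(
  container: HTMLElement,
  editorUrl: string,
  onSignComposed: (result: SignWritingResult) => void,
): HTMLIFrameElement {
  const frame = document.createElement("iframe");
  frame.src = editorUrl;
  frame.width = "100%";
  frame.height = "480";
  container.appendChild(frame);

  // Listen for the sign the user composes inside the embedded editor.
  window.addEventListener("message", (event: MessageEvent) => {
    // Only accept messages coming from the editor's origin.
    if (event.origin !== new URL(editorUrl).origin) return;
    const data = event.data;
    if (data && data.type === "sign-composed") {
      onSignComposed({
        encodedSign: data.encodedSign,
        previewImage: data.previewImage,
      });
    }
  });

  return frame;
}

// Usage: attach the editor to a Learning Object authoring form and store
// the encoded sign alongside the LO's other content.
embedSignWritingEditor(
  document.getElementById("sl-editor-slot")!,
  "https://example.org/swift/editor", // illustrative URL
  (result) => console.log("Composed sign:", result.encodedSign),
);
```

In such a setup the encoded sign could be stored together with the rest of the LO content, so that it can later be rendered, searched, or annotated like any other text field.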
A first step towards the integration of SW within didactic content authoring/fruition interfaces has been the integration of SWift into DELE, a platform designed around the essentially visual learning style of deaf people [4]. Despite these efforts, SW editors are still far from providing a composition interface able to match the simplicity of handwriting. Currently, all SW editors rely heavily on WIMP interfaces, both for accessing the application features and for the SW production process. This can make composing a SW text rather laborious.

We evaluated the possibility of designing a new generation of SW editing applications to partially overcome the constraints of WIMP interfaces. The new tools are intended to relieve the user of any burden related to clicking, dragging, and browsing during the SW production process, and to provide an interaction style as similar as possible to natural handwriting. We are trying to achieve this goal by producing a SignWriting Optical Glyph Recognition (SW-OGR) engine, which converts images containing handwritten SW symbols, produced in real time or batch-fed, into machine-encoded SW text.

In the present work we introduce the architectural design for the new generation of SW editors featuring a SW-OGR engine. Our aim is to support both the creation and the fruition of SW LOs by making them more natural and faster to handle in digital settings. We hope in this way to offer more and more deaf people the chance to access distance learning. Furthermore, once SW-OGR is fully working, it will also be possible to exploit it in touch-screen-based interaction and in "transcribing" the content of signed videos, thus providing a complete set of tools to support SL.
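To sketch the intended SW-OGR data flow (an image in, machine-encoded SW text out), the fragment below spells out the stages in a hedged form. The type names, the placeholder recognition stage, and the FSW-like output string are assumptions made for illustration and do not reflect the actual SW-OGR architecture.

```typescript
// Sketch of the data flow in an optical glyph recognition engine for
// handwritten SignWriting: an image goes through glyph segmentation and
// classification, and the result is serialized as machine-encoded SW text.
// Types, function names, and the output format are illustrative assumptions.

interface RecognizedGlyph {
  symbolCode: string; // e.g. an ISWA-style symbol identifier
  x: number;          // glyph position inside the sign box
  y: number;
}

// Placeholder for the recognition stages (binarization, glyph segmentation,
// classification); a real engine would implement these over the image.
function recognizeGlyphs(image: ImageData): RecognizedGlyph[] {
  // ... segmentation and classification would happen here ...
  return [];
}

// Serialize recognized glyphs into machine-encoded SW text, shown here as a
// simple FSW-like positional string.
function encodeSign(glyphs: RecognizedGlyph[]): string {
  return glyphs
    .map((g) => `${g.symbolCode}${Math.round(g.x)}x${Math.round(g.y)}`)
    .join("");
}

// End-to-end: the same entry point serves both real-time (pen/touch) input
// and batch-fed scanned pages.
function recognizeHandwrittenSign(image: ImageData): string {
  return encodeSign(recognizeGlyphs(image));
}

// Example: serializing a pre-classified sign with two glyphs.
console.log(
  encodeSign([
    { symbolCode: "S14c20", x: 481, y: 471 },
    { symbolCode: "S27106", x: 503, y: 489 },
  ]),
); // -> "S14c20481x471S27106503x489"
```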
References

1. Ardito, C., Costabile, M., De Marsico, M., Lanzilotti, R., Levialdi, S., Roselli, T., & Rossano, V. (2006). An approach to usability evaluation of e-learning applications. Universal Access in the Information Society, 4(3), 270-283. doi: 10.1007/s10209-005-0008-6
2. Bianchini, C. S., Borgia, F., Bottoni, P., & De Marsico, M. (2012). SWift: a SignWriting improved fast transcriber. In G. Tortora, S. Levialdi, & M. Tucci (Eds.), AVI (pp. 390-393). ACM. doi: 10.1145/2254556.2254631
3. Borgia, F. (2010). SWift: SignWriting improved fast transcriber (Unpublished master's thesis). Sapienza Università di Roma, Rome.
4. Bottoni, P., Borgia, F., Buccarella, D., Capuano, D., De Marsico, M., & Labella, A. (2013). Stories and signs in an e-learning environment for deaf people. International Journal on Universal Access in the Information Society, 12(4), 369-386.
5. Channon, R., & van der Hulst, H. (2010). Notation Systems. In D. Brentari (Ed.), Sign Languages. Cambridge: Cambridge University Press.
6. De Marsico, M., Kimani, S., Mirabella, V., Norman, K., & Catarci, T. (2006). A proposal toward the development of accessible e-learning content by human involvement. Universal Access in the Information Society, 5(2), 150-169. doi: 10.1007/s10209-006-0035-y
7. Fajardo, I., Vigo, M., & Salmerón, L. (2009). Technology for supporting web information search and learning in Sign Language. Interacting with Computers, 21(4), 243-256. doi: 10.1016/j.intcom.2009.05.005
8. Norman, D. A., & Draper, S. W. (1986). User Centered System Design. L. Erlbaum Associates, USA.
9. Sutton, V. (n.d.). ASL Wikipedia Project. Retrieved November 10, 2013, from http://ase.wikipedia.wmflabs.org/wiki/Main Page
10. Sutton, V. (1995). Lessons in SignWriting. Deaf Action Committee for SignWriting, USA.
11. W3C. (2008, December 11). Web Content Accessibility Guidelines 2.0 (WCAG 2.0). Retrieved November 11, 2013, from http://www.w3.org/TR/WCAG
4C C027 2014-presentazione+ Heraklion x HAL.pdf (1.43 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02558857, version 1 (15-05-2020)

Identifiers

  • HAL Id: hal-02558857, version 1

Cite

Fabrizio Borgia, Claudia S. Bianchini, Maria de Marsico. Towards improving the e-learning experience for deaf students: e-LUX. Human-Computer Interaction International Conf. (HCII2014), Jun 2014, Heraklion, Greece. ⟨hal-02558857⟩
