ISMAR 2018

Stefan Werrlich, Alexandra Ginger, Austino Daniel, Phuc-Anh Nguyen, and Gunther Notni. Comparing HMD-Based and Paper-Based Training. In Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR) 2018 (to appear).


Collaborative systems are in daily use by millions of people and promise to improve everyone's life. Smartphones, smartwatches, and tablets are everyday objects, and life without them is hard to imagine. New assistive systems such as head-mounted displays (HMDs) are becoming increasingly important in various domains, especially the industrial domain, because they claim to improve the efficiency and quality of procedural tasks. A range of scientific laboratory studies has already demonstrated the potential of augmented reality (AR) technologies, especially for training tasks. However, most studies are limited by inadequate task complexity, a narrow set of measured variables, and a lack of comparisons. In this paper, we aim to close this gap by introducing a novel multimodal HMD-based training application and comparing it to paper-based learning for manual assembly tasks. We conducted a user study with 30 participants, measuring the training transfer of an engine assembly training task, user satisfaction, and perceived workload during the experiment. Established questionnaires, namely the System Usability Scale (SUS), the User Experience Questionnaire (UEQ), and the NASA Task Load Index (NASA-TLX), were used for the assessment. Results indicate significant differences between the two learning approaches: participants performed significantly faster, but with significantly more errors, when using paper-based instructions. Furthermore, all trainees preferred HMD-based learning for future assembly trainings, a preference confirmed by the UEQ results.