


14th VISIGRAPP 2019: Prague, Czech Republic - Volume 2: HUCAPP
- Manuela Chessa, Alexis Paljic, José Braz:
  Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2019, Volume 2: HUCAPP, Prague, Czech Republic, February 25-27, 2019. SciTePress 2019, ISBN 978-989-758-354-4
Invited Speakers
- Daniel McDuff:
  Building Emotionally Intelligent AI: From Sensing to Synthesis. VISIGRAPP 2019: 5
- Diego Gutierrez:
  Reinventing Movies: How Do We Tell Stories in VR? VISIGRAPP 2019: 7
- Jiri Matas:
  Robust Fitting of Multiple Models in Computer Vision. VISIGRAPP 2019: 9
- Dima Damen:
  A Fine-grained Perspective onto Object Interactions from First-person Views. VISIGRAPP 2019: 11-13
Papers
- Stefano Federici, Maria Laura Mele, Marco Bracalenti, Arianna Buttafuoco, Rosa Lanzilotti, Giuseppe Desolda:
  Bio-behavioral and Self-report User Experience Evaluation of a Usability Assessment Platform (UTAssistant). 19-27
- Tanja Joan Eiler, Armin Grünewald, Rainer Brück:
  Fighting Substance Dependency Combining AAT Therapy and Virtual Reality with Game Design Elements. 28-37
- Maria Laura Mele, Damon Millar, Christiaan Erik Rijnders:
  Explicit and Implicit Measures in Video Quality Assessment. 38-49
- Maxime Reynal, Pietro Aricò, Jean-Paul Imbert, Christophe Hurter, Gianluca Borghini, Gianluca Di Flumeri, Nicolina Sciaraffa, Antonio Di Florio, Michela Terenzi, Ana Ferreira, Simone Pozzi, Viviana Betti, Matteo Marucci, Fabio Babiloni:
  Investigating Multimodal Augmentations Contribution to Remote Control Tower Contexts for Air Traffic Management. 50-61
- Chiara Bassano, Manuela Chessa, Luca Fengone, Luca Isgró, Fabio Solari, Giovanni Spallarossa, Davide Tozzi, Aldo Zini:
  Evaluation of a Virtual Reality System for Ship Handling Simulations. 62-73
- Suzanne Kieffer, Luka Rukonic, Vincent Kervyn de Meerendré, Jean Vanderdonckt:
  Specification of a UX Process Reference Model towards the Strategic Planning of UX Activities. 74-85
- Robin Horst, Sebastian Alberternst, Jan Sutter, Philipp Slusallek, Uwe Kloos, Ralf Dörner:
  Avatar2Avatar: Augmenting the Mutual Visual Communication between Co-located Real and Virtual Environments. 89-96
- Abdikadirova Banu, Praliyev Nurgeldy, Xydas Evagoras:
  Effect of Frequency Level on Vibro-tactile Sound Detection. 97-102
- Anabela Marto, Alexandrino Gonçalves, José Martins, Maximino Bessa:
  Applying UTAUT Model for an Acceptance Study Alluding the Use of Augmented Reality in Archaeological Sites. 111-120
- Andrea Canessa, Paolo Casu, Fabio Solari, Manuela Chessa:
  Comparing Real Walking in Immersive Virtual Reality and in Physical World using Gait Analysis. 121-128
- Fabien Boucaud, Quentin Tafiani, Catherine Pelachaud, Indira Thouvenin:
  Social Touch in Human-agent Interactions in an Immersive Virtual Environment. 129-136
- Vanessa Lopes, João Magalhães, Sofia Cavaco:
  A Dynamic Difficulty Adjustment Model for Dysphonia Therapy Games. 137-144
- Pierre Gac, Paul Richard, Yann Papouin, Sébastien George, Émmanuelle Richard:
  Virtual Interactive Tablet to Support Vocational Training in Immersive Environment. 145-152
- Gabriele Scali, Robert D. Macredie:
  Shared Mental Models as a Way of Managing Transparency in Complex Human-Autonomy Teaming. 153-159
- Loup Vuarnesson:
  Empathic Interaction: Design Guidelines to Induce Flow States in Gestural Interfaces. 160-167
- Cédric Plessiet, Georges Gagneré, Rémy Sohier:
  A Proposal for the Classification of Virtual Character. 168-174
- Nefeli Georgakopoulou, Dionysios Zamplaras, Sofia Kourkoulakou, Chu-Yin Chen, François Garnier:
  Exploring the Virtuality Continuum Frontiers: Multisensory and Magical Experiences in Interactive Art. 175-182
- Almoctar Hassoumi, Christophe Hurter:
  Eye Gesture in a Mixed Reality Environment. 183-187
- Gustaf Bohlin, Kristoffer Linderman, Cecilia Ovesdotter Alm, Reynold Bailey:
  Considerations for Face-based Data Estimates: Affect Reactions to Videos. 188-194
- Fiona Dermody, Alistair Sutherland:
  Practising Public Speaking: User Responses to using a Mirror versus a Multimodal Positive Computing System. 195-201

