Due: May 15, 2019 @ 11:59 PM


For the 4-credit assignment, you will complete a review of 2 of the papers listed below. Your review should

  1. Be 4 to 6 pages in length total, so each paper you review should have 2 to 3 pages devoted to it.
  2. Be submitted as a PDF.
  3. Include the following information at the top of the first page:
    1. Your name
    2. Your netid
    3. Title of the first paper you reviewed
    4. Title of the second paper you reviewed

The structure of each review should be as follows:

  1. Summary
    The first paragraph should summarize the goal of the research reported in the paper. You should also give your opinion about the importance of the goal. If this research is successful, what impact will it have?
  2. Analyze Results
    The next several paragraphs should analyze the results reported by the authors. Be detailed and discuss any problems you see with the results (e.g., a small sample size). Do the results support the authors' conclusions or not?
  3. Analyze Impact
    Include a paragraph analyzing whether and how much this work has advanced the field of VR.
  4. Future Work
    Conclude by spending one or more paragraphs proposing how this work could be extended or what related work you could do if you were a VR researcher.

Papers

  • Pfeuffer, K., Geiger, M. J., Prange, S., Mecke, L., Buschek, D., & Alt, F. (2019, May). Behavioural biometrics in VR: Identifying people from body motion and relations in virtual reality. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1-12).[PDF]

    Every person is unique, with individual behavioural characteristics: how one moves, coordinates, and uses their body. In this paper we investigate body motion as behavioural biometrics for virtual reality. In particular, we look into which behaviour is suitable to identify a user. This is valuable in situations where multiple people use a virtual reality environment in parallel, for example in the context of authentication or to adapt the VR environment to users’ preferences. We present a user study (N=22) where people perform controlled VR tasks (pointing, grabbing, walking, typing), monitoring their head, hand, and eye motion data over two sessions. These body segments can be arbitrarily combined into body relations, and we found that these movements and their combination lead to characteristic behavioural patterns. We present an extensive analysis of which motion/relation is useful to identify users in which tasks using classification methods. Our findings are beneficial for researchers and practitioners alike who aim to build novel adaptive and secure user interfaces in virtual reality.
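
    For context on the kind of analysis described above, the sketch below shows one generic way such an identification task could be set up: per-trial summary features are extracted from a motion trace and fed to an off-the-shelf classifier, training on one session and testing on the other. The data format, features, and classifier here are illustrative assumptions for this assignment, not the authors’ actual pipeline.

        # Minimal sketch (not the paper's pipeline): identify which user produced
        # a motion sample, training on session 1 and testing on session 2.
        # The data format and features are hypothetical placeholders.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score

        def motion_features(sample):
            # sample: (T, 3) array of positions for one body segment over a trial.
            velocity = np.diff(sample, axis=0)
            return np.concatenate([
                sample.mean(axis=0), sample.std(axis=0),
                velocity.mean(axis=0), velocity.std(axis=0),
            ])

        def identification_accuracy(session1, session2):
            # session1/session2: lists of (motion_sample, user_id) pairs.
            X_train = np.array([motion_features(s) for s, _ in session1])
            y_train = np.array([uid for _, uid in session1])
            X_test = np.array([motion_features(s) for s, _ in session2])
            y_test = np.array([uid for _, uid in session2])
            clf = RandomForestClassifier(n_estimators=200, random_state=0)
            clf.fit(X_train, y_train)
            return accuracy_score(y_test, clf.predict(X_test))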

  • Schwind, V., Knierim, P., Haas, N., & Henze, N. (2019, May). Using presence questionnaires in virtual reality. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1-12).[PDF]

    Virtual Reality (VR) is gaining increasing importance in science, education, and entertainment. A fundamental characteristic of VR is creating presence, the experience of ‘being’ or ‘acting’, when physically situated in another place. Measuring presence is vital for VR research and development. It is typically repeatedly assessed through questionnaires completed after leaving a VR scene. Requiring participants to leave and re-enter the VR costs time and can cause disorientation. In this paper, we investigate the effect of completing presence questionnaires directly in VR. Thirty-six participants experienced two immersion levels and filled three standardized presence questionnaires in the real world or VR. We found no effect on the questionnaires’ mean scores; however, we found that the variance of those measures significantly depends on the realism of the virtual scene and if the subjects had left the VR. The results indicate that, besides reducing a study’s duration and reducing disorientation, completing questionnaires in VR does not change the measured presence but can increase the consistency of the variance.

  • Alzayat, A., Hancock, M., & Nacenta, M. A. (2019, May). Quantitative Measurement of Tool Embodiment for Virtual Reality Input Alternatives. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1-11).[PDF]

    Virtual reality (VR) strives to replicate the sensation of the physical environment by mimicking people’s perceptions and experience of being elsewhere. These experiences are often mediated by the objects and tools we interact with in the virtual world (e.g., a controller). Evidence from psychology posits that when using the tool proficiently, it becomes embodied (i.e., an extension of one’s body). There is little work, however, on how to measure this phenomenon in VR, and on how different types of tools and controllers can affect the experience of interaction. In this work, we leverage cognitive psychology and philosophy literature to construct the Locus-of-Attention Index (LAI), a measure of tool embodiment. We designed and conducted a study that measures readiness-to-hand and unreadiness-to-hand for three VR interaction techniques: hands, a physical tool, and a VR controller. The study shows that LAI can measure differences in embodiment with working and broken tools and that using the hand directly results in more embodiment than using controllers.

  • Latoschik, M. E., Kern, F., Stauffert, J. P., Bartl, A., Botsch, M., & Lugrin, J. L. (2019). Not Alone Here?! Scalability and User Experience of Embodied Ambient Crowds in Distributed Social Virtual Reality. IEEE transactions on visualization and computer graphics, 25(5), 2134-2144.[PDF]

    This article investigates performance and user experience in Social Virtual Reality (SVR) targeting distributed, embodied, and immersive face-to-face encounters. We demonstrate the close relationship between scalability, reproduction accuracy, and the resulting performance characteristics, as well as the impact of these characteristics on users co-located with larger groups of embodied virtual others. System scalability provides a variable number of co-located avatars and AI-controlled agents with a variety of different appearances, including realistic-looking virtual humans generated from photogrammetry scans. The article reports on how to meet the requirements of embodied SVR with today’s technical off-the-shelf solutions and what to expect regarding features, performance, and potential limitations. Special care has been taken to achieve low latencies and sufficient frame rates necessary for reliable communication of embodied social signals. We propose a hybrid evaluation approach which coherently relates results from technical benchmarks to subjective ratings and which confirms required performance characteristics for the target scenario of larger distributed groups. A user study reveals positive effects of an increasing number of co-located social companions on the quality of experience of virtual worlds, i.e., on presence, possibility of interaction, and co-presence. It also shows that variety in avatar/agent appearance might increase eeriness but might also stimulate an increased interest of participants about the environment.

  • Toothman, N., & Neff, M. (2019, March). The Impact of Avatar Tracking Errors on User Experience in VR. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 756-766). IEEE.[PDF]

    There is evidence that adding motion-tracked avatars to virtual environments increases users’ sense of presence. High quality motion capture systems are cost prohibitive for the average user, and low cost, resource-constrained systems introduce various forms of error to the tracking. Much research has looked at the impact of particular kinds of error, primarily latency, on factors such as body ownership, but it is still not known what level of tracking error is permissible in these systems to afford compelling social interaction. This paper presents a series of experiments employing a sizable subject pool (n=96) that study the impact of motion tracking errors on user experience for activities including social interaction and virtual object manipulation. Diverse forms of error that arise in tracking are examined, including latency, popping (jumps in position), stuttering (positions held in time) and constant noise. The focus is on error on a person’s own avatar, but some conditions also include error on an interlocutor, which appears underexplored. The picture that emerges is complex. Certain forms of error impact performance, a person’s sense of embodiment, enjoyment and perceived usability, while others do not. Notably, evidence was not found that tracking errors impact social presence, even when those errors are severe.
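
    To help think through what these error types look like in practice, the sketch below shows one hypothetical way the four kinds of error named above (latency, popping, stuttering, constant noise) could be injected into a recorded position stream. The functions and parameter choices are illustrative, not taken from the paper.

        # Hypothetical error injection for a tracked position stream.
        # positions: (T, 3) array of tracked positions, one row per frame.
        import numpy as np

        def add_latency(positions, frames_of_delay):
            # Latency: display poses that are several frames old.
            delayed = np.roll(positions, frames_of_delay, axis=0)
            delayed[:frames_of_delay] = positions[0]
            return delayed

        def add_popping(positions, rate, magnitude, rng):
            # Popping: occasional discrete jumps in position.
            out = positions.copy()
            pops = rng.random(len(out)) < rate
            out[pops] += rng.normal(0.0, magnitude, (int(pops.sum()), out.shape[1]))
            return out

        def add_stutter(positions, hold_frames):
            # Stuttering: each position is held (repeated) for several frames.
            idx = (np.arange(len(positions)) // hold_frames) * hold_frames
            return positions[idx]

        def add_noise(positions, sigma, rng):
            # Constant noise: a small random perturbation on every frame.
            return positions + rng.normal(0.0, sigma, positions.shape)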

  • Denes, G., Maruszczyk, K., Ash, G., & Mantiuk, R. K. (2019). Temporal Resolution Multiplexing: Exploiting the limitations of spatio-temporal vision for more efficient VR rendering. IEEE transactions on visualization and computer graphics, 25(5), 2072-2082.[PDF]

    Rendering in virtual reality (VR) requires substantial computational power to generate 90 frames per second at high resolution with good-quality antialiasing. The video data sent to a VR headset requires high bandwidth, achievable only on dedicated links. In this paper we explain how rendering requirements and transmission bandwidth can be reduced using a conceptually simple technique that integrates well with existing rendering pipelines. Every even-numbered frame is rendered at a lower resolution, and every odd-numbered frame is kept at high resolution but is modified in order to compensate for the previous loss of high spatial frequencies. When the frames are seen at a high frame rate, they are fused and perceived as high-resolution and high-frame-rate animation. The technique relies on the limited ability of the visual system to perceive high spatio-temporal frequencies. Despite its conceptual simplicity, correct execution of the technique requires a number of non-trivial steps: display photometric temporal response must be modeled, flicker and motion artifacts must be avoided, and the generated signal must not exceed the dynamic range of the display. Our experiments, performed on a high-frame-rate LCD monitor and OLED-based VR headsets, explore the parameter space of the proposed technique and demonstrate that its perceived quality is indistinguishable from full-resolution rendering. The technique is an attractive alternative to reprojection and resolution reduction of all frames.
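
    The core idea is simple enough to illustrate with a few lines of array code: approximate every even frame at a lower resolution, and add the spatial detail that was lost back into the odd frame, so that the two frames fuse to roughly the intended full-resolution signal at a high frame rate. The sketch below is only a conceptual illustration under a linear-display assumption with normalized pixel values; the actual technique also models the display’s photometric temporal response and takes steps to avoid flicker, motion artifacts, and dynamic-range violations, as the abstract notes.

        # Conceptual sketch only (linear display, values in [0, 1],
        # frame dimensions divisible by `factor`); not the paper's implementation.
        import numpy as np

        def downsample_then_upsample(frame, factor):
            # Stand-in for rendering at lower resolution and upscaling.
            h, w, c = frame.shape
            small = frame.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))
            return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

        def multiplex_pair(even_frame, odd_frame, factor=2):
            # Even frame: low-resolution approximation.
            low_even = downsample_then_upsample(even_frame, factor)
            # Odd frame: full resolution plus the detail the even frame lost,
            # clamped to the display's range so the pair fuses to roughly the
            # intended average of the two original frames.
            compensated_odd = np.clip(odd_frame + (even_frame - low_even), 0.0, 1.0)
            return low_even, compensated_odd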

  • Peillard, E., Thebaud, T., Normand, J. M., Argelaguet, F., Moreau, G., & Lécuyer, A. (2019, March). Virtual Objects Look Farther on the Sides: The Anisotropy of Distance Perception in Virtual Reality. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 227-236). IEEE.[PDF]

    The topic of distance perception has been widely investigated in Virtual Reality (VR). However, the vast majority of previous work has focused on distance perception of objects placed in front of the observer. What happens, then, when the observer looks to the side? In this paper, we study differences in distance estimation when comparing objects placed in front of the observer with objects placed on their side. Through a series of four experiments (n=85), we assessed participants’ distance estimation and ruled out potential biases. In particular, we considered the placement of visual stimuli in the field of view, users’ exploration behavior, as well as the presence of depth cues. For all experiments a two-alternative forced choice (2AFC) standardized psychophysical protocol was employed, in which the main task was to determine which stimulus seemed to be the farthest. In summary, our results showed that the orientation of virtual stimuli with respect to the user introduces a distance perception bias: objects placed on the sides are systematically perceived as farther away than objects in front. In addition, we observed that this bias increases with the angle and appears to be independent of both the position of the object in the field of view and the quality of the virtual scene. This work sheds new light on one of the specificities of VR environments with regard to the wider subject of visual space theory. Our study paves the way for future experiments evaluating the anisotropy of distance perception in real and virtual environments.
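
    If you review this paper, it may help to know how 2AFC responses are commonly summarized: fit a psychometric function to the proportion of “side judged farther” responses as a function of the physical distance difference, and read off the point of subjective equality (PSE). The sketch below shows that generic analysis; it is not necessarily the authors’ exact procedure, and the variable names are assumptions.

        # Generic 2AFC analysis sketch: logistic psychometric fit and PSE.
        import numpy as np
        from scipy.optimize import curve_fit

        def psychometric(x, pse, slope):
            # Probability of judging the side object as farther, given the
            # true distance difference x = d_side - d_front (in metres).
            return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

        def estimate_pse(distance_diffs, judged_side_farther):
            # distance_diffs: d_side - d_front per trial; judged_side_farther: 0/1 per trial.
            params, _ = curve_fit(psychometric, distance_diffs, judged_side_farther,
                                  p0=[0.0, 10.0], maxfev=10000)
            # A negative PSE means the side object is judged farther even when the
            # physical distances are equal, i.e., a bias of the kind reported above.
            return params[0]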

  • Waltemate, T., Gall, D., Roth, D., Botsch, M., & Latoschik, M. E. (2018). The impact of avatar personalization and immersion on virtual body ownership, presence, and emotional response. IEEE transactions on visualization and computer graphics, 24(4), 1643-1652.[PDF]

    This article reports the impact of the degree of personalization and individualization of users’ avatars as well as the impact of the degree of immersion on typical psychophysical factors in embodied Virtual Environments. We investigated if and how virtual body ownership (including agency), presence, and emotional response are influenced depending on the specific look of users’ avatars, which varied between (1) a generic hand-modeled version, (2) a generic scanned version, and (3) an individualized scanned version. The latter two were created using a state-of-the-art photogrammetry method providing a fast 3D-scan and post-process workflow. Users encountered their avatars in a virtual mirror metaphor using two VR setups that provided a varying degree of immersion, (a) a large screen surround projection (L-shape part of a CAVE) and (b) a head-mounted display (HMD). We found several significant as well as a number of notable effects. First, personalized avatars significantly increase body ownership, presence, and dominance compared to their generic counterparts, even if the latter were generated by the same photogrammetry process and hence could be valued as equal in terms of the degree of realism and graphical quality. Second, the degree of immersion significantly increases the body ownership, agency, as well as the feeling of presence. These results substantiate the value of personalized avatars resembling users’ real-world appearances as well as the value of the deployed scanning process to generate avatars for VR-setups where the effect strength might be substantial, e.g., in social Virtual Reality (VR) or in medical VR-based therapies relying on embodied interfaces. Additionally, our results also strengthen the value of fully immersive setups which, today, are accessible for a variety of applications due to the widely available consumer HMDs.