DSpace@TEDU

Deep into visual saliency for immersive VR environments rendered in real-time


dc.contributor.author Albayrak, Dilara
dc.contributor.author Çapın, Tolga K.
dc.contributor.author Çelikcan, Ufuk
dc.contributor.author Aşkın, Mehmet Bahadır
dc.date.accessioned 2021-01-12T12:59:01Z
dc.date.available 2021-01-12T12:59:01Z
dc.date.issued 2020-05
dc.identifier.issn 0097-8493
dc.identifier.uri https://doi.org/10.1016/j.cag.2020.03.006
dc.identifier.uri http://hdl.handle.net/20.500.12485/706
dc.description.abstract As virtual reality (VR) headsets with head-mounted displays (HMDs) are becoming increasingly prevalent, new research questions are arising. One of the emergent questions is how best to employ visual saliency prediction in VR applications using the current line of advanced HMDs. Due to the complex nature of the human visual attention mechanism, the problem needs to be investigated from different points of view using different approaches. With such an outlook, this work extends the previous effort in exploring a set of well-studied visual saliency cues, and saliency prediction methods that make use of these cues, with the aim of assessing how applicable they are for estimating visual saliency in immersive VR environments that are rendered in real-time and experienced with consumer HMDs. To that end, a new user study was conducted with a larger sample, revealing the effects on visual saliency of experiencing dynamic computer-generated scenes at reduced navigation speeds. Using these scenes, which offer varying visual experiences in terms of content and range of depth-of-field, the study also compares VR viewing to 2D desktop viewing with an expanded set of results. The presented evaluation offers the most in-depth view of visual saliency in immersive, real-time rendered VR to date. The analysis encompassing the results of both studies indicates that decreasing navigation speed reduces the contribution of the depth cue to visual saliency and boosts cues based only on 2D image features. While there are content-dependent variances among their scores, the saliency prediction methods based on boundary connectivity and surroundedness are seen to work best in general for the given settings. (C) 2020 Elsevier Ltd. All rights reserved. en_US
dc.language.iso en en_US
dc.publisher PERGAMON-ELSEVIER SCIENCE LTD, THE BOULEVARD, LANGFORD LANE, KIDLINGTON, OXFORD OX5 1GB, ENGLAND en_US
dc.subject Visual saliency en_US
dc.subject Virtual reality en_US
dc.subject Stereographics en_US
dc.title Deep into visual saliency for immersive VR environments rendered in real-time en_US
dc.type Article en_US
dc.relation.journal COMPUTERS & GRAPHICS-UK en_US
dc.identifier.startpage 70 en_US
dc.identifier.endpage 82 en_US
dc.identifier.volume 88 en_US


Files in this item

There are no files associated with this item.
