TY - CHAP
T1 - Towards a Perceptual Evaluation Framework for Lighting Estimation
AU - Giroux, Justine
AU - Dastjerdi, Mohammad Reza Karimi
AU - Hold-Geoffroy, Yannick
AU - Vazquez-Corral, Javier
AU - Lalonde, Jean Francois
N1 - Publisher Copyright:
© 2024 Society for Imaging Science and Technology. All rights reserved.
PY - 2024
Y1 - 2024
N2 - Lighting is a crucial component of an image, especially during the task of virtual object insertion in a photograph, e.g. for AR/VR/MR applications. The human brain is quite attuned to changes in lighting for a given object. Thus, it is imperative to estimate the scene's lighting in order to produce a realistic image. However, this task remains complex, as disentangling lighting from the 3D geometry and the material properties of the objects in the image is an ill-posed problem. For this reason, the community has been developing lighting estimation methods over the past decades, by using handcrafted priors and, more recently, leveraging the power of deep learning. The great variety of lighting estimation methods currently available must be compared to each other in order to quantify their progress with regard to the accuracy and realism of their estimations. For this task, it is standard to use popular image quality assessment (IQA) metrics to compare a rendered virtual object, lit using the predicted lighting from different methods, against the ground truth render. However, standard IQA metrics are not designed to quantify differences in lighting, since they are usually developed for other specific tasks (such as noise perception for compression). Thus, it is unclear if standard IQA metrics are appropriate to use when judging the perceptual quality of renders generated with estimated lighting. In this work, we evaluate whether IQA metrics and human perception align. To do so, we perform a calibrated user study, which allows us to compare the preferences of humans with standard IQA metrics. We demonstrate that they are not in agreement; hence we propose our new IQA metric for lighting estimation, which is in agreement with the perceptual data. Our new perceptual IQA metric shows great generalisation to other lighting estimation methods not included in our dataset, meaning that it will be helpful for the development of new lighting estimation methods.
To encourage future research, all (anonymised) perceptual data and code are available at https://lvsn.github.io/PerceptionMetric/.
AB - Lighting is a crucial component of an image, especially during the task of virtual object insertion in a photograph, e.g. for AR/VR/MR applications. The human brain is quite attuned to changes in lighting for a given object. Thus, it is imperative to estimate the scene's lighting in order to produce a realistic image. However, this task remains complex, as disentangling lighting from the 3D geometry and the material properties of the objects in the image is an ill-posed problem. For this reason, the community has been developing lighting estimation methods over the past decades, by using handcrafted priors and, more recently, leveraging the power of deep learning. The great variety of lighting estimation methods currently available must be compared to each other in order to quantify their progress with regard to the accuracy and realism of their estimations. For this task, it is standard to use popular image quality assessment (IQA) metrics to compare a rendered virtual object, lit using the predicted lighting from different methods, against the ground truth render. However, standard IQA metrics are not designed to quantify differences in lighting, since they are usually developed for other specific tasks (such as noise perception for compression). Thus, it is unclear if standard IQA metrics are appropriate to use when judging the perceptual quality of renders generated with estimated lighting. In this work, we evaluate whether IQA metrics and human perception align. To do so, we perform a calibrated user study, which allows us to compare the preferences of humans with standard IQA metrics. We demonstrate that they are not in agreement; hence we propose our new IQA metric for lighting estimation, which is in agreement with the perceptual data. Our new perceptual IQA metric shows great generalisation to other lighting estimation methods not included in our dataset, meaning that it will be helpful for the development of new lighting estimation methods.
To encourage future research, all (anonymised) perceptual data and code are available at https://lvsn.github.io/PerceptionMetric/.
UR - https://www.scopus.com/pages/publications/105000658932
M3 - Chapter
AN - SCOPUS:105000658932
VL - 32
T3 - Final Program and Proceedings - IS and T/SID Color Imaging Conference
SP - A3-A5
BT - Final Program and Proceedings - IS and T/SID Color Imaging Conference
PB - Society for Imaging Science and Technology
ER -