Cosaliency: Where People Look When Comparing Images


David E. Jacobs Dan B Goldman Eli Shechtman

UIST 2010

An overview of our algorithm and its results for two pairs of images. From left to right: standard thumbnails for the input image pair, our computed model of image cosaliency, its processed version, and our automatically generated collection-aware crops. Note that small image features, such as the position of the woman's arm or the angle of the bird's head, are nearly impossible to discern from standard thumbnails alone.

Abstract

Image triage is a common task in digital photography. Determining which photos are worth processing for sharing with friends and family and which should be deleted to make room for new ones can be a challenge, especially on a device with a small screen like a mobile phone or camera. In this work we explore the importance of local structure changes, such as changes in human pose, appearance, and object orientation, to the photographic triage task. We perform a user study in which subjects are asked to mark the regions of image pairs most useful in making triage decisions. From this data, we train a model of image saliency in the context of other images, which we call cosaliency. This allows us to create collection-aware crops that augment the information provided by existing thumbnailing techniques for the image triage task.
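The paper's cosaliency model is learned from the user-study data described above. As a rough illustration of the overall pipeline only, the sketch below approximates cosaliency with a smoothed per-pixel color difference between a pre-aligned image pair and then crops each image around the peak of the resulting map. All function names, parameters, and file names here are hypothetical, and the difference heuristic stands in for the trained model; it is a minimal sketch, not the authors' implementation.

```python
# Minimal cosaliency-style sketch (illustrative only, not the paper's method).
# Assumes the two images are the same size, roughly aligned, and at least
# `size` pixels in each dimension.
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

def cosaliency_map(img_a, img_b, sigma=8.0):
    """Approximate cosaliency as a smoothed per-pixel color difference."""
    a = np.asarray(img_a, dtype=np.float32) / 255.0
    b = np.asarray(img_b, dtype=np.float32) / 255.0
    diff = np.linalg.norm(a - b, axis=-1)      # per-pixel color distance
    return gaussian_filter(diff, sigma=sigma)  # suppress pixel-level noise

def collection_aware_crop(img, saliency, size=128):
    """Crop a size x size window centered on the saliency peak."""
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    h, w = saliency.shape
    half = size // 2
    left = min(max(x - half, 0), w - size)  # clamp window to image bounds
    top = min(max(y - half, 0), h - size)
    return img.crop((left, top, left + size, top + size))

if __name__ == "__main__":
    a = Image.open("shot_a.jpg").convert("RGB")
    b = Image.open("shot_b.jpg").convert("RGB")
    s = cosaliency_map(a, b)
    collection_aware_crop(a, s).save("crop_a.png")
    collection_aware_crop(b, s).save("crop_b.png")
```

Because the same map is used to crop both images, the two crops show the same region, which is what lets a viewer compare the differing detail (a pose, a head angle) that standard thumbnails would hide.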

Paper: cosaliency.pdf


Citation: David E. Jacobs, Dan B Goldman, and Eli Shechtman. Cosaliency: Where People Look When Comparing Images. In Proceedings of the ACM Symposium on User Interface Software and Technology (UIST 2010), October 2010.


BibTeX: jacobs_UIST2010.bib