
Computer Optics, 2020 Volume 44, Issue 4, Pages 618–626 (Mi co828)


IMAGE PROCESSING, PATTERN RECOGNITION

Visual preferences prediction for a photo gallery based on image captioning methods

A. S. Kharchevnikova, A. V. Savchenko

National Research University Higher School of Economics, Nizhny Novgorod, Russia

Abstract: The paper addresses the problem of extracting user preferences from a photo gallery. We propose a novel approach based on image captioning, i.e., automatic generation of textual descriptions of photos, followed by their classification. Known image captioning methods based on convolutional and recurrent (long short-term memory, LSTM) neural networks are analyzed. We train several models that combine the visual features of a photograph with the outputs of an LSTM block, using Google's Conceptual Captions dataset. We examine the application of natural language processing algorithms to transform the obtained textual annotations into user preferences. Experimental studies are carried out on Microsoft COCO Captions, Flickr8k, and a specially collected dataset reflecting users' interests. It is demonstrated that the best preference-prediction quality is achieved with the keyword-search and text-summarization methods from the Watson API, which are 8% more accurate than traditional latent Dirichlet allocation. Moreover, descriptions generated by the trained neural models are classified 1–7% more accurately than those produced by known image captioning models.
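The second stage of the described pipeline, mapping generated captions to user preferences, can be illustrated with a minimal keyword-search sketch. The category names and keyword lists below are purely illustrative placeholders, not taken from the paper, and the real system uses the Watson API rather than this hand-written lookup:

```python
# Hypothetical sketch of the caption-to-preference step: given textual
# captions generated for a user's photos, assign each to a preference
# category by keyword search, then aggregate counts over the gallery.
from collections import Counter
from typing import Optional

# Illustrative categories and keywords; not from the original paper.
CATEGORY_KEYWORDS = {
    "pets": {"dog", "cat", "puppy", "kitten"},
    "travel": {"beach", "mountain", "city", "plane"},
    "food": {"pizza", "cake", "plate", "sandwich"},
}

def classify_caption(caption: str) -> Optional[str]:
    """Return the first category whose keywords occur in the caption."""
    tokens = set(caption.lower().split())
    for category, keywords in CATEGORY_KEYWORDS.items():
        if tokens & keywords:
            return category
    return None  # caption matched no known preference category

def gallery_preferences(captions: list) -> Counter:
    """Aggregate per-photo categories into gallery-level preference counts."""
    return Counter(c for c in map(classify_caption, captions) if c)
```

A gallery whose captions mention dogs, cats, and pizza would yield a preference profile dominated by the "pets" category; in the paper this aggregation is what turns per-photo annotations into a user model.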

Keywords: user modeling, image processing, image captioning, convolutional neural networks.

Received: 13.12.2019
Accepted: 06.03.2020

DOI: 10.18287/2412-6179-CO-678


