PUBMED FOR HANDHELDS



  • Title: Contextual kernel and spectral methods for learning the semantics of images.
    Author: Lu Z, Ip HH, Peng Y.
    Journal: IEEE Trans Image Process; 2011 Jun; 20(6):1739-50. PubMed ID: 21193376.
    Abstract:
    This paper presents contextual kernel and spectral methods for learning the semantics of images that allow us to automatically annotate an image with keywords. First, to exploit the context of visual words within images for automatic image annotation, we define a novel spatial string kernel to quantify the similarity between images. Specifically, we represent each image as a 2-D sequence of visual words and measure the similarity between two 2-D sequences using the shared occurrences of s-length 1-D subsequences by decomposing each 2-D sequence into two orthogonal 1-D sequences. Based on our proposed spatial string kernel, we further formulate automatic image annotation as a contextual keyword propagation problem, which can be solved very efficiently by linear programming. Unlike the traditional relevance models that treat each keyword independently, the proposed contextual kernel method for keyword propagation takes into account the semantic context of annotation keywords and propagates multiple keywords simultaneously. Significantly, this type of semantic context can also be incorporated into spectral embedding for refining the annotations of images predicted by keyword propagation. Experiments on three standard image datasets demonstrate that our contextual kernel and spectral methods can achieve significantly better results than the state of the art.
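For readers who want a concrete picture of the spatial string kernel described in the abstract, the Python snippet below is a minimal sketch, not the authors' implementation. It assumes each image has already been quantized into a small 2-D grid of visual-word IDs, decomposes the grid into row-major and column-major 1-D scans, and scores two images by the shared occurrences of s-length subsequences across the two scans. The grid encoding, the default s = 2, and the min-count matching rule are illustrative assumptions.

```python
# Minimal sketch of a spatial string kernel over 2-D grids of visual words.
# Assumptions (not from the paper): grids are lists of equal-length rows of
# integer visual-word IDs; "shared occurrences" is counted as the minimum of
# the two occurrence counts for each common s-gram.

from collections import Counter
from typing import List, Sequence


def _sgram_counts(seq: Sequence[int], s: int) -> Counter:
    """Count contiguous s-length subsequences (s-grams) of a 1-D word sequence."""
    return Counter(tuple(seq[i:i + s]) for i in range(len(seq) - s + 1))


def spatial_string_kernel(grid_a: List[List[int]],
                          grid_b: List[List[int]],
                          s: int = 2) -> int:
    """Similarity of two images given as 2-D grids of visual-word IDs.

    Each 2-D sequence is decomposed into two orthogonal 1-D sequences
    (row-major and column-major scans); the kernel value is the number of
    shared occurrences of s-length subsequences across both scans.
    """
    def scans(grid: List[List[int]]):
        rows = [w for row in grid for w in row]  # row-major 1-D sequence
        cols = [grid[r][c]                        # column-major 1-D sequence
                for c in range(len(grid[0]))
                for r in range(len(grid))]
        return rows, cols

    total = 0
    for seq_a, seq_b in zip(scans(grid_a), scans(grid_b)):
        ca, cb = _sgram_counts(seq_a, s), _sgram_counts(seq_b, s)
        # Shared occurrences: sum over common s-grams of the smaller count.
        total += sum(min(ca[g], cb[g]) for g in ca.keys() & cb.keys())
    return total


if __name__ == "__main__":
    # Two tiny 3x3 "images" over a toy visual-word vocabulary {0..4}.
    img1 = [[0, 1, 2], [1, 2, 3], [2, 3, 4]]
    img2 = [[0, 1, 2], [2, 3, 4], [1, 2, 3]]
    print(spatial_string_kernel(img1, img2, s=2))
```

In the paper, kernel values of this kind feed a contextual keyword propagation step (solved by linear programming) and a spectral-embedding refinement; those stages are not reproduced here.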