PubMed for Handhelds
Title: Crowdsourcing for reference correspondence generation in endoscopic images.
Authors: Maier-Hein L, Mersmann S, Kondermann D, Stock C, Kenngott HG, Sanchez A, Wagner M, Preukschas A, Wekerle AL, Helfert S, Bodenstedt S, Speidel S.
Journal: Med Image Comput Comput Assist Interv. 2014;17(Pt 2):349-56.
PubMed ID: 25485398.
Abstract: Computer-assisted minimally invasive surgery (MIS) is often based on algorithms that require establishing correspondences between endoscopic images. However, the reference annotations required to train or validate such methods are extremely difficult to obtain because they are typically made by medical experts with very limited resources, and publicly available data sets are still far too small to capture the wide range of anatomical and scene variance. Crowdsourcing is an emerging approach that outsources cognitive tasks to many anonymous, untrained individuals from an online community. To our knowledge, this paper is the first to investigate crowdsourcing in the context of endoscopic video image annotation for computer-assisted MIS. In our study on publicly available in vivo data with manual reference annotations, anonymous non-experts achieved a median annotation error of 2 px (n = 10,000). By applying cluster analysis to multiple annotations per correspondence, this error can be reduced to about 1 px, which is comparable to that obtained by medical experts (n = 500). We conclude that crowdsourcing is a viable method for generating high-quality reference correspondences in endoscopic video images.
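
The abstract reports that aggregating multiple crowd annotations per correspondence via cluster analysis cuts the median error from roughly 2 px to about 1 px. The sketch below illustrates one plausible form of such aggregation: reject outlying clicks with a density-based clustering step, then average the largest cluster. The choice of DBSCAN, its parameters (eps, min_samples), and the helper names (consensus_point, annotation_error) are illustrative assumptions, not the authors' published implementation.

```python
import numpy as np
from sklearn.cluster import DBSCAN


def consensus_point(annotations, eps=3.0, min_samples=3):
    """Aggregate several crowd annotations (x, y) of one correspondence.

    Clusters the points and returns the centroid of the largest cluster,
    discarding DBSCAN outliers (label -1). The clustering method and its
    parameters are assumptions; the abstract only states that cluster
    analysis was applied to multiple annotations per correspondence.
    """
    pts = np.asarray(annotations, dtype=float)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    valid = labels >= 0
    if not valid.any():
        # No dense cluster found: fall back to plain averaging.
        return pts.mean(axis=0)
    largest = np.bincount(labels[valid]).argmax()
    return pts[labels == largest].mean(axis=0)


def annotation_error(estimate, reference):
    """Euclidean distance in pixels between an estimate and the reference point."""
    return float(np.linalg.norm(np.asarray(estimate, float) - np.asarray(reference, float)))


# Example: ten crowd clicks on the same correspondence, one of them a gross outlier.
crowd = [(101, 200), (102, 201), (100, 199), (101, 202), (103, 200),
         (102, 199), (101, 201), (100, 200), (150, 240), (102, 200)]
reference = (101, 200)
print(annotation_error(consensus_point(crowd), reference))  # close to 1 px
```

In this toy example the single outlier click is removed by the clustering step, so the consensus lands near the reference point, mirroring the error reduction the study describes.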