MAMIM: Multitask learning models for medical analysis

  • Radeva Ivanova, Petia (Principal Investigator)
  • Barajas Zamora, Joel (Scholar)
  • Hernandez Sabate, Aura (Scholar)
  • Aldavert Miró, David (Investigator)
  • Garcia Barnés, Jaume (Investigator)
  • Marti Godia, Enric (Investigator)
  • Toledo Morales, Ricardo Juan (Investigator)
  • Vilariño Freire, Fernando Luis (Investigator)

Project Details


(...) Multitask learning is a novel machine learning approach that learns each problem better by also exploiting the training signals of other, related problems. (...) This project addresses the development of multitask learning methods for a Computer Vision scenario: the classification of visual data in images. To demonstrate the potential and the generalization ability of the multitask paradigm, we will work in two different computer vision fields: object recognition and medical image analysis. Our main hypothesis in this project is that multitask learning will provide two complementary advantages for practical solutions to the studied problems: higher classification accuracy when classifiers are trained on several problems in parallel (parallel knowledge transfer), and lower training requirements when classifiers are trained sequentially (sequential knowledge transfer). In the first case we expect large accuracy gains; in the second, significant reductions in the number of training samples needed to reach a given performance level. The application part of the project splits into four subtasks:

  • an automatic system for facial detection and identification,
  • a system for automatic traffic sign recognition,
  • a computer-aided system for plaque characterization in ultrasound images, and
  • an intelligent system for automatic annotation of intestinal motility in wireless endoscopy.
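The parallel knowledge transfer idea above can be illustrated with a minimal sketch, not taken from the project itself: hard parameter sharing, where two related classification tasks are trained jointly through one shared representation while each keeps its own output head. All names, the toy data, and the network shape are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two related binary tasks whose label rules share a common direction,
# so training signals from one task are informative for the other.
w_true = rng.normal(size=5)
X = rng.normal(size=(200, 5))
y1 = (X @ w_true > 0).astype(float)                                # task 1
y2 = (X @ (w_true + 0.1 * rng.normal(size=5)) > 0).astype(float)   # related task 2

# Hard parameter sharing: one shared hidden layer, two task-specific heads.
W_shared = rng.normal(scale=0.1, size=(5, 8))
heads = [rng.normal(scale=0.1, size=8), rng.normal(scale=0.1, size=8)]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(300):
    H = np.tanh(X @ W_shared)                 # shared representation
    grad_shared = np.zeros_like(W_shared)
    for t, y in enumerate([y1, y2]):
        p = sigmoid(H @ heads[t])
        err = p - y                           # cross-entropy gradient per sample
        grad_head = H.T @ err / len(y)
        # Backprop through tanh: both tasks contribute to the shared weights.
        grad_H = np.outer(err, heads[t]) * (1 - H**2)
        grad_shared += X.T @ grad_H / len(y)
        heads[t] -= lr * grad_head
    W_shared -= lr * grad_shared

H = np.tanh(X @ W_shared)
acc1 = ((sigmoid(H @ heads[0]) > 0.5) == y1).mean()
acc2 = ((sigmoid(H @ heads[1]) > 0.5) == y2).mean()
print(f"task 1 accuracy: {acc1:.2f}, task 2 accuracy: {acc2:.2f}")
```

Because the shared layer receives gradients from both tasks on every step, each task effectively regularizes the other's representation, which is the accuracy benefit the hypothesis above refers to.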
Effective start/end date: 1/10/06 to 30/09/09

Collaborative partners

  • Computer Vision Center (CVC)

