Logo Detection with No Priors

Diego A. Velazquez*, Josep M. Gonfaus, Pau Rodriguez, F. Xavier Roca, Seiichi Ozawa, Jordi Gonzalez

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)


In recent years, leading object-detection methods such as R-CNN have implemented this task as a combination of region-proposal generation and supervised classification of the proposed bounding boxes. Although this pipeline has achieved state-of-the-art results on multiple datasets, it has inherent limitations that make object detection a complex and computationally inefficient task. Instead of following this standard strategy, in this paper we enhance the Detection Transformer (DETR), which tackles object detection as a set-prediction problem in a fully differentiable end-to-end pipeline without requiring priors. In particular, we incorporate Feature Pyramids (FP) into the DETR architecture and demonstrate the effectiveness of the resulting DETR-FP approach at improving logo detection results, thanks to the improved detection of small logos. Thus, without requiring any domain-specific prior to be fed to the model, DETR-FP obtains competitive results on the OpenLogo and MS-COCO datasets, offering a relative improvement of up to 30% compared to a Faster R-CNN baseline that strongly depends on hand-designed priors.
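The set-prediction formulation the abstract refers to can be illustrated with a small sketch. This is not the paper's code: DETR matches its fixed set of predicted boxes one-to-one against the ground-truth boxes by minimizing a pairwise cost with the Hungarian algorithm (the real cost also combines class probabilities and generalized IoU). The brute-force version below, using a hypothetical L1 cost between box centers, shows the same bipartite matching idea for tiny inputs:

```python
from itertools import permutations

def l1(a, b):
    # L1 distance between two box centers (illustrative cost only)
    return sum(abs(x - y) for x, y in zip(a, b))

def set_match(preds, gts):
    """Assign each ground-truth box to a distinct prediction,
    minimizing the total pairwise cost (brute force over permutations;
    DETR uses the Hungarian algorithm for the same matching)."""
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(len(preds)), len(gts)):
        cost = sum(l1(preds[p], gts[g]) for g, p in enumerate(perm))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    # Return (pred_index, gt_index) pairs; unmatched predictions
    # correspond to DETR's "no object" class.
    return sorted((p, g) for g, p in enumerate(best_perm))

preds = [(0.9, 0.9), (0.1, 0.1), (0.5, 0.5)]  # predicted box centers
gts   = [(0.0, 0.0), (1.0, 1.0)]              # ground-truth centers
print(set_match(preds, gts))  # -> [(0, 1), (1, 0)]
```

Because the matching, the transformer, and the classification head are all differentiable (the matching itself is treated as fixed during backpropagation), the whole pipeline trains end-to-end without anchor boxes or non-maximum suppression priors.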

Original language: English
Article number: 9502074
Pages (from-to): 106998-107011
Number of pages: 14
Journal: IEEE Access
Publication status: Published - 2021


  • attention
  • deep learning
  • logo detection
  • object detection
  • transformers


