

[R] paper and a PyTorch implementation of “What is wrong with scene text recognition model comparisons? dataset and model analysis”


Paper: https://arxiv.org/pdf/1904.01906.pdf

PyTorch code: https://github.com/clovaai/deep-text-recognition-benchmark

Abstract:

Many new proposals for scene text recognition (STR) models have been introduced in recent years. While each claims to have pushed the boundary of the technology, a holistic and fair comparison has been largely missing in the field due to inconsistent choices of training and evaluation datasets. This paper addresses this difficulty with three major contributions. First, we examine the inconsistencies among training and evaluation datasets and the performance gaps that result from them. Second, we introduce a unified four-stage STR framework that most existing STR models fit into. This framework allows for the extensive evaluation of previously proposed STR modules and the discovery of previously unexplored module combinations. Third, we analyze the module-wise contributions to performance in terms of accuracy, speed, and memory demand under one consistent set of training and evaluation datasets. These analyses clear away the obstacles that have hindered comparisons and clarify the performance gains of existing modules. Our code will be publicly available.
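The unified four-stage framework described in the abstract (transformation → feature extraction → sequence modeling → prediction) can be pictured as a composition of swappable modules. The sketch below is illustrative only: the class and argument names are assumptions for exposition and do not mirror the repository's actual code.

```python
# Minimal sketch of the four-stage STR framework from the abstract:
# Trans. -> Feat. -> Seq. -> Pred.
# Names are illustrative, not the actual classes in
# clovaai/deep-text-recognition-benchmark.
import torch.nn as nn

class FourStageSTR(nn.Module):
    def __init__(self, transform, feature_extractor, sequence_model, predictor):
        super().__init__()
        self.transform = transform                  # e.g. TPS spatial transformer, or identity
        self.feature_extractor = feature_extractor  # e.g. VGG / RCNN / ResNet backbone
        self.sequence_model = sequence_model        # e.g. BiLSTM, or identity
        self.predictor = predictor                  # e.g. CTC or attention decoder head

    def forward(self, image):
        x = self.transform(image)         # Trans.: normalize the input image geometry
        x = self.feature_extractor(x)     # Feat.: map the image to a visual feature sequence
        x = self.sequence_model(x)        # Seq.: add contextual information along the sequence
        return self.predictor(x)          # Pred.: predict the character sequence
```

Swapping one stage at a time (e.g. CTC vs. attention prediction, or VGG vs. ResNet features) while holding the training and evaluation data fixed is how the paper isolates each module's contribution to accuracy, speed, and memory.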

https://i.redd.it/h04nixqyays21.jpg

# We named the paper this way to call strong attention to the inconsistent training and evaluation settings in the scene text recognition field.

submitted by /u/ku21fan