Scene text detection and recognition are challenging due to the following issues: scattered and sparse text, blur, uneven illumination, partial occlusion, multiple orientations, and multiple languages.

Scene text detection:

The detection methods can be grouped into proposal-based methods and part-based methods.

Paper list (in chronological order):

  1. Detecting Text in Natural Scenes with Stroke Width Transform, CVPR 2010: assume consistent stroke width within each character

  2. Detecting Texts of Arbitrary Orientations in Natural Images, CVPR 2012: design rotation-invariant features

  3. Deep Features for Text Spotting, ECCV 2014: add three branches for prediction

  4. Robust scene text detection with convolution neural network induced mser trees, ECCV 2014

  5. Real-time Lexicon-free Scene Text Localization and Recognition, T-PAMI 2016

  6. Reading Text in the Wild with Convolutional Neural Networks, IJCV 2016

  7. Synthetic Data for Text Localisation in Natural Images, CVPR 2016: directly predict the bounding boxes, generate synthetic dataset

  8. Multi-oriented text detection with fully convolutional networks, CVPR 2016

  9. Detecting Text in Natural Image with Connectionist Text Proposal Network, ECCV 2016: look for text lines and fine vertical text pieces; sliding windows are fed to a Bi-LSTM.

  10. SSD: single shot multibox detector, ECCV 2016

  11. Reading Scene Text in Deep Convolutional Sequences, AAAI 2016

  12. Scene text detection via holistic, multi-channel prediction, arxiv 2016: holistic and pixel-wise predictions on text region map, character map, and linking orientation map

  13. Deep Direct Regression for Multi-Oriented Scene Text Detection, ICCV 2017

  14. WordSup: Exploiting Word Annotations for Character based Text Detection, ICCV 2017: a weakly supervised framework that can utilize word annotations for character detector training

  15. TextBoxes: A Fast Text Detector with a Single Deep Neural Network, AAAI 2017

  16. Detecting Oriented Text in Natural Images by Linking Segments, CVPR 2017: detect text with segments and links

  17. EAST: An Efficient and Accurate Scene Text Detector, CVPR 2017: a DenseBox-style network that directly regresses quadrangles without proposals

  18. TextBoxes++: A Single-Shot Oriented Scene Text Detector, TIP 2018: extension of TextBoxes

  19. Rotation-sensitive Regression for Oriented Scene Text Detection, CVPR 2018: rotation-sensitive feature maps for regression and rotation-invariant features for classification

  20. Multi-Oriented Scene Text Detection via Corner Localization and Region Segmentation, CVPR 2018: combine corner localization and region segmentation

  21. PixelLink: Detecting Scene Text via Instance Segmentation, AAAI 2018: fit a rectangle enclosing the instance segmentation mask, which is obtained from text/non-text prediction and link prediction.

  22. Arbitrary-Oriented Scene Text Detection via Rotation Proposals, TMM 2018: generate rotated proposals

  23. TextSnake: A Flexible Representation for Detecting Text of Arbitrary Shapes, ECCV 2018: infer the center line area (TCL) and associated circle radius/rotation

Scene text recognition:

The recognition methods can be grouped into character-level, word-level, and sequence-level.

Paper list (in chronological order):

  1. End-to-End Scene Text Recognition, ICCV 2011: detection using Random Ferns and recognition via Pictorial Structure with a Lexicon

  2. Top-down and bottom-up cues for scene text recognition, CVPR 2012: construct a CRF model to impose both bottom-up (i.e. character detections) and top-down (i.e. language statistics) cues

  3. Scene text recognition using part-based tree-structured character detection, CVPR 2013: build a CRF model to incorporate the detection scores, spatial constraints and linguistic knowledge into one framework

  4. PhotoOCR: Reading text in uncontrolled conditions, ICCV 2013: automatically generate training data and perform OCR on web images

  5. Label embedding: A frugal baseline for text recognition, IJCV 2015: learn a common space for image and word

  6. Reading Text in the Wild with Convolutional Neural Networks, IJCV 2016

  7. Robust Scene Text Recognition with Automatic Rectification, CVPR 2016

  8. Recursive Recurrent Nets with Attention Modeling for OCR in the Wild, CVPR 2016: character-level language model embodied in a recurrent neural network

  9. An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition, T-PAMI 2017

  10. Focusing Attention: Towards Accurate Text Recognition in Natural Images, ICCV 2017: Focusing Network to handle the attention drift

  11. Visual attention models for scene text recognition, 2017 arxiv

  12. AON: Towards Arbitrarily-Oriented Text Recognition, CVPR 2018

  13. (recommended by Guo) An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition, T-PAMI 2017

End-to-end

Integrate scene text detection and recognition in an end-to-end system.

Paper list (in chronological order):

  1. A method for text localization and recognition in real-world images, ACCV 2010

  2. Real-Time Scene Text Localization and Recognition, CVPR 2012

  3. Towards End-to-end Text Spotting with Convolutional Recurrent Neural Networks, ICCV 2017: designed for horizontal scene text

  4. Deep TextSpotter: An End-To-End Trainable Scene Text Localization and Recognition Framework, ICCV 2017: detect and recognize horizontal and multi-oriented scene text

  5. FOTS: Fast Oriented Text Spotting with a Unified Network, CVPR 2018: using EAST as text detector and CRNN as text recognizer

Datasets

Surveys

Special Sessions

  1. Use Spatial Transformer Network (STN) [1] [2] [3] [4]

  2. Use Deformable Convolution Network (DCN) [1]

This problem is well discussed in https://arxiv.org/pdf/1506.01497.pdf. Different schemes for addressing multiple scales and sizes: (a) multi-scale input images (b) multi-scale feature maps (c) multi-scale anchor boxes on one feature map.

  1. The first way is based on image/feature pyramids, e.g., in DPM and CNN-based methods. The images are resized at multiple scales, and feature maps (HOG or deep convolutional features) are computed for each scale. This way is often useful but is time-consuming.

  2. The second way is to use sliding windows of multiple scales (and/or aspect ratios) of the feature maps. For example, in DPM, models of different aspect ratios are trained separately using different filter sizes. If this way is used to address multiple scales, it can be thought of as a “pyramid of filters”. The second way is usually adopted jointly with the first way.

  3. As a comparison, our anchor-based method is built on a pyramid of anchors, which is more cost-efficient. Our method classifies and regresses bounding boxes with reference to anchor boxes of multiple scales and aspect ratios. It only relies on images and feature maps of a single scale, and uses filters (sliding windows on the feature map) of a single size. We show by experiments the effects of this scheme for addressing multiple scales and sizes. Because of this multi-scale design based on anchors, we can simply use the convolutional features computed on a single-scale image, as is also done by the Fast R-CNN detector. The design of multi-scale anchors is a key component for sharing features without extra cost for addressing scales.

  4. use different dilation rates to vary receptive fields

  5. use feature pyramid [1]
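The "pyramid of anchors" in item 3 can be sketched as follows; a minimal illustration (the base size, scales, and aspect ratios here are assumptions, not the exact Faster R-CNN settings):

```python
def generate_anchors(base_size, scales, aspect_ratios):
    """Generate (x1, y1, x2, y2) anchors centered at the origin,
    one per (scale, aspect_ratio) pair, in the spirit of the RPN design."""
    anchors = []
    for s in scales:
        area = (base_size * s) ** 2          # target area at this scale
        for ar in aspect_ratios:
            w = (area / ar) ** 0.5           # width so that w * h == area
            h = w * ar                       # height/width ratio == ar
            anchors.append((-w / 2, -h / 2, w / 2, h / 2))
    return anchors

# 3 scales x 3 aspect ratios = 9 anchors per feature-map cell
anchors = generate_anchors(16, scales=[8, 16, 32], aspect_ratios=[0.5, 1.0, 2.0])
print(len(anchors))  # 9
```

Each feature-map cell then predicts one classification score and one box regression per anchor, so a single-scale feature map covers multiple object scales and shapes.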

Reference

[1] Lin, Tsung-Yi, et al. “Feature pyramid networks for object detection.” CVPR, 2017.

two-stage: use region proposal network (RPN) to generate proposals

  1. faster-RCNN

one-stage: remove the RPN and treat anchors as fixed proposals with predefined scales/aspect ratios.

  1. YOLO v1 v2 v3
  2. SSD

Corner Points: remove anchors and directly predict corner points

  1. CornerNet

No anchor: effectively use each cell as an anchor

  1. RPDet: use object centers as positive cells; paired with deformable CNN

  2. FoveaBox: use the cells in fovea area (object bounding box) as positive cells

  3. Guided Anchoring: use deformable CNN to obtain adapted feature map

Fast RCNN

$L(p, u, t^u, v) = L_{cls}(p, u) + \lambda [u \ge 1] L_{loc}(t^u, v)$

where $p$ is the $(K+1)$-dim class probability vector with 0 being the background class, $u$ is the ground-truth class, $v$ is the ground-truth regression tuple, and $t^u$ is the predicted regression tuple for class $u$. $L_{cls}$ is a multi-class softmax loss and $L_{loc}$ is a smooth L1 loss.
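A minimal numeric sketch of this multi-task loss (illustrative, with $\lambda = 1$; the indicator $[u \ge 1]$ turns the regression term off for the background class):

```python
import math

def smooth_l1(x):
    """Smooth L1 on a single residual, as used for L_loc."""
    return 0.5 * x * x if abs(x) < 1.0 else abs(x) - 0.5

def fast_rcnn_loss(p, u, t_u, v, lam=1.0):
    """L = L_cls(p, u) + lam * [u >= 1] * L_loc(t_u, v).
    p: (K+1)-dim class probabilities (index 0 = background),
    u: ground-truth class, t_u: predicted regression tuple for class u,
    v: ground-truth regression tuple."""
    l_cls = -math.log(p[u])  # multi-class log loss on the true class
    l_loc = sum(smooth_l1(a - b) for a, b in zip(t_u, v))
    return l_cls + (lam * l_loc if u >= 1 else 0.0)

loss = fast_rcnn_loss([0.1, 0.7, 0.2], u=1,
                      t_u=(0.1, 0.0, 0.2, 0.0), v=(0.0, 0.0, 0.0, 0.0))
```

For a background ROI ($u = 0$), only the classification term contributes, matching the indicator in the formula above.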

Faster RCNN

$L(\{p_i\}, \{t_i\}) = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^*) + \lambda \frac{1}{N_{reg}} \sum_i p_i^* L_{reg}(t_i, t_i^*)$

where $L_{cls}$ is a two-class (i.e., object or not object) (resp., multi-class) softmax loss for the RPN (resp., the detection head) and $L_{reg}$ is a smooth L1 loss. So the loss of Faster RCNN is basically the same as Fast RCNN.

Fast and Faster RCNN generate proposals, so they have the pos/neg labels for anchor boxes. However, the following SSD and YOLO do not generate proposals, so they need to match anchor boxes with ground-truth boxes directly.

SSD

Let $x_{ij}^p$ be a binary indicator for matching the $i$-th default box to the $j$-th ground-truth box of category $p$. Multiple default boxes can be matched to the same ground-truth box.

$L(x, c, l, g) = \frac{1}{N} \left( L_{conf}(x, c) + \alpha L_{loc}(x, l, g) \right)$

where $N$ is the number of matched default boxes, $L_{conf}$ is a $(K+1)$-class softmax loss, and $L_{loc}$ is a smooth L1 loss.
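The matching step can be sketched as follows; a simplified version (SSD additionally forces the best default box for each ground truth to be matched, which is omitted here):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def match_default_boxes(defaults, gts, threshold=0.5):
    """Return {default_index: gt_index}; several default boxes
    may be matched to the same ground-truth box."""
    matches = {}
    for i, d in enumerate(defaults):
        best_j = max(range(len(gts)), key=lambda j: iou(d, gts[j]))
        if iou(d, gts[best_j]) >= threshold:
            matches[i] = best_j
    return matches

defaults = [(0, 0, 10, 10), (5, 5, 15, 15), (20, 20, 30, 30)]
gts = [(0, 0, 10, 10)]
print(match_default_boxes(defaults, gts))  # {0: 0}
```

Matched default boxes get positive labels and regression targets; the rest are negatives (SSD then applies hard negative mining, not shown here).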

YOLO

Note that for the noobj anchor boxes, only the confidence loss term is involved.
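A sketch of the confidence term per anchor box (following the YOLO v1 formulation, where $\lambda_{noobj}=0.5$ down-weights the many negative boxes):

```python
def yolo_confidence_loss(pred_conf, has_obj, iou_with_gt, lambda_noobj=0.5):
    """Confidence term of the YOLO v1 loss for one anchor box / cell.
    Object boxes: (C_hat - IoU)^2 pulls confidence toward the box IoU.
    No-object boxes contribute only the down-weighted lambda_noobj * C_hat^2."""
    if has_obj:
        return (pred_conf - iou_with_gt) ** 2
    return lambda_noobj * pred_conf ** 2
```

For object boxes, the coordinate and class terms are added on top of this; for noobj boxes, this single term is the whole contribution, as noted above.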

[1] MLSD

Reference

[1] Gu, Geonmo, et al. “Towards light-weight and real-time line segment detection.” Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 1. 2022.

  1. row-wise and column-wise LSTM on feature map: [1]
  2. graph LSTM on superpixels: [2]
  3. 3D graph: [3]
  4. DAG on feature map: [4]

Reference

  1. Li, Zhen, et al. “Lstm-cf: Unifying context modeling and fusion with lstms for rgb-d scene labeling.” ECCV, 2016.
  2. Liang, Xiaodan, et al. “Semantic object parsing with graph lstm.” ECCV, 2016.
  3. Qi, Xiaojuan, et al. “3d graph neural networks for rgbd semantic segmentation.” ICCV, 2017.
  4. Ding, Henghui, et al. “Boundary-aware feature propagation for scene segmentation.” ICCV, 2019.

layer area

From layer $i$ to layer $i+1$, assume the parameters of layer $i$ are $s_i$ (stride), $p_i$ (padding), $k_i$ (kernel filter size), and the width or height of layer $i$ is $r_i$. Then, by standard convolution arithmetic,

$r_{i+1} = \frac{r_i + 2p_i - k_i}{s_i} + 1$

In the reverse process, $r_i = s_i r_{i+1}-s_i-2p_i+k_i$, or $r_i = s_i r_{i+1}-s_i+k_i$ if the padding area is counted in.
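The forward and reverse size formulas can be checked numerically; a small sketch (the 3x3/stride-2/padding-1 layer is just an example):

```python
def out_size(r, k, s, p):
    """Forward size: r_{i+1} = floor((r + 2p - k) / s) + 1."""
    return (r + 2 * p - k) // s + 1

def in_size(r_next, k, s, p):
    """Reverse size: r_i = s * r_{i+1} - s - 2p + k (the largest input
    that maps exactly, since the forward floor discards a remainder)."""
    return s * r_next - s - 2 * p + k

# e.g. a 3x3 conv with stride 2 and padding 1 on a 224-wide input
print(out_size(224, k=3, s=2, p=1))  # 112
```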

coordinate map

Now consider mapping the point $x_i$ on the ROI to the point $x_{i+1}$ on the feature map, which can be transformed into the layer area problem above. In particular, the receptive field formed by the upper-left corner and $x_i$ on the ROI can be mapped to the region formed by the upper-left corner and $x_{i+1}$ on the feature map. Based on a formula similar to the layer area problem above (note the only difference is that we only include the left and top padding, and subtract the radius of the kernel filter $(k_i-1)/2$),

$x_i = s_i (x_{i+1} - 1) + \frac{k_i-1}{2} - p_i + 1$

The above coordinate system starts from 1. When the coordinate system starts from 0, it becomes

$x_i + 1 = s_i x_{i+1} + \frac{k_i-1}{2} - p_i + 1$

which can be simplified as

$x_i = s_i x_{i+1} + \frac{k_i-1}{2} - p_i$

When $p_i=floor(k_i/2)$, $x_i=s_i x_{i+1}$ approximately, which is the simplest case.

By applying $x_i=s_i x_{i+1}+(\frac{k_i-1}{2}-p_i)$ recursively, we can achieve a general solution

$x_1 = \alpha_L x_L + \beta_L$

in which $\alpha_L = \prod_{l=1}^{L-1} s_l$ and $\beta_L=\sum_{l=1}^{L-1} \left(\prod_{n=1}^{l-1} s_n\right)\left(\frac{k_l-1}{2}-p_l\right)$
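The recursion and its closed form can be checked against each other; a small sketch with 0-based coordinates (the strides, kernels, and paddings are illustrative):

```python
def closed_form(layers):
    """Return (alpha_L, beta_L) with alpha_L = prod of strides and
    beta_L = sum over l of (prod_{n<l} s_n) * ((k_l - 1)/2 - p_l).
    layers = [(s_1, k_1, p_1), ..., (s_{L-1}, k_{L-1}, p_{L-1})]."""
    alpha, beta, prefix = 1.0, 0.0, 1.0
    for s, k, p in layers:
        beta += prefix * ((k - 1) / 2 - p)
        prefix *= s
        alpha = prefix
    return alpha, beta

def map_to_image(x_L, layers):
    """Apply x_i = s_i * x_{i+1} + (k_i - 1)/2 - p_i from layer L-1 down to 1."""
    x = x_L
    for s, k, p in reversed(layers):
        x = s * x + (k - 1) / 2 - p
    return x

layers = [(2, 3, 1), (2, 3, 1), (2, 3, 0)]  # three conv layers
alpha, beta = closed_form(layers)
print(map_to_image(5, layers), alpha * 5 + beta)  # both 44.0
```

Mapping the two corner points of an anchor box through this function gives the ROI corners discussed below.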

anchor box to ROI

Given two corner points of an anchor box on the feature map, we can find their corresponding points on the original image, which determine the ROI.

Surveys:

  1. Deep learning for fine-grained image analysis: A survey [1]: include fine-grained classification, fine-grained retrieval, and fine-grained generation

  2. A survey on deep learning-based fine-grained object classification and semantic segmentation [2]

[1] Wei, Xiu-Shen, Jianxin Wu, and Quan Cui. “Deep learning for fine-grained image analysis: A survey.” arXiv preprint arXiv:1907.03069 (2019).

[2] Zhao, Bo, et al. “A survey on deep learning-based fine-grained object classification and semantic segmentation.” International Journal of Automation and Computing 14.2 (2017): 119-135.

Datasets:

  1. clothing dataset
  2. car dataset
  3. CUB, Birdsnap
  4. scene dataset
  5. dog dataset
  6. flower dataset
  7. aircraft dataset
  8. Food-101 dataset

  • Feature generation for novel categories: [1] [3] [4]

  • Model ensembling: [2]

Reference

[1] Zhang, Weilin, and Yu-Xiong Wang. “Hallucination Improves Few-Shot Object Detection.” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.

[2] Zhang, Weilin, Yu-Xiong Wang, and David A. Forsyth. “Cooperating RPN’s Improve Few-Shot Object Detection.” arXiv preprint arXiv:2011.10142 (2020).

[3] Xu, Honghui, et al. “Few-Shot Object Detection via Sample Processing.” IEEE Access 9 (2021): 29207-29221.

[4] Wu, Aming, et al. “Universal-prototype enhancing for few-shot object detection.” Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.

One-shot/few-shot learning

The first one-shot learning paper dates back to 2006, but the topic has become more popular recently.

Concepts

training/validation/test categories: Training categories and test categories have no overlap

support(sample)/query(batch) set: In the testing stage, for each test category, we preserve some instances to form the support set and sample from the remaining instances to form the query set

C-way K-shot: The test set has C categories. For each test category, we preserve K instances as the support set

episode: Episode-based strategy used in the training stage to match the inference in the testing stage. First sample some categories and then sample the support/query set for each category
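The episode-based sampling above can be sketched as follows (the data layout is an assumption; any {category: instances} mapping works):

```python
import random

def sample_episode(data, c_way, k_shot, q_queries):
    """Sample one C-way K-shot episode from {category: [instances]}:
    pick C categories, then split K support and Q query instances per category."""
    cats = random.sample(sorted(data), c_way)
    support, query = {}, {}
    for c in cats:
        inst = random.sample(data[c], k_shot + q_queries)
        support[c], query[c] = inst[:k_shot], inst[k_shot:]
    return support, query

data = {c: list(range(20)) for c in "abcdefgh"}  # toy dataset
support, query = sample_episode(data, c_way=5, k_shot=1, q_queries=5)
```

Training on episodes shaped exactly like the test-time C-way K-shot task is what aligns the two stages.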

Methods

  • Metric based:

    • Siamese network: the earliest and simplest metric-learning-based few-shot method, formulated as a standard verification problem.

    • Matching network: map a support set to a classification function p(y|x,S) (KNN or LSTM). For the LSTM version, there is another similar work using memory module.

    • Relation network: calculate the relation score for 1-shot, calculate the average of relation scores for k-shot

    • Prototypical network: compare with the prototype representations of each class. Each class can have more than one prototype representation. There are some other prototype-based methods [1] [2].

  • Optimization (gradient) based:

  • Model based:

    • [learnet] [2] [3] [4] [5]: predict the parameters of classifiers for novel categories.

    • [1]: predict the parameters of CNN feature extractor by virtue of memory module.

  • Generation based: generate more features for novel categories [1], [2]

  • Pretrain and fine-tune: use the whole meta-training set to learn the feature extractor [1] [2]; pretrain + MatchingNet [3]
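The prototypical-network idea from the metric-based list above can be sketched as follows (the 2-D embeddings stand in for the output of a learned feature extractor):

```python
def prototype(embeddings):
    """Class prototype = mean of that class's support embeddings."""
    dim = len(embeddings[0])
    return [sum(e[d] for e in embeddings) / len(embeddings) for d in range(dim)]

def classify(query, protos):
    """Assign the query embedding to the nearest prototype (squared Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(protos, key=lambda c: dist2(query, protos[c]))

# toy 2-way 2-shot support set of embedded instances
support = {"cat": [(0.0, 0.1), (0.2, -0.1)], "dog": [(1.0, 1.0), (0.8, 1.2)]}
protos = {c: prototype(e) for c, e in support.items()}
print(classify((0.9, 1.1), protos))  # dog
```

With K > 1 shots the prototype averages over the shots, which is exactly the "compare with the prototype representations of each class" step.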

Survey

  1. Generalizing from a Few Examples: A Survey on Few-Shot Learning

  2. Learning from Few Samples: A Survey

Datasets

  1. Meta-Dataset