Vector Quantization

Posted on 2022-06-16 | In paper note

Vector Quantization: VQ-VAE [1], VQ-VAE-2 [2], VQGAN [6].

Residual Quantization: RQVAE [3]

Accelerated autoregressive generation: [4] [5]

Hierarchical residual quantization: VAR [7]
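
All of these methods share the same core quantization step: a nearest-neighbor lookup into a learned codebook, trained with a straight-through gradient plus codebook/commitment losses. A minimal PyTorch sketch (the 0.25 commitment weight follows [1]; variable names are mine, not from the papers):

```python
import torch
import torch.nn.functional as F

def vector_quantize(z_e, codebook):
    """Quantize encoder outputs z_e (N, D) against a codebook (K, D),
    as in VQ-VAE [1]; returns quantized vectors, code indices, and the loss."""
    dists = torch.cdist(z_e, codebook)             # (N, K) pairwise distances
    indices = dists.argmin(dim=1)                  # nearest code per vector
    z_q = codebook[indices]                        # (N, D) quantized vectors

    codebook_loss = F.mse_loss(z_q, z_e.detach())  # move codes toward encoder outputs
    commit_loss = F.mse_loss(z_e, z_q.detach())    # keep encoder close to chosen codes

    z_q = z_e + (z_q - z_e).detach()               # straight-through gradient copy
    return z_q, indices, codebook_loss + 0.25 * commit_loss
```

Residual quantization [3] applies this same lookup repeatedly to the remaining residual $z_e - z_q$.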

References

[1] Oord, Aaron van den, Oriol Vinyals, and Koray Kavukcuoglu. “Neural discrete representation learning.” arXiv preprint arXiv:1711.00937 (2017).

[2] Razavi, Ali, Aaron van den Oord, and Oriol Vinyals. “Generating diverse high-fidelity images with vq-vae-2.” Advances in neural information processing systems. 2019.

[3] Lee, Doyup, et al. “Autoregressive Image Generation using Residual Quantization.” arXiv preprint arXiv:2203.01941 (2022).

[4] Bond-Taylor, Sam, et al. “Unleashing Transformers: Parallel Token Prediction with Discrete Absorbing Diffusion for Fast High-Resolution Image Generation from Vector-Quantized Codes.” arXiv preprint arXiv:2111.12701 (2021).

[5] Chang, Huiwen, et al. “MaskGIT: Masked Generative Image Transformer.” arXiv preprint arXiv:2202.04200 (2022).

[6] Esser, Patrick, Robin Rombach, and Björn Ommer. “Taming Transformers for High-Resolution Image Synthesis.” CVPR, 2021.

[7] Tian, Keyu, et al. “Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction.” arXiv preprint arXiv:2404.02905 (2024).

Unsupervised Attribute Learning

Posted on 2022-06-16 | In paper note
  1. learn an attribute vector from the relations and differences between categories (each dimension is uninterpretable): [1] (Laplacian matrix), [2] (triplet loss; a minimal sketch follows this list)

  2. exploit local information and encode it into an attribute vector (each dimension is interpretable): [3] (discriminative clusters, doublets), [4] (joint attribute and feature learning)

  3. learn attention map for each latent attribute [5]
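
For the triplet-loss route in [2], the core objective is a standard margin-based triplet loss on latent attribute vectors; a minimal PyTorch sketch (the margin and tensor shapes are illustrative, not values from the paper):

```python
import torch
import torch.nn.functional as F

def attribute_triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull attribute vectors of the same category together and push
    different categories apart by at least the margin."""
    d_pos = F.pairwise_distance(anchor, positive)   # same-category distance
    d_neg = F.pairwise_distance(anchor, negative)   # cross-category distance
    return F.relu(d_pos - d_neg + margin).mean()

a, p, n = torch.randn(3, 16, 64)   # a batch of 16 illustrative 64-d attribute vectors
print(attribute_triplet_loss(a, p, n))
```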

Reference

  1. Yu, Felix X., et al. “Designing category-level attributes for discriminative visual recognition.” CVPR, 2013.

  2. Li, Yan, et al. “Discriminative learning of latent features for zero-shot recognition.” CVPR, 2018.

  3. Singh, Saurabh, Abhinav Gupta, and Alexei A. Efros. “Unsupervised discovery of mid-level discriminative patches.” ECCV, 2012.

  4. Huang, Chen, Chen Change Loy, and Xiaoou Tang. “Unsupervised learning of discriminative attributes and visual representations.” CVPR, 2016.

  5. Yang, Wenjie, et al. “Towards rich feature discovery with class activation maps augmentation for person re-identification.” CVPR, 2019.

Training Categories and Test Categories

Posted on 2022-06-16 | In paper note

Let us use $S$ to denote the set of training categories and $T$ to denote the set of test categories.

  • $S=T$: the most common case
  • $S\cap T=\emptyset$: zero-shot learning
  • $S\subset T$: generalized zero-shot learning
  • $S\supset T$: pretrained model
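
The four cases map directly onto set relations, so they can be told apart mechanically; a small illustrative sketch (the function name and categories are made up):

```python
def category_setting(S, T):
    """Classify the relation between training categories S and test categories T."""
    S, T = set(S), set(T)
    if S == T:
        return "standard supervised learning"
    if not S & T:
        return "zero-shot learning"
    if S < T:
        return "generalized zero-shot learning"
    if S > T:
        return "pretrained model"
    return "partial overlap: none of the four cases"

print(category_setting({"cat", "dog"}, {"cat", "dog", "bird"}))  # generalized zero-shot learning
```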

Synthetic Text Images

Posted on 2022-06-16 | In paper note
  1. Blend text and background images (a minimal rendering sketch follows the list).

    • text image (font, color, border, blending): [1]
    • scene-text image (font, color, border, blending, geometry): [2] [3]
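
A minimal sketch of the blending step with Pillow, assuming a background image and a TrueType font exist at the given (placeholder) paths; real pipelines like [1] [2] also randomize font, color, border, and geometry:

```python
from PIL import Image, ImageDraw, ImageFont

# placeholder paths: any background image and TrueType font on disk
bg = Image.open("background.jpg").convert("RGBA")
font = ImageFont.truetype("DejaVuSans.ttf", size=48)

# render text on a transparent layer, then alpha-blend it onto the background
layer = Image.new("RGBA", bg.size, (0, 0, 0, 0))
draw = ImageDraw.Draw(layer)
draw.text((50, 50), "SAMPLE", font=font, fill=(255, 255, 255, 220))

Image.alpha_composite(bg, layer).convert("RGB").save("synthetic_text.jpg")
```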

Reference:

  1. Jaderberg, Max, et al. “Synthetic data and artificial neural networks for natural scene text recognition.” arXiv preprint arXiv:1406.2227 (2014).

  2. Gupta, Ankush, Andrea Vedaldi, and Andrew Zisserman. “Synthetic data for text localisation in natural images.” CVPR, 2016.

  3. Zhan, Fangneng, Shijian Lu, and Chuhui Xue. “Verisimilar image synthesis for accurate detection and recognition of texts in scenes.” ECCV, 2018.

Subjective Annotation

Posted on 2022-06-16 | In paper note

As mentioned in [1], one major concern with subjective annotation is that the annotations provided by different workers for each image may not be reliable, which calls for consistency analysis of the annotations. We use Spearman’s rank correlation ρ between pairs of workers to measure consistency, and estimate p-values to evaluate the statistical significance of the correlation relative to a null hypothesis of uncorrelated responses. We use the Benjamini-Hochberg procedure to control the false discovery rate (FDR) for multiple comparisons [2]. At an FDR level of 0.05, 98.45% of batches show significant agreement among raters. Further consistency analysis of the dataset can be found in the supplementary material of [1].
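
A sketch of this consistency check with SciPy and statsmodels, using toy stand-in data (real usage would substitute the actual per-batch worker ratings):

```python
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

# toy data: 100 batches, two workers scoring the same 10 images per batch
rng = np.random.default_rng(0)
batches = []
for _ in range(100):
    quality = rng.normal(size=10)                  # latent image quality
    batches.append((quality + rng.normal(scale=0.5, size=10),
                    quality + rng.normal(scale=0.5, size=10)))

# one Spearman rho / p-value per batch, between the two workers' ratings
pvals = [spearmanr(a, b).pvalue for a, b in batches]

# Benjamini-Hochberg control of the false discovery rate at 0.05 [2]
reject, _, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{reject.mean():.2%} of batches show significant agreement")
```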

[1] Kong, Shu, et al. “Photo aesthetics ranking network with attributes and content adaptation.” European Conference on Computer Vision. Springer, Cham, 2016.

[2] Benjamini, Yoav, and Daniel Yekutieli. “The control of the false discovery rate in multiple testing under dependency.” Annals of statistics (2001): 1165-1188.

Soft Loss

Posted on 2022-06-16 | In paper note

Given the predicted class probabilities $p_i$ (softmax outputs) and soft ground-truth probabilities or free-form weights $w_i$:

  1. weighted softmax loss: $-\sum_{i} w_i \log p_i$

  2. EMD softmax loss: $-\sum_{i} w_i p_i$

  3. softmax loss after label flip layer: $-\log{\sum_{i} w_i p_i}$

  4. knowledge distillation: $\sum_{i} w_i \log \frac{w_i}{p_i}$ (KL divergence from the soft target $w$ to the prediction $p$)
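
A NumPy sketch of the four losses, assuming $p$ and $w$ are normalized probability vectors; the names mirror the numbering above:

```python
import numpy as np

def soft_losses(p, w, eps=1e-12):
    """Soft losses for predicted distribution p and soft target w."""
    weighted_ce = -np.sum(w * np.log(p + eps))                # 1. weighted softmax loss
    emd_like    = -np.sum(w * p)                              # 2. EMD softmax loss
    label_flip  = -np.log(np.sum(w * p) + eps)                # 3. loss after label-flip layer
    distill     =  np.sum(w * np.log((w + eps) / (p + eps)))  # 4. KL(w || p)
    return weighted_ce, emd_like, label_flip, distill

p = np.array([0.7, 0.2, 0.1])   # prediction
w = np.array([0.6, 0.3, 0.1])   # soft target
print(soft_losses(p, w))
```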

Shadow-related Application

Posted on 2022-06-16 | In paper note

Application

  • Shadow detection: [1]

  • Object-shadow pair detection/matting: [2] [10] [11]

  • Shadow removal: [3] [4] [5] [6]

  • Shadow generation: [7] [8]

  • Remove an occluder and its associated shadow: [9]

Dataset

Shadow Generation

  1. Shadow-AR (rendered)
  2. RGB-AO-depth (rendered)
  3. Composition datasets: WILDTRACK, Penn-Fudan, UA-DETRAC, Cityscapes, ShapeNet
  4. Soft shadow dataset (rendered)
  5. ShadowGAN (12,400 rendered images, 9,265 objects, 110 textures for rendering the plane, up to four objects per scene)
  6. SID (single object, 25,000 images, 12,500 3D objects, 50 homogeneous colors and 200 varied textured patterns)
  7. SID2 (45,000 images, similar to SID but with more than one object per scene)
  8. SHAD3S
  9. DESOBA

Shadow Removal/Detection

  1. ISTD / ISTD+ (1,870 triplets of shadow, shadow mask, and shadow-free images)
  2. USR (unpaired, 2,445 shadow images, 1,770 shadow-free images)
  3. SRD / SRD+ (3,088 paired shadow and shadow-free images, without ground-truth shadow masks)
  4. LRSS (37 image pairs, soft shadows)
  5. UIUC (76 paired shadow/shadow-free images)
  6. GTAV (5,723 pairs, 5,110 daylight scenes, occluders inside the camera view)
  7. SynShadow (based on USR, occluders outside the camera view; shadow/shadow-free/matte triplets synthesized from 10,000 rendered matte images and about 1,800 background images)
  8. UCF (245 pairs, shadow/shadow mask, detection only)
  9. SBU (4,727 pairs, shadow/shadow mask, detection only)
  10. CUHK-Shadow (10,500 pairs, shadow/shadow mask, detection only)
  11. SOBA (1,013 images)
  12. AISD (514 pairs, shadow/shadow mask, detection only, aerial images)
  13. Video shadow removal dataset (8 videos, shadow/shadow mask/shadow-free images)
  14. CMU dataset (135 pairs, shadow/shadow boundaries)
  15. ViSha (120 videos with 11,685 frames)
  16. VISAD (82 videos, half annotated)

References

  1. Zhu, Lei, et al. “Bidirectional feature pyramid network with recurrent attention residual modules for shadow detection.” Proceedings of the European Conference on Computer Vision (ECCV). 2018.

  2. Wang, Tianyu, et al. “Instance shadow detection.” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.

  3. Hu, Xiaowei, et al. “Mask-ShadowGAN: Learning to remove shadows from unpaired data.” Proceedings of the IEEE International Conference on Computer Vision. 2019.

  4. Le, Hieu, and Dimitris Samaras. “Shadow removal via shadow image decomposition.” Proceedings of the IEEE International Conference on Computer Vision. 2019.

  5. Cun, Xiaodong, Chi-Man Pun, and Cheng Shi. “Towards Ghost-free Shadow Removal via Dual Hierarchical Aggregation Network and Shadow Matting GAN.” arXiv preprint arXiv:1911.08718 (2019).

  6. Le, Hieu, and Dimitris Samaras. “From Shadow Segmentation to Shadow Removal.” European Conference on Computer Vision. Springer, Cham, 2020.

  7. Liu, Daquan, et al. “ARShadowGAN: Shadow Generative Adversarial Network for Augmented Reality in Single Light Scenes.” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.

  8. Zhan, Fangneng, et al. “Adversarial Image Composition with Auxiliary Illumination.” Proceedings of the Asian Conference on Computer Vision. 2020.

  9. Zhang, Edward, et al. “No Shadow Left Behind: Removing Objects and their Shadows using Approximate Lighting and Geometry.” CVPR, 2021.

  10. Wang, Tianyu, et al. “Single-stage instance shadow detection with bidirectional relation learning.” CVPR, 2021.

  11. Lu, Erika, et al. “Omnimatte: Associating objects and their effects in video.” CVPR, 2021.

Outlier Detection

Posted on 2022-06-16 | In paper note

Statistical methods

  • use a model (e.g., Gaussian) to fit the distribution of all data
  • use two models to fit the distributions of non-outliers and outliers separately
  • Grubbs’ test

Distance based methods

  • the density within a neighborhood
  • the distance from a nearest neighbor

Learning based method

  • clustering, the smallest cluster is likely to contain outliers
  • one-class classifier (e.g., one-class SVM)
  • binary classifier (e.g., naive bayes for spam filtering, weighted binary SVM)
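
A scikit-learn sketch of two of these options on toy data (hyperparameters are illustrative, not tuned): a one-class SVM for the one-class classifier route, and the local outlier factor for the neighborhood-density route:

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(200, 2)),     # inliers around the origin
               rng.uniform(-6, 6, size=(10, 2))])   # scattered outliers

# one-class SVM: fit the support of the inlier distribution
svm_labels = OneClassSVM(nu=0.05, kernel="rbf").fit(X).predict(X)   # +1 inlier, -1 outlier

# local outlier factor: density within a k-neighborhood
lof_labels = LocalOutlierFactor(n_neighbors=20).fit_predict(X)      # +1 inlier, -1 outlier

print((svm_labels == -1).sum(), (lof_labels == -1).sum())
```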

Optical Flow

Posted on 2022-06-16 | In paper note
  1. Estimate optical flow from consecutive video frames: FlowNet [1], FlowNet2 [2]

  2. Predict optical flow (motion) from a single static image: [3] [4] [5]

[1] Dosovitskiy, Alexey, et al. “Flownet: Learning optical flow with convolutional networks.” ICCV, 2015.

[2] Ilg, Eddy, et al. “Flownet 2.0: Evolution of optical flow estimation with deep networks.” CVPR, 2017.

[3] Gao, Ruohan, Bo Xiong, and Kristen Grauman. “Im2flow: Motion hallucination from static images for action recognition.” CVPR, 2018.

[4] Pintea, Silvia L., Jan C. van Gemert, and Arnold W. M. Smeulders. “Déjà Vu: Motion Prediction in Static Images.” ECCV, 2014.

[5] Walker, Jacob, Abhinav Gupta, and Martial Hebert. “Dense optical flow prediction from a static image.” ICCV, 2015.

Normalization

Posted on 2022-06-16 | In paper note

Normalize weights:

  1. weight normalization [1]: $\mathbf{w}=\frac{g}{\|\mathbf{v}\|} \mathbf{v}$; it can be viewed as a cheaper and less noisy approximation to batch normalization

Normalize outputs:

  1. batch normalization [2]: normalize each channel to zero mean and unit variance over the batch and spatial axes, followed by a learned scale and shift

  2. layer normalization [3]

  3. instance normalization [4]

  4. group normalization [5]

Denote N as the batch axis, C as the channel axis, and (H, W) as the spatial axes. Batch norm then normalizes over (N, H, W), layer norm over (C, H, W), instance norm over (H, W), and group norm over (H, W) together with the channels inside each group.
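
A minimal NumPy sketch of these four reductions on an (N, C, H, W) array (ε and each method's learned affine parameters are omitted for brevity; the group count is illustrative):

```python
import numpy as np

def normalize(x, axes, eps=1e-5):
    """Zero-mean, unit-variance normalization over the given axes."""
    mu = x.mean(axis=axes, keepdims=True)
    var = x.var(axis=axes, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

x = np.random.randn(8, 32, 16, 16)        # (N, C, H, W)
bn = normalize(x, (0, 2, 3))              # batch norm: per channel over N, H, W
ln = normalize(x, (1, 2, 3))              # layer norm: per sample over C, H, W
inorm = normalize(x, (2, 3))              # instance norm: per sample and channel
gn = normalize(x.reshape(8, 4, 8, 16, 16), (2, 3, 4)).reshape(x.shape)  # group norm, 4 groups
```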

[1] Salimans, Tim, and Diederik P. Kingma. “Weight normalization: A simple reparameterization to accelerate training of deep neural networks.” Advances in Neural Information Processing Systems. 2016.

[2] Ioffe, Sergey, and Christian Szegedy. “Batch normalization: Accelerating deep network training by reducing internal covariate shift.” arXiv preprint arXiv:1502.03167 (2015).

[3] Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. “Layer normalization.” arXiv preprint arXiv:1607.06450 (2016).

[4] Ulyanov, Dmitry, Andrea Vedaldi, and Victor Lempitsky. “Instance normalization: The missing ingredient for fast stylization.” arXiv preprint arXiv:1607.08022 (2016).

[5] Wu, Yuxin, and Kaiming He. “Group normalization.” arXiv preprint arXiv:1803.08494 (2018).
