Thinking with Images
Survey: [1]
Reference
[1] Su, Zhaochen, et al. “Thinking with images for multimodal reasoning: Foundations, methods, and future frontiers.” arXiv preprint arXiv:2506.23918 (2025).
We use the Dawid-Skene vote-aggregation algorithm to obtain the ground-truth label for each snippet, since it is often considered the ‘gold standard’ for aggregation in practice. Dawid-Skene is an unsupervised inference algorithm that computes maximum-likelihood estimates of observer error rates using the EM algorithm:
1) Using the labels given by multiple annotators, estimate the most likely “correct” label for each video snippet.
2) Based on the estimated correct answer for each object, compute the error rates for each annotator.
3) Taking into consideration the error rates for each annotator, recompute the most likely “correct” label for each object.
4) Repeat steps 2 and 3 until a termination criterion is met (error rates fall below a pre-specified threshold, or a pre-specified number of iterations is reached).
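The four steps above can be sketched as a minimal EM loop. This is a simplified illustration, not a reference implementation; the function name `dawid_skene` and the array layout are our own choices.

```python
import numpy as np

def dawid_skene(labels, n_classes, max_iter=50, tol=1e-6):
    """Estimate true labels and per-annotator error rates via EM.

    labels: (n_items, n_annotators) integer array of observed labels.
    Returns (posterior over true labels, annotator confusion matrices).
    """
    n_items, n_annotators = labels.shape

    # Step 1: initialize item-label posteriors by majority vote.
    T = np.zeros((n_items, n_classes))
    for i in range(n_items):
        for l in labels[i]:
            T[i, l] += 1
    T /= T.sum(axis=1, keepdims=True)

    for _ in range(max_iter):
        # Step 2 (M-step): pi[a, j, l] = P(annotator a says l | true class j).
        pi = np.zeros((n_annotators, n_classes, n_classes))
        for a in range(n_annotators):
            for i in range(n_items):
                pi[a, :, labels[i, a]] += T[i]
            pi[a] /= pi[a].sum(axis=1, keepdims=True)
        p = T.mean(axis=0)  # class priors

        # Step 3 (E-step): recompute posteriors given the error rates.
        T_new = np.tile(np.log(p + 1e-12), (n_items, 1))
        for a in range(n_annotators):
            T_new += np.log(pi[a][:, labels[:, a]].T + 1e-12)
        T_new = np.exp(T_new - T_new.max(axis=1, keepdims=True))
        T_new /= T_new.sum(axis=1, keepdims=True)

        # Step 4: stop once the posteriors have converged.
        if np.abs(T_new - T).max() < tol:
            T = T_new
            break
        T = T_new
    return T, pi
```

In practice the posterior `T` is hard-argmaxed into a single “correct” label per item, and `pi` gives each annotator's estimated confusion matrix.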
Vector quantization: VQ-VAE [1], VQ-VAE-2 [2], VQGAN [6].
Residual quantization: RQ-VAE [3].
Accelerating autoregressive generation: [4], [5].
Hierarchical residual quantization: VAR [7].
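As a toy illustration of the two quantization schemes above — plain nearest-codebook assignment as in VQ-VAE, and residual quantization as in RQ-VAE — assuming fixed codebooks rather than learned ones (function names are ours):

```python
import numpy as np

def vector_quantize(z, codebook):
    """Map each latent vector to its nearest codebook entry (VQ-VAE core step).

    z: (n, d) encoder outputs; codebook: (K, d) embedding vectors.
    Returns (indices, quantized vectors).
    """
    # Squared Euclidean distance between every latent and every code.
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)
    return idx, codebook[idx]

def residual_quantize(z, codebooks):
    """RQ-VAE-style residual quantization: quantize, subtract, repeat.

    Each successive codebook refines the residual left by the previous one.
    """
    residual, codes, quantized = z.copy(), [], np.zeros_like(z)
    for cb in codebooks:
        idx, q = vector_quantize(residual, cb)
        codes.append(idx)
        quantized += q
        residual -= q
    return codes, quantized
```

The residual variant lets a short stack of small codebooks approximate a latent more precisely than a single codebook of the same size, which is the key idea behind RQ-VAE and, at multiple scales, VAR.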
[1] Oord, Aaron van den, Oriol Vinyals, and Koray Kavukcuoglu. “Neural discrete representation learning.” arXiv preprint arXiv:1711.00937 (2017).
[2] Razavi, Ali, Aaron van den Oord, and Oriol Vinyals. “Generating diverse high-fidelity images with vq-vae-2.” Advances in neural information processing systems. 2019.
[3] Lee, Doyup, et al. “Autoregressive Image Generation using Residual Quantization.” arXiv preprint arXiv:2203.01941 (2022).
[4] Bond-Taylor, Sam, et al. “Unleashing Transformers: Parallel Token Prediction with Discrete Absorbing Diffusion for Fast High-Resolution Image Generation from Vector-Quantized Codes.” arXiv preprint arXiv:2111.12701 (2021).
[5] Chang, Huiwen, et al. “MaskGIT: Masked Generative Image Transformer.” arXiv preprint arXiv:2202.04200 (2022).
[6] Esser, Patrick, Robin Rombach, and Björn Ommer. “Taming Transformers for High-Resolution Image Synthesis.” CVPR, 2021.
[7] Tian, Keyu, et al. “Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction.” arXiv preprint arXiv:2404.02905 (2024).
Learn an attribute vector from the relations and differences between categories (each dimension is uninterpretable): [1] (Laplacian matrix), [2] (triplet loss).
Exploit local information and encode it into the attribute vector (each dimension is interpretable): [3] (discriminative clusters, doublets), [4] (joint attribute and feature learning).
Learn an attention map for each latent attribute: [5].
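A minimal sketch of the triplet loss used in [2] to pull same-category attribute vectors together and push different-category ones apart (the margin value and function name are illustrative):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Margin-based triplet loss over batches of attribute vectors.

    Encourages d(anchor, positive) + margin <= d(anchor, negative),
    i.e. same-category pairs closer than different-category pairs.
    """
    d_pos = ((anchor - positive) ** 2).sum(-1)  # same-category distance
    d_neg = ((anchor - negative) ** 2).sum(-1)  # different-category distance
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()
```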
[1] Yu, Felix X., et al. “Designing category-level attributes for discriminative visual recognition.” CVPR, 2013.
[2] Li, Yan, et al. “Discriminative learning of latent features for zero-shot recognition.” CVPR, 2018.
[3] Singh, Saurabh, Abhinav Gupta, and Alexei A. Efros. “Unsupervised discovery of mid-level discriminative patches.” ECCV, 2012.
[4] Huang, Chen, Chen Change Loy, and Xiaoou Tang. “Unsupervised learning of discriminative attributes and visual representations.” CVPR, 2016.
[5] Yang, Wenjie, et al. “Towards rich feature discovery with class activation maps augmentation for person re-identification.” CVPR, 2019.
Let us use $S$ to denote the set of training categories and $T$ to denote the set of testing categories.
Blend text and background images.
Jaderberg, Max, et al. “Synthetic data and artificial neural networks for natural scene text recognition.” arXiv preprint arXiv:1406.2227 (2014).
Gupta, Ankush, Andrea Vedaldi, and Andrew Zisserman. “Synthetic data for text localisation in natural images.” CVPR, 2016.
Zhan, Fangneng, Shijian Lu, and Chuhui Xue. “Verisimilar image synthesis for accurate detection and recognition of texts in scenes.” ECCV, 2018.
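The blending step shared by these synthetic-data pipelines is plain alpha compositing of a rendered text layer onto a background image; a minimal numpy sketch (function name and array layout are our own):

```python
import numpy as np

def blend(text_layer, alpha, background):
    """Alpha-composite a rendered text layer onto a background image.

    text_layer, background: (H, W, 3) float arrays in [0, 1].
    alpha: (H, W) text mask in [0, 1] (1 where text ink is).
    """
    a = alpha[..., None]  # broadcast mask over RGB channels
    return a * text_layer + (1 - a) * background
```

The cited methods differ mainly in how they choose fonts, placement regions, and lighting so that the composite looks plausible, not in the compositing itself.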
As mentioned in [1], one major concern with subjective annotation is that the labels provided by different workers for the same image may not be reliable, which calls for a consistency analysis of the annotations. We use Spearman’s rank correlation ρ between pairs of workers to measure consistency, and estimate p-values to evaluate the statistical significance of the correlation against a null hypothesis of uncorrelated responses. We use the Benjamini-Hochberg procedure to control the false discovery rate (FDR) under multiple comparisons [2]. At an FDR level of 0.05, 98.45% of batches show significant agreement among raters. Further consistency analysis of the dataset can be found in the supplementary material of [1].
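The Benjamini-Hochberg step can be sketched as follows. The pairwise ρ and p-values would come from, e.g., `scipy.stats.spearmanr`; this is an illustrative reimplementation, not the code behind the reported numbers.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of rejected null hypotheses at FDR level alpha."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest k with p_(k) <= (k/m) * alpha, then
    # reject the k smallest p-values.
    thresh = alpha * np.arange(1, m + 1) / m
    below = ranked <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()
        reject[order[: k + 1]] = True
    return reject
```

A “batch with significant agreement” would then be one whose worker-pair correlation survives this rejection step.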
[1] Kong, Shu, et al. “Photo aesthetics ranking network with attributes and content adaptation.” European Conference on Computer Vision. Springer, Cham, 2016.
[2] Benjamini, Yoav, and Daniel Yekutieli. “The control of the false discovery rate in multiple testing under dependency.” Annals of statistics (2001): 1165-1188.
Given the predicted softmax probabilities $p_i$ and the ground-truth probabilities or free-form weights $w_i$:
weighted softmax loss: $-\sum_{i} w_i \log p_i$
EMD softmax loss (ordered classes, L1 distance between CDFs): $\sum_{i} \big| \sum_{j \le i} (p_j - w_j) \big|$
softmax loss after label flip layer: $-\log{\sum_{i} w_i p_i}$
knowledge distillation (KL divergence): $\sum_{i} w_i \log \frac{w_i}{p_i}$
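The four losses can be written out as follows (a hedged numpy sketch with our own function names; the EMD variant assumes ordered classes and is computed as the L1 distance between CDFs, and the distillation term is taken as KL divergence):

```python
import numpy as np

def weighted_softmax_loss(p, w):
    """Cross-entropy with soft target weights: -sum_i w_i log p_i."""
    return -(w * np.log(p)).sum()

def emd_loss(p, w):
    """Earth mover's distance over ordered classes: L1 between CDFs."""
    return np.abs(np.cumsum(p) - np.cumsum(w)).sum()

def flip_layer_loss(p, w):
    """Negative log-probability of agreement after a label-flip layer."""
    return -np.log((w * p).sum())

def kd_loss(p, w):
    """KL divergence KL(w || p), the usual distillation objective
    (assumes strictly positive p and w)."""
    return (w * np.log(w / p)).sum()
```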
Shadow Generation
Shadow Removal/Detection
Zhu, Lei, et al. “Bidirectional feature pyramid network with recurrent attention residual modules for shadow detection.” Proceedings of the European Conference on Computer Vision (ECCV). 2018.
Wang, Tianyu, et al. “Instance shadow detection.” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
Hu, Xiaowei, et al. “Mask-ShadowGAN: Learning to remove shadows from unpaired data.” Proceedings of the IEEE International Conference on Computer Vision. 2019.
Le, Hieu, and Dimitris Samaras. “Shadow removal via shadow image decomposition.” Proceedings of the IEEE International Conference on Computer Vision. 2019.
Cun, Xiaodong, Chi-Man Pun, and Cheng Shi. “Towards Ghost-free Shadow Removal via Dual Hierarchical Aggregation Network and Shadow Matting GAN.” arXiv preprint arXiv:1911.08718 (2019).
Le, Hieu, and Dimitris Samaras. “From Shadow Segmentation to Shadow Removal.” European Conference on Computer Vision. Springer, Cham, 2020.
Liu, Daquan, et al. “ARShadowGAN: Shadow Generative Adversarial Network for Augmented Reality in Single Light Scenes.” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
Zhan, Fangneng, et al. “Adversarial Image Composition with Auxiliary Illumination.” Proceedings of the Asian Conference on Computer Vision. 2020.
Zhang, Edward, et al. “No Shadow Left Behind: Removing Objects and their Shadows using Approximate Lighting and Geometry.” CVPR, 2021.
Wang, Tianyu, et al. “Single-stage instance shadow detection with bidirectional relation learning.” CVPR, 2021.
Lu, Erika, et al. “Omnimatte: Associating objects and their effects in video.” CVPR, 2021.