Few-Shot Image Generation: From Base Categories to Novel Categories

  1. Fusion-based methods: Generative Matching Network (GMN) [1] combines a VAE with a matching network that serves as both generator and recognizer. MatchingGAN [3] learns reasonable interpolation coefficients to fuse the conditional images. F2GAN [5] first fuses high-level features and then fills in low-level details.
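The fusion idea behind MatchingGAN can be sketched in a few lines: features of the K conditional images are combined with interpolation coefficients that sum to one, so the fused feature is a convex combination. This is a minimal NumPy sketch; the toy sizes, the fixed logit vector, and the function names are my own illustration (in the actual models both the features and the coefficients come from learned networks):

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_features(cond_feats, logits):
    """Fuse K conditional-image feature vectors into one, using
    softmax-normalized interpolation coefficients (a convex combination)."""
    coeffs = np.exp(logits - logits.max())
    coeffs /= coeffs.sum()           # coefficients are positive and sum to 1
    return coeffs @ cond_feats       # (K,) @ (K, D) -> (D,)

# Toy example: K = 3 conditional images, D = 8 feature dimensions.
feats = rng.normal(size=(3, 8))
logits = np.array([0.2, 1.5, -0.3])  # learned per episode in practice
fused = fuse_features(feats, logits)
```

With all-zero logits the coefficients are uniform, so the fused feature reduces to the plain mean of the conditional features.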

  2. Optimization-based methods: FIGR [2] is based on Reptile, while DAWSON [4] is based on MAML.
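The Reptile meta-update that FIGR builds on is simple enough to sketch: adapt a copy of the parameters to a sampled task with a few SGD steps, then move the meta-parameters toward the adapted copy, with no second-order gradients. Below is a toy NumPy sketch under assumed hyperparameters (the quadratic task and all names are illustrative, not FIGR's actual setup, where the parameters belong to a generator trained on one category per task):

```python
import numpy as np

def reptile_outer_step(theta, task_grad, inner_lr=0.01, inner_steps=5,
                       outer_lr=1.0):
    """One Reptile meta-update: run a few SGD steps on a sampled task,
    then move the meta-parameters toward the task-adapted parameters."""
    phi = theta.copy()
    for _ in range(inner_steps):
        phi = phi - inner_lr * task_grad(phi)   # inner-loop SGD
    return theta + outer_lr * (phi - theta)     # first-order meta-step

# Toy task: minimize ||theta - target||^2, whose gradient is 2*(theta - target).
target = np.array([1.0, -2.0])
task_grad = lambda p: 2.0 * (p - target)

theta = np.zeros(2)
for _ in range(100):
    theta = reptile_outer_step(theta, task_grad)
```

On this single toy task the meta-parameters converge toward the task optimum; with many tasks, Reptile instead finds an initialization that adapts quickly to each of them.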

  3. Transformation-based methods: DAGAN [6] samples random vectors to transform a conditional image into new images. DeltaGAN [7] learns a sample-specific delta.
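The transformation-based recipe can be sketched as: a network maps a conditional sample plus a random code to a sample-specific delta, and the new sample is the conditional sample plus that delta. The NumPy sketch below is only an illustration of this idea; the fixed linear "delta network" and all names are my own assumptions (DeltaGAN learns a convolutional delta generator adversarially):

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_with_delta(x_cond, z, delta_net):
    """Produce a new sample by adding a sample-specific delta, predicted
    from the conditional sample x_cond and a random code z."""
    return x_cond + delta_net(x_cond, z)

# Toy "delta network": a fixed linear map of [x; z] (learned in practice).
D, Z = 8, 4
W = rng.normal(scale=0.1, size=(D, D + Z))
delta_net = lambda x, z: W @ np.concatenate([x, z])

x = rng.normal(size=D)
samples = [generate_with_delta(x, rng.normal(size=Z), delta_net)
           for _ in range(5)]
```

Each random code z yields a different delta, so one conditional image produces multiple diverse new samples.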

References

[1] Sergey Bartunov, Dmitry Vetrov: “Few-shot Generative Modelling with Generative Matching Networks.” AISTATS (2018).

[2] Louis Clouâtre, Marc Demers: “FIGR: Few-shot Image Generation with Reptile.” arXiv preprint arXiv:1901.02199 (2019).

[3] Yan Hong, Li Niu, Jianfu Zhang, Liqing Zhang: “MatchingGAN: Matching-based Few-shot Image Generation.” ICME (2020).

[4] Weixin Liang, Zixuan Liu, Can Liu: “DAWSON: A Domain Adaptive Few Shot Generation Framework.” CoRR abs/2001.00576 (2020).

[5] Yan Hong, Li Niu, Jianfu Zhang, Weijie Zhao, Chen Fu, Liqing Zhang: “F2GAN: Fusing-and-Filling GAN for Few-shot Image Generation.” ACM MM (2020).

[6] Antreas Antoniou, Amos J. Storkey, Harrison Edwards: “Data Augmentation Generative Adversarial Networks.” arXiv preprint arXiv:1711.04340 (2018).

[7] Yan Hong, Li Niu, Jianfu Zhang, Jing Liang, Liqing Zhang: “DeltaGAN: Towards Diverse Few-shot Image Generation with Sample-Specific Delta.” CoRR abs/2009.08753 (2020).