Privileged Information

Learning Using Privileged Information (LUPI), together with its first realization SVM+, was proposed by Vapnik in [the first paper].

High-level ideas:

  • Use privileged information as an additional view, in the same way as in multi-view learning
  • Transfer knowledge between the privileged information and the primary information
  • Use privileged information to control the training process, e.g., to model per-example uncertainty or difficulty (training loss, label noise); see the SVM+ sketch after this list.
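A minimal sketch of the standard SVM+ primal (as in Vapnik's formulation), which realizes the third idea: the slack of each training example is not a free variable but is modeled by a correcting function in the privileged space, so the privileged features determine how much slack (difficulty) each example is granted. Notation: x_i are primary features, x_i* privileged features, (w, b) the decision function, (w*, b*) the correcting function, C and γ trade-off parameters.

```latex
\begin{aligned}
\min_{w,\,b,\,w^*,\,b^*} \quad
  & \tfrac{1}{2}\lVert w \rVert^2
    + \tfrac{\gamma}{2}\lVert w^* \rVert^2
    + C \sum_{i=1}^{n} \bigl( \langle w^*, x_i^* \rangle + b^* \bigr) \\
\text{s.t.} \quad
  & y_i \bigl( \langle w, x_i \rangle + b \bigr) \ge 1 - \bigl( \langle w^*, x_i^* \rangle + b^* \bigr),
    \qquad i = 1, \dots, n, \\
  & \langle w^*, x_i^* \rangle + b^* \ge 0,
    \qquad i = 1, \dots, n.
\end{aligned}
```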

Applications:

  • SVM for binary classification

    • model the slack variable: SVM+ [1]
    • model the margin: [1] [2]
    • structural SVM: [1]
    • theoretical analysis: [1] [2]
  • Gaussian process classification

  • L2 loss for classification/hashing

    • multi-label learning [1]
    • hashing: ITQ [1]
  • clustering

    • clustering [1]
  • metric learning for verification/classification

  • CRF

    • probabilistic inference [1]: similar to the multi-view treatment, but integrates over the latent privileged information space during testing (see the sketch after this list)
  • random forest

    • conditional regression forest [1]: the node-splitting criterion is designed using the privileged information
  • matrix factorization for collaborative filtering

  • Maximum Entropy Discrimination

  • Deep Learning
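
As a one-line sketch of the CRF-style inference above (notation illustrative: x is the primary input, x* the privileged information treated as latent at test time, y the output), the prediction marginalizes over the unobserved privileged space:

```latex
p(y \mid x) \;=\; \int p(y \mid x, x^*)\, p(x^* \mid x)\, \mathrm{d}x^* .
```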

Settings:

  • multi-view + LUPI [1]
  • multi-task multi-class LUPI [1]
  • multi-instance LUPI [1]
  • active learning + LUPI [1]
  • distillation + LUPI [1] (see the sketch after this list)
  • domain adaptation + LUPI [1]
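
Below is a minimal PyTorch-style sketch of the distillation + LUPI setting in the spirit of generalized distillation: a teacher is trained on the privileged features, and a student that only sees the primary features is trained on a mixture of the hard labels and the teacher's softened predictions. All dimensions, architectures, and hyperparameters (T, LAM, etc.) are illustrative assumptions, not taken from the cited papers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sizes and hyperparameters (assumptions, not from the cited work).
D_PRIMARY, D_PRIV, N_CLASSES = 32, 16, 5
T, LAM = 2.0, 0.5  # softmax temperature and imitation weight

teacher = nn.Sequential(nn.Linear(D_PRIV, 64), nn.ReLU(), nn.Linear(64, N_CLASSES))
student = nn.Sequential(nn.Linear(D_PRIMARY, 64), nn.ReLU(), nn.Linear(64, N_CLASSES))

def train_teacher(x_priv, y, epochs=100):
    """Step 1: fit the teacher on the privileged representation only."""
    opt = torch.optim.Adam(teacher.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(teacher(x_priv), y).backward()
        opt.step()

def train_student(x, x_priv, y, epochs=100):
    """Step 2: the student sees only primary features, but imitates the
    teacher's softened predictions in addition to the hard labels."""
    opt = torch.optim.Adam(student.parameters(), lr=1e-2)
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x_priv) / T, dim=1)  # privileged soft labels
    for _ in range(epochs):
        opt.zero_grad()
        logits = student(x)
        hard = F.cross_entropy(logits, y)
        soft = F.kl_div(F.log_softmax(logits / T, dim=1), soft_targets,
                        reduction="batchmean")
        ((1 - LAM) * hard + LAM * soft).backward()
        opt.step()

# Toy usage with random tensors (shapes only; replace with real features).
x = torch.randn(128, D_PRIMARY)    # primary features, available at test time
x_priv = torch.randn(128, D_PRIV)  # privileged features, training only
y = torch.randint(0, N_CLASSES, (128,))
train_teacher(x_priv, y)
train_student(x, x_priv, y)
```

At test time only the student and the primary features are used, which is exactly the LUPI constraint: privileged information is available during training but not during deployment.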