Invariances for Learning Images

1) What kinds of invariances can we have when learning from images, and what are the current state-of-the-art techniques for embedding this prior knowledge into learning algorithms to establish those invariances?

2) What are the advantages and disadvantages of learning invariances instead of embedding prior knowledge about those invariances into our models?

3) A related question: given the task of identifying objects in videos, which might be more practical: learning invariant features or embedding prior knowledge about the invariances into our model?


1 Response to “Invariances for Learning Images”


  1. Nicholas Léonard, March 1, 2013 at 13:49

    1) Invariances: luminosity (amplitude and position of the light source), position (translation), scale, and rotation.

    Techniques: contrast normalization (subtract the mean and divide by the standard deviation), local contrast normalization, whitening (PCA, ZCA), standardization, hand-crafted descriptors such as SIFT, HOG, GIST, SURF (Speeded Up Robust Features), and LBP, and K-means; a small preprocessing sketch follows this comment.

    2) Learning can adapt to a particular task, for example prediction. When you instead embed knowledge of the invariances in the architecture of your model, for example a convolutional neural network, the model must learn to minimize its error under the constraint of this embedded knowledge. In some cases, this can make learning generalize and converge much better (see the convolutional sketch below).
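To make the preprocessing techniques from point 1) above concrete, here is a minimal NumPy sketch of global contrast normalization and ZCA whitening. The array shapes, epsilon values, and function names are illustrative assumptions, not code from the original discussion.

```python
# Minimal sketch of two preprocessing techniques mentioned above:
# global contrast normalization and ZCA whitening.
import numpy as np

def global_contrast_normalize(X, eps=1e-8):
    """Subtract each image's mean and divide by its standard deviation.

    X: array of shape (n_images, n_pixels), one flattened image per row.
    """
    X = X - X.mean(axis=1, keepdims=True)
    return X / (X.std(axis=1, keepdims=True) + eps)

def zca_whiten(X, eps=1e-5):
    """Decorrelate pixels while keeping the result close to the original image space."""
    X = X - X.mean(axis=0)                 # center each pixel across the dataset
    cov = np.cov(X, rowvar=False)          # pixel-by-pixel covariance matrix
    U, S, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T   # ZCA whitening matrix
    return X @ W

# Example usage with random stand-in data (100 flattened 32x32 grayscale images).
images = np.random.rand(100, 32 * 32)
images = global_contrast_normalize(images)
images = zca_whiten(images)
```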
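And to illustrate point 2), here is a minimal sketch of a convolutional network in PyTorch; the framework and all layer sizes are my own assumptions, not part of the comment. Shared convolutional filters plus max-pooling embed a degree of translation invariance directly into the architecture, so the model learns to minimize its error under that constraint.

```python
# Sketch of how a convolutional network embeds (approximate) translation invariance:
# weight sharing in the convolutions plus max-pooling make small shifts of the
# input change the features very little. All layer sizes are illustrative.
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2),  # shared filters scan every position
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling discards exact position
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)  # 8x8 maps for 32x32 inputs

    def forward(self, x):
        h = self.features(x)               # (batch, 32, 8, 8) for 32x32 inputs
        return self.classifier(h.flatten(1))

# A 32x32 grayscale image and a copy shifted by one pixel give similar logits.
model = SmallConvNet()
x = torch.randn(1, 1, 32, 32)
x_shifted = torch.roll(x, shifts=1, dims=-1)
print(model(x))
print(model(x_shifted))
```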

