ReLU, smoothness assumption and Pylearn

1) What makes ReLU scale-invariant? How does this property relate to images?
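
For context, the scale invariance in question is positive homogeneity: for any scalar α ≥ 0, relu(α·x) = max(0, α·x) = α·max(0, x) = α·relu(x). For images, globally rescaling pixel intensities (e.g., a brightness or contrast change) therefore rescales all first-layer ReLU activations by the same factor (ignoring biases) without changing which units are active, whereas a saturating nonlinearity would distort the activation pattern. A minimal numpy check (the helper names are mine):

```python
import numpy as np

def relu(x):
    """Rectified linear unit: elementwise max(0, x)."""
    return np.maximum(0.0, x)

rng = np.random.RandomState(0)
x = rng.randn(5)      # a toy input vector (think: image pixels)
alpha = 3.7           # any non-negative scale factor

# Positive homogeneity: relu(alpha * x) == alpha * relu(x)
print(np.allclose(relu(alpha * x), alpha * relu(x)))        # True

# A saturating nonlinearity such as the sigmoid lacks this property.
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
print(np.allclose(sigmoid(alpha * x), alpha * sigmoid(x)))  # False
```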

2) What is problematic about the smoothness assumption? How can we get around it and still use a simple parametric or local non-parametric model (a Gaussian kernel, for example)?
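
As a concrete instance of the local non-parametric model mentioned above, consider Nadaraya-Watson regression with a Gaussian kernel: the prediction at a query point is a distance-weighted average of training targets. This encodes exactly the smoothness assumption, and it also exposes its main weakness: far from the training data all kernel weights vanish, so the model has nothing local to interpolate from, and in high dimensions almost every query point is far from the data. A minimal sketch (the function name and toy data are mine):

```python
import numpy as np

def gaussian_kernel_predict(x_query, X_train, y_train, bandwidth=0.5):
    """Nadaraya-Watson estimate: Gaussian-kernel-weighted average of targets."""
    d2 = np.sum((X_train - x_query) ** 2, axis=1)   # squared distances
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))        # Gaussian kernel weights
    if w.sum() < 1e-12:
        # No training point is effectively near the query:
        # the smoothness assumption gives us nothing to extrapolate with.
        return np.nan
    return np.dot(w, y_train) / w.sum()

rng = np.random.RandomState(0)
X = rng.uniform(-1.0, 1.0, size=(50, 1))   # training inputs cover [-1, 1]
y = np.sin(3.0 * X[:, 0])                  # a smooth target function

print(gaussian_kernel_predict(np.array([0.1]), X, y))   # near data: sensible
print(gaussian_kernel_predict(np.array([10.0]), X, y))  # far from data: nan
```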

A practical question about Pylearn for Ian:

3) Is there a convenient way to define an unsupervised pretraining phase followed by supervised training using a yaml file? Or do we need to write a script to do so?


2 Responses to “ReLU, smoothness assumption and Pylearn”


  1. Ian Goodfellow (April 4, 2013 at 10:43)

     I think you can just pass two yaml files to train.py and it will run both of them in sequence.

  2. Xavier Bouthillier (April 4, 2013 at 10:50)

     OK, and in the second file, rather than defining the model structure, I give the file path to the saved layer or model I want to reuse? Is it possible in the yaml definition to specify a single layer of a model, if I only need to reuse that layer and not the entire model?
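
Following up on this exchange, here is a sketch of the two-file setup, driven from Python the way train.py would run it (the yaml file names are placeholders, and the exact class paths should be checked against your Pylearn2 version). The first yaml file trains an unsupervised model and saves it via its save_path; the second then builds the supervised MLP. On Xavier's question, one option worth verifying is that the second file can reuse just the saved layer, without referencing the whole model, by wrapping it in pylearn2.models.mlp.PretrainedLayer with layer_content given by the !pkl: tag pointing at the saved pickle.

```python
# Minimal driver mirroring `train.py pretrain_layer1.yaml finetune_mlp.yaml`:
# build each yaml-described Train object and run it in sequence.
# The yaml file names below are placeholders to adapt.
from pylearn2.config import yaml_parse

for path in ["pretrain_layer1.yaml", "finetune_mlp.yaml"]:
    with open(path) as f:
        train = yaml_parse.load(f.read())  # instantiate the Train object
    train.main_loop()                      # run it; save_path persists the model
```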

