Parameter sharing

One way to use parameter sharing is to apply the same parameters to different parts of the input.

For a (d x d) image, is this like assuming that all (s x s) sub-windows of the image have the same statistical structure?

If yes, this assumption seems pretty restrictive. Why does it increase the power of our model?


1 Response to “Parameter sharing”

  1. Sina Honari February 7, 2013 at 11:42

    That’s correct: in parameter-sharing models we assume that there is some shared structure across different parts of the image, and the higher levels of the neural network then learn how these lower-level structures are correlated. The assumption is indeed restrictive, but if such correlations really exist in the data, it gives us better representational power for a given number of parameters.
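    To make the parameter-count argument concrete, here is a minimal sketch (my own illustration, not from the course material) of applying one shared (s x s) kernel to every sub-window of a (d x d) image, as a convolution does. Without sharing, each window position would need its own s*s weights; with sharing, s*s weights cover all positions.

    ```python
    import numpy as np

    def conv2d_shared(image, kernel):
        """Slide ONE shared (s x s) kernel over every (s x s) sub-window
        of a (d x d) image. All window positions reuse the same weights."""
        d = image.shape[0]
        s = kernel.shape[0]
        out = np.empty((d - s + 1, d - s + 1))
        for i in range(d - s + 1):
            for j in range(d - s + 1):
                out[i, j] = np.sum(image[i:i + s, j:j + s] * kernel)
        return out

    image = np.arange(16, dtype=float).reshape(4, 4)  # d = 4
    kernel = np.ones((2, 2)) / 4.0                    # s = 2: a shared 2x2 averaging filter
    out = conv2d_shared(image, kernel)

    # Shared: s*s = 4 parameters for all 9 window positions.
    # Unshared (fully connected per window): 9 * 4 = 36 parameters.
    print(out.shape)  # (3, 3)
    ```

    The point of the sketch is only the reuse of `kernel` at every (i, j): the restriction that all sub-windows share statistics is what lets 4 weights do the work that would otherwise take 36.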
