Network Structure

Imagine a network with three synapse layers. The input and output of the network each represent an entity. The hidden synapse layer represents a relation between the input and the output. There are different types of relations, and each type has its own instance of the hidden synapse layer, i.e. its own set of weights. The input and output synapse layers, by contrast, are never swapped out: the same two layers are shared across all relations, although their weights are still updated during training.

When we wish to predict an entity B given a relation R and an entity A, we swap in the hidden weight matrix for R and propagate A through the network to obtain B. What are the advantages and disadvantages of this model compared to other models, such as those discussed in Professor Hinton’s presentation?
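The architecture above can be sketched in a few lines of NumPy. This is a minimal illustration, not an implementation from the post: the class and parameter names (`RelationNet`, `n_entity`, `n_hidden`) and the sigmoid activations are assumptions made for the sketch. The key point is that `W_in` and `W_out` are shared, while each relation owns its own hidden matrix that gets swapped in at prediction time.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RelationNet:
    """Three synapse layers: W_in and W_out are shared across all
    relations, while each relation r has its own hidden matrix."""

    def __init__(self, n_entity, n_hidden, relations, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.1, (n_entity, n_hidden))   # shared
        self.W_out = rng.normal(0, 0.1, (n_hidden, n_entity))  # shared
        # one swappable hidden weight matrix per relation type
        self.W_rel = {r: rng.normal(0, 0.1, (n_hidden, n_hidden))
                      for r in relations}

    def predict(self, a, relation):
        """Propagate entity vector a through the swapped-in relation."""
        h = sigmoid(a @ self.W_in)
        h = sigmoid(h @ self.W_rel[relation])   # swap in R's matrix
        return sigmoid(h @ self.W_out)

net = RelationNet(n_entity=10, n_hidden=4,
                  relations=["parent_of", "spouse_of"])
a = np.zeros(10)
a[3] = 1.0                        # one-hot code for entity A
b = net.predict(a, "parent_of")   # prediction over entities for B
```

Note that only the dictionary lookup changes between relations; the forward pass itself is identical, which is what makes the hidden layer "swappable".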


1 Response to “Network Structure”

  1. Nicholas Léonard, February 17, 2013 at 13:18

    Hinton discusses two models. The first takes as input an entity A and a relation R, and outputs an entity B. The second takes as input the entities A and B, as well as the relation R, and outputs the probability that the triple (A, R, B) holds. My proposition is to have A as input, B as output, and R as a swappable hidden weight matrix. The difference is that my approach gives each R its own matrix, while Hinton’s approaches give each R its own vector. The disadvantage, then, is in the number of parameters to learn. My method potentially has more parameters, and would thus require more data, matrix decomposition techniques to keep the matrices small, additional regularization, or a combination thereof to keep it from overfitting.
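The matrix decomposition idea in the comment can be made concrete with a low-rank factorization, which is one standard way (an assumption here, not a technique the comment spells out) to shrink a per-relation matrix. Each full relation matrix W_r is replaced by a product U_r V_r of two thin matrices, trading expressiveness for far fewer weights; the sizes below are hypothetical.

```python
import numpy as np

n_hidden, rank = 100, 10   # hypothetical layer size and factor rank

# Full relation matrix: n_hidden^2 parameters per relation.
full_params = n_hidden * n_hidden

# Low-rank factorization W_r ~= U_r @ V_r: 2 * n_hidden * rank parameters.
rng = np.random.default_rng(0)
U = rng.normal(0, 0.1, (n_hidden, rank))
V = rng.normal(0, 0.1, (rank, n_hidden))
factored_params = U.size + V.size

h = rng.normal(size=n_hidden)
h_out = (h @ U) @ V   # same output shape as h @ W_r, 5x fewer weights here
```

With these sizes, the factorization cuts each relation's parameter count from 10,000 to 2,000, which is exactly the kind of saving that would let the matrix-per-relation model scale to many relation types without overfitting.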
