Instability in DBN

When training a DBN by sampling, as described in section 6.1 of Yoshua's book, for each layer we generate a sample using CD and then use that sample as input to the next layer, repeating the process.
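
To make the setup concrete, here is a rough NumPy sketch of what I mean (binary units, CD-1, greedy layer-wise stacking). The hyperparameters and the exact CD-1 update below are just illustrative assumptions on my part, not the precise recipe from the book:

```python
# Sketch: greedy layer-wise DBN pre-training where each RBM is trained with CD-1,
# and samples of its hidden units become the "data" for the next RBM.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    # Bernoulli sample from activation probabilities
    return (rng.random(p.shape) < p).astype(float)

def train_rbm_cd1(v_data, n_hidden, epochs=10, lr=0.05):
    n_visible = v_data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)
    b_h = np.zeros(n_hidden)
    for _ in range(epochs):
        # positive phase: hidden activations driven by the data
        p_h = sigmoid(v_data @ W + b_h)
        h = sample(p_h)
        # negative phase: one Gibbs step down and back up (this is the CD-1 truncation)
        v_neg = sample(sigmoid(h @ W.T + b_v))
        p_h_neg = sigmoid(v_neg @ W + b_h)
        # approximate gradient: data statistics minus one-step reconstruction statistics
        n = v_data.shape[0]
        W += lr * (v_data.T @ p_h - v_neg.T @ p_h_neg) / n
        b_v += lr * (v_data - v_neg).mean(axis=0)
        b_h += lr * (p_h - p_h_neg).mean(axis=0)
    return W, b_v, b_h

def greedy_dbn(v_data, layer_sizes):
    layers, x = [], v_data
    for n_hidden in layer_sizes:
        W, b_v, b_h = train_rbm_cd1(x, n_hidden)
        layers.append((W, b_v, b_h))
        # sample this layer's hidden units and feed the samples to the next RBM
        x = sample(sigmoid(x @ W + b_h))
    return layers

# toy usage: 500 binary vectors of dimension 20, stacked into a 20-15-10 DBN
data = (rng.random((500, 20)) < 0.3).astype(float)
dbn = greedy_dbn(data, layer_sizes=[15, 10])
```
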

But CD is a "weak" form of Gibbs sampling, and we don't really sample from the "true" distribution. So each layer takes as input the result of a biased sampling and generates another biased sample. I would suspect that such a chain of biased samplings leads to a significant difference between the true distribution and the one learned (like error propagation in chaotic systems, as we have seen for recurrent neural nets).
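
To illustrate the bias I am worried about, here is a tiny self-contained example: an RBM with 3 visible and 2 hidden units and arbitrary weights I made up, small enough that the exact marginal p(v) can be enumerated and compared with the distribution of the k-th Gibbs sample started from a fixed data vector. For small k (the CD regime) the samples should remain tied to the starting point rather than the model's distribution:

```python
# Sketch: CD-k is a Gibbs chain truncated after k steps, so for small k its
# samples are still biased toward the starting data vector rather than p(v).
import itertools
import numpy as np

rng = np.random.default_rng(1)

W = np.array([[ 2.0, -1.0],
              [-1.5,  1.0],
              [ 1.0,  2.0]])      # visible x hidden weights (arbitrary)
b_v = np.array([0.2, -0.1, 0.0])
b_h = np.array([0.0, 0.3])

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def exact_marginal():
    # enumerate all joint states (v, h) and sum out h to get the true p(v)
    states = list(itertools.product([0, 1], repeat=3))
    p = np.zeros(len(states))
    for i, v in enumerate(states):
        v = np.array(v, float)
        for h in itertools.product([0, 1], repeat=2):
            h = np.array(h, float)
            p[i] += np.exp(v @ W @ h + b_v @ v + b_h @ h)
    return p / p.sum()

def gibbs_sample(v, k):
    # k alternating steps h ~ p(h|v), v ~ p(v|h); CD-k keeps the k-th sample
    for _ in range(k):
        h = (rng.random(2) < sigmoid(v @ W + b_h)).astype(float)
        v = (rng.random(3) < sigmoid(W @ h + b_v)).astype(float)
    return v

def empirical_marginal(k, n_chains=10000, v0=np.array([1.0, 1.0, 0.0])):
    counts = np.zeros(8)
    for _ in range(n_chains):
        v = gibbs_sample(v0.copy(), k)
        counts[int(v[0] * 4 + v[1] * 2 + v[2])] += 1
    return counts / counts.sum()

p_true = exact_marginal()
for k in (1, 2, 5, 50):
    tv = 0.5 * np.abs(empirical_marginal(k) - p_true).sum()
    print(f"k = {k:2d}  total variation from true p(v) = {tv:.3f}")
```
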

Is this a real problem for DBNs? (Why?) If so, how do we handle it?
