Wake-Sleep Learning

Wake-sleep learning is an algorithm that jointly trains a generative model, represented as a generative function `p`, and an inference model, represented as a generative function `q`.

During the wake phase, `p` is trained on complete traces generated by running `q` on the observed data, which can be done using the `train!` method. During the sleep phase, `q` is trained on data simulated from `p`, which can be done using `lecture!` or `lecture_batched!`:
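The wake phase can be sketched as follows. This is a minimal sketch, not Gen's reference implementation: the model `p`, proposal `q`, their parameters `mu` and `theta`, and the `observations` vector are all hypothetical, and `train!`'s keyword arguments follow the signatures in the Gen.jl parameter-optimization documentation as we understand them.

```julia
using Gen

# Hypothetical generative model p: trainable mean mu, latent z, observation x.
@gen function p()
    @param mu::Float64
    z ~ normal(mu, 1.0)
    x ~ normal(z, 0.1)
end

# Hypothetical inference model q: proposes z from an observation x.
@gen function q(x::Float64)
    @param theta::Float64
    z ~ normal(theta * x, 0.1)
end

init_param!(p, :mu, 0.0)
init_param!(q, :theta, 0.0)

observations = [0.5, -1.2, 2.0]  # stand-in observed data

# Wake phase: run q on an observation to fill in the latents, yielding a
# complete choice map for p; train! consumes (args, constraints) pairs.
function data_generator()
    x = observations[rand(1:length(observations))]
    q_trace = simulate(q, (x,))
    constraints = choicemap((:z, q_trace[:z]), (:x, x))
    return ((), constraints)
end

p_update = ParamUpdate(FixedStepGradientDescent(1e-3), p)
train!(p, data_generator, p_update;
       num_epoch=10, epoch_size=50, num_minibatch=10, minibatch_size=5)
```

The key design point is that `data_generator` returns *complete* traces of `p` (latents from `q` plus the observation), so the wake phase is ordinary maximum-likelihood training of `p`.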

Gen.lecture! — Function
score = lecture!(
    p::GenerativeFunction, p_args::Tuple,
    q::GenerativeFunction, get_q_args::Function)

Simulate a trace of `p` representing a training example, and use it to update the gradients of the trainable parameters of `q`.

Used for training `q` via maximum expected conditional likelihood. Random choices will be mapped from `p` to `q` based on their address. `get_q_args` maps a trace of `p` to an argument tuple of `q`. `score` is the conditional log likelihood (or an unbiased estimate of a lower bound on it, if not all of `q`'s random choices are constrained, or if `q` uses non-addressable randomness).

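As a sketch of the sleep phase (the model, proposal, and step size are hypothetical): each `lecture!` call simulates one trace of `p`, constrains `q`'s choices by address, and accumulates gradients of `q`'s parameters, which a `ParamUpdate` then applies.

```julia
using Gen

# Hypothetical model: latent z generates observation x.
@gen function p()
    z ~ normal(0.0, 1.0)
    x ~ normal(z, 0.1)
end

# Hypothetical inference model; its :z is matched to p's :z by address.
@gen function q(x::Float64)
    @param theta::Float64
    z ~ normal(theta * x, 0.1)
end
init_param!(q, :theta, 0.0)

# Map a simulated trace of p to q's argument tuple: q conditions on x.
get_q_args(p_trace) = (p_trace[:x],)

update = ParamUpdate(FixedStepGradientDescent(1e-3), q)
for _ in 1:1000
    score = lecture!(p, (), q, get_q_args)  # accumulates q's gradients
    apply!(update)                          # applies and resets them
end
```

Here `score` is the conditional log likelihood of `q`'s constrained choices for that one simulated example; averaging it across iterations gives a rough training signal.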
Gen.lecture_batched! — Function
score = lecture_batched!(
    p::GenerativeFunction, p_args::Tuple,
    q::GenerativeFunction, get_q_args::Function)

Simulate a batch of traces of `p` representing training samples, and use them to update the gradients of the trainable parameters of `q`.

Like `lecture!` but `q` is batched, and must make its random choices for training sample `i` under the hierarchical address namespace `i::Int` (e.g. `i => :z`). `get_q_args` maps a vector of traces of `p` to an argument tuple of `q`.

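A hedged sketch of the batched variant, following the signature above (the model `p`, the batched proposal `q_batched`, and its parameter are hypothetical). The essential difference from `lecture!` is that each sample's choices live under the namespace `i => ...`, and `get_q_args` receives a vector of traces:

```julia
using Gen

@gen function p()
    z ~ normal(0.0, 1.0)
    x ~ normal(z, 0.1)
end

# Batched proposal: choices for training sample i live under i => :z.
@gen function q_batched(xs::Vector{Float64})
    @param theta::Float64
    for (i, x) in enumerate(xs)
        {i => :z} ~ normal(theta * x, 0.1)
    end
end
init_param!(q_batched, :theta, 0.0)

# get_q_args maps a vector of traces of p to q_batched's argument tuple.
get_q_args(p_traces) = ([tr[:x] for tr in p_traces],)

update = ParamUpdate(FixedStepGradientDescent(1e-3), q_batched)
score = lecture_batched!(p, (), q_batched, get_q_args)
apply!(update)
```

Batching amortizes one gradient update over many simulated examples, which is typically more stable than the per-example updates of `lecture!`.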