vqvae
0.1. commitment loss
- why is it there?
	- the codebook loss uses a stop-gradient on the encoder output, so nothing in the reconstruction + codebook terms constrains the encoder; its outputs could drift arbitrarily far (toward infinity)
	- the commitment loss penalizes the encoder for straying from its assigned codebook entry, keeping encoder outputs close to the codebook
- see https://stats.stackexchange.com/questions/595049/whats-the-role-of-the-commitment-loss-in-vq-vae
- alternatively, see this explanation, which compares the two loss terms to the two alternating steps of k-means (assigning points to centroids vs. updating the centroids)
- https://stats.stackexchange.com/questions/592742/vq-vae-why-do-we-need-to-separate-the-codebook-alignment-loss-and-the-commitme
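The split described above can be made concrete with a minimal numpy sketch of the two terms (assumed shapes and the helper name `vq_losses` are illustrative, not from any particular library; a real implementation would use autograd with detach/stop-gradient so each term updates only one side):

```python
import numpy as np

def vq_losses(z_e, codebook, beta=0.25):
    """Sketch of the VQ-VAE codebook and commitment loss terms.

    z_e: (N, D) encoder outputs; codebook: (K, D) embeddings.
    In a real implementation, stop-gradient (detach) makes the
    codebook loss update only the codebook and the commitment
    loss update only the encoder; here we just compute the scalars.
    """
    # nearest codebook entry for each encoder output
    d = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K)
    idx = d.argmin(axis=1)
    z_q = codebook[idx]
    # codebook (alignment) loss: pulls embeddings toward sg(z_e),
    # like the centroid-update step of k-means
    codebook_loss = ((z_q - z_e) ** 2).mean()
    # commitment loss: pulls z_e toward sg(z_q), so encoder outputs
    # cannot drift arbitrarily far from the codebook
    commitment_loss = beta * ((z_e - z_q) ** 2).mean()
    return idx, codebook_loss, commitment_loss

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
z_e = np.array([[0.1, 0.1], [0.9, 0.9]])
idx, cb_loss, commit_loss = vq_losses(z_e, codebook)
```

Note that the two terms are the same squared distance; only the stop-gradient placement (and the β weight) differs, which is exactly why the linked answers frame them as two halves of a k-means-like alternating optimization.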