This post contains my personal notes on the keynote presentation given by Andreas Geiger at the CVPR 2020 Workshop on Autonomous Driving.
Experiment with critical learning period
This post shows experiments on an interesting phenomenon: if we mess up the training data in the early stage of training, the network will never learn properly.
Pitfalls encountered porting models to Keras from PyTorch/TensorFlow/MXNet
This post summarizes pitfalls encountered when manually porting model weights from various frameworks to Keras.
Let’s Train GANs to Play Guitar: Deep Generative Models for Guitar Cover
This post summarizes my attempt at generating guitar cover videos from audio input using GANs.
Notes for paper “How Does Batch Normalization Help Optimization? (No, It Is Not About Internal Covariate Shift)”
The true root of BN's success: BN affects network training by making the optimization landscape significantly smoother.
Experiments with group normalization
This post presents experiments with different normalization methods under various configurations.
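For reference, group normalization itself is only a few lines; the sketch below is a generic NumPy illustration assuming a channels-last layout, not code from the post (the function name and default values are my own):

```python
import numpy as np

def group_norm(x, gamma, beta, groups=32, eps=1e-5):
    """Generic group normalization sketch.

    x: activations of shape (N, H, W, C); gamma, beta: learned scale/shift of shape (C,).
    Assumes C is divisible by `groups`.
    """
    n, h, w, c = x.shape
    x = x.reshape(n, h, w, groups, c // groups)
    # Normalize over the spatial dims and the channels within each group.
    mean = x.mean(axis=(1, 2, 4), keepdims=True)
    var = x.var(axis=(1, 2, 4), keepdims=True)
    x = (x - mean) / np.sqrt(var + eps)
    x = x.reshape(n, h, w, c)
    return x * gamma + beta
```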
Notes for XGAN: Unsupervised Image-to-Image Translation for Many-to-Many Mappings
This post contains my personal notes on a recent paper from Google Brain.
Experiment with mixup: Beyond Empirical Risk Minimization
This post describes experiments with mixup, a simple yet effective technique for improving the accuracy of image classification models.
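For reference, the core of mixup fits in a few lines: each training pair is replaced by a convex combination of two examples. The NumPy sketch below is a generic illustration of the technique, not code from the post (the function name and the alpha default are assumptions of mine):

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2):
    """Mix a batch (x, y) with a shuffled copy of itself.

    x: inputs of shape (batch, ...); y: one-hot labels of shape (batch, classes).
    """
    lam = np.random.beta(alpha, alpha)          # mixing coefficient lambda ~ Beta(alpha, alpha)
    idx = np.random.permutation(len(x))         # random pairing within the batch
    x_mixed = lam * x + (1.0 - lam) * x[idx]    # x~ = lam * x_i + (1 - lam) * x_j
    y_mixed = lam * y + (1.0 - lam) * y[idx]    # y~ = lam * y_i + (1 - lam) * y_j
    return x_mixed, y_mixed
```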
Cloth Swapping with Deep Learning: Implement Conditional Analogy GAN in Keras
This post records my clothing-swapping project with GANs, starting from a Keras implementation of CAGAN and followed by architecture exploration for better output quality.
Experiment with Swish, ReLU and SELU (on neptune.ml)
Experiments with different activation functions.
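For context, the activations compared in that post are simple elementwise formulas; the NumPy sketch below is a generic illustration, not the post's experiment code:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def swish(x, beta=1.0):
    # swish(x) = x * sigmoid(beta * x); beta = 1 gives the commonly used Swish-1/SiLU
    return x / (1.0 + np.exp(-beta * x))

def selu(x, alpha=1.6732632423543772, scale=1.0507009873554805):
    # SELU with the constants from Klambauer et al., "Self-Normalizing Neural Networks"
    return scale * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))
```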