Training
• Apply the clean MNIST data set + added noise as the input
• Use the clean MNIST data set as the output (target)
• Train the autoencoder using backpropagation
(Figure: autoencoder training by backpropagation. Clean MNIST samples + added noise feed the input; the same clean MNIST samples are the target output.)
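As a sketch only (not the chapter's code), the three steps above collapse into a single Keras training call, assuming a hypothetical already-built encoder+decoder model named autoencoder and the noisy/clean arrays x_train_noisy and x_train prepared as in the code part later in this section; the optimizer and loss choices below are also assumptions:

# Sketch: 'autoencoder' is a hypothetical Keras model (encoder + decoder)
autoencoder.compile(optimizer='adam', loss='mse')   # backpropagation minimizes reconstruction error
autoencoder.fit(x_train_noisy, x_train,             # input: clean MNIST + added noise
                epochs=10, batch_size=128)          # target: the same clean MNIST samples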
Recall
• After training, autoencoders can be used to remove noise
(Figure: a trained autoencoder maps a noisy input to a denoised output.)
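In code, this usage is just a forward pass. A sketch, assuming the hypothetical trained model autoencoder from above and the noisy test images x_test_noisy from the code part later in this section:

# Sketch: run noisy images through the trained autoencoder to obtain denoised outputs
x_denoised = autoencoder.predict(x_test_noisy)   # noisy input in, denoised output out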
Exercise 2
• (a) Autoencoder training: If you have 1000 images for each of the handwritten numerals (classes 0 to 9) in the clean data set (10 x 1000 = 10,000 images in total), describe the training process of an autoencoder using pseudo code.
• (b) Autoencoder usage: If the trained autoencoder receives a noisy image of a handwritten numeral, what do you expect at the output?
Answer: Exercise 2
• Answer to Exercise 2(a): autoencoder training (a runnable sketch follows this answer)
• For (epoch = 1; epoch < max_epoch; epoch++)
– { For all 10,000 images {
• Feed each clean image plus noise to the encoder input
• Present the clean image of the numeral as the target at the decoder output
• Use backpropagation to train the whole autoencoder network (encoder + decoder)
• }
• Break if the loss is sufficiently small
– }
• Answer to Exercise 2(b): autoencoder usage: If the trained autoencoder receives a noisy image of a handwritten numeral, what do you expect at the output?
– Answer: a denoised image of the numeral
(Figure: clean image of the numeral "2" + noise fed to the autoencoder, which outputs a denoised "2".)
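The loop above can be written as a minimal, self-contained Keras sketch. The small dense 784-64-784 network, the MSE loss, the Adam optimizer, and the stopping threshold below are illustrative assumptions, not the chapter's actual model:

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Clean MNIST images, flattened to vectors in [0, 1]
(x_train, _), _ = keras.datasets.mnist.load_data()
x_train = x_train.astype('float32') / 255.0
x_train = x_train.reshape(-1, 28 * 28)
x_train = x_train[:10000]   # roughly matches the exercise's 10,000-image data set

# Fixed noisy copies: input = clean image + noise, clipped to the valid pixel range
x_train_noisy = np.clip(x_train + np.random.normal(0.5, 0.5, x_train.shape), 0.0, 1.0)

# Small dense autoencoder (encoder: 784 -> 64, decoder: 64 -> 784); sizes are illustrative
autoencoder = keras.Sequential([
    keras.Input(shape=(784,)),
    layers.Dense(64, activation='relu'),      # encoder
    layers.Dense(784, activation='sigmoid'),  # decoder
])
autoencoder.compile(optimizer='adam', loss='mse')

max_epoch = 20
for epoch in range(1, max_epoch + 1):
    # One pass over all images: noisy input, clean target, weights updated by backpropagation
    history = autoencoder.fit(x_train_noisy, x_train, epochs=1, batch_size=128, verbose=0)
    loss = history.history['loss'][0]
    print(f'epoch {epoch}: loss = {loss:.4f}')
    if loss < 0.01:   # "break if the loss is sufficiently small"; threshold chosen arbitrarily
        break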
Code: Part(i): obtain dataset and add noise
https://towardsdatascience.com/how-to-reduce-image-noises-by-autoencoder-65d5e6de543

#part1 ---------------------------------------------------
import numpy as np                 # imports added for completeness
from keras.datasets import mnist

np.random.seed(1337)

# MNIST dataset
(x_train, _), (x_test, _) = mnist.load_data()

image_size = x_train.shape[1]
x_train = np.reshape(x_train, [-1, image_size, image_size, 1])
x_test = np.reshape(x_test, [-1, image_size, image_size, 1])
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255

# Generate corrupted MNIST images by adding noise with normal dist
# centered at 0.5 and std=0.5
noise = np.random.normal(loc=0.5, scale=0.5, size=x_train.shape)
x_train_noisy = x_train + noise
noise = np.random.normal(loc=0.5, scale=0.5, size=x_test.shape)
x_test_noisy = x_test + noise

# Clip the corrupted images back to the valid pixel range [0, 1]
x_train_noisy = np.clip(x_train_noisy, 0., 1.)
x_test_noisy = np.clip(x_test_noisy, 0., 1.)
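As a quick check (not part of the referenced code), a few clean digits and their corrupted versions can be displayed side by side, assuming matplotlib is available and the arrays from Part(i) above are in scope:

import matplotlib.pyplot as plt

# Top row: clean digits; bottom row: the same digits with added noise
n = 5
plt.figure(figsize=(10, 4))
for i in range(n):
    plt.subplot(2, n, i + 1)
    plt.imshow(x_train[i].reshape(image_size, image_size), cmap='gray')
    plt.axis('off')
    plt.subplot(2, n, n + i + 1)
    plt.imshow(x_train_noisy[i].reshape(image_size, image_size), cmap='gray')
    plt.axis('off')
plt.show()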