
Last Modified: May 24, 2022

CS 179: Introduction to Graphical Models (Spring 2022, UC Irvine)

Homework 7

Due Date: Friday, June 3, 2022

The submission for this homework should be a single PDF file containing all of the relevant code, figures, and any text explaining your results. When coding your answers, try to write functions to encapsulate and reuse code instead of copying and pasting the same code multiple times. This will not only reduce your programming effort, but also make it easier for us to understand and give credit for your work. Show and explain the reasoning behind your work!

In this homework, we will run a simple variational auto-encoder (VAE) model and explore its resulting representation.

To simplify the effort for this homework, a template containing much of the required code for the assignment is provided as a Jupyter notebook (.ipynb) file.

Part 1: Build the VAE & load data (30 points)

First, download the template Jupyter notebook and look over the provided code. You can run the code locally or on Google Colab, as you prefer. The VAE is defined by two quantities. First, an encoder defines the variational distribution q(Z|X = x), which we express as a Gaussian distribution,

    Z ∼ N( Z ; µ(x), ν(x) I )
    (µ, log ν) = W2 · α( W1 · x )

where α(·) is a ReLU activation function and · is the usual matrix-vector product. In other words, (µ, log ν) are expressed by a two-layer neural network with a ReLU activation on the hidden layer and linear activation on the output.

The decoder defines p(X|Z = z), which we express as a Gaussian distribution,

    X ∼ N( X ; µ̄(z), ω )
    µ̄(z) = σ( V2 · α( V1 · z ) )

where σ(·) is the logistic function, so that X is modeled as a two-layer neural network transformation of z, plus a fixed amount of Gaussian noise on the pixel values.
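Putting the two pieces together, the encoder/decoder pair above can be sketched as a small PyTorch module. This is only a sketch: the class name, the layer sizes D and H, and the latent dimension K = 2 are assumptions here; the actual template defines its own VAE class.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Sketch of the VAE described above (sizes D, H, K are placeholders)."""
    def __init__(self, D=400, H=64, K=2):
        super().__init__()
        # Encoder: (mu, log nu) = W2 alpha(W1 x); 2*K outputs = K means + K log-variances
        self.enc = nn.Sequential(nn.Linear(D, H), nn.ReLU(), nn.Linear(H, 2 * K))
        # Decoder: mu-bar(z) = sigma(V2 alpha(V1 z))
        self.dec = nn.Sequential(nn.Linear(K, H), nn.ReLU(), nn.Linear(H, D), nn.Sigmoid())
        self.K = K

    def encode(self, x):
        out = self.enc(x)
        return out[..., :self.K], out[..., self.K:]      # mu(x), log nu(x)

    def decode(self, z):
        return self.dec(z)                               # mu-bar(z)

    def forward(self, x):
        mu, logvar = self.encode(x)
        # Reparameterized sample z ~ q(Z | X = x) = N(mu, nu I)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decode(z), mu, logvar
```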

The loss is given by the divergence between p̂(X) q(Z|X) and p(Z) p(X|Z), where we assume p(Z) is a basic unit Gaussian, which we estimate by sampling from q(Z|X). Given samples {(x^(i), z^(i))}, our estimated loss is

    (1/m) Σ_i [ ‖µ̄(z^(i)) − x^(i)‖² + ½ ( tr ν(x^(i)) + ‖µ(x^(i))‖² − log det ν(x^(i)) − 1 ) ]
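A minimal sketch of this loss in PyTorch, assuming ν(x) is represented by a per-dimension log-variance vector (so tr ν = Σ exp(log ν) and log det ν = Σ log ν). The function name and signature are illustrative; the provided template computes its own loss inside train.

```python
import torch

def vae_loss(x, x_recon, mu, logvar):
    """Squared reconstruction error plus the Gaussian KL penalty,
    averaged over the m samples in the batch (sketch only)."""
    recon = ((x_recon - x) ** 2).sum(dim=1)                          # ||mu-bar(z) - x||^2
    kl = 0.5 * (logvar.exp() + mu ** 2 - logvar - 1).sum(dim=1)      # KL(q(Z|x) || N(0, I))
    return (recon + kl).mean()                                       # average over the batch
```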

Data: Our data consist of a small sample of hand images (mine, but inspired by Josh Tenenbaum's IsoMap experiment), located at

https://sli.ics.uci.edu/extras/cs179/data/frames.txt

Load the data, then shift and scale the values to be between zero and one:

    data -= data.min(1, keepdims=True)   # shift: each image's minimum becomes 0
    data /= data.max(1, keepdims=True)   # scale: each image's maximum becomes 1
    data = torch.tensor(data).float()    # convert to a float32 torch tensor

Part 2: Train the model (30 points)

You have also been provided with a function train, which computes the gradient on a mini-batch of data and updates the parameter values. Use this function to train your model:

    vae = VAE()
    optim = Adam(vae.parameters(), lr=0.0001)
    train(vae, data, optim, batch=16, epochs=500)
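For reference, a bare-bones version of such a training loop might look like the following. This assumes the model's forward pass returns the reconstruction along with µ and log ν, and it omits the plot_scatter calls that the provided train makes; use the template's train for the actual assignment.

```python
import torch
from torch.optim import Adam

def train(vae, data, optim, batch=16, epochs=500):
    """Sketch of a mini-batch training loop (plot_scatter calls omitted).
    Returns the last mini-batch loss, for monitoring."""
    m = data.shape[0]
    for ep in range(epochs):
        perm = torch.randperm(m)                 # shuffle each epoch
        for i in range(0, m, batch):
            x = data[perm[i:i + batch]]
            x_recon, mu, logvar = vae(x)         # assumed forward() signature
            recon = ((x_recon - x) ** 2).sum(dim=1)
            kl = 0.5 * (logvar.exp() + mu ** 2 - logvar - 1).sum(dim=1)
            loss = (recon + kl).mean()
            optim.zero_grad()
            loss.backward()
            optim.step()
            loss_val = loss.item()
    return loss_val
```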


This may be a bit slow. The train function also calls another provided function, plot_scatter, which plots a scatter of images (preventing any overlapping images), so you can visualize the two-dimensional latent space Z that is being used to capture the variability in the images.

Part 3: Visualize reconstructions (30 points)

Note: if you are short on time, or want to compare your results, you can obtain my trained VAE at

https://sli.ics.uci.edu/extras/cs179/data/vae.pkl

and load it using pickle:

    with open('vae.pkl', 'rb') as fh: vae = pickle.load(fh)

However, if you have trained your own in part (2), please use yours for this question as well.

(a) Select 6 random images from the data set. Encode and then decode each image, and show (imshow) the original image and the reconstructed image, µ̄(µ(x)). (Note: these will not look that great; the reconstruction network may be too simple to do a good job, or perhaps we just don't have enough data in this small data set. But they should be recognizable.)
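One way to sketch part (a), assuming your VAE exposes encode/decode methods like those described in part (1). The image shape below is a placeholder; reshape by whatever dimensions frames.txt actually contains.

```python
import torch
import matplotlib.pyplot as plt

def show_reconstructions(vae, data, n=6, shape=(20, 20)):
    """Encode and decode n random images; plot original (top row)
    vs reconstruction mu-bar(mu(x)) (bottom row). Sketch only;
    `shape` is an assumed image size."""
    idx = torch.randperm(data.shape[0])[:n]          # n random images
    with torch.no_grad():
        mu, logvar = vae.encode(data[idx])           # use the mean, no sampling noise
        recon = vae.decode(mu)                       # mu-bar(mu(x))
    fig, ax = plt.subplots(2, n, figsize=(2 * n, 4))
    for j in range(n):
        ax[0, j].imshow(data[idx[j]].reshape(shape), cmap='gray'); ax[0, j].axis('off')
        ax[1, j].imshow(recon[j].reshape(shape), cmap='gray');     ax[1, j].axis('off')
    return fig
```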

(b) Now select 10 points in a linear path across the distribution (say, from z = (−3, 0) to z = (3, 0)). For each latent location, decode µ̄(z). Interpret the resulting sequence of images.

Part 4: Work on your projects! (10 points)

Good luck with your projects, and your finals for any other classes!

