GAN

Blog content

One neural network tries to generate realistic data (note that GANs can be used to model any data distribution, but are mainly used for images these days), and the other network tries to discriminate between real data and data generated by the generator network.

The generator network uses the discriminator as a loss function and updates its parameters to generate data that starts to look more realistic.

The discriminator network, on the other hand, updates its parameters to make itself better at picking out fake data from real data. So it too gets better at its job.

The game of cat and mouse continues, until the system reaches a so-called “equilibrium,” where the generator creates data that looks real enough that the best the discriminator can do is guess randomly.

Key points

The standard GAN training loop has three steps:

  1. Train the discriminator on real data from the training set.
  2. Train the discriminator on data produced by the generator.
  3. Train the generator to produce data that the discriminator mistakes for real.

Using $-\log D(G(z))$ as the generator objective, rather than $\log(1 - D(G(z)))$, provides a higher gradient in the early stage of training:
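In symbols, this is the standard trick from the original GAN paper: when $D(G(z))$ is close to 0 (early in training, when fakes are easy to spot), the left-hand objective saturates and gives almost no gradient, while the right-hand form does not:

$$\min_G \; \mathbb{E}_z\big[\log(1 - D(G(z)))\big] \;\;\longrightarrow\;\; \max_G \; \mathbb{E}_z\big[\log D(G(z))\big]$$

This is also what the PyTorch code below implements: `adversarial_loss(discriminator(gen_imgs), valid)` is exactly $-\log D(G(z))$.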

Video reference

https://www.bilibili.com/video/BV1Wv411h7kN?p=62&vd_source=909d7728ce838d2b9656fb13a31483ca

Theory behind GAN

We want to make the two distributions $P_G$ and $P_{data}$ as close as possible (Div in the formula below denotes a divergence):
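In formula form (the standard statement of this goal):

$$G^* = \arg\min_G \; \mathrm{Div}(P_G, P_{data})$$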

We do not know what the distributions $P_G$ and $P_{data}$ actually look like, but we can draw samples from both of them.

The objective function is as follows:
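This is the minimax value function from the original GAN paper; D is trained to maximize it, and its maximum value measures how different the two distributions are:

$$V(G, D) = \mathbb{E}_{x \sim P_{data}}[\log D(x)] + \mathbb{E}_{x \sim P_G}[\log(1 - D(x))], \qquad D^* = \arg\max_D V(D, G)$$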

Optimizing G is equivalent to minimizing the JS divergence:
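Plugging the optimal discriminator back into the value function gives the standard result:

$$\max_D V(G, D) = -2\log 2 + 2\,\mathrm{JSD}(P_{data} \,\|\, P_G)$$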

So $\mathrm{Div}(P_G,P_{data})$ can be replaced by $\max_D V(G,D)$.

That is:
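$$G^* = \arg\min_G \max_D V(G, D)$$

The inner maximization plays the role of the divergence measure, and the outer minimization trains the generator.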

The solution procedure is as follows:

Maximize $\tilde{V}$ = minimize cross-entropy: training D is just training a binary real-vs-fake classifier with the binary cross-entropy loss.
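Written as alternating gradient updates (a sketch of the procedure; $\eta$ is the learning rate):

$$\theta_D \leftarrow \theta_D + \eta\,\nabla_{\theta_D}\tilde{V} \quad \text{(several steps, G fixed)}$$

$$\theta_G \leftarrow \theta_G - \eta\,\nabla_{\theta_G}\max_D V(G, D) \quad \text{(one step, D fixed)}$$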

Intuition

The blue distribution and the green distribution keep moving closer to each other.

PyTorch code

import argparse
import os
import numpy as np
import math

import torchvision.transforms as transforms
from torchvision.utils import save_image

from torch.utils.data import DataLoader
from torchvision import datasets
from torch.autograd import Variable

import torch.nn as nn
import torch.nn.functional as F
import torch

os.makedirs("images", exist_ok=True)

parser = argparse.ArgumentParser()
parser.add_argument("--n_epochs", type=int, default=200, help="number of epochs of training")
parser.add_argument("--batch_size", type=int, default=64, help="size of the batches")
parser.add_argument("--lr", type=float, default=0.0002, help="adam: learning rate")
parser.add_argument("--b1", type=float, default=0.5, help="adam: decay of first order momentum of gradient")
parser.add_argument("--b2", type=float, default=0.999, help="adam: decay of second order momentum of gradient")
parser.add_argument("--n_cpu", type=int, default=8, help="number of cpu threads to use during batch generation")
parser.add_argument("--latent_dim", type=int, default=100, help="dimensionality of the latent space")
parser.add_argument("--img_size", type=int, default=28, help="size of each image dimension")
parser.add_argument("--channels", type=int, default=1, help="number of image channels")
parser.add_argument("--sample_interval", type=int, default=400, help="interval between image samples")
opt = parser.parse_args()
print(opt)

img_shape = (opt.channels, opt.img_size, opt.img_size)

cuda = True if torch.cuda.is_available() else False


class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()

        def block(in_feat, out_feat, normalize=True):
            layers = [nn.Linear(in_feat, out_feat)]
            if normalize:
                layers.append(nn.BatchNorm1d(out_feat, 0.8))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers

        self.model = nn.Sequential(
            *block(opt.latent_dim, 128, normalize=False),  # block() returns a list, so * unpacks it and passes nn.Linear(), nn.LeakyReLU(), etc. into nn.Sequential one by one
            *block(128, 256),
            *block(256, 512),
            *block(512, 1024),
            nn.Linear(1024, int(np.prod(img_shape))),  # np.prod() returns the product of the array elements over the given axis
            nn.Tanh()
        )

    def forward(self, z):
        img = self.model(z)
        img = img.view(img.size(0), *img_shape)
        return img


class Discriminator(nn.Module):
    def __init__(self):
        super(Discriminator, self).__init__()

        self.model = nn.Sequential(
            nn.Linear(int(np.prod(img_shape)), 512),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(512, 256),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(256, 1),
            nn.Sigmoid(),
        )

    def forward(self, img):
        img_flat = img.view(img.size(0), -1)
        validity = self.model(img_flat)

        return validity


# Loss function
adversarial_loss = torch.nn.BCELoss()

# Initialize generator and discriminator
generator = Generator()
discriminator = Discriminator()

if cuda:
    generator.cuda()
    discriminator.cuda()
    adversarial_loss.cuda()

# Configure data loader
os.makedirs("../../data/mnist", exist_ok=True)
dataloader = torch.utils.data.DataLoader(
    datasets.MNIST(
        "../../data/mnist",
        train=True,
        download=True,
        transform=transforms.Compose(
            [transforms.Resize(opt.img_size), transforms.ToTensor(), transforms.Normalize([0.5], [0.5])]
        ),
    ),
    batch_size=opt.batch_size,
    shuffle=True,
)

# Optimizers
optimizer_G = torch.optim.Adam(generator.parameters(), lr=opt.lr, betas=(opt.b1, opt.b2))
optimizer_D = torch.optim.Adam(discriminator.parameters(), lr=opt.lr, betas=(opt.b1, opt.b2))

Tensor = torch.cuda.FloatTensor if cuda else torch.FloatTensor

# ----------
#  Training
# ----------

for epoch in range(opt.n_epochs):
    for i, (imgs, _) in enumerate(dataloader):

        # Adversarial ground truths
        valid = Variable(Tensor(imgs.size(0), 1).fill_(1.0), requires_grad=False)  # real is labeled 1
        fake = Variable(Tensor(imgs.size(0), 1).fill_(0.0), requires_grad=False)  # fake is labeled 0

        # Configure input
        real_imgs = Variable(imgs.type(Tensor))

        # -----------------
        #  Train Generator
        # -----------------

        optimizer_G.zero_grad()

        # Sample noise as generator input
        z = Variable(Tensor(np.random.normal(0, 1, (imgs.shape[0], opt.latent_dim))))

        # Generate a batch of images
        gen_imgs = generator(z)

        # Loss measures generator's ability to fool the discriminator
        g_loss = adversarial_loss(discriminator(gen_imgs), valid)

        g_loss.backward()
        optimizer_G.step()

        # ---------------------
        #  Train Discriminator
        # ---------------------

        optimizer_D.zero_grad()

        # Measure discriminator's ability to classify real from generated samples
        real_loss = adversarial_loss(discriminator(real_imgs), valid)
        fake_loss = adversarial_loss(discriminator(gen_imgs.detach()), fake)  # detach() stops gradients from flowing back into the generator
        d_loss = (real_loss + fake_loss) / 2

        d_loss.backward()
        optimizer_D.step()

        print(
            "[Epoch %d/%d] [Batch %d/%d] [D loss: %f] [G loss: %f]"
            % (epoch, opt.n_epochs, i, len(dataloader), d_loss.item(), g_loss.item())
        )

        batches_done = epoch * len(dataloader) + i
        if batches_done % opt.sample_interval == 0:
            save_image(gen_imgs.data[:25], "images/%d.png" % batches_done, nrow=5, normalize=True)

Results after running the code:

Some tips for training GANs


DCGAN: Deep Convolutional Generative Adversarial Network

Blog content

Convolutions + GANs = good for generating images

DCGAN changed that by using the transposed convolution operation, also known by its “unfortunate” name, the deconvolution layer.

A transposed convolution is not the inverse of a convolution; it acts as an upsampling operation.

https://blog.csdn.net/LoseInVain/article/details/81098502

https://www.bilibili.com/video/BV1mh411J7U4

Key points

  1. A convolution block shrinks the data, and a transposed-convolution block with the same configuration undoes that shrinking. This makes transposed convolutions a natural choice for generator networks (see the sketch after this list).
  2. The transposed-convolution kernel moves across the intermediate grid with a fixed stride of 1. Unlike in an ordinary convolution, the stride option does not determine how the kernel moves; it only sets the spacing of the original cells within the intermediate grid.
  3. Padding in a transposed convolution also differs from an ordinary convolution: there, padding enlarges the image; here, it shrinks the output.
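A minimal PyTorch sketch of point 1 (the layer sizes here are illustrative assumptions, not from the original post): a strided convolution halves the spatial size, and a transposed convolution with the same configuration restores it.

import torch
import torch.nn as nn

x = torch.randn(1, 16, 32, 32)  # (batch, channels, height, width)

# A strided convolution shrinks the feature map: 32x32 -> 16x16
down = nn.Conv2d(16, 32, kernel_size=4, stride=2, padding=1)
y = down(x)
print(y.shape)  # torch.Size([1, 32, 16, 16])

# A transposed convolution with the same configuration undoes the shrinking: 16x16 -> 32x32
up = nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1)
z = up(y)
print(z.shape)  # torch.Size([1, 16, 32, 32])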

CGAN: Conditional Generative Adversarial Network

Key points

  1. Unlike a plain GAN, a conditional GAN can directly generate outputs of a specified class.
  2. To train a conditional GAN, the class label is fed to the discriminator together with the image, and to the generator together with the seed.

PyTorch code

import argparse
import os
import numpy as np
import math

import torchvision.transforms as transforms
from torchvision.utils import save_image

from torch.utils.data import DataLoader
from torchvision import datasets
from torch.autograd import Variable

import torch.nn as nn
import torch.nn.functional as F
import torch

os.makedirs("images", exist_ok=True)

parser = argparse.ArgumentParser()
parser.add_argument("--n_epochs", type=int, default=200, help="number of epochs of training")
parser.add_argument("--batch_size", type=int, default=64, help="size of the batches")
parser.add_argument("--lr", type=float, default=0.0002, help="adam: learning rate")
parser.add_argument("--b1", type=float, default=0.5, help="adam: decay of first order momentum of gradient")
parser.add_argument("--b2", type=float, default=0.999, help="adam: decay of second order momentum of gradient")
parser.add_argument("--n_cpu", type=int, default=8, help="number of cpu threads to use during batch generation")
parser.add_argument("--latent_dim", type=int, default=100, help="dimensionality of the latent space")
parser.add_argument("--n_classes", type=int, default=10, help="number of classes for dataset")
parser.add_argument("--img_size", type=int, default=32, help="size of each image dimension")
parser.add_argument("--channels", type=int, default=1, help="number of image channels")
parser.add_argument("--sample_interval", type=int, default=400, help="interval between image sampling")
opt = parser.parse_args()
print(opt)

img_shape = (opt.channels, opt.img_size, opt.img_size)

cuda = True if torch.cuda.is_available() else False


class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()

        # nn.Embedding(num_embeddings, embedding_dim): num_embeddings is the dictionary size
        # (e.g. 5000 distinct words gives indices 0-4999); embedding_dim is how many dimensions
        # represent each symbol. Here each of the n_classes labels gets an n_classes-dim vector.
        self.label_emb = nn.Embedding(opt.n_classes, opt.n_classes)

        def block(in_feat, out_feat, normalize=True):
            layers = [nn.Linear(in_feat, out_feat)]
            if normalize:
                layers.append(nn.BatchNorm1d(out_feat, 0.8))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers

        self.model = nn.Sequential(
            *block(opt.latent_dim + opt.n_classes, 128, normalize=False),
            *block(128, 256),
            *block(256, 512),
            *block(512, 1024),
            nn.Linear(1024, int(np.prod(img_shape))),
            nn.Tanh()
        )

    def forward(self, noise, labels):
        # Concatenate label embedding and noise to produce the generator input
        gen_input = torch.cat((self.label_emb(labels), noise), -1)
        img = self.model(gen_input)
        img = img.view(img.size(0), *img_shape)
        return img


class Discriminator(nn.Module):
    def __init__(self):
        super(Discriminator, self).__init__()

        self.label_embedding = nn.Embedding(opt.n_classes, opt.n_classes)

        self.model = nn.Sequential(
            nn.Linear(opt.n_classes + int(np.prod(img_shape)), 512),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(512, 512),
            nn.Dropout(0.4),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(512, 512),
            nn.Dropout(0.4),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(512, 1),
        )

    def forward(self, img, labels):
        # Concatenate label embedding and flattened image to produce the discriminator input
        d_in = torch.cat((img.view(img.size(0), -1), self.label_embedding(labels)), -1)
        validity = self.model(d_in)
        return validity


# Loss functions
adversarial_loss = torch.nn.MSELoss()

# Initialize generator and discriminator
generator = Generator()
discriminator = Discriminator()

if cuda:
    generator.cuda()
    discriminator.cuda()
    adversarial_loss.cuda()

# Configure data loader
os.makedirs("../../data/mnist", exist_ok=True)
dataloader = torch.utils.data.DataLoader(
    datasets.MNIST(
        "../../data/mnist",
        train=True,
        download=True,
        transform=transforms.Compose(
            [transforms.Resize(opt.img_size), transforms.ToTensor(), transforms.Normalize([0.5], [0.5])]
        ),
    ),
    batch_size=opt.batch_size,
    shuffle=True,
)

# Optimizers
optimizer_G = torch.optim.Adam(generator.parameters(), lr=opt.lr, betas=(opt.b1, opt.b2))
optimizer_D = torch.optim.Adam(discriminator.parameters(), lr=opt.lr, betas=(opt.b1, opt.b2))

FloatTensor = torch.cuda.FloatTensor if cuda else torch.FloatTensor
LongTensor = torch.cuda.LongTensor if cuda else torch.LongTensor  # 64-bit integer


def sample_image(n_row, batches_done):
    """Saves a grid of generated digits ranging from 0 to n_classes"""
    # Sample noise
    z = Variable(FloatTensor(np.random.normal(0, 1, (n_row ** 2, opt.latent_dim))))
    # Get labels ranging from 0 to n_classes for n rows
    labels = np.array([num for _ in range(n_row) for num in range(n_row)])  # the two loops yield [0, 1, ..., 9, 0, 1, ..., 9, ...]
    labels = Variable(LongTensor(labels))
    gen_imgs = generator(z, labels)
    save_image(gen_imgs.data, "images/%d.png" % batches_done, nrow=n_row, normalize=True)


# ----------
#  Training
# ----------

for epoch in range(opt.n_epochs):
    for i, (imgs, labels) in enumerate(dataloader):

        batch_size = imgs.shape[0]

        # Adversarial ground truths
        valid = Variable(FloatTensor(batch_size, 1).fill_(1.0), requires_grad=False)
        fake = Variable(FloatTensor(batch_size, 1).fill_(0.0), requires_grad=False)

        # Configure input
        real_imgs = Variable(imgs.type(FloatTensor))
        labels = Variable(labels.type(LongTensor))

        # -----------------
        #  Train Generator
        # -----------------

        optimizer_G.zero_grad()

        # Sample noise and labels as generator input
        z = Variable(FloatTensor(np.random.normal(0, 1, (batch_size, opt.latent_dim))))
        gen_labels = Variable(LongTensor(np.random.randint(0, opt.n_classes, batch_size)))

        # Generate a batch of images
        gen_imgs = generator(z, gen_labels)

        # Loss measures generator's ability to fool the discriminator
        validity = discriminator(gen_imgs, gen_labels)
        g_loss = adversarial_loss(validity, valid)

        g_loss.backward()
        optimizer_G.step()

        # ---------------------
        #  Train Discriminator
        # ---------------------

        optimizer_D.zero_grad()

        # Loss for real images
        validity_real = discriminator(real_imgs, labels)
        d_real_loss = adversarial_loss(validity_real, valid)

        # Loss for fake images (detach() stops gradients from flowing back into the generator)
        validity_fake = discriminator(gen_imgs.detach(), gen_labels)
        d_fake_loss = adversarial_loss(validity_fake, fake)

        # Total discriminator loss
        d_loss = (d_real_loss + d_fake_loss) / 2

        d_loss.backward()
        optimizer_D.step()

        print(
            "[Epoch %d/%d] [Batch %d/%d] [D loss: %f] [G loss: %f]"
            % (epoch, opt.n_epochs, i, len(dataloader), d_loss.item(), g_loss.item())
        )

        batches_done = epoch * len(dataloader) + i
        if batches_done % opt.sample_interval == 0:
            sample_image(n_row=10, batches_done=batches_done)

The results of running the code are as follows:


CycleGAN

Blog

G takes in an image from X and tries to map it to some image in Y. The discriminator $D_Y$ predicts whether an image was generated by G or was actually in Y.

Similarly, F takes in an image from Y and tries to map it to some image in X, and the discriminator $D_X$ predicts whether an image was generated by F or was actually in X.

To further improve performance, CycleGAN uses another metric, cycle consistency loss.
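A minimal sketch of the cycle-consistency loss (the names G, F and the weight lambda_cyc = 10 follow the paper's conventions; the generators themselves are assumed to be defined elsewhere):

import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G, F, real_x, real_y, lambda_cyc=10.0):
    # Forward cycle: x -> G(x) -> F(G(x)) should return to x
    loss_x = l1(F(G(real_x)), real_x)
    # Backward cycle: y -> F(y) -> G(F(y)) should return to y
    loss_y = l1(G(F(real_y)), real_y)
    return lambda_cyc * (loss_x + loss_y)

This term is added to the two adversarial losses, so neither generator can ignore its input: whatever G throws away, F can no longer recover.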

Video reference

https://www.bilibili.com/video/BV1Wv411h7kN?p=61&vd_source=909d7728ce838d2b9656fb13a31483ca

With a plain GAN in this setting, the generator may simply ignore its input, as shown below:

CycleGAN was proposed to fix this; its structure is as follows:

For the second generator to successfully reconstruct the original image, the image produced by the first generator cannot differ too much from the input.

The complete CycleGAN is shown below:

CoGAN


ProGAN: Progressive growing of Generative Adversarial Networks

Blog

The intuition here is that it’s easier to generate a 4x4 image than it is to generate a 1024x1024 image. Also, it’s easier to map a 16x16 image to a 32x32 image than it is to map a 2x2 image to a 32x32 image.


WGAN: Wasserstein Generative Adversarial Networks

Blog

The Jensen-Shannon divergence is a way of measuring how different two probability distributions are. The larger the JSD, the more “different” the two distributions are, and vice versa.

The alternate distance metric proposed by the WGAN authors is the 1-Wasserstein distance, sometimes called the earth mover's distance.

Understanding JS divergence (Jensen–Shannon divergence)

KL divergence

KL divergence is non-negative and asymmetric: $KL(P\|Q) \ne KL(Q\|P)$.
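For reference, the definition for discrete distributions:

$$KL(P \,\|\, Q) = \sum_x P(x)\log\frac{P(x)}{Q(x)}$$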

JS divergence

In general, JS divergence is symmetric and its value lies between 0 and 1. If the two distributions P and Q are far apart, with no overlap at all, the KL divergence is meaningless, while the JS divergence is a constant. This is fatal for a learning algorithm: the gradient at such a point is 0, and it vanishes.
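For reference, JS divergence is built from KL divergence using the mixture $M = \frac{1}{2}(P + Q)$:

$$JSD(P \,\|\, Q) = \frac{1}{2}KL(P \,\|\, M) + \frac{1}{2}KL(Q \,\|\, M)$$

When P and Q have disjoint supports, $P/M = 2$ wherever P has mass (and likewise for Q), so the value is the constant $\log 2$ no matter how far apart the two distributions are.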

Why would the two distributions have no overlap?

令人拍案叫绝的Wasserstein GAN (“The Astounding Wasserstein GAN”)

https://zhuanlan.zhihu.com/p/25071913

The blog above explains it very clearly: the first paper, "Towards Principled Methods for Training Generative Adversarial Networks", derives a series of theorems that pin down what is wrong with the original GAN and points out targeted improvements; the second paper, "Wasserstein GAN", starts from those improvements, derives another series of results, and arrives at an improved training procedure. Compared with the original GAN, the final algorithm changes only four things (a sketch follows the list):

  • Remove the sigmoid from the last layer of the discriminator.
  • Drop the log from the generator and discriminator losses.
  • After each update of the discriminator's parameters, clip their absolute values so they do not exceed a fixed constant c.
  • Do not use momentum-based optimizers (including momentum and Adam); RMSProp is recommended, and SGD also works.
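A minimal sketch of those four changes, reusing the variable names from the GAN code above (the clip value c = 0.01 and learning rate 5e-5 are the paper's defaults; the critic is assumed to end in a plain nn.Linear, with no sigmoid):

import torch

# Change 4: RMSProp instead of a momentum-based optimizer
optimizer_D = torch.optim.RMSprop(discriminator.parameters(), lr=5e-5)
optimizer_G = torch.optim.RMSprop(generator.parameters(), lr=5e-5)

# Changes 1 and 2: the critic outputs a raw score, and the loss takes no log
optimizer_D.zero_grad()
d_loss = -torch.mean(discriminator(real_imgs)) + torch.mean(discriminator(gen_imgs.detach()))
d_loss.backward()
optimizer_D.step()

# Change 3: clip every critic weight into [-c, c] after each update
c = 0.01
for p in discriminator.parameters():
    p.data.clamp_(-c, c)

# Generator update: raise the critic's score on generated samples
optimizer_G.zero_grad()
g_loss = -torch.mean(discriminator(generator(z)))
g_loss.backward()
optimizer_G.step()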

Video reference

https://www.bilibili.com/video/BV1Wv411h7kN?p=64&vd_source=909d7728ce838d2b9656fb13a31483ca

JS divergence suffers from the following problem: whenever $P_G$ and $P_{data}$ do not overlap, the JS divergence equals the constant $\log 2$, no matter how close the two distributions are to each other.

The earth mover's distance is defined as follows:
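In standard form, with $\Pi(P_{data}, P_G)$ denoting the set of all joint distributions ("moving plans") whose marginals are $P_{data}$ and $P_G$:

$$W(P_{data}, P_G) = \inf_{\gamma \in \Pi(P_{data}, P_G)} \; \mathbb{E}_{(x, y) \sim \gamma}\big[\Vert x - y \Vert\big]$$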

Skipping the intermediate derivation, the Wasserstein distance between $P_{data}$ and $P_G$ is computed as follows:
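By the Kantorovich–Rubinstein duality (the standard result the lecture relies on):

$$W(P_{data}, P_G) = \max_{D \in \text{1-Lipschitz}} \left\{ \mathbb{E}_{x \sim P_{data}}[D(x)] - \mathbb{E}_{x \sim P_G}[D(x)] \right\}$$

where D ranges over all 1-Lipschitz functions; without that constraint, the maximization diverges.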

An intuitive reading of a Lipschitz function: when the input changes, the output does not change too much.
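Formally, a function $f$ is K-Lipschitz if

$$\Vert f(x_1) - f(x_2) \Vert \le K\,\Vert x_1 - x_2 \Vert \quad \text{for all } x_1, x_2,$$

and the duality above requires $K = 1$.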

How to fulfill this constraint? → Weight clipping.

The WGAN training procedure:


SAGAN: Self-Attention Generative Adversarial Networks

Blog

Since GANs use transposed convolutions to “scan” feature maps, they only have access to nearby information.

Self-attention allows the generator to take a step back and look at the “big picture.”
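A minimal sketch of a SAGAN-style self-attention block (the channel split in_dim // 8 follows the paper; the module name and shapes are otherwise illustrative). Every spatial position can attend to every other position, and the learnable gamma starts at 0 so the network first relies on local convolutional features:

import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, in_dim):
        super().__init__()
        self.query = nn.Conv2d(in_dim, in_dim // 8, kernel_size=1)
        self.key = nn.Conv2d(in_dim, in_dim // 8, kernel_size=1)
        self.value = nn.Conv2d(in_dim, in_dim, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # weight of the attention branch, learned from 0

    def forward(self, x):
        b, c, h, w = x.size()
        q = self.query(x).view(b, -1, h * w).permute(0, 2, 1)  # (b, hw, c//8)
        k = self.key(x).view(b, -1, h * w)                     # (b, c//8, hw)
        attn = torch.softmax(torch.bmm(q, k), dim=-1)          # (b, hw, hw): each position attends to all others
        v = self.value(x).view(b, -1, h * w)                   # (b, c, hw)
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x                            # residual: start from purely local features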


BigGAN

Blog

The authors also train BigGAN on a new dataset called JFT-300, an ImageNet-like dataset with, you guessed it, 300 million images. They showed that BigGAN performs better on this dataset, suggesting that more massive datasets might be the way to go for GANs.


StyleGAN: Style-based Generative Adversarial Networks

Blog

StyleGAN is like a Photoshop plugin, while most GAN developments are a new version of Photoshop.

Reference books on GANs