
Hands-On Generative Adversarial Networks with Keras (e-book)


Author: Rafael Valle

Publisher: Packt Publishing

Publication date: 2019-05-03

Word count: 250,000

Category: Imported Books > Foreign-Language Originals > Computers/Internet

Develop generative models for a variety of real-world use cases and deploy them to production.

Key Features
* Discover various GAN architectures using Python and the Keras library
* Understand how GAN models function with the help of theoretical and practical examples
* Apply what you learn to become an active contributor to open source GAN applications

Book Description
Generative Adversarial Networks (GANs) have revolutionized the fields of machine learning and deep learning. This book will be your first step towards understanding GAN architectures and tackling the challenges involved in training them.

The book opens with an introduction to deep learning and generative models, and their applications in artificial intelligence (AI). You will then learn how to build, evaluate, and improve your first GAN with the help of easy-to-follow examples. The next few chapters guide you through training a GAN model to produce and improve high-resolution images. You will also learn how to implement conditional GANs, which give you the ability to control the characteristics of GAN outputs. You will build on this knowledge by exploring a new training methodology for progressive growing of GANs. Moving on, you'll gain insights into state-of-the-art models in image synthesis, speech enhancement, and natural language generation using GANs. In addition, you'll learn to identify GAN samples with TequilaGAN.

By the end of this book, you will be well versed in the latest advancements in the GAN framework, having worked with various examples and datasets, and you will have the skills you need to implement GAN architectures for several tasks and domains, including computer vision, natural language processing (NLP), and audio processing.

Foreword by Ting-Chun Wang, Senior Research Scientist, NVIDIA

What You Will Learn
* Learn how GANs work and the advantages and challenges of working with them
* Control the output of GANs with the help of conditional GANs, using embedding and space manipulation
* Apply GANs to computer vision, NLP, and audio processing
* Understand how to implement progressive growing of GANs
* Use GANs for image synthesis and speech enhancement
* Explore the future of GANs in visual and sonic arts
* Implement pix2pixHD to turn semantic label maps into photorealistic images

Who This Book Is For
This book is for machine learning practitioners, deep learning researchers, and AI enthusiasts looking for a mix of theory and hands-on content in order to implement GANs using Keras. Working knowledge of Python is expected.
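The description above centers on building and training your first GAN: a generator produces fake samples from noise, a discriminator learns to tell real from fake, and the two are updated in alternation. As a flavor of that core loop, here is a toy NumPy sketch (not code from the book, which uses Keras) that fits a linear generator to a 1-D Gaussian; all names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: z -> g_w * z + g_b.  Discriminator: x -> sigmoid(d_w * x + d_b).
g_w, g_b = 0.1, 0.0
d_w, d_b = 0.0, 0.0
lr = 0.05

for step in range(2000):
    real = rng.normal(loc=3.0, scale=1.0, size=32)  # samples from the target distribution
    z = rng.normal(size=32)                         # latent noise
    fake = g_w * z + g_b                            # generator output

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake))
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    d_w += lr * np.mean((1 - p_real) * real - p_fake * fake)
    d_b += lr * np.mean((1 - p_real) - p_fake)

    # Generator step: gradient ascent on log D(fake) (the non-saturating loss)
    p_fake = sigmoid(d_w * fake + d_b)
    upstream = (1 - p_fake) * d_w  # d log D(fake) / d fake
    g_w += lr * np.mean(upstream * z)
    g_b += lr * np.mean(upstream)

# Draw samples from the trained generator
samples = g_w * rng.normal(size=10000) + g_b
```

A Keras version follows the same alternating pattern, with the linear maps replaced by deep networks and the hand-derived gradients replaced by `train_on_batch` calls or a custom training step.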
Table of Contents

About Packt

Why subscribe?

Packt.com

Foreword

Contributors

About the author

About the reviewer

Packt is searching for authors like you

Preface

Who this book is for

What this book covers

To get the most out of this book

Download the example code files

Download the color images

Conventions used

Get in touch

Reviews

Section 1: Introduction and Environment Setup

Deep Learning Basics and Environment Setup

Deep learning basics

Artificial Neural Networks (ANNs)

The parameter estimation

Backpropagation

Loss functions

L1 loss

L2 loss

Categorical crossentropy loss

Non-linearities

Sigmoid

Tanh

ReLU

A fully connected layer

The convolution layer

The max pooling layer

Deep learning environment setup

Installing Anaconda and Python

Setting up a virtual environment in Anaconda

Installing TensorFlow

Installing Keras

Installing data visualization and machine learning libraries

The matplotlib library

The Jupyter library

The scikit-learn library

NVIDIA's CUDA Toolkit and cuDNN

The deep learning environment test

Summary

Introduction to Generative Models

Discriminative and generative models compared

Comparing discriminative and generative models

Generative models

Autoregressive models

Variational autoencoders

Reversible flows

Generative adversarial networks

GANs – building blocks

The discriminator

The generator

Real and fake data

Random noise

Discriminator and generator loss

GANs – strengths and weaknesses

Summary

Section 2: Training GANs

Implementing Your First GAN

Technical requirements

Imports

Implementing a Generator and Discriminator

Generator

Discriminator

Auxiliary functions

Training your GAN

Summary

Further reading

Evaluating Your First GAN

The evaluation of GANs

Image quality

Image variety

Domain specifications

Qualitative methods

k-nearest neighbors

Mode analysis

Other methods

Quantitative methods

The Inception score

The Fréchet Inception Distance

Precision, Recall, and the F1 Score

GANs and the birthday paradox

Summary

Improving Your First GAN

Technical requirements

Challenges in training GANs

Mode collapse and mode drop

Training instability

Sensitivity to hyperparameter initialization

Vanishing gradients

Tricks of the trade

Tracking failure

Working with labels

Working with discrete inputs

Adding noise

Input normalization

Modified objective function

Distribute latent vector

Weight normalization

Avoid sparse gradients

Use a different optimizer

Learning rate schedule

GAN model architectures

ResNet GAN

GAN algorithms and loss functions

Least Squares GAN

Wasserstein GAN

Wasserstein GAN with gradient penalty

Relativistic GAN

Summary

Section 3: Application of GANs in Computer Vision, Natural Language Processing, and Audio

Progressive Growing of GANs

Technical requirements

Progressive Growing of GANs

Increasing variation using minibatch standard deviation

Normalization in the generator and the discriminator

Pixelwise feature vector normalization in the generator

Experimental setup

Training

Helper functions

Initializations

Training loops

Model implementation

Custom layers

The discriminator

The generator

GANs

Summary

Generation of Discrete Sequences Using GANs

Technical requirements

Natural language generation with GANs

Experimental setup

Data

Auxiliary training functions

Training

Imports and global variables

Initializations

Training loop

Logging

Model implementation

Helper functions

Discriminator

Generator

Inference

Model trained on words

Model trained on characters

Summary

Text-to-Image Synthesis with GANs

Technical requirements

Text-to-image synthesis

Experimental setup

Data utils

Logging utils

Training

Initial setup

The training loop

Model implementation

Wrapper

Discriminator

Generator

Improving the baseline model

Training

Inference

Sampling the generator

Interpolation in the latent space

Interpolation in the text-embedding space

Inferencing with arithmetic in the text-embedding space

Summary

TequilaGAN - Identifying GAN Samples

Technical requirements

Identifying GAN samples

Related work

Feature extraction

Centroid

Slope

Metrics

Jensen-Shannon divergence

Kolmogorov-Smirnov two-sample test

Experiments

MNIST

Summary

References

What's next in GANs

What we've GANed so far

Generative models

Architectures

Loss functions

Tricks of the trade

Implementations

Unanswered questions in GANs

Are some losses better than others?

Do GANs do distribution learning?

All about that inductive bias

How can you kill a GAN?

Artistic GANs

Visual arts

GANGogh

Image inpainting

Vid2Vid

GauGAN

Sonic arts

MuseGAN

GANSynth

Recent and yet-to-be-explored GAN topics

Summary

Closing remarks

Further reading
