Machine Learning for Finance
Why subscribe?
Packt.com
Contributors
About the author
About the reviewer
Preface
Who this book is for
What this book covers
To get the most out of this book
Download the example code files
Download the color images
Conventions used
Get in touch
Reviews
1. Neural Networks and Gradient-Based Optimization
Our journey in this book
What is machine learning?
Supervised learning
Unsupervised learning
Reinforcement learning
The unreasonable effectiveness of data
All models are wrong
Setting up your workspace
Using Kaggle kernels
Running notebooks locally
Installing TensorFlow
Installing Keras
Using data locally
Using the AWS deep learning AMI
Approximating functions
A forward pass
A logistic regressor
Python version of our logistic regressor
Optimizing model parameters
Measuring model loss
Gradient descent
Backpropagation
Parameter updates
Putting it all together
A deeper network
A brief introduction to Keras
Importing Keras
A two-layer model in Keras
Stacking layers
Compiling the model
Training the model
Keras and TensorFlow
Tensors and the computational graph
Exercises
Summary
2. Applying Machine Learning to Structured Data
The data
Heuristic, feature-based, and E2E models
The machine learning software stack
The heuristic approach
Making predictions using the heuristic model
The F1 score
Evaluating with a confusion matrix
The feature engineering approach
A feature from intuition – fraudsters don't sleep
Expert insight – transfer, then cash out
Statistical quirks – errors in balances
Preparing the data for the Keras library
One-hot encoding
Entity embeddings
Tokenizing categories
Creating input models
Training the model
Creating predictive models with Keras
Extracting the target
Creating a test set
Creating a validation set
Oversampling the training data
Building the model
Creating a simple baseline
Building more complex models
A brief primer on tree-based methods
A simple decision tree
A random forest
XGBoost
E2E modeling
Exercises
Summary
3. Utilizing Computer Vision
Convolutional Neural Networks
Filters on MNIST
Adding a second filter
Filters on color images
The building blocks of ConvNets in Keras
Conv2D
Kernel size
Stride size
Padding
Input shape
Simplified Conv2D notation
ReLU activation
MaxPooling2D
Flatten
Dense
Training MNIST
The model
Loading the data
Compiling and training
More bells and whistles for our neural network
Momentum
The Adam optimizer
Regularization
L2 regularization
L1 regularization
Regularization in Keras
Dropout
Batchnorm
Working with big image datasets
Working with pretrained models
Modifying VGG-16
Random image augmentation
Augmentation with ImageDataGenerator
The modularity tradeoff
Computer vision beyond classification
Facial recognition
Bounding box prediction
Exercises
Summary
4. Understanding Time Series
Visualization and preparation in pandas
Aggregate global feature statistics
Examining the sample time series
Different kinds of stationarity
Why stationarity matters
Making a time series stationary
When to ignore stationarity issues
Fast Fourier transforms
Autocorrelation
Establishing a training and testing regime
A note on backtesting
Median forecasting
ARIMA
Kalman filters
Forecasting with neural networks
Data preparation
Weekdays
Conv1D
Dilated and causal convolution
Simple RNN
LSTM
The carry
Recurrent dropout
Bayesian deep learning
Exercises
Summary
5. Parsing Textual Data with Natural Language Processing
An introductory guide to spaCy
Named entity recognition
Fine-tuning the NER
Part-of-speech (POS) tagging
Rule-based matching
Adding custom functions to matchers
Adding the matcher to the pipeline
Combining rule-based and learning-based systems
Regular expressions
Using Python's regex module
Regex in pandas
When to use regexes and when not to
A text classification task
Preparing the data
Sanitizing characters
Lemmatization
Preparing the target
Preparing the training and test sets
Bag-of-words
TF-IDF
Topic modeling
Word embeddings
Preprocessing for training with word vectors
Loading pretrained word vectors
Time series models with word vectors
Document similarity with word embeddings
A quick tour of the Keras functional API
Attention
Seq2seq models
Seq2seq architecture overview
The data
Encoding characters
Creating inference models
Making translations
Exercises
Summary
6. Using Generative Models
Understanding autoencoders
Autoencoder for MNIST
Autoencoder for credit cards
Visualizing latent spaces with t-SNE
Variational autoencoders
MNIST example
Using the Lambda layer
Kullback–Leibler divergence
Creating a custom loss
Using a VAE to generate data
VAEs for an end-to-end fraud detection system
VAEs for time series
GANs
An MNIST GAN
Understanding GAN latent vectors
GAN training tricks
Using less data – active learning
Using labeling budgets efficiently
Leveraging machines for human labeling
Pseudo labeling for unlabeled data
Using generative models
SGANs for fraud detection
Exercises
Summary
7. Reinforcement Learning for Financial Markets
Catch – a quick guide to reinforcement learning
Q-learning turns RL into supervised learning
Defining the Q-learning model
Training to play Catch
Markov processes and the Bellman equation – a more formal introduction to RL
The Bellman equation in economics
Advantage actor-critic models
Learning to balance
Learning to trade
Evolutionary strategies and genetic algorithms
Practical tips for RL engineering
Designing good reward functions
Careful, manual reward shaping
Inverse reinforcement learning
Learning from human preferences
Robust RL
Frontiers of RL
Multi-agent RL
Learning how to learn
Understanding the brain through RL
Exercises
Summary
8. Privacy, Debugging, and Launching Your Products
Debugging data
How to find out whether your data is up to the task
What to do if you don't have enough data
Unit testing data
Keeping data private and complying with regulations
Preparing the data for training
Understanding which inputs led to which predictions
Debugging your model
Hyperparameter search with Hyperas
Efficient learning rate search
Learning rate scheduling
Monitoring training with TensorBoard
Exploding and vanishing gradients
Deployment
Launching fast
Understanding and monitoring metrics
Understanding where your data comes from
Performance tips
Using the right hardware for your problem
Making use of distributed training with TF estimators
Using optimized layers such as CuDNNLSTM
Optimizing your pipeline
Speeding up your code with Cython
Caching frequent requests
Exercises
Summary
9. Fighting Bias
Sources of unfairness in machine learning
Legal perspectives
Observational fairness
Training to be fair
Causal learning
Obtaining causal models
Instrumental variables
Non-linear causal models
Interpreting models to ensure fairness
Unfairness as complex system failure
Complex systems are intrinsically hazardous systems
Catastrophes are caused by multiple failures
Complex systems run in degraded mode
Human operators both cause and prevent accidents
Accident-free operation requires experience with failure
A checklist for developing fair models
What is the goal of the model developers?
Is the data biased?
Are errors biased?
How is feedback incorporated?
Can the model be interpreted?
What happens to models after deployment?
Exercises
Summary
10. Bayesian Inference and Probabilistic Programming
An intuitive guide to Bayesian inference
Flat prior
<50% prior
Prior and posterior
Markov Chain Monte Carlo
Metropolis–Hastings MCMC
From probabilistic programming to deep probabilistic programming
Summary
Farewell
Further reading
General data analysis
Sound science in machine learning
General machine learning
General deep learning
Reinforcement learning
Bayesian machine learning
Other Books You May Enjoy
Leave a review – let other readers know what you think
Index