Python Machine Learning (eBook)


Author: Sebastian Raschka

Publisher: Packt Publishing

Publication date: 2015-09-23

Word count: 1.78 million

Category: Imported Books > Foreign-Language Originals > Computers/Internet


Book Description
Unlock deeper insights into machine learning with this vital guide to cutting-edge predictive analytics.

About This Book

  • Leverage Python's most powerful open-source libraries for deep learning, data wrangling, and data visualization
  • Learn effective strategies and best practices to improve and optimize machine learning systems and algorithms
  • Ask – and answer – tough questions of your data with robust statistical models, built for a range of datasets

Who This Book Is For

If you want to find out how to use Python to start answering critical questions of your data, pick up Python Machine Learning – whether you want to get started from scratch or want to extend your data science knowledge, this is an essential and unmissable resource.

What You Will Learn

  • Explore how to use different machine learning models to ask different questions of your data
  • Learn how to build neural networks using Keras and Theano
  • Find out how to write clean and elegant Python code that will optimize the strength of your algorithms
  • Discover how to embed your machine learning model in a web application for increased accessibility
  • Predict continuous target outcomes using regression analysis
  • Uncover hidden patterns and structures in data with clustering
  • Organize data using effective pre-processing techniques
  • Get to grips with sentiment analysis to delve deeper into textual and social media data

In Detail

Machine learning and predictive analytics are transforming the way businesses and other organizations operate. Being able to understand trends and patterns in complex data is critical to success, and has become one of the key strategies for unlocking growth in a challenging contemporary marketplace. Python can help you deliver key insights into your data – its unique capabilities as a language let you build sophisticated algorithms and statistical models that can reveal new perspectives and answer key questions that are vital for success.

Python Machine Learning gives you access to the world of predictive analytics and demonstrates why Python is one of the world's leading data science languages. If you want to ask better questions of data, or need to improve and extend the capabilities of your machine learning systems, this practical data science book is invaluable. Covering a wide range of powerful Python libraries, including scikit-learn, Theano, and Keras, and featuring guidance and tips on everything from sentiment analysis to neural networks, you'll soon be able to answer some of the most important questions facing you and your organization.

Style and Approach

Python Machine Learning connects the fundamental theoretical principles behind machine learning to their practical application in a way that focuses you on asking and answering the right questions. It walks you through the key elements of Python and its powerful machine learning libraries, while demonstrating how to get to grips with a range of statistical models.
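To give a flavor of the style the description promises, here is a minimal sketch of one of the book's early workflows: training a perceptron on the Iris dataset with scikit-learn. The specific hyperparameters and split ratio here are illustrative assumptions, not necessarily the values the book uses.

```python
# Minimal sketch: a perceptron classifier on the Iris dataset via scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Perceptron
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Hold out 30% of the samples for testing, preserving class proportions.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1, stratify=y)

# Standardize features: gradient-based learners converge faster on scaled data.
sc = StandardScaler().fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)

# Fit the perceptron and measure held-out accuracy.
ppn = Perceptron(eta0=0.1, random_state=1)
ppn.fit(X_train_std, y_train)
acc = accuracy_score(y_test, ppn.predict(X_test_std))
print(f"Test accuracy: {acc:.2f}")
```

Note the scaler is fit on the training split only and then applied to the test split, so no test-set statistics leak into training.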
Table of Contents

Python Machine Learning

Table of Contents

Python Machine Learning

Credits

Foreword

About the Author

About the Reviewers

www.PacktPub.com

Support files, eBooks, discount offers, and more

Why subscribe?

Free access for Packt account holders

Preface

What this book covers

What you need for this book

Who this book is for

Conventions

Reader feedback

Customer support

Downloading the example code

Errata

Piracy

Questions

1. Giving Computers the Ability to Learn from Data

Building intelligent machines to transform data into knowledge

The three different types of machine learning

Making predictions about the future with supervised learning

Classification for predicting class labels

Regression for predicting continuous outcomes

Solving interactive problems with reinforcement learning

Discovering hidden structures with unsupervised learning

Finding subgroups with clustering

Dimensionality reduction for data compression

An introduction to the basic terminology and notations

A roadmap for building machine learning systems

Preprocessing – getting data into shape

Training and selecting a predictive model

Evaluating models and predicting unseen data instances

Using Python for machine learning

Installing Python packages

Summary

2. Training Machine Learning Algorithms for Classification

Artificial neurons – a brief glimpse into the early history of machine learning

Implementing a perceptron learning algorithm in Python

Training a perceptron model on the Iris dataset

Adaptive linear neurons and the convergence of learning

Minimizing cost functions with gradient descent

Implementing an Adaptive Linear Neuron in Python

Large scale machine learning and stochastic gradient descent

Summary

3. A Tour of Machine Learning Classifiers Using Scikit-learn

Choosing a classification algorithm

First steps with scikit-learn

Training a perceptron via scikit-learn

Modeling class probabilities via logistic regression

Logistic regression intuition and conditional probabilities

Learning the weights of the logistic cost function

Training a logistic regression model with scikit-learn

Tackling overfitting via regularization

Maximum margin classification with support vector machines

Maximum margin intuition

Dealing with the nonlinearly separable case using slack variables

Alternative implementations in scikit-learn

Solving nonlinear problems using a kernel SVM

Using the kernel trick to find separating hyperplanes in higher dimensional space

Decision tree learning

Maximizing information gain – getting the most bang for the buck

Building a decision tree

Combining weak to strong learners via random forests

K-nearest neighbors – a lazy learning algorithm

Summary

4. Building Good Training Sets – Data Preprocessing

Dealing with missing data

Eliminating samples or features with missing values

Imputing missing values

Understanding the scikit-learn estimator API

Handling categorical data

Mapping ordinal features

Encoding class labels

Performing one-hot encoding on nominal features

Partitioning a dataset in training and test sets

Bringing features onto the same scale

Selecting meaningful features

Sparse solutions with L1 regularization

Sequential feature selection algorithms

Assessing feature importance with random forests

Summary

5. Compressing Data via Dimensionality Reduction

Unsupervised dimensionality reduction via principal component analysis

Total and explained variance

Feature transformation

Principal component analysis in scikit-learn

Supervised data compression via linear discriminant analysis

Computing the scatter matrices

Selecting linear discriminants for the new feature subspace

Projecting samples onto the new feature space

LDA via scikit-learn

Using kernel principal component analysis for nonlinear mappings

Kernel functions and the kernel trick

Implementing a kernel principal component analysis in Python

Example 1 – separating half-moon shapes

Example 2 – separating concentric circles

Projecting new data points

Kernel principal component analysis in scikit-learn

Summary

6. Learning Best Practices for Model Evaluation and Hyperparameter Tuning

Streamlining workflows with pipelines

Loading the Breast Cancer Wisconsin dataset

Combining transformers and estimators in a pipeline

Using k-fold cross-validation to assess model performance

The holdout method

K-fold cross-validation

Debugging algorithms with learning and validation curves

Diagnosing bias and variance problems with learning curves

Addressing overfitting and underfitting with validation curves

Fine-tuning machine learning models via grid search

Tuning hyperparameters via grid search

Algorithm selection with nested cross-validation

Looking at different performance evaluation metrics

Reading a confusion matrix

Optimizing the precision and recall of a classification model

Plotting a receiver operating characteristic

The scoring metrics for multiclass classification

Summary

7. Combining Different Models for Ensemble Learning

Learning with ensembles

Implementing a simple majority vote classifier

Combining different algorithms for classification with majority vote

Evaluating and tuning the ensemble classifier

Bagging – building an ensemble of classifiers from bootstrap samples

Leveraging weak learners via adaptive boosting

Summary

8. Applying Machine Learning to Sentiment Analysis

Obtaining the IMDb movie review dataset

Introducing the bag-of-words model

Transforming words into feature vectors

Assessing word relevancy via term frequency-inverse document frequency

Cleaning text data

Processing documents into tokens

Training a logistic regression model for document classification

Working with bigger data – online algorithms and out-of-core learning

Summary

9. Embedding a Machine Learning Model into a Web Application

Serializing fitted scikit-learn estimators

Setting up a SQLite database for data storage

Developing a web application with Flask

Our first Flask web application

Form validation and rendering

Turning the movie classifier into a web application

Deploying the web application to a public server

Updating the movie review classifier

Summary

10. Predicting Continuous Target Variables with Regression Analysis

Introducing a simple linear regression model

Exploring the Housing Dataset

Visualizing the important characteristics of a dataset

Implementing an ordinary least squares linear regression model

Solving regression for regression parameters with gradient descent

Estimating the coefficient of a regression model via scikit-learn

Fitting a robust regression model using RANSAC

Evaluating the performance of linear regression models

Using regularized methods for regression

Turning a linear regression model into a curve – polynomial regression

Modeling nonlinear relationships in the Housing Dataset

Dealing with nonlinear relationships using random forests

Decision tree regression

Random forest regression

Summary

11. Working with Unlabeled Data – Clustering Analysis

Grouping objects by similarity using k-means

K-means++

Hard versus soft clustering

Using the elbow method to find the optimal number of clusters

Quantifying the quality of clustering via silhouette plots

Organizing clusters as a hierarchical tree

Performing hierarchical clustering on a distance matrix

Attaching dendrograms to a heat map

Applying agglomerative clustering via scikit-learn

Locating regions of high density via DBSCAN

Summary

12. Training Artificial Neural Networks for Image Recognition

Modeling complex functions with artificial neural networks

Single-layer neural network recap

Introducing the multi-layer neural network architecture

Activating a neural network via forward propagation

Classifying handwritten digits

Obtaining the MNIST dataset

Implementing a multi-layer perceptron

Training an artificial neural network

Computing the logistic cost function

Training neural networks via backpropagation

Developing your intuition for backpropagation

Debugging neural networks with gradient checking

Convergence in neural networks

Other neural network architectures

Convolutional Neural Networks

Recurrent Neural Networks

A few last words about neural network implementation

Summary

13. Parallelizing Neural Network Training with Theano

Building, compiling, and running expressions with Theano

What is Theano?

First steps with Theano

Configuring Theano

Working with array structures

Wrapping things up – a linear regression example

Choosing activation functions for feedforward neural networks

Logistic function recap

Estimating probabilities in multi-class classification via the softmax function

Broadening the output spectrum by using a hyperbolic tangent

Training neural networks efficiently using Keras

Summary

Index
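The chapters above build toward the evaluation workflow of Chapter 6 ("Streamlining workflows with pipelines" and "Using k-fold cross-validation to assess model performance"). A hedged sketch of that idea, using scikit-learn on the Breast Cancer Wisconsin dataset named in that chapter; the estimator choice and fold count here are illustrative assumptions:

```python
# Sketch: chain a scaler and a classifier in a Pipeline, then score the
# whole pipeline with stratified k-fold cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

pipe = make_pipeline(StandardScaler(),
                     LogisticRegression(max_iter=1000))

# 10-fold CV: within each fold the scaler is refit on the training split
# only, so the held-out fold never influences preprocessing.
scores = cross_val_score(pipe, X, y, cv=10)
mean_acc = scores.mean()
print(f"CV accuracy: {mean_acc:.3f} +/- {scores.std():.3f}")
```

Bundling preprocessing and estimation in one pipeline is what makes the cross-validation estimate honest: every transformation is learned inside the fold that uses it.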
