
Building Machine Learning Systems with Python (eBook)


Authors: Luis Pedro Coelho, Wilhelm Richert, Matthieu Brucher

Publisher: Packt Publishing

Publication date: 2018-07-31

Word count: 486,000

Category: Imported Books > Foreign-Language Originals > Computers/Networking


Book Description
Building Machine Learning Systems with Python, Third Edition, shows how to build practical machine learning systems using Python and its scientific ecosystem. Starting with NumPy, SciPy, Matplotlib, and scikit-learn, the book works through real-world projects covering classification, regression, dimensionality reduction, clustering, recommendations and basket analysis, sentiment analysis, topic modeling, music genre classification, and computer vision. It then introduces artificial neural networks and deep learning with TensorFlow, including convolutional and recurrent networks, image generation with adversarial networks, and reinforcement learning. The final chapters cover scaling up to bigger data with jug and Amazon Web Services, and point to further resources for learning more about machine learning.
Table of Contents

Title Page

Copyright and Credits

Building Machine Learning Systems with Python Third Edition

Packt Upsell

Why subscribe?

PacktPub.com

Contributors

About the authors

About the reviewers

Packt is searching for authors like you

Preface

Who this book is for

What this book covers

To get the most out of this book

Download the example code files

Download the color images

Conventions used

Get in touch

Reviews

Getting Started with Python Machine Learning

Machine learning and Python – a dream team

What the book will teach you – and what it will not

How to best read this book

What to do when you are stuck

Getting started

Introduction to NumPy, SciPy, Matplotlib, and TensorFlow

Installing Python

Chewing data efficiently with NumPy and intelligently with SciPy

Learning NumPy

Indexing

Handling nonexistent values

Comparing the runtime

Learning SciPy

Fundamentals of machine learning

Asking a question

Getting answers

Our first (tiny) application of machine learning

Reading in the data

Preprocessing and cleaning the data

Choosing the right model and learning algorithm

Before we build our first model

Starting with a simple straight line

Toward more complex models

Stepping back to go forward – another look at our data

Training and testing

Answering our initial question

Summary

Classifying with Real-World Examples

The Iris dataset

Visualization is a good first step

Classifying with scikit-learn

Building our first classification model

Evaluation – holding out data and cross-validation

How to measure and compare classifiers

A more complex dataset and the nearest-neighbor classifier

Learning about the seeds dataset

Features and feature engineering

Nearest neighbor classification

Looking at the decision boundaries

Which classifier to use

Summary

Regression

Predicting house prices with regression

Multidimensional regression

Cross-validation for regression

Penalized or regularized regression

L1 and L2 penalties

Using Lasso or ElasticNet in scikit-learn

Visualizing the Lasso path

P-greater-than-N scenarios

An example based on text documents

Setting hyperparameters in a principled way

Regression with TensorFlow

Summary

Classification I – Detecting Poor Answers

Sketching our roadmap

Learning to classify classy answers

Tuning the instance

Tuning the classifier

Fetching the data

Slimming the data down to chewable chunks

Preselecting and processing attributes

Defining what a good answer is

Creating our first classifier

Engineering the features

Training the classifier

Measuring the classifier's performance

Designing more features

Deciding how to improve the performance

Bias, variance, and their trade-off

Fixing high bias

Fixing high variance

High or low bias?

Using logistic regression

A bit of math with a small example

Applying logistic regression to our post-classification problem

Looking behind accuracy – precision and recall

Slimming the classifier

Ship it!

Classification using TensorFlow

Summary

Dimensionality Reduction

Sketching our roadmap

Selecting features

Detecting redundant features using filters

Correlation

Mutual information

Asking the model about the features using wrappers

Other feature selection methods

Feature projection

Principal component analysis

Sketching PCA

Applying PCA

Limitations of PCA and how LDA can help

Multidimensional scaling

Autoencoders, or neural networks for dimensionality reduction

Summary

Clustering – Finding Related Posts

Measuring the relatedness of posts

How not to do it

How to do it

Preprocessing – similarity measured as a similar number of common words

Converting raw text into a bag of words

Counting words

Normalizing word count vectors

Removing less important words

Stemming

Installing and using NLTK

Extending the vectorizer with NLTK's stemmer

Stop words on steroids

Our achievements and goals

Clustering

K-means

Getting test data to evaluate our ideas

Clustering posts

Solving our initial challenge

Another look at noise

Tweaking the parameters

Summary

Recommendations

Rating predictions and recommendations

Splitting into training and testing

Normalizing the training data

A neighborhood approach to recommendations

A regression approach to recommendations

Combining multiple methods

Basket analysis

Obtaining useful predictions

Analyzing supermarket shopping baskets

Association rule mining

More advanced basket analysis

Summary

Artificial Neural Networks and Deep Learning

Using TensorFlow

TensorFlow API

Graphs

Sessions

Useful operations

Saving and restoring neural networks

Training neural networks

Convolutional neural networks

Recurrent neural networks

LSTM for predicting text

LSTM for image processing

Summary

Classification II – Sentiment Analysis

Sketching our roadmap

Fetching the Twitter data

Introducing the Naïve Bayes classifier

Getting to know the Bayes theorem

Being naïve

Using Naïve Bayes to classify

Accounting for unseen words and other oddities

Accounting for arithmetic underflows

Creating our first classifier and tuning it

Solving an easy problem first

Using all classes

Tuning the classifier's parameters

Cleaning tweets

Taking the word types into account

Determining the word types

Successfully cheating using SentiWordNet

Our first estimator

Putting everything together

Summary

Topic Modeling

Latent Dirichlet allocation

Building a topic model

Comparing documents by topic

Modeling the whole of Wikipedia

Choosing the number of topics

Summary

Classification III – Music Genre Classification

Sketching our roadmap

Fetching the music data

Converting into WAV format

Looking at music

Decomposing music into sine-wave components

Using FFT to build our first classifier

Increasing experimentation agility

Training the classifier

Using a confusion matrix to measure accuracy in multiclass problems

An alternative way to measure classifier performance using the receiver operating characteristic (ROC)

Improving classification performance with mel frequency cepstral coefficients

Music classification using TensorFlow

Summary

Computer Vision

Introducing image processing

Loading and displaying images

Thresholding

Gaussian blurring

Putting the center in focus

Basic image classification

Computing features from images

Writing your own features

Using features to find similar images

Classifying a harder dataset

Local feature representations

Image generation with adversarial networks

Summary

Reinforcement Learning

Types of reinforcement learning

Policy and value network

Q-network

Excelling at games

A small example

Using TensorFlow for the text game

Playing breakout

Summary

Bigger Data

Learning about big data

Using jug to break up your pipeline into tasks

An introduction to tasks in jug

Looking under the hood

Using jug for data analysis

Reusing partial results

Using Amazon Web Services

Creating your first virtual machines

Installing Python packages on Amazon Linux

Running jug on our cloud machine

Automating the generation of clusters with cfncluster

Summary

Where to Learn More About Machine Learning

Online courses

Books

Blogs

Data sources

Getting competitive

All that was left out

Summary

Other Books You May Enjoy

Leave a review – let other readers know what you think
