Hands-On Big Data Analytics with PySpark (e-book)


Author: Rudy Lai

Publisher: Packt Publishing

Publication date: 2019-03-29

Word count: 191,000

Category: Imported Books > Foreign-Language Originals > Computers/Networking


Use PySpark to easily crush messy data at scale and discover proven techniques to create testable, immutable, and easily parallelizable Spark jobs.

Key Features

  • Work with large amounts of agile data using distributed datasets and in-memory caching
  • Source data from all popular data hosting platforms, such as HDFS, Hive, JSON, and S3
  • Employ the easy-to-use PySpark API to deploy big data analytics for production

Book Description

Apache Spark is an open source parallel-processing framework that has been around for quite some time now. One of the many uses of Apache Spark is for data analytics applications across clustered computers. In this book, you will not only learn how to use Spark and the Python API to create high-performance analytics with big data, but also discover techniques for testing, immutable design, and parallelizing Spark jobs. You will learn how to source data from all popular data hosting platforms, including HDFS, Hive, JSON, and S3, and work with large datasets in PySpark to gain practical big data experience. This book will help you work on prototypes on local machines and then move on to handling messy data in production and at scale. It covers installing and setting up PySpark, RDD operations, big data cleaning and wrangling, and aggregating and summarizing data into useful reports. You will also learn how to implement practical, proven techniques to improve certain aspects of programming and administration in Apache Spark. By the end of the book, you will be able to build big data analytical solutions using the various PySpark offerings and optimize them effectively.

What you will learn

  • Get practical big data experience while working on messy datasets
  • Analyze patterns with Spark SQL to improve your business intelligence
  • Use PySpark's interactive shell to speed up development time
  • Create highly concurrent Spark programs by leveraging immutability
  • Discover ways to avoid the most expensive operation in the Spark API: the shuffle operation
  • Redesign your jobs to use reduceByKey instead of groupBy
  • Create robust processing pipelines by testing Apache Spark jobs

Who this book is for

This book is for developers, data scientists, business analysts, or anyone who needs to reliably analyze large amounts of large-scale, real-world data. Whether you're tasked with creating your company's business intelligence function or creating great data platforms for your machine learning models, or are looking to use code to magnify the impact of your business, this book is for you.
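The shuffle-avoidance point above is worth a concrete illustration. Below is a minimal PySpark sketch (the word-count data and app name are illustrative assumptions, not an example taken from the book) of why reduceByKey is preferred over groupByKey for per-key aggregation: reduceByKey combines values within each partition before the shuffle, so far less data crosses the network.

    from pyspark import SparkContext

    # Hypothetical app name; any local Spark installation will do
    sc = SparkContext("local[*]", "reduceByKey_vs_groupByKey")

    # Illustrative key/value pairs, e.g. (word, 1) counts
    pairs = sc.parallelize([("spark", 1), ("python", 1), ("spark", 1), ("data", 1)])

    # groupByKey shuffles every individual value across the cluster before summing
    grouped_counts = pairs.groupByKey().mapValues(sum)

    # reduceByKey pre-aggregates within each partition, so the shuffle moves far less data
    reduced_counts = pairs.reduceByKey(lambda a, b: a + b)

    print(sorted(reduced_counts.collect()))  # [('data', 1), ('python', 1), ('spark', 2)]

    sc.stop()

Both variants produce the same totals on a toy dataset; the difference only shows up in shuffle volume once the data no longer fits comfortably on one machine.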
Table of Contents

About Packt

Why subscribe?

Packt.com

Contributors

About the authors

Packt is searching for authors like you

Preface

Who this book is for

What this book covers

To get the most out of this book

Download the example code files

Download the color images

Conventions used

Get in touch

Reviews

Installing PySpark and Setting Up Your Development Environment

An overview of PySpark

Spark SQL

Setting up Spark on Windows and PySpark

Core concepts in Spark and PySpark

SparkContext

Spark shell

SparkConf

Summary

Getting Your Big Data into the Spark Environment Using RDDs

Loading data onto Spark RDDs

The UCI machine learning repository

Getting the data from the repository to Spark

Getting data into Spark

Parallelization with Spark RDDs

What is parallelization?

Basics of RDD operation

Summary

Big Data Cleaning and Wrangling with Spark Notebooks

Using Spark Notebooks for quick iteration of ideas

Sampling/filtering RDDs to pick out relevant data points

Splitting datasets and creating some new combinations

Summary

Aggregating and Summarizing Data into Useful Reports

Calculating averages with map and reduce

Faster average computations with aggregate

Pivot tabling with key-value paired data points

Summary

Powerful Exploratory Data Analysis with MLlib

Computing summary statistics with MLlib

Using Pearson and Spearman correlations to discover correlations

The Pearson correlation

The Spearman correlation

Computing Pearson and Spearman correlations

Testing our hypotheses on large datasets

Summary

Putting Structure on Your Big Data with SparkSQL

Manipulating DataFrames with Spark SQL schemas

Using Spark DSL to build queries

Summary

Transformations and Actions

Using Spark transformations to defer computations to a later time

Avoiding transformations

Using the reduce and reduceByKey methods to calculate the results

Performing actions that trigger computations

Reusing the same RDD for different actions

Summary

Immutable Design

Delving into the Spark RDD's parent/child chain

Extending an RDD

Chaining a new RDD with the parent

Testing our custom RDD

Using RDD in an immutable way

Using DataFrame operations to transform

Immutability in the highly concurrent environment

Using the Dataset API in an immutable way

Summary

Avoiding Shuffle and Reducing Operational Expenses

Detecting a shuffle in a process

Testing operations that cause a shuffle in Apache Spark

Changing the design of jobs with wide dependencies

Using keyBy() operations to reduce shuffle

Using a custom partitioner to reduce shuffle

Summary

Saving Data in the Correct Format

Saving data in plain text format

Leveraging JSON as a data format

Tabular formats – CSV

Using Avro with Spark

Columnar formats – Parquet

Summary

Working with the Spark Key/Value API

Available actions on key/value pairs

Using aggregateByKey instead of groupBy()

Actions on key/value pairs

Available partitioners on key/value data

Implementing a custom partitioner

Summary

Testing Apache Spark Jobs

Separating logic from the Spark engine – unit testing

Integration testing using SparkSession

Mocking data sources using partial functions

Using ScalaCheck for property-based testing

Testing in different versions of Spark

Summary

Leveraging the Spark GraphX API

Creating a graph from a data source

Creating the loader component

Revisiting the graph format

Loading Spark from file

Using the Vertex API

Constructing a graph using the vertex

Creating couple relationships

Using the Edge API

Constructing the graph using edge

Calculating the degree of the vertex

The in-degree

The out-degree

Calculating PageRank

Loading and reloading data about users and followers

Summary

Other Books You May Enjoy

Leave a review - let other readers know what you think
