Ceph: Designing and Implementing Scalable Storage Systems (eBook)

Author: Michael Hackett

Publisher: Packt Publishing

Publication date: 2019-01-31

Word count: 596,000

Get to grips with the unified, highly scalable distributed storage system and learn how to design and implement it.

Key Features

* Explore Ceph's architecture in detail
* Implement a Ceph cluster successfully and gain deep insights into its best practices
* Leverage the advanced features of Ceph, including erasure coding, tiering, and BlueStore

Book Description

This Learning Path takes you through the basics of Ceph all the way to an in-depth understanding of its advanced features. You'll gain the skills to plan, deploy, and manage your Ceph cluster. After an introduction to the Ceph architecture and its core projects, you'll learn how to set up a Ceph cluster, monitor its health, improve its performance, and troubleshoot any issues. Following the step-by-step approach of this Learning Path, you'll learn how Ceph integrates with OpenStack, Glance, Manila, Swift, and Cinder. With knowledge of federated architecture and CephFS, you'll use Calamari and VSM to monitor the Ceph environment. Later chapters cover the key areas of Ceph, including BlueStore, erasure coding, and cache tiering, and what each can do for your storage system. In the concluding chapters, you will develop applications that use Librados and distributed computations with shared object classes, and see how Ceph and its supporting infrastructure can be optimized. By the end of this Learning Path, you'll have the practical knowledge needed to operate Ceph in a production environment.

This Learning Path includes content from the following Packt products:

* Ceph Cookbook by Michael Hackett, Vikhyat Umrao, and Karan Singh
* Mastering Ceph by Nick Fisk
* Learning Ceph, Second Edition by Anthony D'Atri, Vaibhav Bhembre, and Karan Singh

What you will learn

* Understand the benefits of using Ceph as a storage solution
* Combine Ceph with OpenStack, Cinder, Glance, and Nova components
* Set up a test cluster with Ansible and virtual machines with VirtualBox
* Develop solutions with Librados and shared object classes
* Configure BlueStore and see how it interacts with other configurations
* Tune, monitor, and recover storage systems effectively
* Build an erasure-coded pool by selecting intelligent parameters

Who this book is for

If you are a developer, system administrator, storage professional, or cloud engineer who wants to understand how to deploy a Ceph cluster, this Learning Path is ideal for you. It will help you discover ways in which Ceph features can solve your data storage problems. Basic knowledge of storage systems and GNU/Linux will be beneficial.
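
For a flavour of the hands-on material in the Librados chapters, here is a minimal sketch of a client that stores and reads back a single object through the Python rados bindings. It assumes a reachable Ceph cluster, the bindings installed (commonly packaged as python3-rados), and an existing pool; the pool name, object name, and configuration path below are illustrative assumptions rather than values taken from the book.

```python
# Minimal librados sketch: write one object to a pool and read it back.
# Assumptions: cluster reachable, client keyring readable, pool "data" exists.
import rados

# Connect to the cluster using the standard configuration file path.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

try:
    # Open an I/O context on the (assumed) pool and store one object.
    ioctx = cluster.open_ioctx('data')
    try:
        ioctx.write_full('hello-object', b'Hello from librados')
        # Read the object back and print its contents.
        print(ioctx.read('hello-object').decode('utf-8'))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```
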
Contents

Title Page

Copyright

Ceph: Designing and Implementing Scalable Storage Systems

About Packt

Why Subscribe?

Packt.com

Contributors

About the Authors

Packt Is Searching for Authors Like You

Preface

Who This Book Is For

What This Book Covers

To Get the Most out of This Book

Download the Example Code Files

Conventions Used

Get in Touch

Reviews

Ceph - Introduction and Beyond

Introduction

Ceph – the beginning of a new era

Software-defined storage – SDS

Cloud storage

Unified next-generation storage architecture

RAID – the end of an era

RAID rebuilds are painful

RAID spare disks increase TCO

RAID can be expensive and hardware dependent

The growing RAID group is a challenge

The RAID reliability model is no longer promising

Ceph – the architectural overview

Planning a Ceph deployment

Setting up a virtual infrastructure

Getting ready

How to do it...

Installing and configuring Ceph

Creating the Ceph cluster on ceph-node1

How to do it...

Scaling up your Ceph cluster

How to do it…

Using the Ceph cluster with a hands-on approach

How to do it...

Working with Ceph Block Device

Introduction

Configuring Ceph client

How to do it...

Creating Ceph Block Device

How to do it...

Mapping Ceph Block Device

How to do it...

Resizing Ceph RBD

How to do it...

Working with RBD snapshots

How to do it...

Working with RBD clones

How to do it...

Disaster recovery replication using RBD mirroring

How to do it...

Configuring pools for RBD mirroring with one-way replication

How to do it...

Configuring image mirroring

How to do it...

Configuring two-way mirroring

How to do it...

See also

Recovering from a disaster!

How to do it...

Working with Ceph and OpenStack

Introduction

Ceph – the best match for OpenStack

Setting up OpenStack

How to do it...

Configuring OpenStack as Ceph clients

How to do it...

Configuring Glance for Ceph backend

How to do it…

Configuring Cinder for Ceph backend

How to do it...

Configuring Nova to boot instances from Ceph RBD

How to do it…

Configuring Nova to attach Ceph RBD

How to do it...

Working with Ceph Object Storage

Introduction

Understanding Ceph object storage

RADOS Gateway standard setup, installation, and configuration

Setting up the RADOS Gateway node

How to do it…

Installing and configuring the RADOS Gateway

How to do it…

Creating the radosgw user

How to do it…

See also

Accessing the Ceph object storage using S3 API

How to do it…

Configuring DNS

Configuring the s3cmd client

Configure the S3 client (s3cmd) on client-node1

Accessing the Ceph object storage using the Swift API

How to do it...

Integrating RADOS Gateway with OpenStack Keystone

How to do it...

Integrating RADOS Gateway with Hadoop S3A plugin

How to do it...

Working with Ceph Object Storage Multi-Site v2

Introduction

Functional changes from Hammer federated configuration

RGW multi-site v2 requirement

Installing the Ceph RGW multi-site v2 environment

How to do it...

Configuring Ceph RGW multi-site v2

How to do it...

Configuring a master zone

Configuring a secondary zone

Checking the synchronization status

Testing user, bucket, and object sync between master and secondary sites

How to do it...

Working with the Ceph Filesystem

Introduction

Understanding the Ceph Filesystem and MDS

Deploying Ceph MDS

How to do it...

Accessing Ceph FS through kernel driver

How to do it...

Accessing Ceph FS through FUSE client

How to do it...

Exporting the Ceph Filesystem as NFS

How to do it...

Ceph FS – a drop-in replacement for HDFS

Operating and Managing a Ceph Cluster

Introduction

Understanding Ceph service management

Managing the cluster configuration file

How to do it...

Adding monitor nodes to the Ceph configuration file

Adding an MDS node to the Ceph configuration file

Adding OSD nodes to the Ceph configuration file

Running Ceph with systemd

How to do it...

Starting and stopping all daemons

Querying systemd units on a node

Starting and stopping all daemons by type

Starting and stopping a specific daemon

Scale-up versus scale-out

Scaling out your Ceph cluster

How to do it...

Adding the Ceph OSD

Adding the Ceph MON

There's more...

Scaling down your Ceph cluster

How to do it...

Removing the Ceph OSD

Removing the Ceph MON

Replacing a failed disk in the Ceph cluster

How to do it...

Upgrading your Ceph cluster

How to do it...

Maintaining a Ceph cluster

How to do it...

How it works...

Throttle the backfill and recovery:

Ceph under the Hood

Introduction

Ceph scalability and high availability

Understanding the CRUSH mechanism

CRUSH map internals

How to do it...

How it works...

CRUSH tunables

The evolution of CRUSH tunables

Argonaut – legacy

Bobtail – CRUSH_TUNABLES2

Firefly – CRUSH_TUNABLES3

Hammer – CRUSH_V4

Jewel – CRUSH_TUNABLES5

Ceph and kernel versions that support given tunables

Warning when tunables are non-optimal

A few important points

Ceph cluster map

High availability monitors

Ceph authentication and authorization

Ceph authentication

Ceph authorization

How to do it…

I/O path from a Ceph client to a Ceph cluster

Ceph Placement Group

How to do it…

Placement Group states

Creating Ceph pools on specific OSDs

How to do it...

The Virtual Storage Manager for Ceph

Introduction

Understanding the VSM architecture

The VSM controller

The VSM agent

Setting up the VSM environment

How to do it...

Getting ready for VSM

How to do it...

Installing VSM

How to do it...

Creating a Ceph cluster using VSM

How to do it...

Exploring the VSM dashboard

Upgrading the Ceph cluster using VSM

VSM roadmap

VSM resources

More on Ceph

Introduction

Disk performance baseline

Single disk write performance

How to do it...

Multiple disk write performance

How to do it...

Single disk read performance

How to do it...

Multiple disk read performance

How to do it...

Results

Baseline network performance

How to do it...

Ceph rados bench

How to do it...

How it works...

RADOS load-gen

How to do it...

How it works...

There's more...

Benchmarking the Ceph Block Device

How to do it...

How it works...

Benchmarking Ceph RBD using FIO

How to do it...

Ceph admin socket

How to do it...

Using the ceph tell command

How to do it...

Ceph REST API

How to do it...

Profiling Ceph memory

How to do it...

The ceph-objectstore-tool

How to do it...

How it works...

Using ceph-medic

How to do it...

How it works...

See also

Deploying the experimental Ceph BlueStore

How to do it...

See also

Deploying Ceph

Preparing your environment with Vagrant and VirtualBox

System requirements

Obtaining and installing VirtualBox

Setting up Vagrant

The ceph-deploy tool

Orchestration

Ansible

Installing Ansible

Creating your inventory file

Variables

Testing

A very simple playbook

Adding the Ceph Ansible modules

Deploying a test cluster with Ansible

Change and configuration management

Summary

BlueStore

What is BlueStore?

Why was it needed?

Ceph's requirements

Filestore limitations

Why is BlueStore the solution?

How BlueStore works

RocksDB

Deferred writes

BlueFS

How to use BlueStore

Upgrading an OSD in your test cluster

Summary

Erasure Coding for Better Storage Efficiency

What is erasure coding?

K+M

How does erasure coding work in Ceph?

Algorithms and profiles

Jerasure

ISA

LRC

SHEC

Where can I use erasure coding?

Creating an erasure-coded pool

Overwrites on erasure code pools with Kraken

Demonstration

Troubleshooting the 2147483647 error

Reproducing the problem

Summary

Developing with Librados

What is librados?

How to use librados?

Example librados application

Example of the librados application with atomic operations

Example of the librados application that uses watchers and notifiers

Summary

Distributed Computation with Ceph RADOS Classes

Example applications and the benefits of using RADOS classes

Writing a simple RADOS class in Lua

Writing a RADOS class that simulates distributed computing

Preparing the build environment

RADOS class

Client librados applications

Calculating MD5 on the client

Calculating MD5 on the OSD via RADOS class

Testing

RADOS class caveats

Summary

Tiering with Ceph

Tiering versus caching

How Ceph's tiering functionality works

What is a bloom filter?

Tiering modes

Writeback

Forward

Read-forward

Proxy

Read-proxy

Use cases

Creating tiers in Ceph

Tuning tiering

Flushing and eviction

Promotions

Promotion throttling

Monitoring parameters

Tiering with erasure-coded pools

Alternative caching mechanisms

Summary

Troubleshooting

Repairing inconsistent objects

Full OSDs

Ceph logging

Slow performance

Causes

Increased client workload

Down OSDs

Recovery and backfilling

Scrubbing

Snaptrimming

Hardware or driver issues

Monitoring

iostat

htop

atop

Diagnostics

Extremely slow performance or no IO

Flapping OSDs

Jumbo frames

Failing disks

Slow OSDs

Investigating PGs in a down state

Large monitor databases

Summary

Disaster Recovery

What is a disaster?

Avoiding data loss

What can cause an outage or data loss?

RBD mirroring

The journal

The rbd-mirror daemon

Configuring RBD mirroring

Performing RBD failover

RBD recovery

Lost objects and inactive PGs

Recovering from a complete monitor failure

Using the Ceph object store tool

Investigating asserts

Example assert

Summary

Operations and Maintenance

Topology

The 40,000-foot view

Drilling down

OSD dump

OSD list

OSD find

CRUSH dump

Pools

Monitors

CephFS

Configuration

Cluster naming and configuration

The Ceph configuration file

Admin sockets

Injection

Configuration management

Scrubs

Logs

MON logs

OSD logs

Debug levels

Common tasks

Installation

Ceph-deploy

Flags

Service management

Systemd: the wave (tsunami?) of the future

Upstart

sysvinit

Component failures

Expansion

Balancing

Upgrades

Working with remote hands

Summary

Monitoring Ceph

Monitoring Ceph clusters

Ceph cluster health

Watching cluster events

Utilizing your cluster

OSD variance and fillage

Cluster status

Cluster authentication

Monitoring Ceph MONs

MON status

MON quorum status

Monitoring Ceph OSDs

OSD tree lookup

OSD statistics

OSD CRUSH map

Monitoring Ceph placement groups

PG states

Monitoring Ceph MDS

Open source dashboards and tools

Kraken

Ceph-dash

Decapod

Rook

Calamari

Ceph-mgr

Prometheus and Grafana

Summary

Performance and Stability Tuning

Ceph performance overview

Kernel settings

pid_max

kernel.threads-max, vm.max_map_count

XFS filesystem settings

Virtual memory settings

Network settings

Jumbo frames

TCP and network core

iptables and nf_conntrack

Ceph settings

max_open_files

Recovery

OSD and FileStore settings

MON settings

Client settings

Benchmarking

RADOS bench

CBT

FIO

Fill volume, then random 1M writes for 96 hours, no read verification:

Fill volume, then small block writes for 96 hours, no read verification:

Fill volume, then 4k random writes for 96 hours, occasional read verification:

Summary

Other Books You May Enjoy

Leave a review - let other readers know what you think
