Mastering Ceph (eBook)


Author: Nick Fisk

Publisher: Packt Publishing

Publication date: 2019-03-05

Word count: 386,000

Discover the unified, distributed storage system and improve the performance of applications

Key Features
  • Explore the latest features of Ceph's Mimic release
  • Get to grips with advanced disaster and recovery practices for your storage
  • Harness the power of the Reliable Autonomic Distributed Object Store (RADOS) to optimize your storage systems

Book Description
Ceph is an open source distributed storage system that is scalable to exabyte deployments. This second edition of Mastering Ceph takes you a step closer to becoming an expert on Ceph. You'll start by understanding the design goals and planning steps that should be undertaken to ensure successful deployments. Next, you'll be guided through setting up and deploying a Ceph cluster with the help of orchestration tools, letting you see Ceph's scalability, its erasure coding (data protection) mechanism, and its automated data backup features across multiple servers. You'll then explore key areas of Ceph, including BlueStore, erasure coding, and cache tiering, with the help of examples. After that, you'll learn ways to export Ceph into non-native environments and the pitfalls you may encounter along the way. The book features a section on tuning that takes you through the process of optimizing both Ceph and its supporting infrastructure. You'll also learn to develop applications that use librados, and distributed computation with shared object classes. In the concluding chapters, you'll learn to troubleshoot issues and handle scenarios where Ceph is not likely to recover on its own. By the end of this book, you'll have mastered storage management with Ceph and be able to build solutions for managing your infrastructure.

What you will learn
  • Plan, design, and deploy a Ceph cluster
  • Get well-versed with different features and storage methods
  • Carry out regular maintenance and daily operations with ease
  • Tune Ceph for improved ROI and performance
  • Recover Ceph from a range of issues
  • Upgrade clusters to BlueStore

Who this book is for
If you are a storage professional, system administrator, or cloud engineer looking for guidance on building powerful storage solutions for your cloud and on-premises infrastructure, this book is for you.
Table of Contents

Title Page

Copyright and Credits

Mastering Ceph Second Edition

About Packt

Why subscribe?

Packt.com

Contributors

About the author

About the reviewer

Packt is searching for authors like you

Preface

Who this book is for

What this book covers

To get the most out of this book

Download the example code files

Download the color images

Conventions used

Get in touch

Reviews

Section 1: Planning And Deployment

Planning for Ceph

What is Ceph?

How Ceph works

Ceph use cases

Specific use cases

OpenStack or KVM based virtualization

Large bulk block storage

Object storage

Object storage with custom applications

Distributed filesystem – web farm

Distributed filesystem – NAS or fileserver replacement

Big data

Infrastructure design

SSDs

Enterprise SSDs

Enterprise – read-intensive

Enterprise – general usage

Enterprise – write-intensive

Memory

CPU

Disks

Networking

10 G requirement

Network design

OSD node sizes

Failure domains

Price

Power supplies

How to plan a successful Ceph implementation

Understanding your requirements and how they relate to Ceph

Defining goals so that you can gauge whether the project is a success

Joining the Ceph community

Choosing your hardware

Training yourself and your team to use Ceph

Running a PoC to determine whether Ceph has met the requirements

Following best practices to deploy your cluster

Defining a change management process

Creating a backup and recovery plan

Summary

Questions

Deploying Ceph with Containers

Technical requirements

Preparing your environment with Vagrant and VirtualBox

How to install VirtualBox

How to set up Vagrant

Ceph-deploy

Orchestration

Ansible

Installing Ansible

Creating your inventory file

Variables

Testing

A very simple playbook

Adding the Ceph Ansible modules

Deploying a test cluster with Ansible

Change and configuration management

Ceph in containers

Containers

Kubernetes

Deploying a Ceph cluster with Rook

Summary

Questions

BlueStore

What is BlueStore?

Why was it needed?

Ceph's requirements

Filestore limitations

Why is BlueStore the solution?

How BlueStore works

RocksDB

Compression

Checksums

BlueStore cache tuning

Deferred writes

BlueFS

ceph-volume

How to use BlueStore

Strategies for upgrading an existing cluster to BlueStore

Upgrading an OSD in your test cluster

Summary

Questions

Ceph and Non-Native Protocols

Block

File

Examples

Exporting Ceph RBDs via iSCSI

Exporting CephFS via Samba

Exporting CephFS via NFS

ESXi hypervisor

Clustering

Split brain

Fencing

Pacemaker and corosync

Creating a highly available NFS share backed by CephFS

Summary

Questions

Section 2: Operating and Tuning

RADOS Pools and Client Access

Pools

Replicated pools

Erasure code pools

What is erasure coding?

K+M

How does erasure coding work in Ceph?

Algorithms and profiles

Jerasure

ISA

LRC

SHEC

Overwrite support in erasure-coded pools

Creating an erasure-coded pool

Troubleshooting the 2147483647 error

Reproducing the problem

Scrubbing

Ceph storage types

RBD

Thin provisioning

Snapshots and clones

Object maps

Exclusive locking

CephFS

MDSes and their states

Creating a CephFS filesystem

How is data stored in CephFS?

File layouts

Snapshots

Multi-MDS

RGW

Deploying RGW

Summary

Questions

Developing with Librados

What is librados?

How to use librados

Example librados application

Example of the librados application with atomic operations

Example of the librados application that uses watchers and notifiers

Summary

Questions

Distributed Computation with Ceph RADOS Classes

Example applications and the benefits of using RADOS classes

Writing a simple RADOS class in Lua

Writing a RADOS class that simulates distributed computing

Preparing the build environment

RADOS classes

Client librados applications

Calculating MD5 on the client

Calculating MD5 on the OSD via the RADOS class

Testing

RADOS class caveats

Summary

Questions

Monitoring Ceph

Why it is important to monitor Ceph

What should be monitored

Ceph health

Operating system and hardware

Smart stats

Network

Performance counters

The Ceph dashboard

PG states – the good, the bad, and the ugly

The good ones

The active state

The clean state

Scrubbing and deep scrubbing

The bad ones

The inconsistent state

The backfilling, backfill_wait, recovering, and recovery_wait states

The degraded state

Remapped

Peering

The ugly ones

The incomplete state

The down state

The backfill_toofull and recovery_toofull state

Monitoring Ceph with collectd

Graphite

Grafana

collectd

Deploying collectd with Ansible

Sample Graphite queries for Ceph

Number of Up and In OSDs

Showing the most deviant OSD usage

Total number of IOPs across all OSDs

Total MBps across all OSDs

Cluster capacity and usage

Average latency

Custom Ceph collectd plugins

Summary

Questions

Tuning Ceph

Latency

Client to Primary OSD

Primary OSD to Replica OSD(s)

Primary OSD to Client

Benchmarking

Benchmarking tools

Network benchmarking

Disk benchmarking

RADOS benchmarking

RBD benchmarking

Recommended tunings

CPU

BlueStore

WAL deferred writes

Filestore

VFS cache pressure

WBThrottle and/or nr_requests

Throttling filestore queues

filestore_queue_low_threshhold

filestore_queue_high_threshhold

filestore_expected_throughput_ops

filestore_queue_high_delay_multiple

filestore_queue_max_delay_multiple

Splitting PGs

Scrubbing

OP priorities

The network

General system tuning

Kernel RBD

Queue depth

readahead

Tuning CephFS

RBDs and erasure-coded pools

PG distributions

Summary

Questions

Tiering with Ceph

Tiering versus caching

How Ceph's tiering functionality works

What is a bloom filter?

Tiering modes

Writeback

Forward

Read-forward

Proxy

Read-proxy

Use cases

Creating tiers in Ceph

Tuning tiering

Flushing and eviction

Promotions

Promotion throttling

Monitoring parameters

Alternative caching mechanisms

Summary

Questions

Section 3: Troubleshooting and Recovery

Troubleshooting

Repairing inconsistent objects

Full OSDs

Ceph logging

Slow performance

Causes

Increased client workload

Down OSDs

Recovery and backfilling

Scrubbing

Snaptrimming

Hardware or driver issues

Monitoring

iostat

htop

atop

Diagnostics

Extremely slow performance or no IO

Flapping OSDs

Jumbo frames

Failing disks

Slow OSDs

Out of capacity

Investigating PGs in a down state

Large monitor databases

Summary

Questions

Disaster Recovery

What is a disaster?

Avoiding data loss

What can cause an outage or data loss?

RBD mirroring

The journal

The rbd-mirror daemon

Configuring RBD mirroring

Performing RBD failover

RBD recovery

Filestore

BlueStore

RBD assembly – filestore

RBD assembly – BlueStore

Confirmation of recovery

RGW Multisite

CephFS recovery

Creating the disaster

CephFS metadata recovery

Lost objects and inactive PGs

Recovering from a complete monitor failure

Using the Ceph object-store tool

Investigating asserts

Example assert

Summary

Questions

Assessments

Chapter 1, Planning for Ceph

Chapter 2, Deploying Ceph with Containers

Chapter 3, BlueStore

Chapter 4, Ceph and Non-Native Protocols

Chapter 5, RADOS Pools and Client Access

Chapter 6, Developing with Librados

Chapter 7, Distributed Computation with Ceph RADOS Classes

Chapter 8, Monitoring Ceph

Chapter 9, Tuning Ceph

Chapter 10, Tiering with Ceph

Chapter 11, Troubleshooting

Chapter 12, Disaster Recovery

Other Books You May Enjoy

Leave a review - let other readers know what you think
