
E-book: Learning Ray

3.86/5 (14 ratings by Goodreads)
  • Pages: 274
  • Publication date: 13-Feb-2023
  • Publisher: O'Reilly Media
  • Language: English
  • ISBN-13: 9781098117191
  • Format: PDF+DRM
  • Price: 46,20 €*
  • * This is the final price, i.e., no additional discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned, and payments for purchased e-books are not refunded.

DRM restrictions

  • Copying (copy/paste): not allowed
  • Printing: not allowed
  • Usage:

    Digital Rights Management (DRM)
    The publisher has supplied this book in encrypted form, which means you need to install free software to unlock and read it. To read this e-book, you must create an Adobe ID. The e-book can be read and downloaded on up to 6 devices (by a single user with the same Adobe ID).

    Required software
    To read this e-book on a mobile device (phone or tablet), you will need to install this free app: PocketBook Reader (iOS / Android)

    To download and read this e-book on a PC or Mac, you will need Adobe Digital Editions (a free app designed specifically for e-books; it is not the same as Adobe Reader, which you may already have on your computer).

    You cannot read this e-book on an Amazon Kindle.

Get started with Ray, the open source distributed computing framework that simplifies the process of scaling compute-intensive Python workloads. With this practical book, Python programmers, data engineers, and data scientists will learn how to leverage Ray locally and spin up compute clusters. You'll be able to use Ray to structure and run machine learning programs at scale.

Authors Max Pumperla, Edward Oakes, and Richard Liaw show you how to build machine learning applications with Ray. You'll understand how Ray fits into the current landscape of machine learning tools and discover how Ray continues to integrate ever more tightly with these tools. Distributed computation is hard, but by using Ray you'll find it easy to get started.

  • Learn how to build your first distributed applications with Ray Core (a minimal sketch follows this list)
  • Conduct hyperparameter optimization with Ray Tune
  • Use the Ray RLlib library for reinforcement learning
  • Manage distributed training with the Ray Train library
  • Use Ray to perform data processing with Ray Datasets
  • Learn how to work with Ray Clusters and serve models with Ray Serve
  • Build end-to-end machine learning applications with Ray AIR
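
To give a feel for the first item above, here is a minimal sketch of the Ray Core task pattern the book opens with (an illustration, not an excerpt from the book; the square function and its inputs are made up for this example):

    import ray

    ray.init()  # start a local Ray runtime; connects to an existing cluster if one is configured

    @ray.remote  # turn an ordinary Python function into a distributed Ray task
    def square(x):
        return x * x

    # .remote() schedules each call and returns object references immediately;
    # ray.get() blocks until all results are available.
    futures = [square.remote(i) for i in range(4)]
    print(ray.get(futures))  # [0, 1, 4, 9]

The same decorator-based pattern runs unchanged on a laptop or on a multinode cluster, which is the scaling story described above.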

Table of Contents

Foreword
Preface
1 An Overview of Ray
    What Is Ray?
    What Led to Ray?
    Ray's Design Principles
    Three Layers: Core, Libraries, and Ecosystem
    A Distributed Computing Framework
    A Suite of Data Science Libraries
    Ray AIR and the Data Science Workflow
    Data Processing with Ray Datasets
    Model Training
    Hyperparameter Tuning
    Model Serving
    A Growing Ecosystem
    Summary
2 Getting Started with Ray Core
    An Introduction to Ray Core
    A First Example Using the Ray API
    An Overview of the Ray Core API
    Understanding Ray System Components
    Scheduling and Executing Work on a Node
    The Head Node
    Distributed Scheduling and Execution
    A Simple MapReduce Example with Ray
    Mapping and Shuffling Document Data
    Reducing Word Counts
    Summary
3 Building Your First Distributed Application
    Introducing Reinforcement Learning
    Setting Up a Simple Maze Problem
    Building a Simulation
    Training a Reinforcement Learning Model
    Building a Distributed Ray App
    Recapping RL Terminology
    Summary
4 Reinforcement Learning with Ray RLlib
    An Overview of RLlib
    Getting Started with RLlib
    Building a Gym Environment
    Running the RLlib CLI
    Using the RLlib Python API
    Configuring RLlib Experiments
    Resource Configuration
    Rollout Worker Configuration
    Environment Configuration
    Working with RLlib Environments
    An Overview of RLlib Environments
    Working with Multiple Agents
    Working with Policy Servers and Clients
    Advanced Concepts
    Building an Advanced Environment
    Applying Curriculum Learning
    Working with Offline Data
    Other Advanced Topics
    Summary
5 Hyperparameter Optimization with Ray Tune
    Tuning Hyperparameters
    Building a Random Search Example with Ray
    Why Is HPO Hard?
    An Introduction to Tune
    How Does Tune Work?
    Configuring and Running Tune
    Machine Learning with Tune
    Using RLlib with Tune
    Tuning Keras Models
    Summary
6 Data Processing with Ray
    Ray Datasets
    Ray Datasets Basics
    Computing Over Ray Datasets
    Dataset Pipelines
    Example: Training Copies of a Classifier in Parallel
    External Library Integrations
    Building an ML Pipeline
    Summary
7 Distributed Training with Ray Train
    The Basics of Distributed Model Training
    Introduction to Ray Train by Example
    Predicting Big Tips in NYC Taxi Rides
    Loading, Preprocessing, and Featurization
    Defining a Deep Learning Model
    Distributed Training with Ray Train
    Distributed Batch Inference
    More on Trainers in Ray Train
    Migrating to Ray Train with Minimal Code Changes
    Scaling Out Trainers
    Preprocessing with Ray Train
    Integrating Trainers with Ray Tune
    Using Callbacks to Monitor Training
    Summary
8 Online Inference with Ray Serve
    Key Characteristics of Online Inference
    ML Models Are Compute Intensive
    ML Models Aren't Useful in Isolation
    An Introduction to Ray Serve
    Architectural Overview
    Defining a Basic HTTP Endpoint
    Scaling and Resource Allocation
    Request Batching
    Multimodel Inference Graphs
    End-to-End Example: Building an NLP-Powered API
    Fetching Content and Preprocessing
    NLP Models
    HTTP Handling and Driver Logic
    Putting It All Together
    Summary
9 Ray Clusters
    Manually Creating a Ray Cluster
    Deployment on Kubernetes
    Setting Up Your First KubeRay Cluster
    Interacting with the KubeRay Cluster
    Exposing KubeRay
    Configuring KubeRay
    Configuring Logging for KubeRay
    Using the Ray Cluster Launcher
    Configuring Your Ray Cluster
    Using the Cluster Launcher CLI
    Interacting with a Ray Cluster
    Working with Cloud Clusters
    AWS
    Using Other Cloud Providers
    Autoscaling
    Summary
10 Getting Started with the Ray AI Runtime
    Why Use AIR?
    Key AIR Concepts by Example
    Ray Datasets and Preprocessors
    Trainers
    Tuners and Checkpoints
    Batch Predictors
    Deployments
    Workloads That Are Suited for AIR
    AIR Workload Execution
    AIR Memory Management
    AIR Failure Model
    Autoscaling AIR Workloads
    Summary
11 Ray's Ecosystem and Beyond
    A Growing Ecosystem
    Data Loading and Processing
    Model Training
    Model Serving
    Building Custom Integrations
    An Overview of Ray's Integrations
    Ray and Other Systems
    Distributed Python Frameworks
    Ray AIR and the Broader ML Ecosystem
    How to Integrate AIR into Your ML Platform
    Where to Go from Here?
    Summary
Index

Max Pumperla is a data science professor and software engineer based in Hamburg, Germany. He is an active open source contributor, a maintainer of several Python packages, and the author of machine learning books. He currently works as a software engineer at Anyscale. As head of product research at Pathmind Inc., he developed reinforcement learning solutions for industrial applications at scale using Ray RLlib, Serve, and Tune.

Edward Oakes is a software engineer and team lead at Anyscale, where he leads the development of Ray Serve and is one of the top open source contributors to Ray. Prior to Anyscale, he was a graduate student in the EECS department at UC Berkeley.

Richard Liaw is a software engineer at Anyscale, working on open source tools for distributed machine learning. He is on leave from the PhD program in the Computer Science Department at UC Berkeley, where he is advised by Joseph Gonzalez, Ion Stoica, and Ken Goldberg.