
TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers [Paperback]

3.92/5 (52 ratings by Goodreads)
  • Format: Paperback / softback, 501 pages, height x width x depth: 240x175x30 mm
  • Publication date: 21-Jan-2020
  • Publisher: O'Reilly Media
  • ISBN-10: 1492052043
  • ISBN-13: 9781492052043
  • Paperback
  • Price: 46,50 €*
  • * This is the final price, i.e. no additional discounts are applied
  • Standard price: 54,71 €
  • Save 15%
  • Delivery time is 3-4 weeks if the book is in stock at the publisher's warehouse. If the publisher needs to print a new run, delivery may take longer.
Deep learning networks are getting smaller. Much smaller. The Google Assistant team can detect words with a model just 14 kilobytes in size, small enough to run on a microcontroller. With this practical book you'll enter the field of TinyML, where deep learning and embedded systems combine to make astounding things possible with tiny devices. As of early 2022, the supplemental code files are available at https://oreil.ly/XuIQ4.

Pete Warden and Daniel Situnayake explain how you can train models small enough to fit into any environment. Ideal for software and hardware developers who want to build embedded systems using machine learning, this guide walks you through creating a series of TinyML projects, step-by-step. No machine learning or microcontroller experience is necessary.





  • Build a speech recognizer, a camera that detects people, and a magic wand that responds to gestures
  • Work with Arduino and ultra-low-power microcontrollers
  • Learn the essentials of ML and how to train your own models
  • Train models to understand audio, image, and accelerometer data
  • Explore TensorFlow Lite for Microcontrollers, Google's toolkit for TinyML
  • Debug applications and provide safeguards for privacy and security
  • Optimize latency, energy usage, and model and binary size
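
The "Hello World" chapters in the table of contents below walk through the core on-device flow step by step: mapping the model, creating an AllOpsResolver, defining a tensor arena, creating an interpreter, running inference, and reading the output. As a rough illustration of that pattern, here is a minimal C++ sketch using the TensorFlow Lite for Microcontrollers API. The model array name g_model, the arena size, and the exact header paths and namespaces are assumptions and vary between library versions, so treat this as an outline rather than the book's own code.

#include <cstdint>

#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Model exported as a C array (see "Converting to a C File" in Chapter 4).
// The name g_model is a placeholder for whatever the exported file defines.
extern const unsigned char g_model[];

// Scratch memory the interpreter allocates tensors from; the size is a guess
// and must be tuned per model.
constexpr int kTensorArenaSize = 2 * 1024;
uint8_t tensor_arena[kTensorArenaSize];

int main() {
  static tflite::MicroErrorReporter error_reporter;

  // Map the model held in the flat byte array.
  const tflite::Model* model = tflite::GetModel(g_model);

  // Resolver that makes every built-in op available to the interpreter
  // (older releases expose this as tflite::ops::micro::AllOpsResolver).
  static tflite::AllOpsResolver resolver;

  // Build the interpreter and allocate tensors from the arena.
  static tflite::MicroInterpreter interpreter(
      model, resolver, tensor_arena, kTensorArenaSize, &error_reporter);
  if (interpreter.AllocateTensors() != kTfLiteOk) {
    return 1;
  }

  // Fill the input tensor, run inference, and read the result.
  TfLiteTensor* input = interpreter.input(0);
  input->data.f[0] = 0.5f;  // example scalar input, as in the sine "Hello World" model
  if (interpreter.Invoke() != kTfLiteOk) {
    return 1;
  }
  TfLiteTensor* output = interpreter.output(0);
  float y = output->data.f[0];
  (void)y;  // in the book's example this value drives an LED or display
  return 0;
}

On Arduino-style targets the book splits this sequence into setup() and loop() functions rather than a single main(), but the order of calls is the same.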
Preface xiii
1 Introduction 1
    Embedded Devices 3
    Changing Landscape 4
2 Getting Started 5
    Who Is This Book Aimed At? 5
    What Hardware Do You Need? 6
    What Software Do You Need? 7
    What Do We Hope You'll Learn? 8
3 Getting Up to Speed on Machine Learning 11
    What Machine Learning Actually Is 12
    The Deep Learning Workflow 13
    Decide on a Goal 14
    Collect a Dataset 14
    Design a Model Architecture 16
    Train the Model 21
    Convert the Model 26
    Run Inference 26
    Evaluate and Troubleshoot 27
    Wrapping Up 28
4 The "Hello World" of TinyML: Building and Training a Model 29
    What We're Building 30
    Our Machine Learning Toolchain 32
    Python and Jupyter Notebooks 32
    Google Colaboratory 33
    TensorFlow and Keras 33
    Building Our Model 34
    Importing Dependencies 35
    Generating Data 38
    Splitting the Data 41
    Defining a Basic Model 42
    Training Our Model 46
    Training Metrics 48
    Graphing the History 49
    Improving Our Model 54
    Testing 58
    Converting the Model for TensorFlow Lite 60
    Converting to a C File 64
    Wrapping Up 65
5 The "Hello World" of TinyML: Building an Application 67
    Walking Through the Tests 68
    Including Dependencies 69
    Setting Up the Test 70
    Getting Ready to Log Data 70
    Mapping Our Model 72
    Creating an AllOpsResolver 74
    Defining a Tensor Arena 74
    Creating an Interpreter 75
    Inspecting the Input Tensor 75
    Running Inference on an Input 78
    Reading the Output 80
    Running the Tests 82
    Project File Structure 85
    Walking Through the Source 86
    Starting with main_functions.cc 87
    Handling Output with output_handler.cc 90
    Wrapping Up main_functions.cc 91
    Understanding main.cc 91
    Running Our Application 92
    Wrapping Up 93
6 The "Hello World" of TinyML: Deploying to Microcontrollers 95
    What Exactly Is a Microcontroller? 96
    Arduino 97
    Handling Output on Arduino 98
    Running the Example 101
    Making Your Own Changes 106
    SparkFun Edge 106
    Handling Output on SparkFun Edge 107
    Running the Example 110
    Testing the Program 117
    Viewing Debug Data 118
    Making Your Own Changes 118
    ST Microelectronics STM32F746G Discovery Kit 119
    Handling Output on STM32F746G 119
    Running the Example 124
    Making Your Own Changes 126
    Wrapping Up 126
7 Wake-Word Detection: Building an Application 127
    What We're Building 128
    Application Architecture 129
    Introducing Our Model 130
    All the Moving Parts 132
    Walking Through the Tests 133
    The Basic Flow 134
    The Audio Provider 138
    The Feature Provider 139
    The Command Recognizer 145
    The Command Responder 151
    Listening for Wake Words 152
    Running Our Application 156
    Deploying to Microcontrollers 156
    Arduino 157
    SparkFun Edge 165
    ST Microelectronics STM32F746G Discovery Kit 175
    Wrapping Up 180
8 Wake-Word Detection: Training a Model 181
    Training Our New Model 182
    Training in Colab 182
    Using the Model in Our Project 197
    Replacing the Model 197
    Updating the Labels 198
    Updating command_responder.cc 198
    Other Ways to Run the Scripts 201
    How the Model Works 202
    Visualizing the Inputs 202
    How Does Feature Generation Work? 206
    Understanding the Model Architecture 208
    Understanding the Model Output 213
    Training with Your Own Data 214
    The Speech Commands Dataset 215
    Training on Your Own Dataset 216
    How to Record Your Own Audio 216
    Data Augmentation 218
    Model Architectures 219
    Wrapping Up 219
9 Person Detection: Building an Application 221
    What We're Building 222
    Application Architecture 224
    Introducing Our Model 224
    All the Moving Parts 225
    Walking Through the Tests 227
    The Basic Flow 227
    The Image Provider 231
    The Detection Responder 232
    Detecting People 233
    Deploying to Microcontrollers 236
    Arduino 236
    SparkFun Edge 246
    Wrapping Up 257
10 Person Detection: Training a Model 259
    Picking a Machine 259
    Setting Up a Google Cloud Platform Instance 260
    Training Framework Choice 268
    Building the Dataset 269
    Training the Model 270
    TensorBoard 272
    Evaluating the Model 274
    Exporting the Model to TensorFlow Lite 274
    Exporting to a GraphDef Protobuf File 274
    Freezing the Weights 275
    Quantizing and Converting to TensorFlow Lite 275
    Converting to a C Source File 276
    Training for Other Categories 277
    Understanding the Architecture 277
    Wrapping Up 278
11 Magic Wand: Building an Application 279
    What We're Building 282
    Application Architecture 283
    Introducing Our Model 284
    All the Moving Parts 284
    Walking Through the Tests 285
    The Basic Flow 286
    The Accelerometer Handler 289
    The Gesture Predictor 291
    The Output Handler 294
    Detecting Gestures 295
    Deploying to Microcontrollers 298
    Arduino 298
    SparkFun Edge 312
    Wrapping Up 327
12 Magic Wand: Training a Model 329
    Training a Model 330
    Training in Colab 330
    Other Ways to Run the Scripts 339
    How the Model Works 339
    Visualizing the Input 339
    Understanding the Model Architecture 342
    Training with Your Own Data 349
    Capturing Data 349
    Modifying the Training Scripts 352
    Training 352
    Using the New Model 352
    Wrapping Up 353
    Learning Machine Learning 353
    What's Next 354
13 TensorFlow Lite for Microcontrollers 355
    What Is TensorFlow Lite for Microcontrollers? 355
    TensorFlow 355
    TensorFlow Lite 356
    TensorFlow Lite for Microcontrollers 356
    Requirements 357
    Why Is the Model Interpreted? 359
    Project Generation 360
    Build Systems 361
    Specializing Code 362
    Makefiles 366
    Writing Tests 369
    Supporting a New Hardware Platform 370
    Printing to a Log 371
    Implementing DebugLog() 373
    Running All the Targets 375
    Integrating with the Makefile Build 376
    Supporting a New IDE or Build System 376
    Integrating Code Changes Between Projects and Repositories 377
    Contributing Back to Open Source 379
    Supporting New Hardware Accelerators 380
    Understanding the File Format 381
    FlatBuffers 382
    Porting TensorFlow Lite Mobile Ops to Micro 388
    Separate the Reference Code 389
    Create a Micro Copy of the Operator 389
    Port the Test to the Micro Framework 390
    Build a Bazel Test 391
    Add Your Op to AllOpsResolver 391
    Build a Makefile Test 391
    Wrapping Up 392
14 Designing Your Own TinyML Applications 393
    The Design Process 393
    Do You Need a Microcontroller, or Would a Larger Device Work? 394
    Understanding What's Possible 395
    Follow in Someone Else's Footsteps 395
    Find Some Similar Models to Train 396
    Look at the Data 397
    Wizard of Oz-ing 398
    Get It Working on the Desktop First 399
15 Optimizing Latency 401
    First Make Sure It Matters 401
    Hardware Changes 402
    Model Improvements 402
    Estimating Model Latency 403
    How to Speed Up Your Model 404
    Quantization 404
    Product Design 406
    Code Optimizations 407
    Performance Profiling 407
    Optimizing Operations 409
    Look for Implementations That Are Already Optimized 409
    Write Your Own Optimized Implementation 409
    Taking Advantage of Hardware Features 412
    Accelerators and Coprocessors 413
    Contributing Back to Open Source 414
    Wrapping Up 414
16 Optimizing Energy Usage 415
    Developing Intuition 415
    Typical Component Power Usage 416
    Hardware Choice 417
    Measuring Real Power Usage 419
    Estimating Power Usage for a Model 419
    Improving Power Usage 420
    Duty Cycling 420
    Cascading Design 421
    Wrapping Up 422
17 Optimizing Model and Binary Size 423
    Understanding Your System's Limits 423
    Estimating Memory Usage 424
    Flash Usage 424
    RAM Usage 425
    Ballpark Figures for Model Accuracy and Size on Different Problems 426
    Speech Wake-Word Model 427
    Accelerometer Predictive Maintenance Model 427
    Person Presence Detection 427
    Model Choice 428
    Reducing the Size of Your Executable 428
    Measuring Code Size 429
    How Much Space Is TensorFlow Lite for Microcontrollers Taking? 429
    OpResolver 430
    Understanding the Size of Individual Functions 431
    Framework Constants 434
    Truly Tiny Models 434
    Wrapping Up 435
18 Debugging 437
    Accuracy Loss Between Training and Deployment 437
    Preprocessing Differences 437
    Debugging Preprocessing 439
    On-Device Evaluation 440
    Numerical Differences 440
    Are the Differences a Problem? 440
    Establish a Metric 441
    Compare Against a Baseline 441
    Swap Out Implementations 442
    Mysterious Crashes and Hangs 442
    Desktop Debugging 443
    Log Tracing 443
    Shotgun Debugging 444
    Memory Corruption 444
    Wrapping Up 445
19 Porting Models from TensorFlow to TensorFlow Lite 447
    Understand What Ops Are Needed 447
    Look at Existing Op Coverage in TensorFlow Lite 448
    Move Preprocessing and Postprocessing into Application Code 449
    Implement Required Ops if Necessary 450
    Optimize Ops 450
    Wrapping Up 450
20 Privacy, Security, and Deployment 453
    Privacy 453
    The Privacy Design Document 454
    Using a PDD 456
    Security 456
    Protecting Models 457
    Deployment 458
    Moving from a Development Board to a Product 458
    Wrapping Up 459
21 Learning More 461
    The TinyML Foundation 461
    SIG Micro 461
    The TensorFlow Website 462
    Other Frameworks 462
    Twitter 462
    Friends of TinyML 462
    Wrapping Up 463
A Using and Generating an Arduino Library Zip 465
B Capturing Audio on Arduino 467
Index 475