E-book: C4.5: Programs for Machine Learning

  • Format: PDF+DRM
  • Publication date: 28-Jun-2014
  • Publisher: Morgan Kaufmann Publishers Inc.
  • Language: English
  • ISBN-13: 9780080500584
  • Price: 57,85 €*
  • * This is the final price, i.e., no additional discounts apply
  • Add to cart
  • Add to wishlist
  • This e-book is intended for personal use only. E-books cannot be returned, and money paid for purchased e-books is not refunded.

DRM restrictions

  • Copying (copy/paste): not allowed

  • Printing: not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has supplied this book in encrypted form, which means that you need to install free software in order to unlock and read it. To read this e-book you must create an Adobe ID. More information here. The e-book can be read and downloaded on up to 6 devices (by one user with the same Adobe ID).

    Required software
    To read this e-book on a mobile device (phone or tablet), you will need to install this free app: PocketBook Reader (iOS / Android)

    To download and read this e-book on a PC or Mac, you will need Adobe Digital Editions (a free app designed specifically for e-books; it is not the same as Adobe Reader, which you may already have on your computer).

    You cannot read this e-book on an Amazon Kindle.

Classifier systems play a major role in machine learning and knowledge-based systems. This is a complete guide to the C4.5 system as implemented in C for the UNIX environment. It contains a comprehensive guide to its use, the source code (about 8,800 lines), and implementation notes. Chapters discuss constructing decision trees, windowing, grouping attribute values, and interacting with classification models. The source code and sample data sets are available on disks separately for Sun workstations. Annotation copyright Book News, Inc. Portland, Or.

Classifier systems play a major role in machine learning and knowledge-based systems, and Ross Quinlan's work on ID3 and C4.5 is widely acknowledged to have made some of the most significant contributions to their development. This book is a complete guide to the C4.5 system as implemented in C for the UNIX environment. It contains a comprehensive guide to the system's use, the source code (about 8,800 lines), and implementation notes. The source code and sample datasets are also available for download (see below).



C4.5 starts with large sets of cases belonging to known classes. The cases, described by any mixture of nominal and numeric properties, are scrutinized for patterns that allow the classes to be reliably discriminated. These patterns are then expressed as models, in the form of decision trees or sets of if-then rules, that can be used to classify new cases, with emphasis on making the models understandable as well as accurate. The system has been applied successfully to tasks involving tens of thousands of cases described by hundreds of properties. The book starts from simple core learning methods and shows how they can be elaborated and extended to deal with typical problems such as missing data and overfitting. Advantages and disadvantages of the C4.5 approach are discussed and illustrated with several case studies.
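
To give a concrete feel for the test-evaluation step mentioned above, here is a minimal, self-contained sketch in C (the book's implementation language, though the code below is not taken from the book's roughly 8,800-line source). It scores a single candidate test on one nominal attribute using C4.5's gain ratio criterion: information gain divided by split information. The attribute, the class labels, and the tiny data set are hypothetical values chosen purely for illustration.

    /*
     * Illustrative sketch only: score one nominal-attribute test by gain ratio.
     * The toy data (one 3-valued attribute, two classes, 14 cases) is made up.
     * Compile with: cc gainratio.c -lm
     */
    #include <stdio.h>
    #include <math.h>

    #define CASES      14
    #define ATT_VALUES 3   /* e.g. an attribute with values 0, 1, 2 */
    #define CLASSES    2   /* e.g. Play / Don't Play */

    /* Entropy (in bits) of a vector of class counts summing to `total`. */
    static double entropy(const double cnt[], int n, double total)
    {
        double e = 0.0;
        for (int i = 0; i < n; i++)
            if (cnt[i] > 0.0)
                e -= (cnt[i] / total) * log2(cnt[i] / total);
        return e;
    }

    int main(void)
    {
        /* One attribute value and one class label per case (toy data). */
        int att[CASES] = {0,0,0,0,0, 1,1,1,1, 2,2,2,2,2};
        int cls[CASES] = {0,0,1,1,1, 1,1,1,1, 1,1,1,0,0};

        double class_cnt[CLASSES] = {0}, value_cnt[ATT_VALUES] = {0};
        double joint[ATT_VALUES][CLASSES] = {{0}};

        for (int i = 0; i < CASES; i++) {
            class_cnt[cls[i]] += 1.0;
            value_cnt[att[i]] += 1.0;
            joint[att[i]][cls[i]] += 1.0;
        }

        /* info(S): class entropy before the split. */
        double base_info = entropy(class_cnt, CLASSES, CASES);

        /* info_X(S): weighted class entropy of the subsets the test creates,
         * and split info: entropy of the subset sizes themselves. */
        double after_info = 0.0, split_info = 0.0;
        for (int v = 0; v < ATT_VALUES; v++) {
            if (value_cnt[v] == 0.0)
                continue;
            after_info += (value_cnt[v] / CASES)
                          * entropy(joint[v], CLASSES, value_cnt[v]);
            split_info -= (value_cnt[v] / CASES) * log2(value_cnt[v] / CASES);
        }

        double gain = base_info - after_info;
        double gain_ratio = (split_info > 0.0) ? gain / split_info : 0.0;

        printf("gain = %.3f bits, split info = %.3f, gain ratio = %.3f\n",
               gain, split_info, gain_ratio);
        return 0;
    }

In the full system, every candidate test (including threshold tests on continuous attributes) is scored in this way, and the divide-and-conquer step selects a test with the best gain ratio among those whose gain is at least average; pruning, handling of unknown values, and rule generation then build on the resulting tree, as the corresponding chapters listed below describe.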



This book and software should be of interest to developers of classification-based intelligent systems and to students in machine learning and expert systems courses.

Preface vii
How to Obtain the C4.5 Software ix
Introduction 1(16)
  Example: Labor negotiation settlements 3(9)
  Other kinds of classification models 12(4)
  What lies ahead 16(1)
Constructing Decision Trees 17(10)
  Divide and conquer 17(3)
  Evaluating tests 20(4)
  Possible tests considered 24(1)
  Tests on continuous attributes 25(2)
Unknown Attribute Values 27(8)
  Adapting the previous algorithms 28(2)
  Play/Don't Play example again 30(2)
  Recapitulation 32(3)
Pruning Decision Trees 35(10)
  When to simplify? 36(1)
  Error-based pruning 37(4)
  Example: Democrats and Republicans 41(1)
  Estimating error rates for trees 42(3)
From Trees to Rules 45(12)
  Generalizing single rules 47(3)
  Class rulesets 50(4)
  Ranking classes and choosing a default 54(1)
  Summary 55(2)
Windowing 57(6)
  Example: Hypothyroid conditions revisited 58(1)
  Why retain windowing? 58(2)
  Example: The multiplexor 60(3)
Grouping Attribute Values 63(8)
  Finding value groups by merging 64(1)
  Example: Soybean diseases 65(1)
  When to form groups? 66(1)
  Example: The Monk's problems 67(2)
  Uneasy reflections 69(2)
Interacting with Classification Models 71(10)
  Decision tree models 71(7)
  Production rule models 78(2)
  Caveat 80(1)
Guide to Using the System 81(14)
  Files 81(3)
  Running the programs 84(5)
  Conducting experiments 89(2)
  Using options: A credit approval example 91(4)
Limitations 95(8)
  Geometric interpretation 95(1)
  Nonrectangular regions 96(2)
  Poorly delineated regions 98(2)
  Fragmented regions 100(2)
  A more cheerful note 102(1)
Desirable Additions 103(6)
  Continuous classes 103(1)
  Ordered discrete attributes 104(1)
  Structured attributes 104(1)
  Structured induction 105(1)
  Incremental induction 106(1)
  Prospectus 107(2)
Appendix: Program Listings 109(182)
  Brief descriptions of the contents of files 110(2)
  Notes on some important data structures 112(3)
  Source code for the system 115(173)
  Alphabetic index of routines 288(3)
References and Bibliography 291(6)
Author Index 297(2)
Subject Index 299
J. Ross Quinlan, University of New South Wales