
Support Vector Machines and Perceptrons: Learning, Optimization, Classification, and Application to Social Networks, 1st ed. 2016 [Paperback]

  • Format: Paperback / softback, XIII + 95 pages, 25 black-and-white illustrations, height x width: 235x155 mm, weight: 1825 g
  • Series: SpringerBriefs in Computer Science
  • Publication date: 25-Aug-2016
  • Publisher: Springer International Publishing AG
  • ISBN-10: 3319410628
  • ISBN-13: 9783319410623
  • Paperback
  • Price: 46.91 €*
  • * This is the final price, i.e., no additional discounts apply
  • Standard price: 55.19 €
  • Save 15%
  • Delivery time is 3-4 weeks if the book is in stock at the publisher's warehouse. If the publisher needs to print a new run, delivery may be delayed.
This work reviews the state of the art in SVM and perceptron classifiers. The Support Vector Machine (SVM) is arguably the most popular tool for a wide variety of machine-learning tasks, including classification. SVMs are built around maximizing the margin between two classes; the associated optimization problem is convex, which guarantees a globally optimal solution. The weight vector of an SVM is obtained as a linear combination of some of the boundary and noisy vectors. Further, when the data are not linearly separable, tuning the coefficient of the regularization term becomes crucial. Even though SVMs popularized the kernel trick, linear SVMs are commonly used in most practical high-dimensional applications. The text examines applications to social and information networks. The work also discusses another popular linear classifier, the perceptron, and compares its performance with that of the SVM in different application areas.
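
The blurb's key ideas (margin maximization, convexity, the regularization coefficient, and the weight vector as a linear combination of a few boundary vectors) can be made concrete with a short sketch. The snippet below is not taken from the book; it is a minimal illustration, assuming scikit-learn and NumPy are available, that fits a linear SVM to a hypothetical 2-D toy dataset and recovers the weight vector from the support vectors.

```python
# A minimal sketch (not from the book): a linear SVM on a toy 2-D dataset,
# showing that the learned weight vector is a linear combination of the
# support vectors. Assumes scikit-learn and NumPy are installed.
import numpy as np
from sklearn.svm import SVC

# Two linearly separable classes in 2-D (hypothetical data).
X = np.array([[1.0, 1.0], [2.0, 1.5], [1.5, 2.0],   # class -1
              [4.0, 4.0], [5.0, 4.5], [4.5, 5.0]])  # class +1
y = np.array([-1, -1, -1, 1, 1, 1])

# C is the regularization coefficient mentioned in the description;
# a large C approximates the hard-margin formulation.
clf = SVC(kernel="linear", C=100.0).fit(X, y)

# For the linear kernel, w is exposed directly ...
w = clf.coef_[0]
# ... and can also be recovered as sum_i (alpha_i * y_i * x_i) over the
# support vectors; dual_coef_ already stores alpha_i * y_i.
w_from_svs = clf.dual_coef_ @ clf.support_vectors_

print("w             :", w)
print("w from SVs    :", w_from_svs.ravel())
print("margin width  :", 2.0 / np.linalg.norm(w))
```

Increasing C pushes the solution toward the hard-margin case, while smaller values tolerate misclassified points; this is the tuning issue the description highlights for data that are not linearly separable.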

Introduction.- Linear Discriminant Function.- Perceptron.- Linear Support Vector Machines.- Kernel Based SVM.- Application to Social Networks.- Conclusion.

Dr. M. Narasimha Murty is a professor in the Department of Computer Science and Automation at the Indian Institute of Science, Bangalore.

Reviews

The book deals primarily with classification, with a focus on linear classifiers. It is intended for senior undergraduate and graduate students and researchers working in machine learning, data mining and pattern recognition. (Smaranda Belciug, zbMATH 1365.68003, 2017)

1 Introduction 1(14)
1.1 Terminology 1(2)
1.1.1 What Is a Pattern? 1(1)
1.1.2 Why Pattern Representation? 2(1)
1.1.3 What Is Pattern Representation? 2(1)
1.1.4 How to Represent Patterns? 2(1)
1.1.5 Why Represent Patterns as Vectors? 2(1)
1.1.6 Notation 3(1)
1.2 Proximity Function 3(3)
1.2.1 Distance Function 3(1)
1.2.2 Similarity Function 4(1)
1.2.3 Relation Between Dot Product and Cosine Similarity 5(1)
1.3 Classification 6(1)
1.3.1 Class 6(1)
1.3.2 Representation of a Class 6(1)
1.3.3 Choice of G(X) 7(1)
1.4 Classifiers 7(7)
1.4.1 Nearest Neighbor Classifier (NNC) 7(1)
1.4.2 K-Nearest Neighbor Classifier (KNNC) 7(1)
1.4.3 Minimum-Distance Classifier (MDC) 8(1)
1.4.4 Minimum Mahalanobis Distance Classifier 9(1)
1.4.5 Decision Tree Classifier (DTC) 10(2)
1.4.6 Classification Based on a Linear Discriminant Function 12(1)
1.4.7 Nonlinear Discriminant Function 12(1)
1.4.8 Naive Bayes Classifier (NBC) 13(1)
1.5 Summary 14(1)
References 14(1)
2 Linear Discriminant Function 15(12)
2.1 Introduction 15(2)
2.1.1 Associated Terms 15(2)
2.2 Linear Classifier 17(2)
2.3 Linear Discriminant Function 19(4)
2.3.1 Decision Boundary 19(1)
2.3.2 Negative Half Space 19(1)
2.3.3 Positive Half Space 19(1)
2.3.4 Linear Separability 20(1)
2.3.5 Linear Classification Based on a Linear Discriminant Function 20(3)
2.4 Example Linear Classifiers 23(4)
2.4.1 Minimum-Distance Classifier (MDC) 23(1)
2.4.2 Naive Bayes Classifier (NBC) 23(1)
2.4.3 Nonlinear Discriminant Function 24(1)
References 25(2)
3 Perceptron 27(14)
3.1 Introduction 27(1)
3.2 Perceptron Learning Algorithm 28(4)
3.2.1 Learning Boolean Functions 28(2)
3.2.2 W Is Not Unique 30(1)
3.2.3 Why Should the Learning Algorithm Work? 30(1)
3.2.4 Convergence of the Algorithm 31(1)
3.3 Perceptron Optimization 32(2)
3.3.1 Incremental Rule 33(1)
3.3.2 Nonlinearly Separable Case 33(1)
3.4 Classification Based on Perceptrons 34(4)
3.4.1 Order of the Perceptron 35(2)
3.4.2 Permutation Invariance 37(1)
3.4.3 Incremental Computation 37(1)
3.5 Experimental Results 38(1)
3.6 Summary 39(2)
References 40(1)
4 Linear Support Vector Machines 41(16)
4.1 Introduction 41(2)
4.1.1 Similarity with Perceptron 41(1)
4.1.2 Differences Between Perceptron and SVM 42(1)
4.1.3 Important Properties of SVM 42(1)
4.2 Linear SVM 43(6)
4.2.1 Linear Separability 43(1)
4.2.2 Margin 44(2)
4.2.3 Maximum Margin 46(1)
4.2.4 An Example 47(2)
4.3 Dual Problem 49(2)
4.3.1 An Example 50(1)
4.4 Multiclass Problems 51(1)
4.5 Experimental Results 52(2)
4.5.1 Results on Multiclass Classification 52(2)
4.6 Summary 54(3)
References 56(1)
5 Kernel-Based SVM 57(12)
5.1 Introduction 57(2)
5.1.1 What Happens if the Data Is Not Linearly Separable? 57(1)
5.1.2 Error in Classification 58(1)
5.2 Soft Margin Formulation 59(1)
5.2.1 The Solution 59(1)
5.2.2 Computing b 60(1)
5.2.3 Difference Between the Soft and Hard Margin Formulations 60(1)
5.3 Similarity Between SVM and Perceptron 60(2)
5.4 Nonlinear Decision Boundary 62(2)
5.4.1 Why Transformed Space? 63(1)
5.4.2 Kernel Trick 63(1)
5.4.3 An Example 64(1)
5.4.4 Example Kernel Functions 64(1)
5.5 Success of SVM 64(1)
5.6 Experimental Results 65(2)
5.6.1 Iris Versicolour and Iris Virginica 65(1)
5.6.2 Handwritten Digit Classification 66(1)
5.6.3 Multiclass Classification with Varying Values of the Parameter C 66(1)
5.7 Summary 67(2)
References 67(2)
6 Application to Social Networks 69(16)
6.1 Introduction 69(3)
6.1.1 What Is a Network? 69(1)
6.1.2 How Do We Represent It? 69(3)
6.2 What Is a Social Network? 72(2)
6.2.1 Citation Networks 73(1)
6.2.2 Coauthor Networks 73(1)
6.2.3 Customer Networks 73(1)
6.2.4 Homogeneous and Heterogeneous Networks 73(1)
6.3 Important Properties of Social Networks 74(1)
6.4 Characterization of Communities 75(2)
6.4.1 What Is a Community? 75(1)
6.4.2 Clustering Coefficient of a Subgraph 76(1)
6.5 Link Prediction 77(2)
6.5.1 Similarity Between a Pair of Nodes 78(1)
6.6 Similarity Functions 79(4)
6.6.1 Example 80(1)
6.6.2 Global Similarity 81(1)
6.6.3 Link Prediction Based on Supervised Learning 82(1)
6.7 Summary 83(2)
References 83(2)
7 Conclusion 85(4)
Glossary 89(2)
Index 91