
E-book: Memory-Based Language Processing

  • Format - PDF+DRM
  • Price: 44,00 €*
  • * This is the final price, i.e. no additional discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned, and payments for purchased e-books are not refunded.

DRM restrictions

  • Copying (copy/paste): not allowed
  • Printing: not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has supplied this book in encrypted form, which means that you need to install free software in order to unlock and read it. To read this e-book you must create an Adobe ID. More information here. The e-book can be read and downloaded on up to 6 devices (by a single user with the same Adobe ID).

    Required software
    To read this e-book on a mobile device (phone or tablet), you will need to install this free app: PocketBook Reader (iOS / Android)

    To download and read this e-book on a PC or Mac, you will need Adobe Digital Editions (a free app designed specifically for e-books; it is not the same as Adobe Reader, which you may already have on your computer).

    This e-book cannot be read on an Amazon Kindle.

Memory-based language processing - a machine learning and problem-solving method for language technology - is based on the idea that the direct reuse of examples using analogical reasoning is more suited for solving language processing problems than the application of rules extracted from those examples. This book discusses the theory and practice of memory-based language processing, showing its comparative strengths over alternative methods of language modelling. Language is complex, with few generalizations, many sub-regularities and exceptions, and the advantage of memory-based language processing is that it does not abstract away from this valuable low-frequency information. By applying the model to a range of benchmark problems, the authors show that for linguistic areas ranging from phonology to semantics, it produces excellent results. They also describe TiMBL, a software package for memory-based language processing. The first comprehensive overview of the approach, this book will be invaluable for computational linguists, psycholinguists and language engineers.
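As a rough illustration of the idea described above, the sketch below (in Python, written for this description and not taken from the book or the TiMBL package) stores training examples verbatim and classifies a new instance by majority vote among its nearest stored neighbours under a simple feature-overlap metric. The toy German-plural memory and all names in the code are invented for illustration; TiMBL itself adds feature weighting, value-difference metrics and tree-based storage on top of this basic scheme.

    from collections import Counter

    def overlap_distance(a, b):
        """Number of mismatching feature values (simple overlap metric)."""
        return sum(1 for x, y in zip(a, b) if x != y)

    def classify(memory, instance, k=1):
        """Majority vote among the k stored examples nearest to the instance."""
        nearest = sorted(memory, key=lambda ex: overlap_distance(ex[0], instance))[:k]
        return Counter(label for _, label in nearest).most_common(1)[0][0]

    # Toy memory for German plural formation: last four letters of a noun -> plural suffix.
    memory = [
        (("h", "u", "n", "d"), "-e"),    # Hund -> Hunde
        (("k", "i", "n", "d"), "-er"),   # Kind -> Kinder
        (("f", "r", "a", "u"), "-en"),   # Frau -> Frauen
    ]

    # "Fund" is closest to the stored "Hund", so the analogy predicts "-e" (Funde).
    print(classify(memory, ("f", "u", "n", "d"), k=1))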

Additional information

This book discusses the theory and practice of memory-based language processing - a machine learning method for modelling language.
Preface 1(2)
Memory-Based Learning in Natural Language Processing 3(12)
Natural language processing as classification 6(3)
A linguistic example 9(3)
Roadmap and software 12(2)
Further reading 14(1)
Inspirations from linguistics and artificial intelligence 15(11)
Inspirations from linguistics 15(6)
Inspirations from artificial intelligence 21(1)
Memory-based language processing literature 22(2)
Conclusion 24(2)
Memory and Similarity 26(31)
German plural formation 27(1)
Similarity metric 28(17)
Information-theoretic feature weighting 29(2)
Alternative feature weighting methods 31(1)
Getting started with TiMBL 32(4)
Feature weighting in TiMBL 36(2)
Modified value difference metric 38(1)
Value clustering in TiMBL 39(3)
Distance-weighted class voting 42(2)
Distance-weighted class voting in TiMBL 44(1)
Analyzing the output of MBLP 45(1)
Displaying nearest neighbors in TiMBL 45(1)
Implementation issues 46(1)
TiMBL trees 47(1)
Methodology 47(8)
Experimental methodology in TiMBL 48(4)
Additional performance measures in TiMBL 52(3)
Conclusion 55(2)
Application to morpho-phonology 57(28)
Phonemization 59(14)
Memory-based word phonemization 59(1)
TreeTalk 60(7)
IGTree in TiMBL 67(2)
Experiments: applying IGTree to word phonemization 69(2)
TRIBL: trading memory for speed 71(2)
TRIBL in TiMBL 73(1)
Morphological analysis 73(7)
Dutch morphology 74(1)
Feature and class encoding 74(2)
Experiments: MBMA on Dutch wordforms 76(4)
Conclusion 80(3)
Further reading 83(2)
Application to shallow parsing 85(19)
Part-of-speech tagging 86(10)
Memory-based tagger architecture 87(1)
Results 88(2)
Memory-based tagging with Mbt and Mbtg 90(6)
Constituent chunking 96(3)
Results 96(1)
Using MBT and MBTG for chunking 97(2)
Relation finding 99(2)
Relation finder architecture 99(1)
Results 100(1)
Conclusion 101(1)
Further reading 102(2)
Abstraction and generalization 104(44)
Lazy versus eager learning 106(9)
Benchmark language learning tasks 107(4)
Forgetting by rule induction is harmful in language learning 111(4)
Editing examples 115(8)
Why forgetting examples can be harmful 123(5)
Generalizing examples 128(15)
Careful abstraction in memory-based learning 128(7)
Getting started with FAMBL 135(2)
Experiments with FAMBL 137(6)
Conclusion 143(2)
Further reading 145(3)
Extensions 148(20)
Wrapped progressive sampling 149(7)
The wrapped progressive sampling algorithm 150(2)
Getting started with wrapped progressive sampling 152(2)
Wrapped progressive sampling results 154(2)
Optimizing output sequences 156(8)
Stacking 157(3)
Predicting class n-grams 160(2)
Combining stacking and class n-grams 162(2)
Summary 164(1)
Conclusion 164(1)
Further reading 165(3)
Bibliography 168(18)
Index 186
Walter Daelemans is Professor of Computational Linguistics and AI in the Department of Linguistics, University of Antwerp. Antal van den Bosch is Assistant Professor in the Department of Computational Linguistics and AI, Tilburg University.