E-book: Subtitling Through Speech Recognition: Respeaking

  • Pages: 196
  • Series: Translation Practices Explained
  • Publication date: 30-Sep-2020
  • Publisher: St Jerome Publishing
  • Language: English
  • ISBN-13: 9781000154696
  • Format: EPUB+DRM
  • Price: 77,63 €*
  • * This is the final price, i.e. no additional discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned, and no refunds are given for purchased e-books.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Use:

    Digital Rights Management (DRM)
    The publisher has supplied this book in encrypted form, which means that you need to install free software in order to unlock and read it. To read this e-book you must create an Adobe ID. The e-book can be read and downloaded on up to 6 devices (by one user with the same Adobe ID).

    Required software
    To read this e-book on a mobile device (phone or tablet), you will need to install this free app: PocketBook Reader (iOS / Android)

    To download and read this e-book on a PC or Mac, you will need Adobe Digital Editions (a free app designed specifically for e-books; it is not the same as Adobe Reader, which you may already have on your computer).

    You cannot read this e-book on an Amazon Kindle.

Based on sound research and first-hand experience in the field, Subtitling through Speech Recognition: Respeaking is the first book to present a comprehensive overview of the production of subtitles through speech recognition in Europe. Topics covered include the origins of subtitling for the deaf and hard of hearing, the different methods used to provide live subtitles, and the training and professional practice of respeaking around the world. The core of the book is devoted to elaborating an in-depth respeaking course, including the skills required before, during and after the respeaking process. The volume also offers a detailed analysis of the reception of respeaking, featuring information about viewers' preferences, comprehension and perception of respoken subtitles obtained with eye-tracking technology.

Accompanying downloadable resources feature a wealth of video clips and documents designed to illustrate the material in the book and to serve as a basis for the exercises included at the end of each chapter. The working language of the book is English, but the downloadable resources also contain sample material in Dutch, French, Galician, German, Italian and Spanish.

Subtitling through Speech Recognition: Respeaking is designed for use as a coursebook for classroom practice or as a handbook for self-learning. It will be of interest to undergraduate and postgraduate students as well as freelance and in-house language professionals. It will also find a reading public among broadcasters, cinema, theatre and museum managers, as well as the deaf and members of deaf associations, who may use the volume to support future campaigns and enhance the quality of the speech-to-text accessibility they provide to their members.

Reviews

'... a must-have for students, trainers and professionals ... not only the first, but probably the ultimate live subtitling textbook.' Aline Remael, Artesis University College Antwerp

'... thorough and comprehensive ... a brave and pioneering work bound to become a classic from the word go. Inspiring, engaging, superbly written, it offers a state-of-the-art account of a field notoriously under-researched. A prime example of solid research and scholarship, a must read for anyone who wants to keep abreast with all the new developments taking place in Audiovisual Translation.' Jorge Díaz Cintas, Imperial College London, UK

List of abbreviations, figures and tables
Contents of Accompanying DVD
Acknowledgements
How to Use this Book and DVD
1 Introduction to Respeaking 1(5)
1.1 What is respeaking? 1(1)
1.2 The name game 2(3)
1.3 Discussion points 5(1)
1.3.1 Definition 5(1)
1.3.2 Respeaking terminology in English 5(1)
1.3.3 Respeaking terminology in other languages 5(1)
2 Live Subtitling 6(16)
2.0 Introduction 6(1)
2.1 Origins of SDH and live subtitling 6(3)
2.2 Legislation and developments 9(2)
2.3 Classification and methods 11(7)
2.3.1 Programme type: live, as-live, pre-recorded 12(1)
2.3.2 Production approach: live, semi-live, pre-recorded 12(1)
2.3.3 Language: intralingual or interlingual 12(1)
2.3.4 Transcription method: QWERTY, Velotype, dual, stenotype and SR (respeaking) 13(3)
2.3.5 Correction method: no correction, self-correction, parallel correction 16(1)
2.3.6 Editing policy: verbatim, reduced 16(1)
2.3.7 Display mode: blocks, scrolling 17(1)
2.3.8 SDH features: none, character ID, sound information 17(1)
2.4 Discussion points and exercises 18(4)
2.4.1 Legislation 18(1)
2.4.2 Users 18(1)
2.4.3 Guidelines 18(1)
2.4.4 Different approaches to live subtitling 19(1)
2.4.5 Assessment of live subtitles 20(2)
3 Respeaking as a Professional Practice 22(23)
3.0 Introduction 22(1)
3.1 Respeaking on TV 22(18)
3.1.1 Respeaking in the UK 22(1)
3.1.1.1 Companies 22(1)
3.1.1.2 Working conditions 23(1)
3.1.1.3 Recruitment and training 24(1)
3.1.1.4 Respoken subtitles 25(2)
3.1.2 Respeaking in Spain 27(1)
3.1.3 Respeaking in Flanders 28(2)
3.1.4 Respeaking in Switzerland 30(4)
3.1.5 Respeaking in Denmark 34(1)
3.1.6 Respeaking in France 35(1)
3.1.7 Respeaking in Italy 36(1)
3.1.8 Respeaking in Canada 37(1)
3.1.9 Voice Writing in the US 38(2)
3.2 Respeaking training at University 40(2)
3.2.1 Universitat Autonoma de Barcelona 40(1)
3.2.2 Roehampton University 41(1)
3.2.3 Higher Institute for Translation and Interpreting of Artesis University College (Antwerp, Belgium) 41(1)
3.3 Respeaking training in the US 42(1)
3.4 Discussion points and exercises 43(2)
3.4.1 Respeaking in the UK 43(1)
3.4.2 Respeaking in Flanders 43(1)
3.4.3 Respeaking in Switzerland and Italy 44(1)
3.4.4 Respeaking in Canada, France and the US 44(1)
3.4.5 Respeaking at University 44(1)
4 Respeaking Skills 45(11)
4.0 Introduction 45(1)
4.1 Respeaking and interpreting 45(2)
4.2 Respeaking and subtitling 47(1)
4.3 The specificity of respeaking 48(2)
4.4 Respeaking skills summarized 50(5)
4.5 Discussion points and exercises 55(1)
4.5.1 Respeaking skills as viewed by respeakers and employers 55(1)
4.5.2 Research 55(1)
5 Respeaking Skills Applied before the Process I: General Knowledge of SR 56(18)
5.0 Introduction 56(1)
5.1 How it works: main components and process 57(3)
5.1.1 Main components 57(1)
5.1.2 Process 58(2)
5.2 How it works for respeakers 60(1)
5.3 The origins of SR 61(2)
5.4 The present: state of the art and software available 63(8)
5.4.1 Viascribe 63(1)
5.4.2 Windows Speech Recognition 64(1)
5.4.3 Via Voice 65(1)
5.4.4 Dragon NaturallySpeaking 66(1)
5.4.4.1 Dragon 10 66(2)
5.4.4.2 Dragon 11 68(1)
5.4.5 Speaker-independent SR: Google, LLC and MIT 69(1)
5.4.6 Subtitling software to use with SR 70(1)
5.4.7 Screencasting software to use with SR 70(1)
5.5 The future of SR 71(1)
5.6 Discussion points and exercises 71(3)
5.6.1 Viascribe 71(1)
5.6.2 Windows Speech Recognition, Via Voice and Dragon 72(1)
5.6.3 Speaker-independent SR 72(1)
5.6.4 Automatic punctuation in SR 73(1)
6 Respeaking Skills Applied before the Process II: Preparation of the Software -- Respeaking with Dragon 74(21)
6.0 Introduction 74(1)
6.1 Choosing and using a microphone 74(1)
6.1.1 Type of microphone 74(1)
6.1.2 Setup 75(1)
6.2 Creating a user profile 75(1)
6.3 Dictating to SR software 76(2)
6.4 Improving the user profile 78(14)
6.4.1 Speed settings: faster display in Dragon 78(2)
6.4.2 Initial dictation and use of commands 80(4)
6.4.3 How to correct errors 84(2)
6.4.4 Refining the acoustic model 86(1)
6.4.5 Refining the language model: customisation of the vocabulary 86(1)
6.4.5.1 Adding new words 86(1)
6.4.5.2 Adding words/phrases from lists 87(1)
6.4.5.3 Adding words from documents & adapting to writing style 88(1)
6.4.5.4 Use of macros: the vocabulary Editor 89(3)
6.4.5.5 The Dragon Vocabulary Tool (Voctool) and the middle slot 92(1)
6.5 Dragon 11 92(1)
6.6 Exercises 93(2)
6.6.1 Creating a user profile 93(1)
6.6.2 Dictating to SR software and improving the user profile 94(1)
7 Respeaking Skills Applied During the Process I 95(28)
7.0 Introduction 95(1)
7.1 Split attention: dealing with simultaneous but non-overlapping inputs 95(6)
7.1.1 Listening and speaking (and listening again) 95(2)
7.1.2 Watching: reading and keeping the audiovisual coherence 97(2)
7.1.3 Typing 99(1)
7.1.4 Dealing with simultaneous but non-overlapping inputs 100(1)
7.2 Punctuation 101(6)
7.2.1 Automatic vs. non-automatic punctuation 101(1)
7.2.2 Punctuation in respeaking 102(1)
7.2.3 The use of the comma in respeaking 103(4)
7.3 Rhythm: respeaking units and the salami technique 107(5)
7.3.1 Decalage and units of meaning in interpreting 107(1)
7.3.2 Unit level: respeaking units 108(1)
7.3.3 Sentence level: the salami technique 109(3)
7.4 Speed: Edited vs. verbatim respeaking 112(8)
7.4.1 More to speed than meets the eye: the parties involved 112(2)
7.4.2 Speech rates 114(1)
7.4.3 Reading rates 114(2)
7.4.4 Respeaking rates 116(2)
7.4.5 Edited vs. verbatim respeaking 118(1)
7.4.6 Training respeaking speed 119(1)
7.5 Exercises 120(3)
7.5.1 Split attention 120(1)
7.5.2 Punctuation 121(1)
7.5.3 Rhythm 121(1)
7.5.4 Speed 122(1)
8 Respeaking Skills Applied during the Process II: Respeaking Different Genres 123(15)
8.0 Introduction 123(1)
8.1 Sports 123(2)
8.2 News 125(5)
8.2.1 Headlines 126(2)
8.2.2 News reports 128(1)
8.2.3 Weather forecasts 129(1)
8.2.4 News summary 130(1)
8.3 Interviews, debates and chat shows 130(2)
8.4 Exercises 132(6)
8.4.1 Sports 132(2)
8.4.2 News 134(1)
8.4.2.1 Headlines 134(1)
8.4.2.2 News 134(1)
8.4.2.3 Weather 135(1)
8.4.2.4 News Summary 136(1)
8.4.3 Interviews and debates 136(2)
9 Respeaking Skills Applied during the Process III: Respeaking in other Settings 138(12)
9.0 Introduction 138(1)
9.1 Respeaking in museums and other arts venues 138(4)
9.2 Respeaking in the classroom 142(1)
9.3 Respeaking in conferences and churches 143(1)
9.4 Respeaking in live webcasts and telephones 144(2)
9.5 Discussion points and exercises 146(4)
9.5.1 Respeaking in museums and other arts venues 146(1)
9.5.2 Respeaking in the classroom 147(1)
9.5.3 Respeaking in conferences, churches, live webcasts/telephones 148(2)
10 Respeaking Skills Applied after the Process: Accuracy Rate - the NERD model 150(12)
10.0 Introduction 150(1)
10.1 Basic requirements 150(1)
10.2 Traditional WER methods 151(1)
10.3 The CRIM method 152(1)
10.4 The NERD model 152(2)
10.5 Application of the NERD model 154(7)
10.6 Exercises and discussion points 161(1)
11 The Reception of Respeaking 162(15)
11.0 Introduction 162(1)
11.1 Viewers' comprehension of respoken subtitles 162(4)
11.1.1 Description of the experiment 163(1)
11.1.2 Findings 164(1)
11.1.3 Discussion 165(1)
11.2 Viewers' perception of respoken subtitles 166(5)
11.2.1 Eye-tracking and subtitling 166(2)
11.2.2 Description of the experiment 168(1)
11.2.3 Findings 168(1)
11.2.4 Discussion 169(2)
11.3 Viewers' opinion about respoken subtitles 171(4)
11.3.1 Introduction 171(1)
11.3.2 Description of the survey 172(1)
11.3.3 Results of the survey 173(2)
11.4 Discussion points and exercises 175(2)
11.4.1 Processing subtitles: eye-tracking 175(1)
11.4.2 The viewers' opinion 176(1)
12 Final Thoughts 177(2)
Glossary 179(4)
References 183(6)
Index 189
Pablo Romero-Fresco is a Senior Lecturer in Audiovisual Translation at Roehampton University, London, UK. He has worked as a respeaker for the National Gallery in the UK and has provided respeaking training to universities and companies around the world. As a member of the research group TransMedia Catalonia, he has published and carried out research on dubbing, subtitling and audio-description, and has coordinated the subtitling part of DTV4ALL, an EU-funded research project exploring the possibility of providing a common standard for Subtitling for the Deaf and Hard of Hearing in Europe.