
ChatGPT: Principles and Architecture [Paperback]

(Dr Cheng is the Deputy Director of the Technology Transfer Center at Xiangtan University and the Vice Dean of the JD Intelligent City and Big Data Research Institute in Xiangtan, China.)
  • Format: Paperback / softback, 230 pages, height x width: 229x152 mm, weight: 450 g
  • Publication date: 17-Jun-2025
  • Publisher: Elsevier - Health Sciences Division
  • ISBN-10: 0443274363
  • ISBN-13: 9780443274367
  • Price: €192.55
  • Delivery time is 3-4 weeks if the book is in stock at the publisher's warehouse. If the publisher needs to print a new run, delivery may be delayed.
ChatGPT: Principles and Architecture bridges the gap between theoretical AI concepts and their practical applications, equipping industry professionals and researchers with a deeper understanding of large language models so they can apply these technologies effectively in their own fields. It demystifies the underlying technologies and strategies used in developing ChatGPT and similar models, and by combining theoretical knowledge with real-world examples it enables readers to grasp the nuances of AI technologies, paving the way for innovative applications and solutions in their professional domains.

Sections focus on the principles, architecture, pretraining, transfer learning, and middleware programming techniques of ChatGPT, providing a useful resource for the research and academic communities. The book is also well suited to industry professionals, researchers, and students in AI and computer science who face daily challenges in understanding and implementing complex large language model technologies.
1. The New Milestone in AI - ChatGPT
2. In-Depth Understanding of Transformer Architecture
3. Generative Pretraining
4. Unsupervised Multi-task and Zero-shot Learning
5. Sparse Attention and Content-based Learning in GPT-3
6. Pretraining Strategies for Large Language Models
7. Proximal Policy Optimization Algorithms
8. Human Feedback Reinforcement Learning
9. Low-Compute Domain Transfer for Large Language Models
10. Middleware Programming
11. The Future Path of Large Language Models