Hands-On LLM Serving and Optimization, Chi Wang, Peiheng Hu (9798341621497) — Readings Books


Hands-On LLM Serving and Optimization
Paperback

$179.99

Large language models (LLMs) are rapidly becoming the backbone of AI-driven applications. Without proper optimization, however, LLMs can be expensive to run, slow to serve, and prone to performance bottlenecks. As the demand for real-time AI applications grows, Hands-On LLM Serving and Optimization arrives as a comprehensive guide to the complexities of deploying and optimizing LLMs at scale.

In this hands-on book, authors Chi Wang and Peiheng Hu take a real-world approach backed by practical examples and code, and assemble essential strategies for designing robust infrastructures that are equal to the demands of modern AI applications. Whether you're building high-performance AI systems or looking to enhance your knowledge of LLM optimization, this indispensable book will serve as a pillar of your success.

Learn the key principles for designing a model-serving system tailored to popular business scenarios
Understand the common challenges of hosting LLMs at scale while minimizing costs
Pick up practical techniques for optimizing LLM serving performance
Build a model-serving system that meets specific business requirements
Improve LLM serving throughput and reduce latency
Host LLMs in a cost-effective manner, balancing performance and resource efficiency

In Shop: Out of stock

Format: Paperback
Publisher: O'Reilly Media
Country: United States
Date: 2 June 2026
Pages: 300
ISBN: 9798341621497
