MLOps Engineering at Scale
Carl Osipov
RRP: NZ$ 123.99
Our price: NZ$ 117.79
Format: Paperback, 238 x 186 mm, 250 pages
Published: 9 Mar 2022 (US)
International import ETA: 10-19 days
ISBN: 9781617297762
Availability: Out of stock - currently no stock in-store; stock is sourced to your order
Deploying a machine learning model into a fully realized production system usually requires painstaking work by an operations team creating and managing custom servers. Cloud Native Machine Learning helps you bridge that gap by using the pre-built services provided by cloud platforms like Azure and AWS to assemble your ML system's infrastructure. Following a real-world use case for calculating taxi fares, you'll learn how to get a serverless ML pipeline up and running using AWS services. Clear and detailed tutorials show you how to develop reliable, flexible, and scalable machine learning systems without time-consuming management tasks or the costly overheads of physical hardware.

about the technology
Your new machine learning model is ready to put into production, and suddenly all your time is taken up by setting up your server infrastructure. Serverless machine learning offers a productivity-boosting alternative. It eliminates the time-consuming operations tasks from your machine learning lifecycle, letting out-of-the-box cloud services take over launching, running, and managing your ML systems. With the serverless capabilities of major cloud vendors handling your infrastructure, you're free to focus on tuning and improving your models.

about the book
Cloud Native Machine Learning is a guide to bringing your experimental machine learning code to production using serverless capabilities from major cloud providers. You'll start with best practices for your datasets, learning to bring VACUUM data-quality principles to your projects and to ensure that your datasets can be reproducibly sampled. Next, you'll learn to implement machine learning models with PyTorch, discovering how to scale up your models in the cloud and how to use PyTorch Lightning for distributed ML training. Finally, you'll tune and engineer your serverless machine learning pipeline for scalability, elasticity, and ease of monitoring with the built-in notification tools of your cloud platform. When you're done, you'll have the tools to easily bridge the gap between ML models and a fully functioning production system.

what's inside
Extracting, transforming, and loading datasets
Querying datasets with SQL
Understanding automatic differentiation in PyTorch (a short standalone illustration follows the author section below)
Deploying trained models and pipelines as a service endpoint
Monitoring and managing your pipeline's life cycle
Measuring performance improvements

about the reader
For data professionals with intermediate Python skills and basic familiarity with machine learning. No cloud experience required.

about the author
Carl Osipov has been working in the information technology industry since 2001, with a focus on projects in big data analytics and machine learning in multi-core, distributed systems, such as service-oriented architecture and cloud computing platforms. While at IBM, Carl helped IBM Software Group shape its strategy around the use of Docker and other container-based technologies for serverless cloud computing using IBM Cloud and Amazon Web Services. At Google, Carl learned from the world's foremost experts in machine learning and helped manage the company's efforts to democratize artificial intelligence with Google Cloud and TensorFlow. Carl is the author of over 20 articles in professional, trade, and academic journals; an inventor with six patents at the USPTO; and the holder of three corporate technology awards from IBM. You can learn more about Carl from his blog, Clouds With Carl.
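The book covers automatic differentiation in PyTorch among the topics listed above. As a quick taste of that topic, here is a minimal standalone autograd sketch (illustrative only, not an excerpt from the book), computing dy/dx for y = x**2 + 3*x at x = 2:

    # Minimal PyTorch autograd sketch (illustrative, not from the book)
    import torch

    x = torch.tensor(2.0, requires_grad=True)  # ask PyTorch to track gradients for x
    y = x ** 2 + 3 * x                          # forward pass builds the computation graph
    y.backward()                                # reverse-mode autodiff populates x.grad

    print(x.grad)  # tensor(7.) because dy/dx = 2*x + 3 = 7 at x = 2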

In stock - for items in stock we aim to dispatch the next business day. For delivery in NZ allow 2-5 business days, with rural taking a wee bit longer.

Locally sourced in NZ - stock comes from a NZ supplier with an approximate delivery of 7-15 business days.

International Imports - stock is imported into NZ; depending on whether the international supplier ships by air or sea, stock can take 10-30 working days to arrive in NZ.

Pre-order Titles - delivery will vary depending on where the title is published: if local stock is available in NZ, allow 5-7 business days; for international imports it can be 10-30 business days. In all cases we will use the quickest supply option.

Delivery Packaging - we ship all items in cardboard sleeves or boxes with either packing paper or corn starch chips. (We avoid using plastic bubble bags.)

Tracking - orders are delivered by track-and-trace courier and are fully insured; tracking information will be sent by email once dispatched.
