
Ultimate ONNX for Deep Learning Optimization
Bringing Deep Learning Models to the Edge Efficiently Using ONNX.

Key Features
● Master end-to-end ONNX workflows, from framework export to edge deployment.
● Hands-on optimization techniques such as quantization, pruning, and knowledge distillation for real-world edge AI performance.
● Production-grade case studies across vision, speech, and language models on edge devices.

Book Description
ONNX has emerged as the de facto standard for deploying portable, framework-agnostic machine learning models across diverse hardware platforms. Ultimate ONNX for Deep Learning Optimization provides a structured, end-to-end guide to the ONNX ecosystem, starting with ONNX fundamentals, model representation, and framework integration. You will learn how to export models from PyTorch, TensorFlow, and scikit-learn, inspect and modify ONNX graphs, and leverage ONNX Runtime and ONNX Simplifier for inference optimization (illustrative sketches of this workflow follow the listing below). Each chapter builds technical depth, equipping you with the tools required to move models beyond experimentation.

The book focuses on performance-critical optimization techniques, including quantization, pruning, and knowledge distillation, followed by practical deployment on edge devices such as the Raspberry Pi. Through complete, real-world case studies covering object detection, speech recognition, and compact language models, you will learn to implement custom operators, follow deployment best practices, and understand production constraints. By the end of this book, you will be able to design, optimize, and deploy efficient ONNX-based AI systems for edge environments.

What You Will Learn
● Design and understand ONNX models, graphs, operators, and runtimes.
● Convert and integrate models from PyTorch, TensorFlow, and scikit-learn.
● Optimize inference using graph simplification, quantization, and pruning.
● Apply knowledge distillation to retain accuracy on constrained devices.
● Deploy and benchmark ONNX models on Raspberry Pi and edge hardware.
● Build custom ONNX operators and extend models beyond standard layers.

Table of Contents
1. Introduction to ONNX and Edge Computing
2. Getting Started with ONNX
3. ONNX Integration with Deep Learning Frameworks
4. Model Optimization Using ONNX Simplifier and ONNX Runtime
5. Model Quantization Using ONNX Runtime
6. Model Pruning in PyTorch and Exporting to ONNX
7. Knowledge Distillation for Edge AI
8. Deploying ONNX Models on Edge Devices
9. End-to-End Execution of YOLOv12
10. End-to-End Execution of Whisper Speech Recognition Model
11. End-to-End Execution of SmolLM Model
12. ONNX Model from Scratch and Custom Operators
13. Real-World Applications, Best Practices, Security, and Future Trends in ONNX for Edge AI
Index

About the Authors
Meet Patel is a machine learning engineer with over seven years of experience dedicated to a single challenge: making Artificial Intelligence (AI) faster, smaller, and more efficient. His passion lies in unlocking the potential of AI on resource-constrained devices, pushing models from the lab into the real world. His transition into AI from a mechanical engineering background reflects a journey fueled by curiosity, self-motivation, and a drive to master the intricacies of machine learning. Meet has extensive hands-on experience in taking models from initial research and training through advanced optimization techniques such as quantization, pruning, and knowledge distillation, all the way to compiler-level enhancements and final deployment.
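To give a flavor of the export-to-inference workflow the description mentions, here is a minimal sketch of exporting a PyTorch model to ONNX and running it with ONNX Runtime. The tiny network and the file name tiny_model.onnx are hypothetical stand-ins, not examples from the book.

```python
# Minimal sketch: export a small PyTorch model to ONNX, then run it with
# ONNX Runtime. The model and file name are illustrative placeholders.
import numpy as np
import onnxruntime as ort
import torch
import torch.nn as nn

# A tiny stand-in network (hypothetical); any nn.Module exports the same way.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
model.eval()

dummy_input = torch.randn(1, 16)

# Export to the portable ONNX format, allowing a dynamic batch dimension.
torch.onnx.export(
    model,
    dummy_input,
    "tiny_model.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)

# Run the exported graph with ONNX Runtime on CPU.
session = ort.InferenceSession("tiny_model.onnx", providers=["CPUExecutionProvider"])
outputs = session.run(None, {"input": np.random.randn(2, 16).astype(np.float32)})
print(outputs[0].shape)  # (2, 4)
```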
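Graph simplification with ONNX Simplifier, covered in Chapter 4, can be sketched in a few lines. This assumes the onnxsim package (pip install onnxsim) and reuses the hypothetical tiny_model.onnx from above.

```python
# Minimal sketch: simplify an ONNX graph with onnx-simplifier.
import onnx
from onnxsim import simplify

model = onnx.load("tiny_model.onnx")  # hypothetical file from the export sketch
model_simplified, check_ok = simplify(model)
assert check_ok, "simplified model failed the output-equivalence check"
onnx.save(model_simplified, "tiny_model_simplified.onnx")
```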
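Finally, a minimal sketch of the quantization step (Chapter 5) using ONNX Runtime's post-training dynamic quantization, again on the hypothetical file from the sketches above. Dynamic quantization stores weights in 8-bit integers while quantizing activations on the fly, which typically shrinks the model and speeds up CPU inference on edge hardware.

```python
# Minimal sketch: post-training dynamic quantization with ONNX Runtime.
from onnxruntime.quantization import QuantType, quantize_dynamic

quantize_dynamic(
    model_input="tiny_model.onnx",        # hypothetical input model
    model_output="tiny_model_int8.onnx",  # quantized output model
    weight_type=QuantType.QInt8,          # store weights as signed 8-bit ints
)
```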