
Building Deploying and Scaling LLM Powered Applications


Published 10/2023
Created by LLM Developer
MP4 | Video: h264, 1280×720 | Audio: AAC, 44.1 KHz, 2 Ch
Genre: eLearning | Language: English | Duration: 19 Lectures (2h 27m) | Size: 1.42 GB

Course 1 – Building and Scaling a Text Summarization Service Using Langchain, OpenAI and Amazon Web Services

What you’ll learn
You will learn to build a complete, scalable software application powered by a large language model and deploy it at scale on Amazon Web Services
You will learn to integrate your application's LLM-powered backend with a Streamlit UI frontend
You will first learn to test your application locally, then package it using Docker, and finally learn best practices for running Streamlit inside Docker
You will learn a template and best practices for injecting your OpenAI API key into your containerized application at run time (see the sketch after this list)
You will learn to identify vulnerabilities in your containerized application and best practices for resolving them
You will learn to design your system's architecture based on the components and design choices in your application
You will learn the differences between horizontal scaling and vertical scaling
You will learn in depth how to apply serverless deployment, load balancers, and auto scaling to your application
You will be able to apply what you learn to build, deploy, and scale other LLM-powered Langchain applications
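
To give a flavor of the key-injection pattern mentioned in the list above, here is a minimal sketch (not taken from the course itself) of a Streamlit front end wrapping a Langchain summarization chain. It reads the OpenAI API key from an environment variable at run time, so the key is supplied when the container starts rather than baked into the image. The file name app.py and the model/chain choices are illustrative assumptions.

```python
# app.py - minimal sketch (illustrative, not the course's actual code)
import os

import streamlit as st
from langchain.chat_models import ChatOpenAI
from langchain.chains.summarize import load_summarize_chain
from langchain.docstore.document import Document

# The key is injected at run time, e.g. `docker run -e OPENAI_API_KEY=... -p 8501:8501 <image>`,
# so it never has to be hard-coded or copied into the Docker image.
api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    st.error("OPENAI_API_KEY is not set; pass it to the container at run time.")
    st.stop()

llm = ChatOpenAI(openai_api_key=api_key, temperature=0)
chain = load_summarize_chain(llm, chain_type="stuff")  # single-pass summarization

st.title("Text Summarization Service")
text = st.text_area("Paste the text to summarize")

if st.button("Summarize") and text.strip():
    summary = chain.run([Document(page_content=text)])
    st.write(summary)
```

Inside the container, an app like this would typically be started with `streamlit run app.py --server.port 8501 --server.address 0.0.0.0` so that a load balancer can reach it on a known port.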

Requirements
Students must be able to write code in Python. Basic knowledge of Langchain is helpful (though it is discussed in the course videos), along with basic knowledge of AWS. Basic knowledge of Docker is preferred but not required, as the information needed to package applications for deployment is taught in the course.

Description
Are you ready to dive deep into the world of machine learning engineering and build powerful software applications? This course is designed to equip you with the skills and knowledge to harness the full potential of Langchain, integrate the OpenAI API, deploy applications on AWS Elastic Container Service, and manage scaling efficiently using Load Balancers and Auto Scaling Groups.

In this hands-on course, you'll learn how to create robust ML applications from the ground up. We'll start by mastering Langchain, a cutting-edge framework for building LLM-powered applications, and demonstrate how to seamlessly inject your OpenAI API key into the prediction pipeline at runtime. You'll gain proficiency in designing and developing applications that can understand, process, and generate human-like text.

As you progress, we'll explore the fundamental concepts of horizontal scaling and vertical scaling, providing a clear understanding of when and how to implement each strategy. You'll then discover how to scale your application with ease by deploying Application Load Balancers and Auto Scaling Groups on AWS, ensuring high availability and fault tolerance.

By the end of this course, you'll be well versed in building LLM-driven software applications, deploying them on AWS, and scaling them to meet the demands of your users. Join us on this exciting journey into the world of machine learning engineering and become a skilled practitioner in this rapidly evolving field.
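
To make the scaling idea concrete, the sketch below shows one possible setup (an assumption for illustration, not code from the course): it uses boto3's Application Auto Scaling API to let an ECS service running the containerized app scale horizontally between one and four tasks based on average CPU utilization. This is ECS service auto scaling, a close cousin of the EC2 Auto Scaling Groups mentioned above; the cluster and service names are placeholders.

```python
# scale_service.py - illustrative sketch; cluster/service names are placeholders
import boto3

autoscaling = boto3.client("application-autoscaling")

# Identify the ECS service (horizontal scaling = changing the number of running tasks).
resource_id = "service/summarizer-cluster/summarizer-service"  # placeholder names

# Register the service's desired task count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=1,
    MaxCapacity=4,
)

# Target-tracking policy: add or remove tasks to keep average CPU near 50%.
autoscaling.put_scaling_policy(
    PolicyName="summarizer-cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 50.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleOutCooldown": 60,
        "ScaleInCooldown": 120,
    },
)
```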

