
Hands-on Generative AI Engineering with Large Language Model


Published 8/2024
Created by Quang Tan DUONG
MP4 | Video: h264, 1280×720 | Audio: AAC, 44.1 KHz, 2 Ch
Genre: eLearning | Language: English | Duration: 160 Lectures (6h 18m) | Size: 2.76 GB

Implementing Transformer, Training, Fine-tuning | GenAI applications: AI Assistant, Chatbot, RAG, Agent | Deployment

What you’ll learn:
Understanding how to build, implement, train, and perform inference on a Large Language Model, such as the Transformer ("Attention Is All You Need"), from scratch (see the sketch after this list).
Gaining knowledge of the different components, tools, and frameworks required to build an LLM-based application.
Learning how to serve and deploy your LLM-based application from scratch.
Engaging in hands-on technical implementations: notebooks, Python scripts, building the model as a Python package, training, inference, fine-tuning, deployment, and more.
Receiving guidance on advanced engineering topics in Generative AI with Large Language Models.
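
As a taste of the from-scratch work described above, here is a minimal sketch of the multi-head attention block at the heart of the Transformer, written in PyTorch. The class name, dimensions, and tensor shapes are illustrative assumptions, not the course's actual package code.

```python
import math
import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    """Illustrative multi-head attention ("Attention Is All You Need" style);
    not the course's actual implementation."""

    def __init__(self, d_model: int = 512, num_heads: int = 8):
        super().__init__()
        assert d_model % num_heads == 0
        self.d_head = d_model // num_heads
        self.num_heads = num_heads
        # Learned projections for queries, keys, values, and the output.
        self.w_q = nn.Linear(d_model, d_model)
        self.w_k = nn.Linear(d_model, d_model)
        self.w_v = nn.Linear(d_model, d_model)
        self.w_o = nn.Linear(d_model, d_model)

    def forward(self, x, mask=None):
        batch, seq_len, d_model = x.shape

        # Project and split into heads: (batch, heads, seq_len, d_head).
        def split(t):
            return t.view(batch, seq_len, self.num_heads, self.d_head).transpose(1, 2)

        q, k, v = split(self.w_q(x)), split(self.w_k(x)), split(self.w_v(x))

        # Scaled dot-product attention.
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_head)
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float("-inf"))
        attn = torch.softmax(scores, dim=-1)

        # Merge the heads back and apply the output projection.
        out = (attn @ v).transpose(1, 2).reshape(batch, seq_len, d_model)
        return self.w_o(out)

# Quick shape check on random data.
x = torch.randn(2, 10, 512)
print(MultiHeadAttention()(x).shape)  # torch.Size([2, 10, 512])
```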

Requirements:
No prior experience in Generative AI, Large Language Models, Natural Language Processing, or Python is needed. This course will provide you with everything you need to enter this field with enthusiasm and curiosity. Concepts and components are first explained theoretically and through documentation, followed by hands-on technical implementations. All code snippets are explained step-by-step, with accompanying Notebook playgrounds and complete Python source code, structured to ensure a clear and comprehensive understanding.

Description:
Dive into the rapidly evolving world of Generative AI with our comprehensive course, designed for learners eager to build, train, and deploy Large Language Models (LLMs) from scratch. This course equips you with a wide range of tools, frameworks, and techniques to create your GenAI applications using Large Language Models, including Python, PyTorch, LangChain, LlamaIndex, Hugging Face, FAISS, Chroma, Tavily, Streamlit, Gradio, FastAPI, Docker, and more.

This hands-on course covers essential topics such as implementing Transformers, fine-tuning models, prompt engineering, vector embeddings, and vector stores, and creating cutting-edge AI applications like AI Assistants, Chatbots, Retrieval-Augmented Generation (RAG) systems, and autonomous agents, as well as deploying your GenAI applications from scratch using REST APIs and Docker containerization. By the end of this course, you will have the practical skills and theoretical knowledge needed to engineer and deploy your own LLM-based applications.

Let's look at our table of contents:

Introduction to the Course
  Course Objectives
  Course Structure
  Learning Paths
Part 1: Software Prerequisites for Python Projects
  IDE
    VS Code
    PyCharm
  Terminal
    Windows: PowerShell, etc.
    macOS: iTerm2, etc.
    Linux: Bash, etc.
  Python Installation
    Python installer
    Anaconda distribution
  Python Environment
    venv
    conda
  Python Package Installation
    PyPI, pip
    Anaconda, conda
  Software Used in This Course
Part 2: Introduction to Transformers
  Introduction to NLP Before and After the Transformer's Arrival
  Mastering Transformers Block by Block
  Transformer Training Process
  Transformer Inference Process
Part 3: Implementing Transformers from Scratch with PyTorch
  Introduction to the Training Process Implementation
  Implementing a Transformer as a Python Package
  Calling the Training and Inference Processes
  Experimenting with Notebooks
Part 4: Generative AI with the Hugging Face Ecosystem
  Introduction to Hugging Face
  Hugging Face Hubs
    Models
    Datasets
    Spaces
  Hugging Face Libraries
    Transformers
    Datasets
    Evaluate, etc.
  Practical Guides with Hugging Face
  Fine-Tuning a Pre-trained Language Model with Hugging Face
    End-to-End Fine-Tuning Example
    Sharing Your Model
Part 5: Components to Build LLM-Based Web Applications
  Backend Components
    LLM Orchestration Frameworks: LangChain, LlamaIndex
    Open-Source vs. Proprietary LLMs
    Vector Embedding
    Vector Database
    Prompt Engineering
  Frontend Components
    Python-Based Frontend Frameworks: Streamlit, Gradio
Part 6: Building LLM-Based Web Applications
  Task-Specific AI Assistants
    Culinary AI Assistant
    Marketing AI Assistant
    Customer AI Assistant
    SQL-Querying AI Assistant
    Travel AI Assistant
    Summarization AI Assistant
    Interview AI Assistant
  Simple AI Chatbot
  RAG (Retrieval-Augmented Generation) Based AI Chatbot
    Chat with PDF, DOCX, CSV, TXT, Webpage
  Agent-Based AI Chatbot
    AI Chatbot with Math Problems
    AI Chatbot with Search Problems
Part 7: Serving LLM-Based Web Applications
  Creating the Frontend and Backend as Two Separate Services
  Communicating Between Frontend and Backend Using a REST API
  Serving the Application with Docker
    Install, Run, and Enable Communication Between Frontend and Backend in a Single Docker Container
  Use Case: An LLM-Based Song Recommendation App
Conclusions and Next Steps
  What We Have Learned
  Next Steps
  Thank You
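
To illustrate the kind of workflow Part 4 of the outline above walks through, here is a rough sketch of fine-tuning a pre-trained model with the Hugging Face Trainer. The checkpoint, dataset, and hyperparameters are placeholder assumptions rather than the course's choices.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Placeholder checkpoint and dataset; the course may use different ones.
checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="finetuned-model",
    per_device_train_batch_size=16,
    num_train_epochs=1,
)

# Train on a small subset just to keep the sketch quick to run.
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].shuffle(seed=42).select(range(500)),
)
trainer.train()
# trainer.push_to_hub() can then share the fine-tuned model on the Hugging Face Hub.
```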

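Part 6's RAG-based chatbot ("chat with PDF") combines a document loader, a text splitter, a vector store, and an LLM. A hedged sketch of that pipeline with LangChain and FAISS might look like the following; LangChain's APIs change quickly, so imports and class names may differ by version, and the file and model names are placeholders.

```python
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain.chains import RetrievalQA

# 1. Load and split the document into chunks ("example.pdf" is a placeholder path).
docs = PyPDFLoader("example.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# 2. Embed the chunks and index them in a FAISS vector store.
vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())

# 3. Answer questions by retrieving relevant chunks and passing them to the LLM.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-4o-mini"),  # placeholder model name
    retriever=vectorstore.as_retriever(search_kwargs={"k": 4}),
)
print(qa.invoke({"query": "What is this document about?"}))
```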

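Part 7 serves the frontend and backend as separate services that communicate over a REST API. The sketch below shows what a minimal FastAPI backend for that setup could look like; the endpoint path, payload shape, and generate() stub are assumptions for illustration, not the course's actual code.

```python
# backend.py - a minimal sketch of a backend service in the spirit of Part 7.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    message: str

def generate(prompt: str) -> str:
    # Placeholder for the real LLM call (e.g., a Hugging Face pipeline or an API client).
    return f"Echo: {prompt}"

@app.post("/chat")
def chat(req: ChatRequest) -> dict:
    return {"answer": generate(req.message)}

# A frontend (e.g., a Streamlit app) would then call the backend over REST:
#   import requests
#   answer = requests.post("http://backend:8000/chat", json={"message": "Hi"}).json()["answer"]
# In Part 7 both services are packaged with Docker; "backend" here stands in for the service name.
```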