LLM Engineering: Master AI, Large Language Models & Agents
-
Week 1 – Build Your First LLM Product: Exploring Top Models & Transformers
Day 1 – Cold Open: Jumping Right into LLM Engineering
Day 1 – Setting Up Ollama for Local LLM Deployment on Windows and Mac
Day 1 – Unleashing the Power of Local LLMs: Build a Spanish Tutor Using Ollama
Day 1 – LLM Engineering Roadmap: From Beginner to Master in 8 Weeks
Day 1 – Building LLM Applications: Chatbots, RAG, and Agentic AI Projects
Day 1 – From Wall Street to AI: Ed Donner’s Path to Becoming an LLM Engineer
Day 1 – Setting Up Your LLM Development Environment: Tools and Best Practices
Day 1 – Mac Setup Guide: Jupyter Lab and Conda for LLM Projects
Day 1 – Setting Up Anaconda for LLM Engineering: Windows Installation Guide
Day 1 – Alternative Python Setup for LLM Projects: Virtualenv vs. Anaconda Guide
Day 1 – Setting Up OpenAI API for LLM Development: Keys, Pricing & Best Practices
Day 1 – Creating a .env File for Storing API Keys Safely
Day 1 – Instant Gratification Project: Creating an AI-Powered Web Page Summarizer
Day 1 – Implementing Text Summarization Using OpenAI’s GPT-4 and Beautiful Soup
Day 1 – Wrapping Up Day 1: Key Takeaways and Next Steps in LLM Engineering
Day 2 – Mastering LLM Engineering: Key Skills and Tools for AI Development
Day 2 – Understanding Frontier Models: GPT, Claude, and Open Source LLMs
Day 2 – How to Use Ollama for Local LLM Inference: Python Tutorial with Jupyter
Day 2 – Hands-On LLM Task: Comparing OpenAI and Ollama for Text Summarization
Day 3 – Frontier AI Models: Comparing GPT-4, Claude, Gemini, and LLAMA
Day 3 – Comparing Leading LLMs: Strengths and Business Applications
Day 3 – Exploring GPT-4o vs O1 Preview: Key Differences in Performance
Day 3 – Creativity and Coding: Leveraging GPT-4o’s Canvas Feature
Day 3 – Claude 3.5’s Alignment and Artifact Creation: A Deep Dive
Day 3 – AI Model Comparison: Gemini vs Cohere for Whimsical and Analytical Tasks
Day 3 – Evaluating Meta AI and Perplexity: Nuances of Model Outputs
Day 3 – LLM Leadership Challenge: Evaluating AI Models Through Creative Prompts
Day 4 – Revealing the Leadership Winner: A Fun LLM Challenge
Day 4 – Exploring the Journey of AI: From Early Models to Transformers
Day 4 – Understanding LLM Parameters: From GPT-1 to Trillion-Weight Models
Day 4 – GPT Tokenization Explained: How Large Language Models Process Text Input
Day 4 – How Context Windows Impact AI Language Models: Token Limits Explained
Day 4 – Navigating AI Model Costs: API Pricing vs. Chat Interface Subscriptions
Day 4 – Comparing LLM Context Windows: GPT-4 vs Claude vs Gemini 1.5 Flash
Day 4 – Wrapping Up Day 4: Key Takeaways and Practical Insights
Day 5 – Building AI-Powered Marketing Brochures with OpenAI API and Python
Day 5 – JupyterLab Tutorial: Web Scraping for AI-Powered Company Brochures
Day 5 – Structured Outputs in LLMs: Optimizing JSON Responses for AI Projects
Day 5 – Creating and Formatting Responses for Brochure Content
Day 5 – Final Adjustments: Optimizing Markdown and Streaming in JupyterLab
Day 5 – Mastering Multi-Shot Prompting: Enhancing LLM Reliability in AI Projects
Day 5 – Assignment: Developing Your Customized LLM-Based Tutor
Day 5 – Wrapping Up Week 1: Achievements and Next Steps
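The Day 1 lesson on creating a .env file comes down to one habit: keep API keys out of source code and out of version control. A minimal sketch of the pattern (the file contents and fallback logic here are illustrative, not the course's exact code):

```python
import os

# A .env file sits next to your notebook and is listed in .gitignore.
# Its contents look like:
#   OPENAI_API_KEY=sk-...
# Libraries such as python-dotenv copy those lines into os.environ;
# here we simply read the variable directly, with a safe fallback.
api_key = os.environ.get("OPENAI_API_KEY", "")

if api_key.startswith("sk-"):
    print("API key found and looks well-formed")
else:
    print("Warning: OPENAI_API_KEY is missing or malformed")
```

The point of the indirection is that the same code runs on any machine: each developer supplies their own key through the environment rather than editing the source.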
-
Week 2 – Build a Multi-Modal Chatbot: LLMs, Gradio UI, and Agents in Action
Day 1 – Mastering Multiple AI APIs: OpenAI, Claude, and Gemini for LLM Engineers
Day 1 – Streaming AI Responses: Implementing Real-Time LLM Output in Python
Day 1 – How to Create Adversarial AI Conversations Using OpenAI and Claude APIs
Day 1 – AI Tools: Exploring Transformers & Frontier LLMs for Developers
Day 2 – Building AI UIs with Gradio: Quick Prototyping for LLM Engineers
Day 2 – Gradio Tutorial: Create Interactive AI Interfaces for OpenAI GPT Models
Day 2 – Implementing Streaming Responses with GPT and Claude in Gradio UI
Day 2 – Building a Multi-Model AI Chat Interface with Gradio: GPT vs Claude
Day 2 – Building Advanced AI UIs: From OpenAI API to Chat Interfaces with Gradio
Day 3 – Building AI Chatbots: Mastering Gradio for Customer Support Assistants
Day 3 – Build a Conversational AI Chatbot with OpenAI & Gradio: Step-by-Step
Day 3 – Enhancing Chatbots with Multi-Shot Prompting and Context Enrichment
Day 3 – Mastering AI Tools: Empowering LLMs to Run Code on Your Machine
Day 4 – Using AI Tools with LLMs: Enhancing Large Language Model Capabilities
Day 4 – Building an AI Airline Assistant: Implementing Tools with OpenAI GPT-4
Day 4 – How to Equip LLMs with Custom Tools: OpenAI Function Calling Tutorial
Day 4 – Mastering AI Tools: Building Advanced LLM-Powered Assistants with APIs
Day 5 – Multimodal AI Assistants: Integrating Image and Sound Generation
Day 5 – Multimodal AI: Integrating DALL-E 3 Image Generation in JupyterLab
Day 5 – Build a Multimodal AI Agent: Integrating Audio & Image Tools
Day 5 – How to Build a Multimodal AI Assistant: Integrating Tools and Agents
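The Week 2 chatbot lessons all revolve around one data structure: the running message history that Gradio hands to your chat function and that the OpenAI and Claude APIs expect as a flat list of role/content dicts. A minimal sketch of that pattern, assuming the classic Gradio convention of history as (user, assistant) pairs (function and variable names are illustrative, not the course's code):

```python
# Build the message list an LLM chat API expects from a system prompt,
# prior conversation turns, and the newest user message.
def build_messages(system_prompt, history, user_message):
    messages = [{"role": "system", "content": system_prompt}]
    for user_turn, assistant_turn in history:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": assistant_turn})
    messages.append({"role": "user", "content": user_message})
    return messages

msgs = build_messages(
    "You are a helpful airline assistant.",
    [("Hi", "Hello! How can I help?")],
    "What time is my flight?",
)
# msgs alternates: system, user, assistant, user
```

Because the model itself is stateless, rebuilding this list on every turn is what gives the chatbot its "memory" of the conversation.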
-
Week 3 – Open-Source Gen AI: Building Automated Solutions with HuggingFace
Day 1 – Hugging Face Tutorial: Exploring Open-Source AI Models and Datasets
Day 1 – Exploring HuggingFace Hub: Models, Datasets & Spaces for AI Developers
Day 1 – Intro to Google Colab: Cloud Jupyter Notebooks for Machine Learning
Day 1 – Hugging Face Integration with Google Colab: Secrets and API Keys Setup
Day 1 – Mastering Google Colab: Run Open-Source AI Models with Hugging Face
Day 2 – Hugging Face Transformers: Using Pipelines for AI Tasks in Python
Day 2 – Hugging Face Pipelines: Simplifying AI Tasks with Transformers Library
Day 2 – Mastering HuggingFace Pipelines: Efficient AI Inference for ML Tasks
Day 3 – Exploring Tokenizers in Open-Source AI: Llama, Phi-2, Qwen, & Starcoder
Day 3 – Tokenization Techniques in AI: Using AutoTokenizer with LLAMA 3.1 Model
Day 3 – Comparing Tokenizers: Llama, PHI-3, and QWEN2 for Open-Source AI Models
Day 3 – Hugging Face Tokenizers: Preparing for Advanced AI Text Generation
Day 4 – Hugging Face Model Class: Running Inference on Open-Source AI Models
Day 4 – Hugging Face Transformers: Loading & Quantizing LLMs with Bits & Bytes
Day 4 – Hugging Face Transformers: Generating Jokes with Open-Source AI Models
Day 4 – Mastering Hugging Face Transformers: Models, Pipelines, and Tokenizers
Day 5 – Combining Frontier & Open-Source Models for Audio-to-Text Summarization
Day 5 – Using Hugging Face & OpenAI for AI-Powered Meeting Minutes Generation
Day 5 – Build a Synthetic Test Data Generator: Open-Source AI Model for Business
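The Week 3 tokenizer lessons compare how real tokenizers (loaded with `AutoTokenizer` from the transformers library) map text to integer IDs drawn from a large learned vocabulary. The toy sketch below makes the encode/decode round trip concrete with a tiny made-up vocabulary and greedy longest-match encoding; it is an illustration of the idea, not how production tokenizers are implemented:

```python
# Made-up vocabulary mapping string pieces to token IDs.
VOCAB = {"hello": 0, "hel": 1, "lo": 2, " ": 3, "world": 4, "wor": 5, "ld": 6}

def encode(text):
    ids = []
    i = 0
    while i < len(text):
        # Greedily take the longest vocabulary entry matching at position i.
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in VOCAB:
                ids.append(VOCAB[piece])
                i = j
                break
        else:
            raise ValueError(f"cannot tokenize {text[i:]!r}")
    return ids

def decode(ids):
    reverse = {v: k for k, v in VOCAB.items()}
    return "".join(reverse[i] for i in ids)

tokens = encode("hello world")   # -> [0, 3, 4]
assert decode(tokens) == "hello world"
```

The course's point survives even in this toy: different models ship different vocabularies, so the same text encodes to different ID sequences (and different token counts) under Llama, Phi, or Qwen tokenizers.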
-
Week 4 – LLM Showdown: Evaluating Models for Code Generation & Business Tasks
Day 1 – How to Choose the Right LLM: Comparing Open and Closed Source Models
Day 1 – Chinchilla Scaling Law: Optimizing LLM Parameters and Training Data Size
Day 1 – Limitations of LLM Benchmarks: Overfitting and Training Data Leakage
Day 1 – Evaluating Large Language Models: 6 Next-Level Benchmarks Unveiled
Day 1 – HuggingFace OpenLLM Leaderboard: Comparing Open-Source Language Models
Day 1 – Master LLM Leaderboards: Comparing Open Source and Closed Source Models
Day 2 – Comparing LLMs: Top 6 Leaderboards for Evaluating Language Models
Day 2 – Specialized LLM Leaderboards: Finding the Best Model for Your Use Case
Day 2 – LLAMA vs GPT-4: Benchmarking Large Language Models for Code Generation
Day 2 – Human-Rated Language Models: Understanding the LMSYS Chatbot Arena
Day 2 – Commercial Applications of Large Language Models: From Law to Education
Day 2 – Comparing Frontier and Open-Source LLMs for Code Conversion Projects
Day 3 – Leveraging Frontier Models for High-Performance Code Generation in C++
Day 3 – Comparing Top LLMs for Code Generation: GPT-4 vs Claude 3.5 Sonnet
Day 3 – Optimizing Python Code with Large Language Models: GPT-4 vs Claude 3.5
Day 3 – Code Generation Pitfalls: When Large Language Models Produce Errors
Day 3 – Blazing Fast Code Generation: How Claude Outperforms Python by 13,000x
Day 3 – Building a Gradio UI for Code Generation with Large Language Models
Day 3 – Optimizing C++ Code Generation: Comparing GPT and Claude Performance
Day 3 – Comparing GPT-4 and Claude for Code Generation: Performance Benchmarks
Day 4 – Open Source LLMs for Code Generation: Hugging Face Endpoints Explored
Day 4 – How to Use HuggingFace Inference Endpoints for Code Generation Models
Day 4 – Integrating Open-Source Models with Frontier LLMs for Code Generation
Day 4 – Comparing Code Generation: GPT-4, Claude, and CodeQwen LLMs
Day 4 – Mastering Code Generation with LLMs: Techniques and Model Selection
Day 5 – Evaluating LLM Performance: Model-Centric vs Business-Centric Metrics
Day 5 – Mastering LLM Code Generation: Advanced Challenges for Python Developers
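The Week 4 code-conversion exercise boils down to wrapping Python source in a prompt that asks a model to reimplement it as high-performance C++, then compiling and timing the reply. A sketch of the prompt-building step, assuming illustrative names and wording (the course builds its own prompt and streams the model's response):

```python
# Sample slow Python function of the kind the course asks models to translate.
python_code = """
def calculate(iterations):
    result = 1.0
    for i in range(1, iterations + 1):
        result -= 1.0 / (i * 4.0 - 1.0)
    return result
"""

def build_conversion_prompt(source):
    # Instruct the model to emit only C++ so the reply can be compiled directly.
    return (
        "Rewrite this Python code in C++ with the fastest possible "
        "implementation that produces identical output. "
        "Respond only with C++ code.\n\n" + source
    )

prompt = build_conversion_prompt(python_code)
```

In the course workflow, the reply is saved to a .cpp file, compiled with an optimizing compiler, and benchmarked against the Python original — that comparison is where the headline speedups come from.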
-
Week 5 – Mastering RAG: Build Advanced Solutions with Vector Embeddings & LangChain
Day 1 – RAG Fundamentals: Leveraging External Data to Improve LLM Responses
Day 1 – Building a DIY RAG System: Implementing Retrieval-Augmented Generation
Day 1 – Understanding Vector Embeddings: The Key to RAG and LLM Retrieval
Day 2 – Unveiling LangChain: Simplify RAG Implementation for LLM Applications
Day 2 – LangChain Text Splitter Tutorial: Optimizing Chunks for RAG Systems
Day 2 – Preparing for Vector Databases: OpenAI Embeddings and Chroma in RAG
Day 3 – Mastering Vector Embeddings: OpenAI and Chroma for LLM Engineering
Day 3 – Visualizing Embeddings: Exploring Multi-Dimensional Space with t-SNE
Day 3 – Building RAG Pipelines: From Vectors to Embeddings with LangChain
Day 4 – Mastering Retrieval-Augmented Generation: Hands-On LLM Integration
Day 4 – Master RAG Pipeline: Building Efficient RAG Systems
Day 5 – Optimizing RAG Systems: Troubleshooting and Fixing Common Problems
Day 5 – Switching Vector Stores: FAISS vs Chroma in LangChain RAG Pipelines
Day 5 – Demystifying LangChain: Behind-the-Scenes of RAG Pipeline Construction
Day 5 – Debugging RAG: Optimizing Context Retrieval in LangChain
Day 5 – Build Your Personal AI Knowledge Worker: RAG for Productivity Boost
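At the heart of every Week 5 pipeline is one step: rank stored chunks by vector similarity to the query and paste the best matches into the prompt. In the course this runs on real embeddings (e.g. OpenAI's) stored in Chroma or FAISS; the pure-Python sketch below uses tiny 3-D toy vectors just to make the cosine-similarity ranking concrete. All names and numbers are illustrative:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Document chunks with pretend embedding vectors.
chunks = {
    "Our refund policy allows returns within 30 days.": [0.9, 0.1, 0.0],
    "The office is closed on public holidays.": [0.1, 0.8, 0.2],
    "Refunds are issued to the original payment card.": [0.8, 0.2, 0.1],
}

# Pretend embedding of the query "How do refunds work?"
query_vector = [0.85, 0.15, 0.05]

# Rank chunks by similarity and keep the top 2 as context for the LLM prompt.
ranked = sorted(
    chunks,
    key=lambda c: cosine_similarity(chunks[c], query_vector),
    reverse=True,
)
context = "\n".join(ranked[:2])
```

With real embeddings the principle is identical: the two refund-related chunks land closest to a refund question, so only relevant text reaches the model's context window.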
-
Week 6 – Fine-Tuning Frontier Large Language Models with LoRA/QLoRA
Day 1 – Fine-Tuning Large Language Models: From Inference to Training
Day 1 – Finding and Crafting Datasets for LLM Fine-Tuning: Sources & Techniques
Day 1 – Data Curation Techniques for Fine-Tuning LLMs on Product Descriptions
Day 1 – Optimizing Training Data: Scrubbing Techniques for LLM Fine-Tuning
Day 1 – Evaluating LLM Performance: Model-Centric vs Business-Centric Metrics
Day 2 – LLM Deployment Pipeline: From Business Problem to Production Solution
Day 2 – Prompting, RAG, and Fine-Tuning: When to Use Each Approach
Day 2 – Productionizing LLMs: Best Practices for Deploying AI Models at Scale
Day 2 – Optimizing Large Datasets for Model Training: Data Curation Strategies
Day 2 – How to Create a Balanced Dataset for LLM Training: Curation Techniques
Day 2 – Finalizing Dataset Curation: Analyzing Price-Description Correlations
Day 2 – How to Create and Upload a High-Quality Dataset on HuggingFace
Day 3 – Feature Engineering and Bag of Words: Building ML Baselines for NLP
Day 3 – Baseline Models in ML: Implementing Simple Prediction Functions
Day 3 – Feature Engineering Techniques for Amazon Product Price Prediction Models
Day 3 – Optimizing LLM Performance: Advanced Feature Engineering Strategies
Day 3 – Linear Regression for LLM Fine-Tuning: Baseline Model Comparison
Day 3 – Bag of Words NLP: Implementing Count Vectorizer for Text Analysis in ML
Day 3 – Support Vector Regression vs Random Forest: Machine Learning Face-Off
Day 3 – Comparing Traditional ML Models: From Random to Random Forest
Day 4 – Evaluating Frontier Models: Comparing Performance to Baseline Frameworks
Day 4 – Human vs AI: Evaluating Price Prediction Performance in Frontier Models
Day 4 – GPT-4o Mini: Frontier AI Model Evaluation for Price Estimation Tasks
Day 4 – Comparing GPT-4 and Claude: Model Performance in Price Prediction Tasks
Day 4 – Frontier AI Capabilities: LLMs Outperforming Traditional ML Models
Day 5 – Fine-Tuning LLMs with OpenAI: Preparing Data, Training, and Evaluation
Day 5 – How to Prepare JSONL Files for Fine-Tuning Large Language Models (LLMs)
Day 5 – Step-by-Step Guide: Launching GPT Fine-Tuning Jobs with OpenAI API
Day 5 – Fine-Tuning LLMs: Track Training Loss & Progress with Weights & Biases
Day 5 – Evaluating Fine-Tuned LLM Metrics: Analyzing Training & Validation Loss
Day 5 – LLM Fine-Tuning Challenges: When Model Performance Doesn’t Improve
Day 5 – Fine-Tuning Frontier LLMs: Challenges & Best Practices for Optimization
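The Week 6 Day 5 lessons prepare JSONL training files for OpenAI fine-tuning: one JSON object per line, each holding a complete chat example. The chat shape (a `messages` list of role/content dicts) is the format the OpenAI fine-tuning API documents; the product descriptions and prices below are made-up sample data in the spirit of the course's price-prediction project:

```python
import json

# Illustrative (description, price) training pairs.
examples = [
    ("Wireless noise-cancelling headphones with 30-hour battery", "$129.99"),
    ("Stainless steel 12-cup coffee maker with timer", "$49.99"),
]

lines = []
for description, price in examples:
    record = {
        "messages": [
            {"role": "system", "content": "You estimate prices of items."},
            {"role": "user", "content": f"How much does this cost?\n{description}"},
            {"role": "assistant", "content": price},
        ]
    }
    lines.append(json.dumps(record))

# One JSON object per line — this string would be written to train.jsonl.
jsonl_text = "\n".join(lines)
```

Each line teaches the model one mapping from description to price; the fine-tuning job then trains on thousands of such lines uploaded as a file.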
-
Week 7 – Fine-Tuned Open-Source Model to Compete with Frontier in Price Prediction
Day 1 – Mastering Parameter-Efficient Fine-Tuning: LoRA, QLoRA & Hyperparameters
Day 1 – Introduction to LoRA Adaptors: Low-Rank Adaptation Explained
Day 1 – QLoRA: Quantization for Efficient Fine-Tuning of Large Language Models
Day 1 – Optimizing LLMs: R, Alpha, and Target Modules in QLoRA Fine-Tuning
Day 1 – Parameter-Efficient Fine-Tuning: PEFT for LLMs with Hugging Face
Day 1 – How to Quantize LLMs: Reducing Model Size with 8-bit Precision
Day 1 – Double Quantization & NF4: Advanced Techniques for 4-Bit LLM Optimization
Day 1 – Exploring PEFT Models: The Role of LoRA Adapters in LLM Fine-Tuning
Day 1 – Model Size Summary: Comparing Quantized and Fine-Tuned Models
Day 2 – How to Choose the Best Base Model for Fine-Tuning Large Language Models
Day 2 – Selecting the Best Base Model: Analyzing HuggingFace’s LLM Leaderboard
Day 2 – Exploring Tokenizers: Comparing LLAMA, QWEN, and Other LLM Models
Day 2 – Optimizing LLM Performance: Loading and Tokenizing Llama 3.1 Base Model
Day 2 – Quantization Impact on LLMs: Analyzing Performance Metrics and Errors
Day 2 – Comparing LLMs: GPT-4 vs LLAMA 3.1 in Parameter-Efficient Tuning
Day 3 – QLoRA Hyperparameters: Mastering Fine-Tuning for Large Language Models
Day 3 – Understanding Epochs and Batch Sizes in Model Training
Day 3 – Learning Rate, Gradient Accumulation, and Optimizers Explained
Day 3 – Setting Up the Training Process for Fine-Tuning
Day 3 – Configuring SFTTrainer for 4-Bit Quantized LoRA Fine-Tuning of LLMs
Day 3 – Fine-Tuning LLMs: Launching the Training Process with QLoRA
Day 3 – Monitoring and Managing Training with Weights & Biases
Day 4 – Keeping Training Costs Low: Efficient Fine-Tuning Strategies
Day 4 – Efficient Fine-Tuning: Using Smaller Datasets for QLoRA Training
Day 4 – Visualizing LLM Fine-Tuning Progress with Weights and Biases Charts
Day 4 – Advanced Weights & Biases Tools and Model Saving on Hugging Face
Day 4 – End-to-End LLM Fine-Tuning: From Problem Definition to Trained Model
Day 5 – The Four Steps in LLM Training: From Forward Pass to Optimization
Day 5 – QLoRA Training Process: Forward Pass, Backward Pass and Loss Calculation
Day 5 – Understanding Softmax and Cross-Entropy Loss in Model Training
Day 5 – Monitoring Fine-Tuning: Weights & Biases for LLM Training Analysis
Day 5 – Revisiting the Podium: Comparing Model Performance Metrics
Day 5 – Evaluation of our Proprietary, Fine-Tuned LLM against Business Metrics
Day 5 – Visualization of Results: Did We Beat GPT-4?
Day 5 – Hyperparameter Tuning for LLMs: Improving Model Accuracy with PEFT
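The Week 7 Day 5 lessons on softmax and cross-entropy loss explain the number plotted on every training-loss chart in Weights & Biases. The textbook math in plain Python (this is the standard definition, not the course's training code):

```python
import math

def softmax(logits):
    # Convert raw model scores into probabilities that sum to 1.
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits, target_index):
    # Loss = -log(probability the model assigned to the correct next token).
    probs = softmax(logits)
    return -math.log(probs[target_index])

# Model scores over a toy 3-token vocabulary.
logits = [2.0, 1.0, 0.1]

loss_good = cross_entropy(logits, 0)  # correct token got the highest score
loss_bad = cross_entropy(logits, 2)   # correct token got the lowest score
assert loss_bad > loss_good           # worse predictions yield higher loss
```

During QLoRA training, the backward pass differentiates exactly this loss with respect to the adapter weights, and the optimizer step nudges them so the correct token's probability rises.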
-
Week 8 – Build an Autonomous Multi-Agent System Collaborating with Models
Day 1 – From Fine-Tuning to Multi-Agent Systems: Next-Level LLM Engineering
Day 1 – Building a Multi-Agent AI Architecture for Automated Deal Finding Systems
Day 1 – Unveiling Modal: Deploying Serverless Models to the Cloud
Day 1 – LLAMA on the Cloud: Running Large Models Efficiently
Day 1 – Building a Serverless AI Pricing API: Step-by-Step Guide with Modal
Day 1 – Multiple Production Models Ahead: Preparing for Advanced RAG Solutions
Day 2 – Implementing Agentic Workflows: Frontier Models and Vector Stores in RAG
Day 2 – Building a Massive Chroma Vector Datastore for Advanced RAG Pipelines
Day 2 – Visualizing Vector Spaces: Advanced RAG Techniques for Data Exploration
Day 2 – 3D Visualization Techniques for RAG: Exploring Vector Embeddings
Day 2 – Finding Similar Products: Building a RAG Pipeline without LangChain
Day 2 – RAG Pipeline Implementation: Enhancing LLMs with Retrieval Techniques
Day 2 – Random Forest Regression: Using Transformers & ML for Price Prediction
Day 2 – Building an Ensemble Model: Combining LLM, RAG, and Random Forest
Day 2 – Wrap-Up: Finalizing Multi-Agent Systems and RAG Integration
Day 3 – Enhancing AI Agents with Structured Outputs: Pydantic & BaseModel Guide
Day 3 – Scraping RSS Feeds: Building an AI-Powered Deal Selection System
Day 3 – Structured Outputs in AI: Implementing GPT-4 for Detailed Deal Selection
Day 3 – Optimizing AI Workflows: Refining Prompts for Accurate Price Recognition
Day 3 – Mastering Autonomous Agents: Designing Multi-Agent AI Workflows
Day 4 – The 5 Hallmarks of Agentic AI: Autonomy, Planning, and Memory
Day 4 – Building an Agentic AI System: Integrating Pushover for Notifications
Day 4 – Implementing Agentic AI: Creating a Planning Agent for Automated Workflows
Day 4 – Building an Agent Framework: Connecting LLMs and Python Code
Day 4 – Completing Agentic Workflows: Scaling for Business Applications
Day 5 – Autonomous AI Agents: Building Intelligent Systems Without Human Input
Day 5 – AI Agents with Gradio: Advanced UI Techniques for Autonomous Systems
Day 5 – Finalizing the Gradio UI for Our Agentic AI Solution
Day 5 – Enhancing AI Agent UI: Gradio Integration for Real-Time Log Visualization
Day 5 – Analyzing Results: Monitoring Agent Framework Performance
Day 5 – AI Project Retrospective: 8-Week Journey to Becoming an LLM Engineer
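The Week 8 Day 3 lessons have agents return structured outputs so downstream code can act on them reliably; the course uses Pydantic's BaseModel for this. The sketch below shows the same idea with a standard-library dataclass instead: the model replies in JSON, and we parse it into a typed object before acting on it. The field names and sample reply are illustrative, not the course's schema:

```python
import json
from dataclasses import dataclass

@dataclass
class Deal:
    product_description: str
    price: float
    url: str

def parse_deal(model_reply: str) -> Deal:
    # Turn the model's JSON reply into a typed object; malformed JSON or
    # missing fields raise immediately instead of corrupting later steps.
    data = json.loads(model_reply)
    return Deal(
        product_description=data["product_description"],
        price=float(data["price"]),
        url=data["url"],
    )

reply = (
    '{"product_description": "4K monitor", '
    '"price": 199.5, "url": "https://example.com/deal"}'
)
deal = parse_deal(reply)
assert deal.price == 199.5
```

Pydantic adds validation and can generate the JSON schema to include in the prompt, but the contract is the same: the agent pipeline passes typed objects, not free-form text.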
Mastering Generative AI and LLMs: An 8-Week Hands-On Journey
Accelerate your career in AI with practical, real-world projects led by industry veteran Ed Donner. Build advanced Generative AI products, experiment with over 20 groundbreaking models, and master state-of-the-art techniques like RAG, QLoRA, and Agents.
What you’ll learn
• Build advanced Generative AI products using cutting-edge models and frameworks.
• Experiment with over 20 groundbreaking AI models, including Frontier and Open-Source models.
• Develop proficiency with platforms like HuggingFace, LangChain, and Gradio.
• Implement state-of-the-art techniques such as RAG (Retrieval-Augmented Generation), QLoRA fine-tuning, and Agents.
• Create real-world AI applications, including:
  • A multi-modal customer support assistant that interacts with text, sound, and images.
  • An AI knowledge worker that can answer any question about a company based on its shared drive.
  • An AI programmer that optimizes software, achieving performance improvements of over 60,000 times.
  • An ecommerce application that accurately predicts prices of unseen products.
• Transition from inference to training, fine-tuning both Frontier and Open-Source models.
• Deploy AI products to production with polished user interfaces and advanced capabilities.
• Level up your AI and LLM engineering skills to be at the forefront of the industry.
About the Instructor
I’m Ed Donner, an entrepreneur and leader in AI and technology with over 20 years of experience. I’ve co-founded and sold my own AI startup, started a second one, and led teams in top-tier financial institutions and startups around the world. I’m passionate about bringing others into this exciting field and helping them become experts at the forefront of the industry.
Projects:
Project 1: AI-powered brochure generator that scrapes and navigates company websites intelligently.
Project 2: Multi-modal customer support agent for an airline with UI and function-calling.
Project 3: Tool that creates meeting minutes and action items from audio using both open- and closed-source models.
Project 4: AI that converts Python code to optimized C++, boosting performance by 60,000x!
Project 5: AI knowledge-worker using RAG to become an expert on all company-related matters.
Project 6: Capstone Part A – Predict product prices from short descriptions using Frontier models.
Project 7: Capstone Part B – Fine-tuned open-source model to compete with Frontier in price prediction.
Project 8: Capstone Part C – Autonomous agent system collaborating with models to spot deals and notify you of special bargains.
Why This Course?
• Hands-On Learning: The best way to learn is by doing. You’ll engage in practical exercises, building real-world AI applications that deliver stunning results.
• Cutting-Edge Techniques: Stay ahead of the curve by learning the latest frameworks and techniques, including RAG, QLoRA, and Agents.
• Accessible Content: Designed for learners at all levels. Step-by-step instructions, practical exercises, cheat sheets, and plenty of resources are provided.
• No Advanced Math Required: The course focuses on practical application. No calculus or linear algebra is needed to master LLM engineering.
Course Structure
Week 1: Foundations and First Projects
• Dive into the fundamentals of Transformers.
• Experiment with six leading Frontier Models.
• Build your first business Gen AI product that scrapes the web, makes decisions, and creates formatted sales brochures.
Week 2: Frontier APIs and Customer Service Chatbots
• Explore Frontier APIs and interact with three leading models.
• Develop a customer service chatbot with a sharp UI that can interact with text, images, audio, and utilize tools or agents.
Week 3: Embracing Open-Source Models
• Discover the world of Open-Source models using HuggingFace.
• Tackle 10 common Gen AI use cases, from translation to image generation.
• Build a product to generate meeting minutes and action items from recordings.
Week 4: LLM Selection and Code Generation
• Understand the differences between LLMs and how to select the best one for your business tasks.
• Use LLMs to generate code and build a product that translates code from Python to C++, achieving performance improvements of over 60,000 times.
Week 5: Retrieval-Augmented Generation (RAG)
• Master RAG to improve the accuracy of your solutions.
• Become proficient with vector embeddings and explore vectors in popular open-source vector datastores.
• Build a full business solution similar to real products on the market today.
Week 6: Transitioning to Training
• Move from inference to training.
• Fine-tune a Frontier model to solve a real business problem.
• Build your own specialized model, marking a significant milestone in your AI journey.
Week 7: Advanced Training Techniques
• Dive into advanced training techniques like QLoRA fine-tuning.
• Train an open-source model to outperform Frontier models for specific tasks.
• Tackle challenging projects that push your skills to the next level.
Week 8: Deployment and Finalization
• Deploy your commercial product to production with a polished UI.
• Enhance capabilities using Agents.
• Deliver your first productionized, agentized, fine-tuned LLM.
• Celebrate your mastery of AI and LLM engineering, ready for a new phase in your career.
What's included
- 25.5 hours on-demand video
- Access on mobile and TV
- Certificate of completion