

Become an AI Engineer in the world’s leading community for people who want to build, ship, and share production LLM applications 🏗️🚢🚀.
Course overview
🧑‍💻 ON CODING AS AN AI ENGINEER
– Do you code every day?
– If not, are you committed to coding every day as an AI Engineer?
If the answer to both of these questions is “No,” this class is not for you.
💼 ON LEADING AI ENGINEERING TEAMS
– Do you manage a team of people that writes code every day?
If “Yes,” this class might be for them.
——
**KEY CURRICULUM UPDATES FOR COHORT 8**
🛠️ Context Engineering, Level 0 (Prompting), Level 1 (RAG), Level 3 (Agents), and Advanced Context Engineering!
🤯 Why LLMs Hallucinate (Sept 2025)
🧩 The Limitations of Dense Vector Retrieval (Sept 2025)
🌐 Open-source models & embeddings introduced much earlier, both remotely via TogetherAI and locally via Ollama
⚡ LangChain v1.0 alpha
🤖 Deep Agents
📊 Evaluating the Evaluators (Aug 2025) & In Defense of Evals (Sep 2025)
🏆 Demo Day Semi-Final Competition Round
A brand-new primary cohort use case that builds on itself and runs throughout the cohort: Deep Research from Scratch
**KEY CURRICULUM UPDATES FOR COHORT 7**
🟢 [New Additions]
– Context Engineering from first principles
– A primary cohort use case that builds on itself and runs throughout the cohort
– Expansion of Model Context Protocol (MCP) content
– Using Agent2Agent (A2A) Protocol to build and run two agents, one w/ MCP server and one to mimic a human using our agent
– Dedicated Guardrails session, from best-practices to implementation
🔴 [Deprecated]
– Explicit teaching of code and coding agents (in favor of a more implicit approach of using Claude Code regularly)
– Computer Use agents (in favor of A2A and operating an Agent in production)
– Contextual Retrieval
– Hugging Face smol agents
WHO HAS TAKEN THIS COURSE
Since its launch in January 2024, over 250 people have taken the course! The most successful participants have been Software Engineers and Developers, Data Scientists, and Machine Learning/AI Specialists: practitioners already focused on software, analytics, modeling, and artificial intelligence.
Here is a list of companies whose employees have already taken the course:
Google, Meta, Microsoft, Amazon, Samsung, Salesforce, IBM, Oracle, Bloomberg, Deloitte, Atlassian, SAP, Infosys, ServiceNow, Shopify, NetApp, Autodesk, Jio Platforms, JP Morgan Chase (JPMC), Walmart, Chevron, Expedia, NTT Data, Ernst & Young, Fannie Mae, National Grid, State Farm, USAA, BMO, TD Bank, Mercer, Kiewit, BP, CVS, Safran, Appian
The most important thing for anyone taking the course is to take it seriously. This advanced course requires commitment, debugging, resilience, and tenacity. These personal attributes matter much more than your current company or job title.
Whether you’re coming from a software engineering or data science background, to be a world-class AI Engineer you’ll need to know both. This class will teach you the Lego blocks of building production-grade LLM application software as an AI-Assisted Developer. This is where your journey begins!
🧑‍💻 Become an AI-Assisted Developer
A vibe coder won’t take your job. An AI-assisted developer will.
🧑‍🤝‍🧑 Peer-Supported Live Coding
Work with certified AI Engineers in dedicated small Journey groups to discuss and code throughout the cohort!
🤩 Demo Day!
You will present a unique project live to a cohort of your peers and the public!
🕴️ Industry-Leading Curriculum
We’re always at The LLM Edge with the latest concepts, code, tools, techniques, and best practices!
💼 Hiring Opportunities
Certified AI Engineers get direct access to job opportunities from our network.
Live sessions
Learn directly from “Dr. Greg” Loughnane & Chris “The Wiz 🪄” Alexiuk in a real-time, interactive format.
Lifetime access
Go back to course content and recordings whenever you need to.
Community of peers
Stay accountable and share insights with like-minded professionals.
Certificate of completion
Share your new skills with your employer or on LinkedIn.
Maven Guarantee
This course is backed by the Maven Guarantee. Students are eligible for a full refund up until the halfway point of the course.
– Understand course structure
– How to succeed as a certified AI Engineer on Demo Day!
– Meet your cohort, peer supporters, and journey group!
– LLM prototyping best practices, how to spin up quick end-to-end prototypes and vibe check them!
– Prompt Engineering best practices and the LLM Application Stack
– Understand embedding models and similarity search
– Understand Retrieval Augmented Generation = Dense Vector Retrieval + In-Context Learning
– Build a Python RAG app from scratch
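The "Retrieval Augmented Generation = Dense Vector Retrieval + In-Context Learning" equation above fits in a few lines of plain Python. Here is a minimal, framework-free sketch of the idea, not the course's actual code: the hand-crafted 2-D vectors are stand-ins for what a real embedding model would produce.

```python
from math import sqrt

# Toy corpus. In a real RAG app, an embedding model maps each document
# to a high-dimensional vector; here we hand-craft 2-D stand-ins.
DOCS = {
    "Qdrant is a vector database.":      (0.9, 0.1),
    "LangGraph builds agent workflows.": (0.1, 0.9),
    "Embeddings map text to vectors.":   (0.7, 0.3),
}

def cosine(a, b):
    """Cosine similarity: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, k=2):
    """Dense vector retrieval: rank documents by similarity to the query."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

def build_prompt(question, query_vec):
    """In-context learning: stuff the retrieved context into the prompt."""
    context = "\n".join(retrieve(query_vec))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("What is Qdrant?", query_vec=(1.0, 0.0))
```

Everything after the `retrieve` step is just prompt construction: the LLM answers from the retrieved context rather than from its weights, which is the whole point of RAG.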
– Understand the state of production LLM application use cases in industry
– Ideate with peers & peer supporters
– Build an end-to-end RAG application using everything we’ve learned so far
– Why LangChain, OpenAI, Qdrant, LangSmith
– Understand LangChain & LangGraph core constructs
– Introduce primary cohort use case
– Build a RAG system with LangChain and Qdrant
– Intro to LangSmith for evaluation and monitoring
– Answer the question: “What is an agent?”
– Understand how to build production-grade agent applications using LangGraph
– Understand Context Engineering from first principles
– How to use LangSmith to evaluate agentic RAG applications
– Understand what multi-agent systems are and how they operate.
– Extend the primary cohort use case to a multi-agent solution
– Build production-grade multi-agent applications using LangGraph
– An overview of Synthetic Data Generation (SDG)
– How to use SDG for Evaluation
– Generating high-quality synthetic test data sets for RAG applications in general and for our primary cohort use case specifically
– How to use LangSmith to baseline performance, make improvements, and then compare
– Build RAG and Agent applications with LangGraph
– Evaluate RAG and Agent applications quantitatively with the RAG ASsessment (RAGAS) framework
– Use metrics-driven development to improve agentic applications, measurably, with RAGAS
– Understand how advanced retrieval and chunking techniques can enhance RAG
– Compare the performance of retrieval algorithms for RAG
– Understand the fine lines between chunking, retrieval, and ranking for our primary cohort use case
– Learn best practices for retrieval pipelines
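Of the chunking techniques compared above, the simplest baseline is fixed-size chunking with overlap. The sketch below is an illustrative stand-in (function name and defaults are ours, not the course's) showing why overlap matters: adjacent chunks share a window of text so a sentence split at a boundary still appears whole in at least one chunk.

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Fixed-size character chunking with overlap.

    Each chunk starts (chunk_size - overlap) characters after the
    previous one, so consecutive chunks share `overlap` characters.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, step = [], chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks
```

Production pipelines usually chunk on semantic boundaries (sentences, sections) instead of raw characters, but this baseline is the one smarter strategies get measured against.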
– Discuss best-practice use of reasoning models
– Understand planning and reflection agents
– Build an Open-Source Deep Research agent application using LangGraph
– Investigate evaluating complex agent applications with the latest tools
– Introduce the Certification Challenge
– Pitch your problem, audience, and solution
– Network with people outside of your group
– Understand the suite of tools for building agents with OpenAI and the evolution of their tooling
– Core constructs of the Agents SDK and comparison to other agent frameworks
– How to use monitoring and observability tools on the OpenAI platform
– Understand Model Context Protocol (MCP) from client and server sides
– Learn MCP resource types and how to design useful MCP servers
– Build an MCP server that allows us to enhance our search and retrieval toolbox for our primary cohort use case
– Learn how to monitor, visualize, debug, and interact with your LLM applications using LangSmith and LangGraph Studio
– Deploy your applications to APIs directly via LangGraph Platform
– Deploy an agent with MCP server for our primary cohort use case
– Define LLM Operations (LLM Ops) & Agent Operations (Agent Ops)
– Understand Agent2Agent Protocol (A2A), including how remote agents and client agents interact
– How to enable agent-to-agent communication with Agent cards and MCP
– Deploy an agent with MCP server for our primary cohort use case and then build another agent to act as a user of our application
– Understand guardrails, including the key categories of guardrails
– Understand the importance of caching
– How to use Prompt and Embedding caching
– Apply guardrails and caching directly to our primary cohort use case
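The embedding-caching idea above can be sketched with nothing but the standard library: identical inputs should skip the expensive model call entirely. The `cached_embed` body below is a deterministic stand-in for a real embedding API (our illustration, not the course's implementation); the caching pattern around it is what the session is about.

```python
import hashlib
from functools import lru_cache

CALLS = {"embed": 0}  # counts how often the "model" is actually hit

@lru_cache(maxsize=1024)
def cached_embed(text: str) -> tuple:
    """Embedding cache: repeated inputs are served from memory.

    The body is a fake, deterministic embedding (hash bytes scaled to
    floats); in practice this would be a paid API or GPU inference call.
    """
    CALLS["embed"] += 1
    digest = hashlib.sha256(text.encode()).digest()
    return tuple(b / 255 for b in digest[:8])  # fake 8-dim "embedding"

v1 = cached_embed("What is caching?")
v2 = cached_embed("What is caching?")  # cache hit: no second model call
```

Prompt caching follows the same shape with the LLM call in place of the embedding call; the payoff is lower latency and lower per-request cost for repeated inputs.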
– Understand how to deploy open LLMs and embeddings to scalable endpoints
– Discuss how to choose an inference server
– Build an E2E enterprise agentic RAG application with LCEL
– Introduction to Building On-Prem
– Hardware & Compute Considerations
– Local LLM & Embedding Model Hosting Comparison
– How to build and present an On-Prem Solution to stakeholders
– Code Freeze
– Full Dress Rehearsal
Live AI Engineering Demo Day with industry judges, the public, and a cohort of your peers!

Active hands-on learning at the edge
We teach concepts AND code, never one or the other. AI has accelerated so quickly that anyone who has spent the last few years managing rather than building has not yet seen the code for themselves.
Project-based and community-driven
You’ll be interacting with other learners through breakout rooms and project teams. We can’t wait to see what you build, ship, and share on Demo Day projects 🤩.
Find fellow travelers for your journey
Join a community of like-minded AI practitioners who are all in on Generative AI, and who are heading down the same career path as you are.