Apache Airflow has long been the go-to orchestration platform for data engineering teams, but managing the underlying infrastructure remains a persistent challenge. Amazon MWAA Serverless eliminates that burden entirely — no environment sizing, no capacity planning, and no idle costs. In this hands-on workshop, attendees will get a practical introduction to MWAA Serverless and walk away having built and run a real end-to-end ML pipeline on AWS.
In this workshop, we’ll use an agent equipped with MWAA Serverless knowledge and Airflow DAG tooling to build a pipeline that takes raw training data from S3, kicks off a SageMaker training job, evaluates the output using Claude on Amazon Bedrock, and deploys the model if evaluation passes. You’ll watch the agent reason about the task, generate a valid YAML workflow using the DAG factory pattern, and deploy it to MWAA Serverless, with no hand-written code required. Once the workflow is deployed, we’ll trigger a run and walk through full observability: task-level logs from the SageMaker job, the Bedrock evaluation output, and the deploy decision. If something fails, we’ll show how to identify the broken step, read its isolated logs, and iterate, either by asking the agent to fix the YAML or by rolling back to a prior version. The goal is the full loop: prompt, deploy, run, observe, debug, iterate.
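To make the DAG factory pattern concrete, here is a minimal sketch of the kind of YAML workflow the agent generates. The DAG name, task names, operator paths, and configuration values below are illustrative assumptions, not the exact output of the workshop agent; the operator classes shown come from the Apache Airflow Amazon provider package.

```yaml
# Hypothetical dag-factory style workflow: train on SageMaker, evaluate
# with Claude on Bedrock, then deploy only if evaluation passes.
# All names and config values are placeholders for illustration.
ml_training_pipeline:
  default_args:
    owner: workshop
    retries: 1
  schedule_interval: null   # triggered manually during the workshop
  tasks:
    train_model:
      operator: airflow.providers.amazon.aws.operators.sagemaker.SageMakerTrainingOperator
      config:
        TrainingJobName: workshop-training-job
        InputDataConfig:
          - ChannelName: train
            DataSource:
              S3DataSource:
                S3Uri: s3://example-bucket/raw-training-data/   # placeholder bucket
    evaluate_model:
      operator: airflow.providers.amazon.aws.operators.bedrock.BedrockInvokeModelOperator
      model_id: anthropic.claude-3-sonnet-20240229-v1:0
      dependencies: [train_model]
    deploy_model:
      operator: airflow.providers.amazon.aws.operators.sagemaker.SageMakerEndpointOperator
      dependencies: [evaluate_model]
```

Because each task is an isolated entry in the YAML, a failed step (say, `evaluate_model`) can be inspected, fixed by the agent, and redeployed without touching the rest of the pipeline, which is what enables the iterate step of the loop.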
By the end of the session, attendees will have hands-on experience with the full MWAA Serverless workflow lifecycle. Whether you’re new to MWAA Serverless or looking to accelerate pipeline development with AI tooling, you’ll leave with a repeatable pattern you can apply immediately.
Sriram Ramarathnam
Software Development Manager @ AWS