Tailored to your team's needs
Getting production-grade data pipelines right on the first attempt is hard. With so many moving parts, each update adds to a pipeline's complexity, and proper orchestration is critical to the success of any data-driven organization.
This course will help you productionize data pipelines with Apache Airflow. First, we'll explore what Airflow is, its syntax, how to build DAGs, and how to scale data pipelines.
Then, we'll discover how to make your pipelines more resilient and predictable. Finally, we'll learn how to distribute tasks with the Celery and Kubernetes executors.
Apache, Apache Kafka, Apache Spark, Apache Trino, Apache Iceberg, Apache Hive, Kafka, Spark, Trino, Iceberg, Hive, and other associated open-source project names are trademarks of the Apache Software Foundation. Starburst, Starburst Data, Starburst Enterprise, and Starburst Galaxy are registered trademarks of Starburst Data, Inc. All rights reserved. DataCouch is not affiliated with, endorsed by, or otherwise associated with the Apache Software Foundation (ASF) or any of its projects.