Listen "How Vibrant Planet's Self-Healing Pipelines Revolutionize Data Processing"
Episode Synopsis
Discover the cutting-edge methods Vibrant Planet uses to revolutionize geospatial data processing and resource management.
In this episode, we delve into the intricacies of scaling geospatial data processing and resource allocation with experts from Vibrant Planet. Joining us are Cyrus Dukart, Engineering Lead, and David Sacerdote, Staff Software Engineer, who share their innovative approaches to handling large datasets and optimizing resource use in Airflow.
Key Takeaways:
(00:00) Inefficiencies in resource allocation.
(03:00) Scientific validity of sharded results.
(05:53) Tech-based solutions for resource management.
(06:11) Retry callback process for resource allocation.
(08:00) Running database queries for resource needs.
(10:05) Importance of remembering resource usage.
(13:51) Generating resource predictions.
(14:44) Custom task decorator for resource management (see the first sketch after this list).
(20:28) Massive resource usage gap in sharded data.
(21:14) Fail-fast model for long-running tasks (see the second sketch after this list).
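For listeners who want a rough picture of the custom task decorator discussed at 14:44, here is a minimal sketch, not Vibrant Planet's actual code: a wrapper around Airflow's @task that looks up a memory prediction for the task (predict_memory_gib is a hypothetical stand-in for the historical-usage query discussed at 08:00 and 10:05) and attaches it as a Kubernetes pod override, assuming the KubernetesExecutor.

from airflow.decorators import task
from kubernetes.client import models as k8s


def predict_memory_gib(task_name: str, default_gib: int = 4) -> int:
    # Hypothetical stand-in for the historical-usage lookup described in the
    # episode: query past runs of this task and predict how much memory the
    # next run will need. Here it just returns a default.
    return default_gib


def sized_task(fn=None, **task_kwargs):
    # Wrap Airflow's @task so each task asks Kubernetes for the amount of
    # memory predicted from its own history instead of a one-size-fits-all
    # default.
    def wrapper(func):
        memory = f"{predict_memory_gib(func.__name__)}Gi"
        pod_override = k8s.V1Pod(
            spec=k8s.V1PodSpec(
                containers=[
                    k8s.V1Container(
                        name="base",  # the main task container in the worker pod
                        resources=k8s.V1ResourceRequirements(
                            requests={"memory": memory},
                            limits={"memory": memory},
                        ),
                    )
                ]
            )
        )
        return task(executor_config={"pod_override": pod_override}, **task_kwargs)(func)

    return wrapper(fn) if fn is not None else wrapper


@sized_task(retries=2)
def process_shard(shard_id: str) -> None:
    ...  # geospatial work for one shard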
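For the fail-fast model mentioned at 21:14, a second minimal sketch under the same assumptions: cap how long a shard is allowed to run with Airflow's execution_timeout so a stuck task fails quickly and a retry can take over, with an on_retry_callback hook in the spirit of the retry-callback process discussed at 06:11. The timeout value and the note_retry helper are illustrative, not taken from the episode.

import logging
from datetime import timedelta

from airflow.decorators import task

logger = logging.getLogger(__name__)


def note_retry(context):
    # A natural place to record that the attempt failed and what resources it
    # had, so a later attempt or run can request more. Here it only logs.
    ti = context["ti"]
    logger.warning(
        "Task %s attempt %s failed; consider a larger resource request",
        ti.task_id,
        ti.try_number,
    )


@task(
    execution_timeout=timedelta(hours=1),  # illustrative cap, not from the episode
    retries=2,
    on_retry_callback=note_retry,
)
def process_shard(shard_id: str) -> None:
    ...  # long-running geospatial work for one shard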
Resources Mentioned:
Cyrus Dukart -
https://www.linkedin.com/in/cyrus-dukart-6561482/
David Sacerdote -
https://www.linkedin.com/in/davidsacerdote/
Vibrant Planet -
https://www.linkedin.com/company/vibrant-planet/
Apache Airflow -
https://airflow.apache.org/
Kubernetes -
https://kubernetes.io/
Vibrant Planet -
https://vibrantplanet.net/
Thanks for listening to The Data Flowcast: Mastering Airflow for Data Engineering & AI. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss any of the insightful conversations.
#AI #Automation #Airflow #MachineLearning