Do you have data that you pull from external sources, or data that is generated and appears at your digital doorstep? I bet that data needs to be processed, filtered, transformed, distributed, and much more. One of the most popular tools for building these data pipelines in Python is Dagster, and we are fortunate to have Pedram Navid on the show this episode. Pedram is the Head of Data Engineering and DevRel at Dagster Labs, and we're talking data pipelines this week at Talk Python.
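To give a feel for what a Dagster pipeline looks like, here is a minimal sketch (not from the episode, just an illustration assuming the current dagster package with its @asset decorator and in-process materialize helper, and hypothetical asset names):

from dagster import asset, materialize

@asset
def raw_events():
    # Stand-in for data pulled from an external source.
    return [1, 2, 3, 4, 5]

@asset
def filtered_events(raw_events):
    # Downstream asset: Dagster wires the dependency from the parameter name.
    return [n for n in raw_events if n % 2 == 0]

if __name__ == "__main__":
    # Materialize both assets in-process for a quick local run.
    materialize([raw_events, filtered_events])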
Episode sponsors
Talk Python Courses
Posit
Links from the show
Rock Solid Python with Types Course: training.talkpython.fm
Pedram on Twitter: twitter.com
Pedram on LinkedIn: linkedin.com
Ship data pipelines with extraordinary velocity: dagster.io
dagster-open-platform: github.com
The Dagster Master Plan: dagster.io
data load tool (dlt): dlthub.com
DataFrames for the new era: pola.rs
Apache Arrow: arrow.apache.org
DuckDB is a fast in-process analytical database: duckdb.org
Ship trusted data products faster: www.getdbt.com
Watch this episode on YouTube: youtube.com
Episode transcripts: talkpython.fm
--- Stay in touch with us ---
Subscribe to us on YouTube: youtube.com
Follow Talk Python on Mastodon: talkpython
Follow Michael on Mastodon: mkennedy