Related skills
selenium sql python pandas airflow
Description
- Design, optimize, and own data pipelines that ingest data from marketplaces.
- Build monitoring to track latency, uptime, and coverage.
- Modernize storage and processing for cost, performance, and reliability.
- Partner with internal teams to ensure clean, standardized data.
Requirements
- 3–4 years of experience in data engineering or related fields.
- Strong Python proficiency, with at least 3 years of hands-on experience.
- Experience with large-scale data processing using Pandas, Polars, PySpark, or similar.
- Hands-on experience with pipeline orchestration tools such as Airflow or Dagster.
- Track record of owning at least one data pipeline end-to-end within the past 2 years.
- Solid SQL skills for data analysis and transformation.
Nice-to-haves
- Experience with web scraping technologies such as Selenium, Puppeteer, or Beautiful Soup.
- Familiarity with data infrastructure and cloud services; AWS preferred.
- Interest in or knowledge of trading cards, collectibles, or alt asset markets.
- Experience with LLM-based automation tools for data extraction and processing.
What You'll Get From Us
- A seat at the table shaping Alt and the alt asset space.
- Autonomy and ownership on projects that matter.
- $100/month work-from-home stipend.
- $200/month wellness stipend.
- Flexible vacation policy.
- Base salary range: $155,000–$165,000, plus equity.