Added: 19 days ago
Type: Full time

Related skills

AWS, PostgreSQL, SQL, Python, Pandas

πŸ“‹ Description

  • Design, develop, and maintain the core Python ETL framework.
  • Automate refresh pipelines with AWS Batch, Lambda, and Step Functions.
  • Build Python integrations with external systems (SFTP, APIs).
  • Eliminate manual bottlenecks in data onboarding via automation.
  • Develop internal tools (FastAPI, SQLAlchemy, PostgreSQL) for pipelines.
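As a rough illustration of the idempotent, error-tolerant pipeline steps described above (file names, the checksum-marker scheme, and the toy transform are hypothetical, not from the posting), here is a minimal stdlib-only Python sketch:

```python
import csv
import hashlib
import os
import tempfile

def checksum(path: str) -> str:
    """Content hash of a source file, used as an idempotency key."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def run_etl_step(src: str, dst: str) -> bool:
    """Transform src CSV into dst; return False if already done (idempotent).

    A marker file records the source checksum, so re-running the same
    refresh is a no-op rather than a duplicate load.
    """
    marker = dst + ".done"
    key = checksum(src)
    if os.path.exists(marker):
        with open(marker) as f:
            if f.read().strip() == key:
                return False  # this exact input was already processed

    # Toy transform: normalize the (hypothetical) "name" column.
    with open(src, newline="") as f:
        rows = [{**row, "name": row["name"].strip().upper()}
                for row in csv.DictReader(f)]

    # Write to a temp file and rename atomically, so a crash never
    # leaves a partially written dst behind.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dst) or ".")
    with os.fdopen(fd, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
    os.replace(tmp, dst)

    with open(marker, "w") as f:
        f.write(key)
    return True
```

In a real framework the same pattern scales up: the idempotency key might live in a metadata table instead of a marker file, and the atomic-rename trick generalizes to staging tables or versioned S3 prefixes.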

🎯 Requirements

  • Bachelor's degree in a related field (Data Engineering, Computer Science, Data Science, Math, or Statistics) with 3+ years of experience, or 5+ years of relevant experience.
  • Experience designing and maintaining production ETL/ELT pipelines with proper error handling, idempotency, and monitoring.
  • Advanced proficiency in Python, with deep experience in Pandas and PySpark (DataFrame API, SQL, performance tuning, distributed joins).
  • Strong SQL skills with PostgreSQL, including query optimization, indexing strategies, and schema design.
  • Hands-on experience with AWS services including S3, Lambda, Batch, SageMaker, and Step Functions.
  • Experience with PyArrow, columnar data formats (Parquet), and data lake patterns.
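To make the query-optimization and indexing requirement concrete, here is a small sketch of the core idea (an index on the filter column turning a full scan into an index seek). SQLite from the standard library stands in for PostgreSQL so the example is self-contained; the table, column names, and query are invented for illustration, and in Postgres you would use `EXPLAIN`/`EXPLAIN ANALYZE` the same way:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, account_id INTEGER, amount REAL)"
)
conn.executemany(
    "INSERT INTO events (account_id, amount) VALUES (?, ?)",
    [(i % 100, float(i)) for i in range(10_000)],
)

# Without an index, filtering on account_id scans the whole table.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM events WHERE account_id = 7"
).fetchall()

# An index on the filter column lets the planner seek directly to
# the matching rows instead of scanning every row.
conn.execute("CREATE INDEX idx_events_account ON events (account_id)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM events WHERE account_id = 7"
).fetchall()

print(plan_before[-1][-1])  # a SCAN over events
print(plan_after[-1][-1])   # a SEARCH using idx_events_account
```

The same reasoning drives schema design at scale: choose indexes from the queries' filter and join columns, and verify with the planner rather than guessing.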

🎁 Benefits

  • Schedule and work-from-home flexibility.
  • Health insurance, 401K, and 3 weeks of PTO.
  • Growth opportunities, from skills training to conferences.
  • Platform is free for nonprofits and civic organizations.

