Related skills
AWS, SQL, NoSQL, Python, Spark

Description
- Build, automate, and manage near-real-time data ingestion pipelines for analytics.
- Build and maintain cloud-native big data environments on AWS with SQL/NoSQL/NewSQL.
- Lead data governance and data profiling efforts to ensure data quality and maintain data lineage metadata.
- Partner with data scientists and BI teams to design data models and processing logic.
- Design ETL processes to validate/transform data, compute metrics, and model features (Spark/Python/SQL/AWS).
- Guide build/buy/partner decisions for data infrastructure.
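The validate/transform/metric pattern described above can be sketched in plain Python. This is a minimal illustration only, not part of the posting; the table and column names (`raw_events`, `user_id`, `amount`) are hypothetical, and sqlite3 stands in for the Spark/Redshift stack named in the role.

```python
import sqlite3

# Hypothetical mini-ETL: load raw events, validate, compute a metric.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (user_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?)",
    [(1, 10.0), (2, None), (1, 5.0)],  # one invalid row (NULL amount)
)

# Validate: keep only rows with a non-null amount.
conn.execute(
    "CREATE TABLE clean_events AS "
    "SELECT user_id, amount FROM raw_events WHERE amount IS NOT NULL"
)

# Transform/metric: total spend per user.
totals = dict(
    conn.execute(
        "SELECT user_id, SUM(amount) FROM clean_events GROUP BY user_id"
    )
)
print(totals)  # {1: 15.0}
```

In a production pipeline of the kind the role describes, the same validate-then-aggregate steps would typically run as Spark jobs or Glue/Athena SQL over S3 data rather than in-memory SQLite.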
Requirements
- Bachelor's degree in CS or IT plus 8+ years of experience.
- Proficient in PL/SQL, SQL, and Python; strong skills in writing SQL queries and ETL scripts.
- Expert in AWS: S3, Glue, Data Catalog, Redshift, Redshift Spectrum, Athena.
- Proficient with Postgres, Oracle, MySQL, SQL Server.
- Experience performance-tuning database operations.
- Familiarity with data governance and data security best practices.
- Passion for learning new technologies and ETL development.
Benefits
- Salary range $140,000–$165,000 annually; target $156,750 based on experience.
- Generous retirement package; medical, dental, vision; pre-tax plans; ESOP.
- Hybrid work with multiple U.S. offices (Orange, Oakland, Portland, Chicago, Boston).