Mid Data Engineer (Bogotá and Medellín)

Type: Full time
Salary: Not specified

Related skills

BigQuery, data engineering, ETL, SQL, Python

We are:

Wizeline, a global AI-native technology solutions provider, develops cutting-edge, AI-powered digital products and platforms. We partner with clients to leverage data and AI, accelerating market entry and driving business transformation. As a global community of innovators, we foster a culture of growth, collaboration, and impact. With the right people and the right ideas, there's no limit to what we can achieve.

Are you a fit?

Sounds awesome, right? Now, let's make sure you're a good fit for the role.

Key Responsibilities

  • Data Migration & Pipeline Development: Design, develop, and implement efficient and reliable data pipelines to migrate data from PySpark/Athena/BigQuery to dbt/Snowflake. Translate complex data requirements into actionable dbt models and transformations within Snowflake. Build and maintain Airflow DAGs for orchestrating data ingestion, transformation, and loading processes (a sketch follows this list). Optimize existing data pipelines for performance, scalability, and cost efficiency in the new Snowflake environment.
  • Data Modeling & Transformation: Develop and maintain robust data models in Snowflake using dbt, adhering to best practices for data warehousing and analytics. Write complex SQL queries for data extraction, transformation, and loading. Ensure data quality, accuracy, and consistency throughout the migration and ongoing data operations.
  • Troubleshooting & Optimization: Identify, diagnose, and resolve data-related issues, performance bottlenecks, and data discrepancies. Proactively monitor data pipelines and systems to ensure smooth operation and data availability. Implement performance-tuning strategies within Snowflake and dbt to optimize query execution and resource utilization.
  • Collaboration & Documentation: Collaborate closely with Lead Data Engineers, Data Analysts, and other stakeholders to understand data needs and deliver effective solutions. Contribute to the development and maintenance of comprehensive technical documentation for data pipelines, models, and processes. Participate in code reviews and contribute to the team's adherence to coding standards and best practices.
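
To make the orchestration responsibilities above concrete, here is a minimal sketch of an Airflow DAG that loads raw data into Snowflake staging and then runs dbt transformations and tests. It assumes Airflow 2.x with the bundled BashOperator and the dbt CLI on PATH; the DAG id, schedule, and filesystem paths are hypothetical placeholders, not Wizeline's or any client's actual setup.

    # Minimal orchestration sketch, assuming Airflow 2.x and the dbt CLI on PATH.
    # The DAG id, schedule, and filesystem paths below are hypothetical.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="bq_to_snowflake_migration",  # placeholder name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        # Land source extracts into Snowflake staging tables.
        load_raw = BashOperator(
            task_id="load_raw",
            bash_command="python /opt/pipelines/load_raw.py",  # placeholder script
        )

        # Transform staged data into analytics models with dbt.
        dbt_run = BashOperator(
            task_id="dbt_run",
            bash_command="dbt run --project-dir /opt/dbt --profiles-dir /opt/dbt",
        )

        # Enforce data-quality checks after the transformations.
        dbt_test = BashOperator(
            task_id="dbt_test",
            bash_command="dbt test --project-dir /opt/dbt --profiles-dir /opt/dbt",
        )

        load_raw >> dbt_run >> dbt_test

Chaining dbt test directly after dbt run mirrors the data-quality expectation in the bullets above: a transformation only counts as done once its tests pass.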

Must-have Skills

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related quantitative field.
  • 3+ years of experience in data engineering, with a focus on building and maintaining scalable data pipelines.
  • Solid experience with data migration projects and working with large datasets.
  • Strong hands-on experience with Snowflake, including data loading, querying, and performance optimization.
  • Proficiency in dbt (data build tool) for data transformation and modeling.
  • Proven experience with Apache Airflow for scheduling and orchestrating data workflows.
  • Expert-level SQL skills, including complex joins, window functions, and performance tuning.
  • Proficiency in Python for data manipulation, scripting, and automation of edge cases (a sketch follows this list).
  • Familiarity with PySpark, AWS Athena, and Google BigQuery (source systems).
  • Understanding of data warehousing concepts, dimensional modeling, and ELT principles.
  • Knowledge of building CI/CD pipelines for code deployment.
  • Experience with version control systems (e.g., GitHub).
  • Excellent problem-solving, analytical, and communication skills.
  • Ability to work independently and as part of a collaborative team in an agile environment.
  • Fluent spoken and written English; effective communicator.
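
As an illustration of the Python scripting and migration skills listed above, the sketch below reconciles row counts between a BigQuery source table and its migrated Snowflake counterpart. It assumes the google-cloud-bigquery and snowflake-connector-python packages; the table names, warehouse, and environment variables are placeholders, not real client objects.

    # Migration sanity-check sketch, assuming the google-cloud-bigquery and
    # snowflake-connector-python packages. Table names, the warehouse, and the
    # environment variables below are placeholders.
    import os

    import snowflake.connector
    from google.cloud import bigquery


    def bigquery_row_count(table: str) -> int:
        """Count rows in the BigQuery source table."""
        client = bigquery.Client()
        rows = client.query(f"SELECT COUNT(*) AS n FROM `{table}`").result()
        return next(iter(rows)).n


    def snowflake_row_count(table: str) -> int:
        """Count rows in the migrated Snowflake table."""
        conn = snowflake.connector.connect(
            account=os.environ["SNOWFLAKE_ACCOUNT"],
            user=os.environ["SNOWFLAKE_USER"],
            password=os.environ["SNOWFLAKE_PASSWORD"],
            warehouse="ANALYTICS_WH",  # placeholder warehouse
        )
        try:
            cur = conn.cursor()
            cur.execute(f"SELECT COUNT(*) FROM {table}")
            return cur.fetchone()[0]
        finally:
            conn.close()


    if __name__ == "__main__":
        src = bigquery_row_count("my_project.analytics.orders")  # placeholder
        dst = snowflake_row_count("ANALYTICS.PUBLIC.ORDERS")     # placeholder
        print(f"source={src} target={dst}")
        assert src == dst, "Row counts diverged; investigate before cutover"

A check like this would typically run as a final validation step after each migration batch, before cutting consumers over to Snowflake.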

Nice-to-have:

  • AI Tooling Proficiency: Ability to leverage one or more AI tools to optimize and augment day-to-day work, including drafting, analysis, research, and process automation, and to recommend effective AI use and identify opportunities to streamline workflows.

What we offer:

  • A High-Impact Environment
  • Commitment to Professional Development
  • Flexible and Collaborative Culture
  • Global Opportunities
  • Vibrant Community
  • Total Rewards

*Specific benefits are determined by the employment type and location.

Find out more about our culture here.
