Senior Data Engineer

Added: 3 days ago
Location:
Type: Full time
Salary: Not Specified

About Turvo

Turvo provides a collaborative Transportation Management System (TMS) application designed specifically for the supply chain. Turvo Collaboration Cloud connects freight brokers, 3PLs, shippers, and carriers to unite supply chain ecosystems, delivering outstanding customer experiences, real-time collaboration, and accelerated growth. The technology unifies internal and external systems, providing one end-to-end solution that streamlines operations, enhances analytics, and automates business processes while eliminating redundant manual tasks. Turvo’s customers include some of the world’s largest Fortune 500 logistics service providers and shippers as well as small to mid-sized freight brokers.

Turvo is based in Dallas, Texas, with offices in Hyderabad, India (www.turvo.com).

Responsibilities:

  • Expertise in designing and implementing scalable data pipelines (e.g., ETL) and processes (a minimal illustrative sketch follows this list).
  • Experience building enterprise-scale data warehouse and database models end-to-end.
  • Proven hands-on experience with Snowflake and related ETL technologies.
  • Experience working with Tableau, Power BI, or other BI tools.
  • Experience with native AWS technologies for data and analytics such as Redshift, S3, Lambda, Glue, EMR, Kinesis, SNS, CloudWatch, etc.
  • Experience with NoSQL databases (MongoDB, Elasticsearch).
  • Experience working with relational databases and the ability to write and optimise SQL queries for analytics and reporting.
  • Experience developing scalable data applications and reporting frameworks.
  • Experience working with message queues, preferably Kafka and RabbitMQ.
  • Ability to write code in Python, Java, Scala or other languages.
  • Exposure to columnar databases (e.g., Apache Parquet, ORC, Redshift, Snowflake) and understanding of their role in performance optimization.
  • Experience supporting AI/ML use cases through data preparation, feature engineering, and model-ready data pipelines.
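
To make the pipeline expectations above concrete, here is a minimal, illustrative batch ETL sketch in Python. It assumes a hypothetical PostgreSQL shipments table and an S3 target; the connection string, table, column, and bucket names are placeholders and do not describe Turvo's actual systems.

    import pandas as pd
    import sqlalchemy

    # Placeholder endpoints -- illustrative only, not real Turvo systems.
    SOURCE_URI = "postgresql://user:password@source-db:5432/ops"
    TARGET_PATH = "s3://example-analytics-bucket/shipments/"  # writing to S3 requires s3fs

    def run_daily_load(load_date: str) -> None:
        engine = sqlalchemy.create_engine(SOURCE_URI)

        # Extract: pull only the rows for this run's date.
        query = sqlalchemy.text(
            "SELECT shipment_id, carrier_id, status, updated_at "
            "FROM shipments WHERE CAST(updated_at AS DATE) = :d"
        )
        df = pd.read_sql_query(query, engine, params={"d": load_date})

        # Transform: normalise values and add a partition column.
        df["status"] = df["status"].str.lower()
        df["load_date"] = load_date

        # Load: persist as columnar Parquet so downstream BI and ML reads stay cheap.
        df.to_parquet(f"{TARGET_PATH}load_date={load_date}.parquet", index=False)

    if __name__ == "__main__":
        run_daily_load("2024-01-01")
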
Qualifications:

  • 5+ years of experience architecting DW/Data Lake solutions for the enterprise across multiple platforms.
  • Experience writing high-quality, maintainable SQL on large datasets.
  • Expertise in designing and implementing scalable data pipelines (e.g., ETL) and processes in Data Warehouse/Data Lake to support dynamic business demand for data.
  • Experience building and optimising logical data models and data pipelines while delivering high-quality data solutions that are testable and adhere to SLAs.
  • Excellent knowledge of and experience with query optimisation and tuning.
  • Knowledgeable about a variety of strategies for ingesting, modelling, processing, and persisting data.
  • Familiarity with data workflows supporting AI/ML applications and scalable analytics.
  • Understanding of columnar storage formats and their impact on query performance and data processing.
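
As a small illustration of the last two points (scalable analytics and columnar storage), the following Python sketch uses PyArrow to read only the columns and partition a downstream job needs from a hypothetical partitioned Parquet dataset; the path and column names are made up for the example.

    import pyarrow.dataset as ds

    # Hypothetical hive-partitioned Parquet dataset (local path or s3:// URI).
    dataset = ds.dataset("warehouse/shipments", format="parquet", partitioning="hive")

    # Column projection plus a partition filter: only two columns and one day's
    # partition are read from storage, which is why columnar formats matter for
    # query performance on large tables.
    table = dataset.to_table(
        columns=["carrier_id", "status"],
        filter=ds.field("load_date") == "2024-01-01",
    )

    print(table.num_rows)
    print(table.schema)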