Staff Data Engineer

Hybrid
Added: 1 month ago
Type: Full-time
Salary: Not specified

OXIO is the world’s first telecom-as-a-service (TaaS) platform. We are democratizing telecom, making it easy for brands and enterprises to fully own and operate proprietary mobile networks designed to support their own customers’ needs. Our TaaS solution combines multiple existing networks into a single platform that can be seamlessly managed in the cloud as a modern SaaS offering. And it gets better: full network access brings unparalleled business intelligence and insights that help enterprises better understand customer and machine (M2M) behavior. With a continuous focus on innovation, any company can build a powerful telecom presence with OXIO while gleaning unique customer insights like never before.

OXIO’s Data team is responsible for powering data-driven decision-making across the entire organization. To execute our mission effectively, we need to build a solid data foundation and ensure that every area of the business has access to highly reliable data.

We are hiring a talented and experienced Staff Data Engineer to join our small but growing Data team, playing a critical role in designing and executing a robust, forward-looking data strategy for the company. Our team owns the data pipelines and tools that provide secure, reliable, and accessible data, enabling team members to derive actionable insights. Doing this job well enables the entire organization to make more informed decisions, innovate faster, and serve our customers better.

In this role, you will work directly with our Data, Engineering, Operations, Data Science, Go-to-Market, and Finance teams to support the organization's data processing and analytics needs. You will serve as the internal expert on all things data engineering, empowering your peers with your expertise to collectively build a world-class data culture. This is a unique opportunity to directly influence not only our data systems, but also our drones and global operations. The ideal candidate will help us design systems that support the company’s needs today and many years into the future.

Key Responsibilities:

  • Help build, maintain, and scale our data pipelines that bring together data from various internal and external systems into our data warehouse.
  • Partner with internal stakeholders to understand analysis needs and consumption patterns.
  • Partner with upstream engineering teams to enhance data logging patterns and best practices.
  • Participate in architectural decisions and help us plan for the company’s data needs as we scale.
  • Adopt and evangelize data engineering best practices for data processing, modeling, and lake/warehouse development.
  • Advise engineers and other cross-functional partners on how to most efficiently use our data tools.

Key Qualifications:

  • 10+ years of experience building large-scale data platforms
  • Experience in Data Engineering and/or Analytics Engineering, building scalable data warehouses
  • Proficient with Dimensional Modeling (Star Schema, Kimball, Inmon) and Data architecture concepts, able to coach and influence others to up-level the craft of Data Engineering
  • Fantastic collaboration and communication skills, demonstrated by successful large-scale projects spanning multiple teams
  • Advanced SQL skills (ease with window functions, defining UDFs)
  • Experienced with Python and Spark for building and maintaining data pipelines and ETL/ELT processes
  • Experienced working with dbt and data warehouses such as Snowflake, BigQuery, or Redshift
  • Experience implementing real-time and batch data pipelines with tight SLOs and complex transformation requirements
  • Develop data models, schemas, and standards for event data
  • Optimize data storage and access patterns for fast querying
  • Improve data reliability, discoverability and observability
  • Familiarity with Data Engineering tooling: ingestion, transformation testing, lineage, orchestration, data publishing, metric layers
  • Familiarity with storage layers like Hudi, Delta Lake and Iceberg
  • Aptitude for product analysis, dashboarding, and reporting
  • Familiarity with infrastructure tooling such as Terraform/Pulumi, and experience working with Kubernetes
  • Proficiency with AWS cloud

Nice To Haves:

  • Experience building streaming applications or pipelines using async messaging services or distributed streaming platforms like Apache Kafka
  • Knowledge of Airflow or some other orchestration tool
  • Experience with Spark or PySpark
  • Hands-on experience with event-driven architecture and streaming data processing frameworks like Kafka, Spark, and Flink
  • Experience with time-series databases like ClickHouse and InfluxDB

What We Offer:

  • Competitive salary and stock option incentive program
  • Company contribution towards comprehensive benefit packages
  • Flexible work arrangements
  • Company sponsored team-lunches and company retreats
  • International organization that enables you to work across boundaries, travel to different locations and enjoy the dynamics of a rapidly growing startup
  • The opportunity to work with a talented and supportive team
  • A diverse and inclusive team