Staff Data Platform Engineer

Added: 12 hours ago
Type: Full time
Salary: Not specified

About the Company

Gemini is a global crypto and Web3 platform founded by Cameron and Tyler Winklevoss in 2014, offering a wide range of simple, reliable, and secure crypto products and services to individuals and institutions in over 70 countries. Our mission is to unlock the next era of financial, creative, and personal freedom by providing trusted access to the decentralized future. We envision a world where crypto reshapes the global financial system, internet, and money to create greater choice, independence, and opportunity for all — bridging traditional finance with the emerging cryptoeconomy in a way that is more open, fair, and secure. As a publicly traded company, Gemini is poised to accelerate this vision with greater scale, reach, and impact.

The Department: Platform

Our Platform organization’s purpose is to enable Gemini to scale effectively and to empower our engineering teams to focus on building innovative financial products and experiences for individuals around the world. Platform focuses on building a scalable, secure foundation that lets Engineering deploy, validate, and operate their services in production; on improving service resiliency; and on increasing organizational efficiency by reducing operational toil and evolving system architecture.

The Platform team engages directly with our other engineering teams: onboarding them onto our platform systems, reviewing and recommending design and architectural decisions, and guiding them on implementing the tooling provided by the larger Platform organization so that systems can scale and react to changing conditions, with continuous improvement loops.

The Role: Staff Data Platform Engineer

As a Staff Data Platform Engineer on the Data/Database Engineering team, you’ll lead the building, scaling, and maintenance of our data infrastructure, with a focus on architecture, reliability, availability, and performance. You’ll work closely with both data engineering and product engineering teams, providing a robust infrastructure foundation that enables them to build, maintain, and scale data-driven products and solutions. An immediate priority will be implementing advanced scaling strategies for our relational database systems to support a highly scalable infrastructure.

This role also requires a strong commitment to uptime and incident response, including participation in an on-call rotation. You’ll bring expertise in database technologies (relational, columnar, document, key-value, and unstructured) and familiarity with core data infrastructure components like message queues, ETL pipelines, and real-time processing tools to support a resilient, high-performing data platform.

This role requires working in person twice a week at either our New York City, NY or San Francisco, CA office.

Responsibilities:

  • Database Scaling and Optimization: Design and implement scaling strategies for relational systems to ensure they meet the high availability and scalability needs of data and product engineering teams.
  • Availability and Uptime Management: Proactively monitor and optimize database systems to meet stringent uptime requirements. Participate in an on-call rotation to respond to incidents, troubleshoot issues, and restore service promptly during disruptions.
  • Architect and Optimize Database Infrastructure: Manage a variety of database technologies, balancing tradeoffs across relational, columnar, document, key-value, and unstructured data solutions, providing a foundation for data warehousing and supporting data-driven product needs.
  • Integration with Data Engineering and Product Pipelines: Collaborate with data and product engineering teams to implement and optimize data pipelines, including message queues (e.g., Kafka), ETL workflows, and real-time processing, ensuring efficient and reliable data movement.
  • Infrastructure Automation and Reliability: Utilize infrastructure as code (IaC) to automate deployment, scaling, and maintenance, creating a consistent, reliable environment that supports high availability and deployment efficiency for both data and product teams.
  • Performance Tuning and Incident Response: Conduct performance tuning, establish monitoring and alerting, and address potential issues quickly to ensure a responsive platform that meets the needs of all engineering workloads.
  • Documentation and Knowledge Sharing: Document processes, including scaling strategies, monitoring setups, and best practices, to support alignment with engineering requirements and ensure smooth handoffs in on-call situations.

Qualifications:

  • Deep expertise in data and storage technologies, including RDBMS (e.g., Postgres), NoSQL and other database types (e.g., columnar, document, key-value, and unstructured), and object storage (e.g., S3), with a strong understanding of the tradeoffs and use cases for each.
  • Demonstrated experience with advanced database scaling strategies for relational systems.
  • Strong knowledge of high-availability architectures and proficiency with monitoring tools to support uptime and incident response.
  • Experience with cloud-based database and data processing platforms, such as Amazon Aurora, Databricks, AWS RDS, Redshift, BigQuery, Snowflake, and managed services like AWS EMR and Google Cloud Dataflow.
  • Hands-on experience with modern data transport and streaming platforms such as Kafka, Kinesis, and Pulsar, including building and operating real-time data pipelines.
  • Familiarity with traditional ETL workflows, scheduled batch pipelines, and message queuing systems (e.g., RabbitMQ, SQS), and how they integrate with streaming architectures.
  • Strong programming skills (e.g., Python, Bash, SQL) and experience with CI/CD practices.
  • Experience in an on-call rotation and handling incident response.
  • Excellent communication and collaboration skills, with a proven ability to work effectively with data and product engineering teams.

It Pays to Work Here

The compensation & benefits package for this role includes:

  • Competitive starting salary
  • A discretionary annual bonus
  • Long-term incentive in the form of a new hire equity grant
  • Comprehensive health plans
  • 401K with company matching
  • Paid Parental Leave
  • Flexible time off

Salary Range: The base salary range for this role is $168,000 to $240,000 in the State of New York, the State of California, and the State of Washington. This range does not include our discretionary bonus or equity package. When determining a candidate’s compensation, we consider a number of factors, including skill set, experience, job scope, and current market data.

In the United States, we offer a hybrid work approach at our hub offices, balancing the benefits of in-person collaboration with the flexibility of remote work. Expectations may vary by location and role, so candidates are encouraged to connect with their recruiter to learn more about the specific policy for the role. Employees who do not live near one of our hubs are part of our remote workforce.

At Gemini, we strive to build diverse teams that reflect the people we want to empower through our products, and we are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. Equal Opportunity is the Law, and Gemini is proud to be an equal opportunity workplace. If you have a specific need that requires accommodation, please let a member of the People Team know.

#LI-ES1
