Related skills
gRPC, PostgreSQL, Python, distributed systems, Kafka

Description
- Design and build high-throughput data pipelines ingesting unstructured content
- Evolve ingestion systems for scalability, reliability, and maintainability
- Transform data into structured, enriched datasets for platform features
- Collaborate with Product, Data Science, Search, and Platform teams
- Mentor engineers through code reviews and technical guidance
- Lead high-impact initiatives balancing speed and long-term health
Requirements
- 8+ years of professional software engineering experience
- Experience designing distributed systems and data pipelines at scale
- Strong proficiency in Python or a similar backend language
- Experience building reliable, observable, and scalable microservices
- Experience with streaming technologies such as Kafka or Kinesis
- Experience with APIs and service communication patterns like gRPC and Protobuf
- Experience with large-scale data systems or high-throughput SaaS
- Experience with relational or search databases such as MySQL, PostgreSQL, Elasticsearch, or OpenSearch
- Ability to weigh trade-offs among performance, reliability, and maintainability in distributed systems
Benefits
- Fully distributed remote team
- Home office stipend, phone and internet reimbursement, coworking membership
- Virtual and in-person team bonding events
- Comprehensive health coverage for employees and dependents
- 401(k) with employer contributions
- Equity opportunities