Added: 14 days ago
Location:
Type: Full time
Salary: Not specified

Who are we

DoubleVerify is an Israeli-founded big data analytics company (NYSE: DV). We track and analyze tens of billions of ads every day for the world's biggest brands.

We operate at massive scale: we handle over 100B events per day and over 1M requests per second at peak, process events in real time at millisecond latencies, and analyze over 2.5M video-years every day. We verify that ads are fraud-free, appear next to appropriate content, and reach people in the right geography, and we measure viewability and user engagement throughout the ad's lifecycle.

We are a global company, with HQ in NYC and R&D centers in Tel Aviv, New York, Finland, Berlin, Belgium and San Diego. We work in a fast-paced environment with plenty of challenges to solve. If you like working at huge scale and want to help us build products that have a major impact on the industry and the web, then your place is with us.

 

What you will do

  • You will work in a team of data and backend engineers producing user attentiveness insights and recommendations that help big brands optimize their digital advertising strategy. 
  • You will process tens of billions of records a day, mainly in Python and Scala, using technologies such as Spark, Kafka, Docker, Kubernetes, Databricks and GCP managed solutions.
  • You will build data pipelines that integrate with internal and external data sources across various platforms to enhance business insights, while keeping standards high with testing, automated deployment and a high degree of observability.
  • You will be responsible for defining solutions and delivering them, collaborating with engineering and product teams on productionizing and monitoring your solutions.
  • You will explore, implement and support multiple data initiatives and projects at the same time.

 

What you need to have

  • At least 4 years of experience in software development
  • At least 3 years of experience coding in Python, Java, C#, Scala or a similar language
  • At least 3 years of experience working with relational databases
  • At least 2 years of experience building ETLs or ELTs
  • Good analytical skills; well-versed in SQL
  • Good interpersonal and communication skills
  • Passion for high-quality development and a sharp mind
  • Experience maintaining business-critical production systems

 

Nice to have

  • Experience with Python 
  • Experience with CI/CD pipeline, Docker & Kubernetes, public cloud providers
  • Experience with Spark 
  • Experience with non-relational databases
  • Experience with stream processing
  • Experience with online advertising technologies

#Hybrid 
