TripleLift

About TripleLift

The omnichannel ad marketplace for modern brands

🏢 Media, Marketing & Advertising · 👥 201-500 employees · 📅 Founded 2012 · 📍 Flatiron District, New York, NY · 💰 $18.7M raised · ⭐ 3.2
Tags: B2B · Marketplace · Marketing · Digital Media · Advertising

Key Highlights

  • Headquartered in Flatiron District, New York, NY
  • $18.7 million raised in Series A funding
  • 201-500 employees, fostering a collaborative environment
  • Partners with leading publishers and DSPs in digital media

TripleLift is a leading omnichannel advertisement marketplace headquartered in the Flatiron District of New York, NY. With $18.7 million in funding, TripleLift simplifies the advertising process by allowing businesses to create campaigns across online video, CTV, display, branded content, and native...

🎁 Benefits

TripleLift offers comprehensive medical, dental, and vision plans, along with unlimited PTO to promote work-life balance. Employees also benefit from ...

🌟 Culture

TripleLift fosters a culture focused on simplifying the complexities of digital advertising. The company values innovation and collaboration, enabling...


Data Engineer III

TripleLift • Pune, Maharashtra, India

Posted 5 months ago · 🏛️ On-Site · Senior · Data Engineer · 📍 Pune

Job Description

About TripleLift

We're TripleLift, an advertising platform on a mission to elevate digital advertising through beautiful creative, quality publishers, actionable data and smart targeting. Through over 1 trillion monthly ad transactions, we help publishers and platforms monetize their businesses. Our technology is where the world's leading brands find audiences across online video, connected television, display and native ads. Brand and enterprise customers choose us because of our innovative solutions, premium formats, and supportive experts dedicated to maximizing their performance.

As part of the Vista Equity Partners portfolio, we are NMSDC certified, qualify for diverse spending goals and are committed to economic inclusion. Find out how TripleLift raises up the programmatic ecosystem at triplelift.com.

The Role:

TripleLift is seeking a Senior Data Engineer to join a small, influential Data Engineering team. You will be responsible for expanding and optimizing our high-volume, low-latency data platform architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline engineer and data wrangler who enjoys optimizing data systems and building them from the ground up. This role will support our software engineers, product managers, business intelligence analysts, and data scientists on data initiatives, and will ensure optimal data delivery architecture is applied consistently across new and ongoing projects. Candidates must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products, and should be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives.

Responsibilities:

  • Create and maintain optimal, high-throughput data platform architecture handling hundreds of billions of daily events.
  • Explore, refine and assemble large, complex data sets that meet functional product and business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Spark, EMR, Snowpark, Kafka and other big data technologies.
  • Work with stakeholders across geo-distributed teams, including product managers, engineers and analysts to assist with data-related technical issues and support their data infrastructure needs.
  • Digest and communicate business requirements effectively to both technical and non-technical audiences.
  • Translate business requirements into concise technical specifications.

Qualifications:

  • 6+ years of experience in a Data Engineer role
  • Bachelor's degree or higher in Computer Science or a related engineering field
  • Experience building and optimizing ‘big data’ pipelines, architectures and data sets
  • Expert working knowledge of Databricks/Spark and associated APIs
  • Strong experience with object-oriented and functional languages such as Python, Java, and Scala, along with their associated toolchains
  • Experience working with relational databases, including SQL authoring and optimization, as well as operational familiarity with a variety of databases.
  • Experience with AWS cloud services: EC2, EMR, RDS
  • Experience working with NoSQL data stores such as: Elasticsearch, Apache Druid
  • Experience with data pipeline and workflow management tools: Airflow
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong experience working with unstructured and semi-structured data formats: JSON, Parquet, Iceberg, Avro, Protobuf
  • Expert knowledge of processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Proven experience in manipulating, processing, and extracting value from large, disparate datasets.
  • Working knowledge of stream processing, message queuing, and highly scalable ‘big data’ data stores.
  • Experience supporting and working with cross-functional teams in a dynamic environment.

Preferred: 

  • Streaming systems experience with Kafka, Spark Streaming, Kafka Streams
  • Snowflake/Snowpark
  • dbt
  • Exposure to AdTech

Life at TripleLift

At TripleLift, we’re a team of great people who like who they work with and want to make everyone around them better. This means being positive, collaborative, and compassionate. We hustle harder than the competition and are continuously innovating.

Learn more about TripleLift and our culture by visiting our LinkedIn Life page.

Establishing People, Culture and Community Initiatives

At TripleLift, we are committed to building a culture where people feel connected, supported, and empowered to do their best work. We invest in our people and foster a workplace that encourages curiosity, celebrates shared values, and promotes meaningful connections across teams and communities. We want to ensure the best talent of every background, viewpoint, and experience has an opportunity to be hired, belong, and develop at TripleLift. Through our People, Culture, and Community initiatives, we aim to create an environment where everyone can thrive and feel a true sense of belonging.

Privacy Policy

Please see our Privacy Policies on our TripleLift and 1plusX websites.

TripleLift does not accept unsolicited resumes from any type of recruitment search firm. Any resume submitted in the absence of a signed agreement will become the property of TripleLift and no fee shall be due.
