Leethub

Curated tech jobs from FAANG and top companies worldwide.

© 2026 Leethub LLC. All rights reserved.
Kpler

About Kpler

Transforming trade with data-driven insights

🏢 Tech, Corporate · 👥 201-500 employees · 📅 Founded 2014 · 📍 Etterbeek, Brussels · 💰 $220M raised · ⭐ 3.1 rating

B2B · Artificial Intelligence · Energy · Big Data · Deep Tech · Machine Learning · SaaS · Mobile

Key Highlights

  • Raised $220 million in funding for expansion
  • Acquired five companies between 2022 and 2023
  • Leader in commodity market data with ten times the revenue of competitors
  • Headquartered in Etterbeek, Brussels with 201-500 employees

Kpler, headquartered in Etterbeek, Brussels, is a leader in providing data and analytics for the commodity trading industry. Founded in 2014, Kpler has raised $220 million in funding and has acquired five companies between 2022 and 2023 to enhance its offerings. The company serves a diverse range of...

🎁 Benefits

Kpler offers a flexible office policy allowing employees to choose between co-working spaces, office locations, or full remote work. Employees receive...

🌟 Culture

Kpler fosters a culture of innovation by leveraging advanced data technology to transform the traditionally archaic commodity trading industry. ...

🌐 Website · 💼 LinkedIn · 𝕏 Twitter · All 59 jobs →

Data Engineer

Kpler • Germany

Posted 3 weeks ago · Data Engineer · 📍 Germany
Apply Now →

Skills & Technologies

REST API · Kafka · Apache Spark

Overview

Kpler is seeking a Data Engineer to build and maintain core datasets and develop REST APIs and streaming pipelines. You'll work with technologies like Kafka and Apache Spark to ensure optimal functionality and reliability.

Job Description

Who you are

You have a strong background in data engineering, with experience in building and maintaining core datasets that include vessel characteristics, companies, and geospatial data. You are skilled in creating and maintaining REST APIs, and you have a solid understanding of streaming pipelines, particularly with Kafka. Your expertise in Apache Spark allows you to handle batch processing tasks efficiently, ensuring data is processed accurately and in a timely manner.
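The batch-processing work described above can be illustrated with a toy aggregation. Plain Python stands in for Apache Spark so the sketch is self-contained, and the record fields (`name`, `flag`) are hypothetical rather than Kpler's actual schema:

```python
# Toy batch aggregation: count vessel records per flag state.
# In production this would be a Spark job over a core dataset; plain
# Python stands in here. Field names are hypothetical.
from collections import Counter

def count_by_flag(vessels):
    """Group vessel records by their (hypothetical) 'flag' field."""
    return Counter(v["flag"] for v in vessels)

records = [
    {"name": "Aurora", "flag": "PA"},
    {"name": "Borealis", "flag": "LR"},
    {"name": "Cassiopeia", "flag": "PA"},
]
print(count_by_flag(records))  # Counter({'PA': 2, 'LR': 1})
```

In a real Spark pipeline the same shape of work would be a `groupBy` plus a count over a distributed DataFrame rather than an in-memory loop.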

You take pride in your end-to-end ownership of development tasks, starting with a thorough understanding of assigned tickets and requirements. You are adept at designing and building functionality, including APIs and data processing components, and you ensure that your code is deployed to development environments and undergoes rigorous peer and product testing. Your attention to detail ensures that all code is compliant with defined standards and best practices.

You are committed to writing and executing unit, integration, and functional tests that align with defined test scenarios. You understand the importance of validation and compliance in your work, and you strive to maintain high standards in all aspects of your role. After release, you take responsibility for monitoring system performance, alerts, and service level objectives (SLOs) to ensure optimal functionality and reliability.
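The testing discipline described above might look like this minimal sketch; the validator, its rules, and the test scenarios are hypothetical illustrations, not Kpler's actual code:

```python
def validate_record(record):
    """Hypothetical validator: a vessel record needs a name and a positive length."""
    return bool(record.get("name")) and record.get("length_m", 0) > 0

# Unit tests aligned with defined test scenarios (scenarios are illustrative):
assert validate_record({"name": "Aurora", "length_m": 180})       # valid record
assert not validate_record({"length_m": 180})                     # missing name
assert not validate_record({"name": "Aurora", "length_m": 0})     # non-positive length
print("all checks passed")
```

In practice these assertions would live in a test suite (e.g. pytest) alongside integration and functional tests, rather than inline.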

Desirable

Experience with cloud platforms and data warehousing solutions would be a plus, as would familiarity with data visualization tools. You are a proactive communicator, able to collaborate effectively with cross-functional teams to deliver impactful results.

What you'll do

In this role, you will be responsible for building and maintaining Kpler's core datasets, which are crucial for providing valuable insights to clients in the commodities, energy, and maritime sectors. You will create and maintain REST APIs that facilitate data access and integration, ensuring that clients can easily navigate complex markets.

You will develop streaming pipelines using Kafka to handle real-time data processing, as well as batch pipelines with Apache Spark for more extensive data operations. Your work will involve end-to-end ownership of development tasks, from understanding requirements to deploying code and conducting thorough testing.
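As a rough illustration of the streaming pattern, a standard-library queue can stand in for a Kafka topic; the event shape and handler below are hypothetical:

```python
# Toy streaming pipeline: a producer feeds events into a queue and a
# consumer processes them one at a time - a stand-in for a Kafka
# topic/consumer pair. Event fields are hypothetical.
import queue

topic = queue.Queue()  # stands in for a Kafka topic

def produce(events):
    for event in events:
        topic.put(event)

def consume(handler):
    """Drain the topic, applying the handler to each event in order."""
    processed = []
    while not topic.empty():
        processed.append(handler(topic.get()))
    return processed

produce([{"vessel": "Aurora", "lat": 51.2}, {"vessel": "Borealis", "lat": 36.1}])
results = consume(lambda event: event["vessel"])
print(results)  # ['Aurora', 'Borealis']
```

A real Kafka consumer would instead poll a broker and track offsets, but the ordering and per-event handling shown here are the core of the pattern.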

You will collaborate closely with other engineers and stakeholders to ensure that the data processing components you build meet the needs of the organization and its clients. Your role will also include monitoring system performance and responding to alerts to maintain optimal functionality.

What we offer

At Kpler, you will join a team of over 700 experts from more than 35 countries, all dedicated to transforming intricate data into actionable strategies. We provide a supportive environment where you can leverage cutting-edge innovation for impactful results. You will have the opportunity to work on user-friendly platforms that simplify global trade information and empower organizations to make informed decisions.

We encourage you to apply even if your experience doesn't match every requirement. Join us in our mission to deliver top-tier intelligence and help our clients stay ahead in a dynamic market landscape.

Interested in this role?

Apply now or save it for later. Get alerts for similar jobs at Kpler.

Apply Now → · Get Job Alerts