Liquid AI

About Liquid AI

Efficient AI for a smarter future

🏢 Tech · 👥 51-250 employees · 📅 Founded 2023 · 📍 Cambridge, Massachusetts, United States

Key Highlights

  • Headquartered in Cambridge, Massachusetts
  • 200+ employees with expertise in AI/ML
  • Specializes in Liquid Foundation Models (LFMs)
  • Focus on reducing computational overhead for AI solutions

Liquid AI, headquartered in Cambridge, Massachusetts, specializes in general-purpose artificial intelligence through its innovative Liquid Foundation Models (LFMs). These models are designed to deliver high performance while significantly reducing memory and computing resource requirements, making a...

🎁 Benefits

Employees enjoy competitive salaries, equity options, generous PTO policies, and opportunities for remote work. Liquid AI also offers a learning budge...

🌟 Culture

Liquid AI fosters a culture of innovation and efficiency, focusing on optimizing AI capabilities while minimizing resource usage. The company values c...


Member of Technical Staff - Training Infrastructure Engineer

Liquid AI • San Francisco

Posted 5 months ago · 🏛️ On-Site · Mid-Level · AI Engineer · 📍 San Francisco

Job Description

Work With Us

At Liquid, we’re not just building AI models—we’re redefining the architecture of intelligence itself. Spun out of MIT, our mission is to build efficient AI systems at every scale. Our Liquid Foundation Models (LFMs) operate where others can’t: on-device, at the edge, under real-time constraints. We’re not iterating on old ideas—we’re architecting what comes next.

We believe great talent powers great technology. The Liquid team is a community of world-class engineers, researchers, and builders creating the next generation of AI. Whether you're helping shape model architectures, scaling our dev platforms, or enabling enterprise deployments—your work will directly shape the frontier of intelligent systems.

While San Francisco and Boston are preferred, we are open to other locations.

This Role Is For You If:

  • You have extensive experience building distributed training infrastructure for language and multimodal models, with hands-on expertise in frameworks like PyTorch Distributed, DeepSpeed, or Megatron-LM (a minimal PyTorch Distributed sketch follows this list)

  • You're passionate about solving complex systems challenges in large-scale model training—from efficient multimodal data loading to sophisticated sharding strategies to robust checkpointing mechanisms

  • You have a deep understanding of hardware accelerators and networking topologies, with the ability to optimize communication patterns for different parallelism strategies

  • You're skilled at identifying and resolving performance bottlenecks in training pipelines, whether they occur in data loading, computation, or communication between nodes

  • You have experience working with diverse data types (text, images, video, audio) and can build data pipelines that handle heterogeneous inputs efficiently
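
To make the infrastructure side of the first bullet concrete, here is a minimal, illustrative sketch of a distributed data-parallel training loop with PyTorch Distributed (DDP), launched via torchrun. The linear model and random tensors are stand-in placeholders, not anything Liquid AI uses; a real stack at LFM scale would layer sharding (FSDP, tensor/pipeline parallelism), mixed precision, and checkpointing on top of this skeleton.

```python
# Minimal, illustrative DDP training loop. Launch with:
#   torchrun --nproc_per_node=<gpus> this_script.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each worker process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # stand-in model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    # DistributedSampler gives each rank a disjoint shard of the dataset.
    data = TensorDataset(torch.randn(4096, 1024), torch.randn(4096, 1024))
    sampler = DistributedSampler(data)
    loader = DataLoader(data, batch_size=32, sampler=sampler)

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            loss = torch.nn.functional.mse_loss(model(x), y)
            optimizer.zero_grad()
            loss.backward()  # gradients are all-reduced across ranks here
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```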

Desired Experience:

  • You've implemented custom sharding techniques (tensor/pipeline/data parallelism) to scale training across distributed GPU clusters of varying sizes

  • You have experience optimizing data pipelines for multimodal datasets with sophisticated preprocessing requirements

  • You've built fault-tolerant checkpointing systems that can handle complex model states while minimizing training interruptions (see the checkpointing sketch after this list)

  • You've contributed to open-source training infrastructure projects or frameworks

  • You've designed training infrastructure that works efficiently for both parameter-efficient specialized models and massive multimodal systems
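
The checkpointing bullet above is the kind of mechanism the sketch below gestures at: rank-aware, atomic checkpoint writes with a simple resume path. The directory layout and resume policy here are assumptions for illustration, not Liquid AI's actual system; production setups typically also shard optimizer state, validate checkpoints asynchronously, and prune old ones.

```python
# Illustrative rank-aware, atomic checkpointing helpers (assumed layout:
# one file per checkpoint under a local "checkpoints" directory).
import os
import tempfile
import torch
import torch.distributed as dist

def save_checkpoint(step, model, optimizer, ckpt_dir="checkpoints"):
    if dist.is_initialized() and dist.get_rank() != 0:
        dist.barrier()  # wait until rank 0 has finished writing
        return
    os.makedirs(ckpt_dir, exist_ok=True)
    state = {
        "step": step,
        "model": model.state_dict(),
        "optimizer": optimizer.state_dict(),
    }
    # Write to a temporary file, then rename: a crash mid-write never
    # leaves a truncated checkpoint behind.
    fd, tmp_path = tempfile.mkstemp(dir=ckpt_dir)
    with os.fdopen(fd, "wb") as f:
        torch.save(state, f)
    os.replace(tmp_path, os.path.join(ckpt_dir, f"step_{step:08d}.pt"))
    if dist.is_initialized():
        dist.barrier()

def load_latest_checkpoint(model, optimizer, ckpt_dir="checkpoints"):
    if not os.path.isdir(ckpt_dir):
        return 0  # nothing to resume from; start at step 0
    ckpts = sorted(p for p in os.listdir(ckpt_dir) if p.endswith(".pt"))
    if not ckpts:
        return 0
    state = torch.load(os.path.join(ckpt_dir, ckpts[-1]), map_location="cpu")
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    return state["step"]
```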

What You'll Actually Do:

  • Design and implement high-performance, scalable training infrastructure that efficiently utilizes our GPU clusters for both specialized and large-scale multimodal models

  • Build robust data loading systems that eliminate I/O bottlenecks and enable training on diverse multimodal datasets

  • Develop sophisticated checkpointing mechanisms that balance memory constraints with recovery needs across different model scales

  • Optimize communication patterns between nodes to minimize the overhead of distributed training for long-running experiments

  • Collaborate with ML engineers to implement new model architectures and training algorithms at scale

  • Create monitoring and debugging tools to ensure training stability and resource efficiency across our infrastructure (a small throughput-monitoring sketch follows this list)
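
As a purely hypothetical example of the monitoring point above, the snippet below tracks cluster-wide training throughput by all-reducing per-rank step rates; a real tool would export such metrics to a monitoring backend and alert on regressions rather than return them inline.

```python
# Hypothetical training-health metric: cluster-wide samples/sec per step.
import time
import torch
import torch.distributed as dist

class ThroughputMonitor:
    def __init__(self, device):
        self.device = device
        self.last = time.perf_counter()

    def step(self, samples_this_step: int) -> float:
        """Return aggregate samples/sec for the step that just finished."""
        now = time.perf_counter()
        local_rate = samples_this_step / max(now - self.last, 1e-9)
        self.last = now
        rate = torch.tensor([local_rate], device=self.device)
        if dist.is_initialized():
            # Sum per-rank rates to get total cluster throughput.
            dist.all_reduce(rate, op=dist.ReduceOp.SUM)
        return rate.item()
```

In a loop like the DDP sketch earlier, one would call monitor.step(batch_size) after each optimizer step and watch for sustained drops, which usually point to data-loading stalls or degraded inter-node communication.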

What You'll Gain:

  • The opportunity to solve some of the hardest systems challenges in AI, working at the intersection of distributed systems and cutting-edge multimodal machine learning

  • Experience building infrastructure that powers the next generation of foundation models across the full spectrum of model scales

  • The satisfaction of seeing your work directly enable breakthroughs in model capabilities and performance

About Liquid AI

Spun out of MIT CSAIL, we’re a foundation model company headquartered in Boston. Our mission is to build capable and efficient general-purpose AI systems at every scale—from phones and vehicles to enterprise servers and embedded chips. Our models are designed to run where others stall: on CPUs, with low latency, minimal memory, and maximum reliability. We’re already partnering with global enterprises across consumer electronics, automotive, life sciences, and financial services. And we’re just getting started.
