Liquid AI

About Liquid AI

Efficient AI for a smarter future

🏢 Tech · 👥 51-250 · 📅 Founded 2023 · 📍 Cambridge, Massachusetts, United States

Key Highlights

  • Headquartered in Cambridge, Massachusetts
  • 200+ employees with expertise in AI/ML
  • Specializes in Liquid Foundation Models (LFMs)
  • Focus on reducing computational overhead for AI solutions

Liquid AI, headquartered in Cambridge, Massachusetts, specializes in general-purpose artificial intelligence through its innovative Liquid Foundation Models (LFMs). These models are designed to deliver high performance while significantly reducing memory and computing resource requirements, making a...

🎁 Benefits

Employees enjoy competitive salaries, equity options, generous PTO policies, and opportunities for remote work. Liquid AI also offers a learning budge...

🌟 Culture

Liquid AI fosters a culture of innovation and efficiency, focusing on optimizing AI capabilities while minimizing resource usage. The company values c...


Member of Technical Staff - ML Inference Engineer, PyTorch

Liquid AI • San Francisco

Posted 5 months ago · 🏛️ On-Site · Mid-Level · Machine Learning Engineer · 📍 San Francisco
Apply Now →

Job Description

Work With Us

At Liquid, we’re not just building AI models—we’re redefining the architecture of intelligence itself. Spun out of MIT, our mission is to build efficient AI systems at every scale. Our Liquid Foundation Models (LFMs) operate where others can’t: on-device, at the edge, under real-time constraints. We’re not iterating on old ideas—we’re architecting what comes next.

We believe great talent powers great technology. The Liquid team is a community of world-class engineers, researchers, and builders creating the next generation of AI. Whether you're helping shape model architectures, scaling our dev platforms, or enabling enterprise deployments—your work will directly shape the frontier of intelligent systems.

This Role Is For You If:

  • You have experience building large-scale production stacks for model serving

  • You have a solid understanding of ragged batching, dynamic load balancing, KV-cache management, and other multi-tenant serving techniques

  • You have experience applying quantization strategies (e.g., FP8, INT4) while safeguarding model accuracy

  • You have deployed models in both single-GPU and multi-GPU environments and can diagnose performance issues across the stack
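The quantization strategies named above (FP8, INT4) trade numerical precision for memory and bandwidth. As a minimal illustration of that trade-off — a toy NumPy sketch, not Liquid's stack or any production quantization scheme — symmetric per-tensor quantization shows reconstruction error growing as bit width shrinks:

```python
import numpy as np

def quantize_symmetric(w, bits=8):
    """Symmetric per-tensor quantization: map floats onto signed integers."""
    qmax = 2 ** (bits - 1) - 1                 # 127 for int8, 7 for int4
    scale = np.abs(w).max() / qmax             # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from integers and the scale."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)

for bits in (8, 4):
    q, s = quantize_symmetric(w, bits)
    err = np.abs(w - dequantize(q, s)).mean()
    print(f"int{bits}: mean abs reconstruction error {err:.4f}")
```

Real serving stacks go further — per-channel or per-group scales, calibration data, and accuracy evaluation on downstream tasks — which is the "safeguarding model accuracy" part of the requirement.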

Desired Experience:

  • PyTorch

  • Python

  • Model-serving frameworks (e.g. TensorRT, vLLM, SGLang)
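Frameworks like vLLM and SGLang are built around the KV-cache management mentioned earlier: past keys and values are stored per sequence so each decode step only computes projections for the newest token. A toy Python sketch of that idea (hypothetical class, not any framework's actual API):

```python
import numpy as np

class KVCache:
    """Toy per-sequence KV cache: keep past keys/values so each decode
    step appends one token instead of recomputing the whole prefix."""

    def __init__(self, n_heads, head_dim):
        self.k = np.empty((n_heads, 0, head_dim), dtype=np.float32)
        self.v = np.empty((n_heads, 0, head_dim), dtype=np.float32)

    def append(self, k_new, v_new):
        # k_new, v_new: (n_heads, 1, head_dim) for one generated token
        self.k = np.concatenate([self.k, k_new], axis=1)
        self.v = np.concatenate([self.v, v_new], axis=1)
        return self.k, self.v

cache = KVCache(n_heads=2, head_dim=4)
rng = np.random.default_rng(0)
for step in range(3):
    k = rng.random((2, 1, 4)).astype(np.float32)
    v = rng.random((2, 1, 4)).astype(np.float32)
    k_all, v_all = cache.append(k, v)

print(k_all.shape)  # (2, 3, 4): keys for all 3 generated tokens
```

Production servers complicate this picture considerably — paged or block-based cache allocation, eviction across many tenants, and ragged batches of sequences at different lengths — which is exactly the systems work this role describes.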

What You'll Actually Do:

  • Optimize and productionize the end-to-end pipeline for GPU model inference around Liquid Foundation Models (LFMs)

  • Facilitate the development of next-generation Liquid Foundation Models from the lens of GPU inference

  • Profile and harden the stack for different batching and serving requirements

  • Build and scale pipelines for test-time compute

What You'll Gain:

  • Hands-on experience with state-of-the-art technology at a leading AI company

  • Deeper expertise in machine learning systems and efficient large model inference

  • Opportunity to scale pipelines that directly influence user latency and experience with Liquid's models

  • A collaborative, fast-paced environment where your work directly shapes our products and the next generation of LFMs

About Liquid AI

Spun out of MIT CSAIL, we’re a foundation model company headquartered in Boston. Our mission is to build capable and efficient general-purpose AI systems at every scale—from phones and vehicles to enterprise servers and embedded chips. Our models are designed to run where others stall: on CPUs, with low latency, minimal memory, and maximum reliability. We’re already partnering with global enterprises across consumer electronics, automotive, life sciences, and financial services. And we’re just getting started.

Interested in this role?

Apply now or save it for later. Get alerts for similar jobs at Liquid AI.

Apply Now →
Get Job Alerts