Databricks

About Databricks

Empowering data teams with unified analytics

🏢 Tech · 👥 1K–5K employees · 📅 Founded 2013 · 📍 San Francisco, California, United States

Key Highlights

  • Headquartered in San Francisco, CA
  • Valuation of $43 billion with $3.5 billion raised
  • Serves over 7,000 customers including Comcast and Shell
  • Utilizes Apache Spark for big data processing

Databricks, headquartered in San Francisco, California, is a unified data analytics platform that simplifies data engineering and collaborative data science. Trusted by over 7,000 organizations, including Fortune 500 companies like Comcast and Shell, Databricks has raised $3.5 billion in funding, reaching a valuation of $43 billion.

🎁 Benefits

Databricks offers competitive salaries, equity options, generous PTO policies, and a remote-friendly work environment.

🌟 Culture

Databricks fosters a culture of innovation with a strong emphasis on data-driven decision-making. The company values collaboration across teams.


Staff Software Engineer, Foundational Model Serving

Databricks • San Francisco, California

Posted 2 months ago · 🏛️ On-Site · Lead / Staff Engineer · 📍 San Francisco, California

Job Description

At Databricks, we are passionate about enabling data teams to solve the world's toughest problems — from making the next mode of transportation a reality to accelerating the development of medical breakthroughs. We do this by building and running the world's best data and AI infrastructure platform so our customers can use deep data insights to improve their business. 

 

Foundation Model Serving is the API product for hosting and serving frontier AI model inference, covering open-source models like Llama, Qwen, and GPT OSS as well as proprietary models like Claude and OpenAI GPT. For this role, no prior ML or AI experience is necessary. We’re looking for engineers who have owned high-scale, operationally sensitive systems such as customer-facing APIs, edge gateways, or ML inference services, and who are interested in going deep on building LLM APIs and runtimes at scale.
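
To make the product shape concrete: a foundation model serving API is typically exposed as an OpenAI-compatible HTTP endpoint that clients call with a model name and a list of chat messages. The sketch below is illustrative only and is not Databricks' actual API surface; the base URL, token, and model identifier are placeholder assumptions.

```python
# Minimal sketch of calling an OpenAI-compatible model-serving endpoint.
# The base_url, api_key, and model name are placeholders, not real Databricks
# values; any OpenAI-compatible server would accept the same request shape.
from openai import OpenAI

client = OpenAI(
    base_url="https://example.com/serving-endpoints",  # hypothetical endpoint
    api_key="YOUR_TOKEN",                               # hypothetical token
)

response = client.chat.completions.create(
    model="llama-3-70b-instruct",  # placeholder model name
    messages=[{"role": "user", "content": "Explain continuous batching in one paragraph."}],
    max_tokens=256,
    temperature=0.2,
)
print(response.choices[0].message.content)
```

In practice, the serving layer behind such an endpoint handles batching, GPU scheduling, and autoscaling, while the client-facing contract stays this small.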

 

As a Staff Engineer, you’ll play a critical role in shaping both the product experience and core infrastructure. You will design and build systems that enable high-throughput, low-latency inference on GPU workloads with frontier models, influence architectural direction, and collaborate closely across platform, product, infrastructure, and research teams to deliver a world-class foundation model API product.

 

The impact you will have:

 

  • Design and implement core systems and APIs that power Databricks Foundation Model Serving, ensuring scalability, reliability, and operational excellence.
  • Partner with product and engineering leadership to define the technical roadmap and long-term architecture for serving workloads.
  • Drive architectural decisions and trade-offs to optimize performance, throughput, autoscaling, and operational efficiency for GPU serving workloads.
  • Contribute directly to key components across the serving infrastructure — from working in systems like vLLM and SGLang to creating token-based rate limiters and optimizers — ensuring smooth and efficient operations at scale (see the token-bucket sketch after this list).
  • Collaborate cross-functionally with product, platform, and research teams to translate customer needs into reliable and performant systems.
  • Establish best practices for code quality, testing, and operational readiness, and mentor other engineers through design reviews and technical guidance.
  • Represent the team in cross-organizational technical discussions and influence Databricks’ broader AI platform strategy.
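
As a rough illustration of the token-based rate limiting mentioned above, here is a sketch of the general token-bucket idea, not Databricks' implementation; the class, parameters, and numbers are hypothetical.

```python
import time
import threading

class TokenBucket:
    """Hypothetical token-bucket limiter: admit a request only if its
    estimated token cost fits in the budget accumulated so far."""

    def __init__(self, tokens_per_second: float, burst: float):
        self.rate = tokens_per_second      # steady-state refill rate
        self.capacity = burst              # maximum burst budget
        self.available = burst             # current budget
        self.last_refill = time.monotonic()
        self.lock = threading.Lock()

    def try_acquire(self, cost: float) -> bool:
        with self.lock:
            now = time.monotonic()
            # Refill proportionally to elapsed time, capped at capacity.
            self.available = min(
                self.capacity,
                self.available + (now - self.last_refill) * self.rate,
            )
            self.last_refill = now
            if cost <= self.available:
                self.available -= cost
                return True
            return False

# Example: cap an endpoint at ~10,000 tokens/sec with 20,000-token bursts.
limiter = TokenBucket(tokens_per_second=10_000, burst=20_000)
if limiter.try_acquire(cost=512):   # 512 = estimated tokens for this request
    pass  # forward the request to the serving backend
else:
    pass  # reject, e.g. with HTTP 429 and a retry-after hint
```

In a real serving stack, the cost would typically be an estimate of prompt plus generated tokens, reconciled against actual usage after the response completes.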

 

What we look for:

 

  • 10+ years of experience building and operating large-scale distributed systems.
  • Experience leading high-scale, operationally sensitive backend systems.
  • A track record of up-leveling teams' engineering excellence.
  • Strong foundation in algorithms, data structures, and system design as applied to large-scale, low-latency serving systems.
  • Proven ability to deliver technically complex, high-impact initiatives that create measurable customer or business value.
  • Strong communication skills and ability to collaborate across teams in fast-moving environments.
  • Strategic and product-oriented mindset with the ability to align technical execution with long-term vision.
  • Passion for mentoring, growing engineers, and fostering technical excellence.

 

Pay Range Transparency

Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in, visit our page here.

 

Local Pay Range
$192,000–$260,000 USD

About Databricks

Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.

Benefits

At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks. 

Our Commitment to Diversity and Inclusion

At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.

Compliance

If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
