Cartesia

About Cartesia

AI solutions for real-time, multimodal applications

🏢 Tech · 👥 11-50 · 📅 Founded 2023 · 📍 San Francisco, California, United States

Key Highlights

  • Founded by former Stanford researchers
  • $27 million raised from Index Ventures, Lightspeed, and General Catalyst
  • Specializes in real-time multimodal AI models
  • Headquartered in San Francisco, California

Cartesia, based in San Francisco, California, is at the forefront of artificial intelligence, specializing in real-time multimodal models that enhance performance while minimizing computational resource usage. Their advanced state space models (SSMs) power applications like ultra-realistic generativ...

🎁 Benefits

Employees enjoy competitive salaries, equity options, flexible PTO, and a remote-friendly work policy that supports work-life balance....

🌟 Culture

Cartesia fosters a culture of innovation and collaboration, driven by a commitment to pushing the boundaries of AI technology while maintaining a focu...


Researcher, Evals

Cartesia • HQ - San Francisco, CA

Posted 2 months ago · 🏛️ On-Site · Lead · AI Research Engineer · 📍 San Francisco

Job Description

About Cartesia

Our mission is to build the next generation of AI: ubiquitous, interactive intelligence that runs wherever you are. Today, not even the best models can continuously process and reason over a year-long stream of audio, video and text—1B text tokens, 10B audio tokens and 1T video tokens—let alone do this on-device.
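
For scale, here is a rough back-of-envelope check of those token counts. The per-second token rates below are illustrative assumptions, not figures from Cartesia, applied to the roughly 3.15 × 10⁷ seconds in a year.

```python
# Back-of-envelope check of tokens in a year-long multimodal stream.
# The per-second rates below are illustrative assumptions, not Cartesia's figures.

SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # ~3.15e7 seconds

assumed_tokens_per_second = {
    "text": 30,       # roughly conversational speech transcribed to text (assumption)
    "audio": 300,     # typical neural audio codec token rates (assumption)
    "video": 30_000,  # dense per-frame patch tokens at video frame rates (assumption)
}

for modality, rate in assumed_tokens_per_second.items():
    total = rate * SECONDS_PER_YEAR
    print(f"{modality:>5}: ~{total:.1e} tokens per year")

# Output is on the order of 1e9 (text), 1e10 (audio), and 1e12 (video) tokens,
# consistent with the 1B / 10B / 1T figures quoted above.
```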

We're pioneering the model architectures that will make this possible. Our founding team met as PhDs at the Stanford AI Lab, where we invented State Space Models (SSMs), a new primitive for training efficient, large-scale foundation models. Our team combines deep expertise in model innovation and systems engineering with a design-minded product engineering team to build and ship cutting-edge models and experiences.

We're funded by leading investors at Index Ventures and Lightspeed Venture Partners, along with Factory, Conviction, A Star, General Catalyst, SV Angel, Databricks and others. We're fortunate to have the support of many amazing advisors, and 90+ angels across many industries, including the world's foremost experts in AI.

About the Role

The New Horizons Evaluations team is reimagining how we measure progress in interactive machine intelligence. As Evaluations Lead, you will design evaluation frameworks that capture not just what models know — but how they reason, remember, and interact over time. You’ll work at the intersection of research, product, and infrastructure to develop metrics, systems, and studies that define what “intelligence” means in the next generation of AI. This role is ideal for someone who combines scientific rigor with technical execution, and who’s deeply curious about how people use — and want to use — intelligent systems. Your work will shape how Cartesia builds and evaluates frontier models, ensuring that progress isn’t measured solely by static benchmarks, but by deeper qualities like understanding, naturalness, and adaptability in real-world interaction.

Your Impact

  • Identify and define key model capabilities and behaviors that matter for next-generation model evals

  • Develop and implement new evaluation pipelines with robust statistical analysis and clear reporting (a minimal sketch follows this list)

  • Partner closely with model training and research teams to embed evaluation systems directly into model development loops

  • Prototype new user studies and behavioral experiments to ground evaluations in real-world use
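
To make the pipeline bullet above concrete, here is a minimal sketch of an evaluation loop that scores model outputs and reports a bootstrapped 95% confidence interval. The dataset shape, `model_fn`, and `score_fn` are hypothetical stand-ins, not Cartesia's actual tooling.

```python
# Minimal sketch of an evaluation pipeline with a bootstrapped confidence
# interval. The dataset format, model, and scoring function are hypothetical.
import random
from statistics import mean

def bootstrap_ci(scores, n_resamples=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean per-example score."""
    means = []
    for _ in range(n_resamples):
        resample = random.choices(scores, k=len(scores))
        means.append(mean(resample))
    means.sort()
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

def evaluate(model_fn, dataset, score_fn):
    """Run the model on every example, score each output in [0, 1], and report stats."""
    scores = [score_fn(model_fn(ex["input"]), ex["reference"]) for ex in dataset]
    lo, hi = bootstrap_ci(scores)
    return {"mean": mean(scores), "ci95": (lo, hi), "n": len(scores)}

# Example usage with stand-in components:
if __name__ == "__main__":
    dataset = [{"input": f"q{i}", "reference": f"q{i}"} for i in range(200)]
    model_fn = lambda x: x if random.random() < 0.8 else "wrong"   # toy "model"
    score_fn = lambda out, ref: 1.0 if out == ref else 0.0         # exact-match rubric
    print(evaluate(model_fn, dataset, score_fn))
```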

What You Bring

  • Experience designing or implementing evaluation frameworks for generative models (audio, text, or multimodal)

  • Strong technical and analytical skills, with the ability to take open-ended research ideas and translate them into production-ready systems

  • Creativity in defining novel quantitative metrics for subjective or behavioral qualities (one standard approach is sketched after this list)

  • Excitement for building evaluation systems that bridge research and real-world use

  • Curiosity and rigor in equal measure, with motivation driven by discovering how to measure meaningful progress in intelligent behavior
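
As one illustration of quantifying a subjective quality, pairwise human preference judgments (e.g. "which sample sounds more natural?") can be turned into per-model scores with a simple Bradley-Terry fit. This is a sketch of a standard technique, not a description of Cartesia's metrics.

```python
# Sketch: turn pairwise "which sample is more natural?" judgments into
# per-model scores by fitting a simple Bradley-Terry model. Illustrative only.
from collections import defaultdict

def bradley_terry(pairs, n_iters=200):
    """pairs: list of (winner, loser) model ids. Returns a strength per model."""
    models = {m for pair in pairs for m in pair}
    strength = {m: 1.0 for m in models}
    wins = defaultdict(int)
    for winner, _ in pairs:
        wins[winner] += 1
    for _ in range(n_iters):
        new = {}
        for m in models:
            # Minorization-maximization update: W_m / sum over m's comparisons
            # of 1 / (p_m + p_opponent).
            denom = 0.0
            for a, b in pairs:
                if m in (a, b):
                    other = b if m == a else a
                    denom += 1.0 / (strength[m] + strength[other])
            new[m] = wins[m] / denom if denom else strength[m]
        total = sum(new.values())
        strength = {m: v * len(models) / total for m, v in new.items()}  # normalize to mean 1
    return strength

# Example: model "B" preferred over "A" in 3 of 4 comparisons.
judgments = [("B", "A"), ("B", "A"), ("B", "A"), ("A", "B")]
print(bradley_terry(judgments))  # B's strength converges to ~3x A's
```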

Nice to Have

  • Understanding of model alignment concepts and evaluation approaches

What We Offer

🍽 Lunch, dinner, and snacks at the office

🏥 Fully covered medical, dental, and vision insurance for employees

🏦 401(k)

✈️ Relocation and immigration support

🦖 Your own personal Yoshi

Our Culture

🏢 We’re an in-person team based out of San Francisco. We love being in the office, hanging out together, and learning from each other every day.

🚢 We ship fast. All of our work is novel and cutting edge, and execution speed is paramount. We have a high bar, and we don’t sacrifice quality or design along the way.

🤝 We support each other. We have an open & inclusive culture that’s focused on giving everyone the resources they need to succeed.
