Cartesia

About Cartesia

AI solutions for real-time, multimodal applications

🏢 Tech · 👥 11-50 employees · 📅 Founded 2023 · 📍 San Francisco, California, United States

Key Highlights

  • Founded by former Stanford researchers
  • $27 million raised from Index Ventures, Lightspeed, and General Catalyst
  • Specializes in real-time multimodal AI models
  • Headquartered in San Francisco, California

Cartesia, based in San Francisco, California, is at the forefront of artificial intelligence, specializing in real-time multimodal models that enhance performance while minimizing computational resource usage. Their advanced state space models (SSMs) power applications like ultra-realistic generative...

🎁 Benefits

Employees enjoy competitive salaries, equity options, flexible PTO, and a remote-friendly work policy that supports work-life balance.

🌟 Culture

Cartesia fosters a culture of innovation and collaboration, driven by a commitment to pushing the boundaries of AI technology while maintaining a focus...


Researcher: Multimodal

Cartesia • HQ - San Francisco, CA

Posted 1 year ago · 🏛️ On-site · AI Research Engineer · 📍 San Francisco

Job Description

About Cartesia

Our mission is to build the next generation of AI: ubiquitous, interactive intelligence that runs wherever you are. Today, not even the best models can continuously process and reason over a year-long stream of audio, video and text—1B text tokens, 10B audio tokens and 1T video tokens—let alone do this on-device.
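
For scale, a quick back-of-envelope (assuming one year ≈ 3.15 × 10⁷ seconds; the per-second rates below are extrapolated from the posting's totals, not stated in it):

```python
# Convert the year-long stream totals quoted above into sustained
# per-second token rates. One year is ~3.15e7 seconds.
SECONDS_PER_YEAR = 365 * 24 * 3600

tokens_per_year = {
    "text":  1e9,    # 1B text tokens
    "audio": 10e9,   # 10B audio tokens
    "video": 1e12,   # 1T video tokens
}

for modality, total in tokens_per_year.items():
    print(f"{modality:>5}: ~{total / SECONDS_PER_YEAR:,.0f} tokens/sec")

# text:  ~32 tokens/sec
# audio: ~317 tokens/sec
# video: ~31,710 tokens/sec
# Attention that scales quadratically in context length is hopeless at
# ~1e12-token contexts, which is the motivation for new architectures.
```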

We're pioneering the model architectures that will make this possible. Our founding team met as PhDs at the Stanford AI Lab, where we invented State Space Models (SSMs), a new primitive for training efficient, large-scale foundation models. Our team combines deep expertise in model innovation and systems engineering with a design-minded product engineering team to build and ship cutting-edge models and experiences.
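
For the unfamiliar, the core idea of an SSM is a linear recurrence over a fixed-size hidden state, so memory stays constant no matter how long the stream gets. A minimal illustrative sketch (not Cartesia's implementation; real SSM layers such as S4 or Mamba add learned discretization, structured state matrices, and parallel scans):

```python
import numpy as np

def ssm_scan(A, B, C, xs):
    """Discrete state space model: h_t = A h_{t-1} + B x_t, y_t = C h_t."""
    h = np.zeros(A.shape[0])
    ys = []
    for x in xs:             # one step per incoming token/frame
        h = A @ h + B @ x    # update the fixed-size state
        ys.append(C @ h)     # emit an output for this step
    return np.stack(ys)

# Toy usage: 4-dim state, 2-dim inputs/outputs, a 1000-step stream.
rng = np.random.default_rng(0)
A = 0.9 * np.eye(4)                  # stable dynamics (|eigenvalues| < 1)
B = 0.1 * rng.normal(size=(4, 2))
C = rng.normal(size=(2, 4))
ys = ssm_scan(A, B, C, rng.normal(size=(1000, 2)))
print(ys.shape)  # (1000, 2)
```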

We're funded by leading investors at Index Ventures and Lightspeed Venture Partners, along with Factory, Conviction, A Star, General Catalyst, SV Angel, Databricks and others. We're fortunate to have the support of many amazing advisors, and 90+ angels across many industries, including the world's foremost experts in AI.

Your Impact

  • Conduct cutting-edge research at the intersection of machine learning, multimodal data, and generative modeling to advance the state of AI across audio, text, vision, and other modalities.

  • Develop novel algorithms for multimodal understanding and generation, leveraging new architectures, training algorithms, datasets, and inference techniques.

  • Design and build models that enable seamless integration of modalities for multimodal reasoning on streaming data.

  • Lead the creation of robust evaluation frameworks to benchmark model performance on multimodal datasets and tasks.

  • Collaborate closely with cross-functional teams to translate research breakthroughs into impactful products and applications.

What You Bring

  • Expertise in machine learning, multimodal learning, and generative modeling, with a strong research track record in top-tier conferences (e.g., CVPR, ICML, NeurIPS, ICCV).

  • Proficiency in deep learning frameworks such as PyTorch or TensorFlow, with experience in handling diverse data modalities (e.g., audio, video, text); see the toy sketch after this list.

  • Strong understanding of state-of-the-art techniques for multimodal modeling, such as autoregressive and diffusion modeling, and deep understanding of architectural tradeoffs.

  • Passion for exploring the interplay between modalities to solve complex problems and create groundbreaking applications.

  • Excellent problem-solving skills, with the ability to independently tackle research challenges and collaborate effectively with multidisciplinary teams.
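
As a concrete (and entirely invented) illustration of handling multiple modalities in PyTorch, the toy late-fusion sketch below projects audio features and text tokens into a shared width and attends over the concatenated sequence. All names and dimensions are made up; this is not Cartesia's architecture:

```python
import torch
import torch.nn as nn

class ToyMultimodalFusion(nn.Module):
    """Toy late fusion: per-modality encoders into a shared width,
    then one Transformer layer attends across the joint sequence."""
    def __init__(self, d_model=64, audio_dim=80, vocab=1000):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, d_model)  # e.g. mel frames
        self.text_embed = nn.Embedding(vocab, d_model)
        self.fuse = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True)
        self.head = nn.Linear(d_model, vocab)            # next-token logits

    def forward(self, audio, text_ids):
        a = self.audio_proj(audio)                 # (B, T_a, d_model)
        t = self.text_embed(text_ids)              # (B, T_t, d_model)
        fused = self.fuse(torch.cat([a, t], dim=1))
        return self.head(fused[:, -1])             # predict from last position

model = ToyMultimodalFusion()
logits = model(torch.randn(2, 50, 80), torch.randint(0, 1000, (2, 12)))
print(logits.shape)  # torch.Size([2, 1000])
```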

Nice to Have

  • Experience working with multimodal datasets, such as audio-visual datasets, video-captioning datasets, or large-scale cross-modal corpora.

  • Background in designing or deploying real-time multimodal systems in resource-constrained environments.

  • Early-stage startup experience or experience working in fast-paced R&D environments.

What We Offer

🍽 Lunch, dinner and snacks at the office.

🏥 Fully covered medical, dental, and vision insurance for employees.

🏦 401(k).

✈️ Relocation and immigration support.

🦖 Your own personal Yoshi.

Our Culture

🏢 We’re an in-person team based out of San Francisco. We love being in the office, hanging out together, and learning from each other every day.

🚢 We ship fast. All of our work is novel and cutting edge, and execution speed is paramount. We have a high bar, and we don’t sacrifice quality or design along the way.

🤝 We support each other. We have an open & inclusive culture that’s focused on giving everyone the resources they need to succeed.
