Leethub

Curated tech jobs from FAANG and top companies worldwide.

Amazon

About Amazon

The everything store and cloud computing leader

🏢 Tech · 👥 1001+ employees · 📅 Founded 1994 · 📍 South Lake Union, Seattle, WA · ⭐ 3.7
B2C · B2B · Marketplace · Cloud Computing · eCommerce

Key Highlights

  • Headquartered in South Lake Union, Seattle, WA
  • Over 1.5 million employees worldwide
  • Leading cloud services through Amazon Web Services (AWS)
  • Acquired Whole Foods, Twitch, and Ring

Amazon, headquartered in South Lake Union, Seattle, WA, is the world's largest online retailer and a leader in cloud computing through Amazon Web Services (AWS). With over 1.5 million employees globally, Amazon operates across many sectors, including AI with its Alexa devices and a vast online marketplace.

🎁 Benefits

Amazon offers competitive salaries, stock options, generous PTO policies, and comprehensive health benefits.

🌟 Culture

Amazon's culture is driven by customer obsession and a focus on innovation. The company encourages employees to think big and move fast.


Senior Inference Engineer, AGI Inference

Amazon • Cambridge, England, GBR

Posted 7 months ago · 🏛️ On-Site · Senior · AI Engineer · 📍 Cambridge
Apply Now →

Job Description

The Inference team at AGI is a group of innovative developers working on ground-breaking multi-modal inference solutions that revolutionize how AI systems perceive and interact with the world. We push the limits of inference performance to provide the best possible experience for our users across a wide range of applications and devices. We are looking for talented, passionate, and dedicated Inference Engineers to join our team and build innovative, mission-critical, high-volume production systems that will shape the future of AI. You will have an enormous opportunity to make an impact on the design, architecture, and implementation of cutting-edge technologies used every day, potentially by people you know.

Key job responsibilities

• Drive the technical strategy and roadmap for inference optimizations across AGI
• Develop high-performance inference software for a diverse set of neural models, typically in C/C++
• Optimize inference performance across various platforms (on-device, cloud-based CPU, GPU, proprietary ASICs)
• Collaborate closely with research scientists to bring next-generation neural models to life
• Partner with internal and external hardware teams to maximize platform utilization
• Work in an Agile environment to deliver high-quality software against tight schedules
• Mentor and grow technical talent

Basic qualifications

- 5+ years of non-internship professional software development experience
- 5+ years of experience leading the design or architecture (design patterns, reliability, and scaling) of new and existing systems
- 5+ years of programming experience with at least one software programming language
- Experience as a mentor, tech lead, or leader of an engineering team
- Bachelor's degree in Computer Science, Computer Engineering, or a related field
- 2+ years of experience optimizing neural models
- Deep expertise in C/C++ and low-level system optimization
- Proven track record of leading large-scale technical initiatives
- Solid understanding of deep learning architectures (CNNs, RNNs, Transformers, etc.)
- Experience with inference frameworks (PyTorch, TensorFlow, ONNX Runtime, TensorRT, llama.cpp, etc.)
- Strong communication skills and ability to work in a collaborative environment

Preferred qualifications

- Proficiency in kernel programming for accelerated hardware
- Experience with latency-sensitive optimizations and real-time inference
- Understanding of resource constraints on mobile/edge hardware
- Knowledge of model compression techniques (quantization, pruning, distillation, etc.)
- Experience with LLM efficiency techniques such as speculative decoding and long-context inference
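For readers unfamiliar with the model-compression techniques named above, here is a minimal, illustrative sketch (not part of the original posting) of symmetric int8 weight quantization in pure Python. Production inference stacks rely on framework support (for example, PyTorch's quantization APIs or ONNX Runtime's quantization tooling) rather than hand-rolled code like this; the sketch only shows the core idea of trading precision for memory and compute.

```python
def quantize_int8(weights):
    """Map float weights to int8 using a single symmetric scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    # Clamp to the int8 range [-128, 127] after rounding.
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.05, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each restored value is within one quantization step of the original.
assert all(abs(a - b) <= scale for a, b in zip(weights, restored))
```

Pruning and distillation, also listed above, attack the same cost problem differently: pruning removes low-magnitude weights entirely, while distillation trains a smaller model to mimic a larger one.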

Amazon is an equal opportunities employer. We believe passionately that employing a diverse workforce is central to our success. We make recruiting decisions based on your experience and skills. We value your passion to discover, invent, simplify and build. Protecting your privacy and the security of your data is a longstanding top priority for Amazon. Please consult our Privacy Notice (https://www.amazon.jobs/en/privacy_page) to know more about how we collect, use and transfer the personal data of our candidates.

Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Interested in this role?

Apply now or save it for later. Get alerts for similar jobs at Amazon.

Apply Now → · Get Job Alerts