Apple

About Apple

The personal technology company redefining user experience

🏢 Tech, Hardware · 👥 1001+ employees · 📅 Founded 1976 · 📍 Cupertino, CA · ⭐ 4.2

B2C · B2B · Hardware · SaaS · Telecommunications · eCommerce

Key Highlights

  • Market cap of $3 trillion as of 2022
  • Over 1 billion active devices worldwide
  • Comprehensive medical plans including mental healthcare
  • Paid parental leave and gradual return-to-work program

Apple Inc. (NASDAQ: AAPL), headquartered in Cupertino, CA, is the world's most valuable company with a market capitalization of $3 trillion as of 2022. Known for its iconic products such as the iPhone, iPad, and Mac, Apple serves over 1 billion active devices globally. The company has a strong commi...

🎁 Benefits

Apple offers comprehensive medical plans covering physical and mental healthcare, paid parental leave, and a gradual return-to-work program. Employees...

🌟 Culture

Apple's culture emphasizes an obsessive focus on user experience and consumer privacy, setting it apart from competitors. The company promotes inclusi...


Generative AI Research Engineer, Multimodal, Agent Modeling - SIML

Apple • Cupertino, California, United States


Job Description

Are you passionate about Generative AI? Are you interested in working on groundbreaking generative modeling technologies that enrich the lives of billions of people? We are driving multiple initiatives focused on advancing generative models, and we are seeking candidates experienced in training, adapting, and deploying large-scale generative models. This role emphasizes AI safety, multimodal understanding and generation, and the development of agentic systems that push the boundaries of what AI can achieve responsibly.

We are the Intelligence System Experience (ISE) team within Apple's software organization. The team operates at the intersection of multimodal machine learning and system experiences. It oversees a range of experiences such as System Experience (Springboard, Settings), Image Generation, Genmoji, Writing Tools, Keyboards, Pencil & Paper, and Generative Shortcuts, all powered by production-scale ML workflows. Our multidisciplinary ML teams focus on a broad spectrum of areas, including Visual Generation Foundation Models; Multimodal Understanding; Visual Understanding of People, Text, Handwriting, and Scenes; Personalization; Knowledge Extraction; Conversation Analysis; Behavioral Modeling for Proactive Suggestions; and Privacy-Preserving Learning. These innovations form the foundation of the seamless, intelligent experiences our users enjoy every day.

We are looking for research engineers to architect and advance multimodal LLM and agentic AI technologies and to ensure their safe and responsible deployment in the real world. An ideal candidate will be able to lead diverse cross-functional efforts spanning ML modeling, prototyping, validation, and privacy-preserving learning. A strong foundation in machine learning and generative AI, along with a proven ability to translate research innovations into production-grade systems, is essential. Industry experience in Vision-Language multimodal modeling, Reinforcement and Preference Learning, Multimodal Safety, and Agentic AI Safety & Security is a significant plus.

Selected references to our team's work:

  • https://arxiv.org/pdf/2507.13575
  • https://arxiv.org/pdf/2407.21075
  • https://www.apple.com/newsroom/2024/12/apple-intelligence-now-features-image-playground-genmoji-and-more/

Description

We are looking for a candidate with a proven track record in applied ML research. Responsibilities include training large-scale multimodal (2D/3D vision-language) models on distributed backends, deploying efficient neural architectures on device and on private cloud compute, and addressing emerging safety challenges to make models and agents robust and aligned with human values. A key focus of the position is ensuring real-world quality, with emphasis on model and agent safety, fairness, and robustness. You will collaborate closely with ML researchers, software engineers, and hardware and design teams across multiple disciplines. The core responsibilities include advancing the multimodal capabilities of large language models and strengthening AI safety and security for agentic workflows. On the user experience front, the work will involve aligning image and video content to the space of LLMs for visual actions and multi-turn interactions, enabling rich, intuitive experiences powered by agentic AI systems.

Minimum Qualifications

  • M.S. or PhD in Electrical Engineering, Computer Science, or a related field (mathematics, physics, or computer engineering) with a focus on computer vision and/or machine learning, or comparable professional experience
  • Strong ML and Generative Modeling fundamentals
  • Experience with one or more of the following: pre-training or post-training of Multimodal-LLMs, Reinforcement Learning, Distillation
  • Familiarity with distributed training
  • Proficiency with ML toolkits, e.g., PyTorch
  • Awareness of the challenges associated with transitioning a prototype into a final product
  • Proven record of research innovation and demonstrated leadership in both applied research and development

Preferred Qualifications

  • Experience building and deploying AI agents, LLMs for tool use, and Multimodal-LLMs

Equal Employment Opportunity

Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.
