AI/ML – Machine Learning Engineer, Accessibility

Apple

Job Overview

Key Qualifications

  • 5+ years of research experience in machine learning, speech, multimodal sensing, or related areas (e.g., sequence modeling, time-series modeling, sensor fusion, acoustic modeling, ASR, paralinguistics, etc.)
  • Experience applying machine learning to solve practical problems
  • Ability to quickly prototype new ideas and use creative approaches to solve sophisticated problems
  • Proficiency with ML tools such as PyTorch/TensorFlow/JAX, scikit-learn, etc.
  • Ability to collaborate closely with multi-functional teams

Description

Apple’s central AI/ML org is looking for Machine Learning Engineers who are passionate about using machine learning to build new user experiences focused on accessibility. The team you will join is responsible for researching and prototyping solutions that enable or improve input modalities for accessibility use cases. We balance strong technical skills with keen pragmatism to identify and explore problems centered on speech and multimodal sensing. This group is highly collaborative and partners with teams across Apple, including Siri, Accessibility, and others. In this role, you will work with speech or other time-series inputs, build appropriate data and modeling pipelines, apply a variety of machine learning techniques, help integrate models on-device to power new experiences, and work with other teams to iterate on the end-user experience.

Education & Experience

PhD in Applied Machine Learning, Speech, Multimodal Modeling, or a related field, or MS with a strong research track record
