Siri – Machine Learning Engineer, Speech and Audio Systems


Job Overview

Key Qualifications

  • 5+ years of experience applying deep learning technologies to computer vision, speech recognition, natural language understanding, or related fields
  • Proficiency in programming languages including but not limited to Python/C/C++
  • Extensive experience with machine learning frameworks such as TensorFlow and PyTorch
  • Strong understanding of embedded systems and server-side engineering optimization
  • Outstanding written and verbal communication, with the ability to work well in multi-functional teams
  • Experience with speech recognition and TinyML is preferred

Description

    You will be part of a team responsible for developing and integrating Siri’s speech and audio experience across the full range of Apple devices. Your focus will be to continuously improve the invocation experience of the Siri voice assistant on all Apple devices worldwide through technological advances. You will develop new machine learning (ML) technologies and agile deployment processes that scale more easily using the best automation practices. This position requires a passion for improving the ML training and evaluation infrastructure to increase research productivity and speed up modeling iterations. You will work with the speech, audio, hardware, and software engineering teams to deliver a great speech user experience. You must have a “make this happen” attitude and a willingness to work hands-on on building tools, testing, collecting data, and running experiments, as well as on state-of-the-art speech and audio processing algorithms. You should thrive in a fast-paced environment with constantly evolving priorities and collaborate well with other engineering teams at Apple.

    Education & Experience

    MS in Computer Science, Electrical Engineering, or a related field
