You will be part of a team responsible for researching and developing Siri’s multi-modal experience across the full range of Apple devices. This position requires a passion for researching and developing multi-modal machine learning algorithms and systems. You will work with the speech, vision, and natural language understanding teams to deliver a great Siri user experience. You must have a “make this happen” attitude and a willingness to work hands-on on building machine learning tools, testing, data collection, and running experiments, as well as with state-of-the-art computer vision, speech, and natural language processing algorithms.

Your key responsibilities in this role are:
- Research, design, and implementation of machine learning/deep learning algorithms
- Benchmarking and fine-tuning of machine learning/deep learning algorithms
- Optimizing algorithms for real-time and low-power constraints on embedded devices
- Supporting algorithm integration into Apple products
- Collaborating with multidisciplinary teams across Apple

Because you’ll be working closely with engineers from a number of other teams at Apple, you’re a team player who thrives in a fast-paced environment with rapidly changing priorities.
Education & Experience
MS/Ph.D. in Computer Science, Electrical Engineering, or a related field with a focus on machine learning, computer vision, speech processing, natural language understanding, or similar.