AI/ML – Engineering Program Manager, Search Human Annotation

Apple

Job Overview

Key Qualifications

  • 5+ years of experience managing a large program spanning cross-functional product, data science, or software engineering teams
  • Proven ability to set direction and work across a large number of partners to implement processes and drive efficient outcomes in a data-driven environment
  • Experience in experimental design, industry research, or academic research
  • Self-motivated and proactive, with the demonstrated process-optimization skills required to forge a path to success
  • Phenomenal attention to detail and strong organizational skills
  • Strong curiosity and creativity in solving challenges surfaced in human annotations across different human evaluation scenarios
  • Extraordinary communication and presentation skills, written and verbal, to all levels of an organization
  • The ability to build a new program/process from the ground up, including defining requirements, driving agreement across multiple teams, and executing to successful completion
    Description

    In this role, you will lead the Search human annotation program: roadmap planning, annotation strategy, budget, and partnership management to scale Siri human annotation in support of the next generation of search products through excellence in human-annotation-based evaluation. You will also be the specialist on human annotation, raising the bar for effective and efficient use of human judgment to power better product evaluation and development in the AI/ML org. You will work with world-class data scientists, machine learning engineers, evaluation tooling development engineers, the Human Annotation Operations team, and product teams to build the highest-quality user experience that over 1 billion Apple customers love.

      • Lead the roadmap and detailed execution of future human annotation platforms and programs to deliver world-class offline evaluation of search quality, in close collaboration with data scientists on the Search analytics team
      • Contribute to brainstorming on continuous improvement of human evaluation methodology, as well as solutions and workarounds for human evaluation challenges (e.g., on-device search evaluation)
      • Drive organization-wide initiatives to scale human annotation tooling and processes
      • Lead allocation of grading resources across Siri teams
      • Partner with procurement and vendors to meet human annotation resource demand
      • Collaborate with the Search analytics, Engineering, Privacy, Siri Annotation Operations, and Evaluation Tooling and Platform teams to manage dependencies, requirements, and execution, ensuring excellence in human-grading-based quality evaluation
      • Organize data-sharing initiatives for Siri teams to maximize the utilization and value of our human grading data assets
      • Establish efficacy and efficiency standard methodologies for human annotation in the Siri org

    Education & Experience

    BA/BS, MA/MS, or PhD in Linguistics, Communication Studies, Journalism, Information Science, Computer Science, Computer Engineering, or a related field, or equivalent experience
