Minimum qualifications:
- Master's degree in Statistics, Data Science, Mathematics, Physics, Economics, Operations Research, Engineering, or a related quantitative field or equivalent practical experience.
- 5 years of experience in solving product or business problems, coding (e.g., Python, R, SQL), querying databases, or statistical analysis; or 3 years of experience with a PhD degree.
Preferred qualifications:
- Experience with machine learning (e.g., using ML tools, preparing training sets, training classifiers), large language models, predictive modeling, causal inference, statistical data analysis, or operations research with SQL and Python.
- Experience using mathematical techniques or statistical tools to find answers and translate results into business recommendations.
- Experience with fraud, security, and threat analysis in the context of Internet-related products and activities, especially with Generative AI.
- Excellent written and verbal communication skills, with the ability to self-direct and collaborate with stakeholders.
- Excellent teaching skills, with the ability to learn new techniques across offices and time zones.
- Excellent problem-solving and critical thinking skills with attention to detail.
About The Job
Trust and Safety is Google’s team of abuse-fighting and user trust experts working to make the internet a safer place. As a diverse team of Analysts, Policy Specialists, Technical Experts, and Program Managers, we work to reduce risk and fight abuse across all of Google’s products, protecting our users, advertisers, and publishers across the globe in over 40 languages. Within the Trust and Safety organization, Data Science is part of the Insights and UX teams, which leverage the power of data and research to deliver insights that inform decision-making, drive operational excellence, and foster user trust in Google products.
In this role, you will evaluate and improve Google's products for users through well-reasoned research and analysis. You will assess and quantify how far a data source can be trusted, apply critical thinking to the data, and think through the impact on a macro scale. You will collaborate and communicate with a multi-disciplinary team of engineers and abuse analysts on a wide range of problems. You will bring problem-solving excellence and statistical methods to understanding emerging risks, with a specific focus on responsible AI, testing standards, and red teaming. You will also have the opportunity to drive impact across a diverse range of projects, from measuring quality and developing abuse metrics to optimizing content moderation workflows, operational systems, and abuse protections with a data-driven focus.
The US base salary range for this full-time position is $150,000-$223,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target salaries for the position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Responsibilities
- Work with large data sets, solve analysis problems and apply advanced problem-solving methods as needed. Conduct end-to-end analysis including data gathering and requirements specification, processing, analysis, ongoing deliverables, and presentations.
- Build and prototype analysis pipelines to provide insights. Develop understanding of Google data structures and metrics, advocating for changes where needed.
- Interact cross-functionally with a variety of teams. Work with engineers to identify opportunities for, design, and assess improvements to Google products.
- Make business recommendations (e.g., cost-benefit, forecasting, experiment analysis) with presentations of findings at multiple stakeholder levels through visual quantitative information displays.
- Research and develop analysis, forecasting, and optimization methods to improve Google's user-facing Generative AI products and internal operations (e.g., AI testing standards, adversarial analysis, quantifying and optimizing red teaming exercises, prompt analysis, and evaluation data set optimization).
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.