ProFair Analyst, Trust and Safety

Onsite

Job Type

Full Time

Job Details

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Seattle, WA, USA; Atlanta, GA, USA; Austin, TX, USA.

Minimum qualifications:
  • Bachelor's degree or equivalent practical experience.
  • 4 years of experience in red teaming, fairness assessments, scaled evaluations, anti-abuse, or a related field.
  • Experience in testing machine learning systems and in conducting qualitative and quantitative data analysis.

Preferred qualifications:
  • Experience with SQL, data collection/transformation, building dashboards/visualizations, or a scripting/programming language (e.g., Python).
  • Experience working in product policy and fairness analysis, and in identifying associated risks.
  • Excellent written/verbal communication and presentation skills, with the ability to influence cross-functionally at various levels.
  • Excellent problem-solving and critical thinking skills with attention to detail in an ever-changing environment.
About the job

Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what’s right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed, with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.

As a ProFair Analyst, you will support Google's flagship generative AI products. You will be part of a collaborative team, working with groups across the company to analyze how products impact users and broader society, and using data to develop solutions.

In this role, you will identify emerging ethical and socio-technical abuse trends. You will design and lead adversarial fairness and safety testing efforts to identify and mitigate ethical AI concerns across a variety of Google products. You will work closely with cross-functional partner teams across Google (e.g., Product, Legal, UX, Operations, Research, and Engineering teams), and help solve problems to ensure user trust in Google and our products.

At Google we work hard to earn our users’ trust every day. Trust & Safety is Google’s team of abuse-fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam, and account hijacking. A diverse team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google’s products, protecting our users, advertisers, and publishers across the globe in over 40 languages.

The US base salary range for this full-time position is $108,000-$158,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target salaries for the position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Responsibilities
  • Design and lead proactive algorithmic fairness and safety testing, including adversarial red teaming sessions, to ensure our technologies are not reflecting societal biases, in support of Google’s AI principles.
  • Perform analysis into fairness issues and vulnerabilities. Prepare written reports outlining user impact issues, abuse vectors, and trends, and effectively communicate findings to product and company leadership.
  • Provide socio-technical guidance to product teams conducting their own fairness testing, supplying technical, qualitative, and quantitative data support for assessing machine learning (ML) models and products.
  • Provide expert guidance to cross-functional groups and partner teams to define fairness best practices and actionable solutions.
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.