Job Details
Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 7 years of experience in data analytics, Trust and Safety, policy, cyber security, intelligence (e.g., geopolitical research, open-source intelligence (OSINT)), or related fields.
Preferred qualifications:
- Master's or PhD in relevant field or equivalent practical experience.
- Experience with SQL, data collection/transformation, visualization/dashboards, or a scripting/programming language (e.g., Python).
- Experience with machine learning.
- Excellent communication and presentation skills (written and verbal) and the ability to influence cross-functionally at various levels.
- Excellent problem-solving and critical thinking skills with attention to detail in a fluid environment.
Trust and Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what's right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed - with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.
As a Senior Analyst on the Content Adversarial Red Team (CART), you will be a leading expert in identifying and mitigating emerging content safety risks within Google's Generative Artificial Intelligence (GenAI) products. You will lead the charge in uncovering "unknown" generative AI issues - novel threats and vulnerabilities that are not captured by traditional testing methods. Your ability to think strategically will be instrumental in shaping the future of Artificial Intelligence (AI) development, ensuring that Google's AI products are safe, fair, and unbiased.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
- Use Google’s big data to conduct data-oriented analysis, architect metrics, synthesize information, solve problems, and influence business decision-making by presenting insights and market trends.
- Influence the team by providing thought leadership and expertise in AI safety. This will involve advocating for AI safety initiatives and for the importance of AI safety within Google and the wider industry.
- Develop and implement CART strategic programs to improve AI safety and security. This will include defining and developing new programs, identifying key areas for improvement, and developing comprehensive program plans.
- Lead the team's efforts in identifying, analyzing, and mitigating high-complexity content risks. This will involve developing and executing innovative red teaming strategies, identifying emerging threats, and designing highly targeted testing approaches.
- Work with sensitive content or situations and be exposed to graphic, controversial, or upsetting topics or content.