
Principal, Generative AI Safety Strategy

Google
San Francisco, CA, USA; Sunnyvale, CA, USA
Note: By applying to this position you will have an opportunity to share your preferred working location from the following: San Francisco, CA, USA; Sunnyvale, CA, USA.

Minimum qualifications:

  • Bachelor's degree or equivalent practical experience.
  • 10 years of experience in trust and safety at a technology company.
  • 7 years of experience in Trust and Safety, intelligence, security analysis, threat or risk management, geopolitical forecasting, or a related field.
  • 2 years of experience in child development or child safety, including understanding potential harms, developing mitigations, and launching products that earn user trust.
  • 1 year of experience working with generative AI technologies.

Preferred qualifications:

  • Experience in data analytics, including statistical analysis and hypothesis testing, analyzing ML model performance, or working with large language models (LLMs).
  • Experience making business decisions, including identifying gaps or business needs, and innovating and scaling solutions.
  • Ability to think and work in a changing environment and to influence cross-functional and cross-geographical partners at all levels of management.
  • Ability to manage executive stakeholders, collaborate on safety strategy and operational execution, function in high-pressure situations, and take the lead as needed.
  • Excellent problem-solving, critical thinking, and communication skills, with attention to detail.

About the job

In this role, you will use critical thinking and leadership skills to orchestrate a team of analysts, policy specialists, product managers, and engineers. You will be responsible for leading launches for the under-18 audience, creating and executing a safety strategy that brings the best of Trust and Safety to the table. Your work will ensure that potential harms are understood, effective mitigations are developed, and products launch in a way that earns user trust. Your responsibility extends beyond launch, encompassing post-launch monitoring and analysis to ensure ongoing safety and address emerging trends. You will also have the opportunity to grow by shadowing and upskilling in the agentic safety space.

At Google we work hard to earn our users’ trust every day. Trust & Safety is Google’s team of abuse fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google’s products, protecting our users, advertisers, and publishers across the globe in over 40 languages.

The US base salary range for this full-time position is $160,000-$237,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

  • Deliver the safety strategy for generative AI launches, interacting with executive stakeholders from Trust and Safety, engineering, legal, product teams, etc.
  • Serve as the domain expert for Gemini's under-18 user experience, including key policy differences, major risk factors, current regulatory sensitivities, and risk mitigation strategies.
  • Partner with product and engineering teams to provide critical product insights and data-driven analyses that identify potential abuse vectors and prevent user harm on pre- and post-launch products.
  • Act as a trusted partner in a changing environment, coordinating and providing a consolidated view of risks and mitigations across all launch pillars (e.g., policy, testing, features) to cross-functional partners and leadership.
  • Be exposed to graphic, controversial, or upsetting content.

Information collected and processed as part of your Google Careers profile, and any job applications you choose to submit is subject to Google's Applicant and Candidate Privacy Policy.

Google is proud to be an equal opportunity and affirmative action employer. We are committed to building a workforce that is representative of the users we serve, creating a culture of belonging, and providing an equal employment opportunity regardless of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), expecting or parents-to-be, criminal histories consistent with legal requirements, or any other basis protected by law. See also Google's EEO Policy, Know your rights: workplace discrimination is illegal, Belonging at Google, and How we hire.

If you have a need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Google is a global company and, in order to facilitate efficient collaboration and communication globally, English proficiency is a requirement for all roles unless stated otherwise in the job posting.

To all recruitment agencies: Google does not accept agency resumes. Please do not forward resumes to our jobs alias, Google employees, or any other organization location. Google is not responsible for any fees related to unsolicited resumes.
