Engineering Analyst, Cloud AI Abuse
Company: Google
Location: Seattle
Posted on: April 1, 2026
Job Description:
In accordance with Washington state law, we are highlighting our comprehensive benefits package, which is available to all eligible US-based employees. Benefits for this role include:
- Insurance: health, dental, vision, life, and disability
- Retirement: 401(k) with company match
- Paid Time Off: 20 days of vacation per year, accruing at a rate of 6.15 hours per pay period for the first five years of employment
- Sick Time: 40 hours/year (statutory, where applicable); 5 days/event (discretionary)
- Maternity Leave (Short-Term Disability + Baby Bonding): 28-30 weeks
- Baby Bonding Leave: 18 weeks
- Holidays: 13 paid days per year

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Seattle, WA, USA; Washington D.C., DC, USA.

Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 2 years of experience in data analysis, including identifying trends, generating summary statistics, and drawing insights from quantitative and qualitative data.
- Experience with SQL.

Preferred qualifications:
- Master's degree in a technical discipline (e.g., Computer Science, Statistics, Mathematics, Operations Research).
- 5 years of relevant work experience in data analysis.
- Experience in security threats or abuse detection.
- Knowledge of or experience in any of the following domains: anomaly detection, security threat analysis and investigation, time-series analysis, Cloud APIs, or metrics and reporting.
- Understanding of generative AI technologies, Large Language Models (LLMs), and AI agents.
- Excellent problem-solving and critical thinking skills, with attention to detail in an ever-changing environment.

About the job

Trust & Safety team
members are tasked with identifying and taking on the biggest
problems that challenge the safety and integrity of our products.
They use technical know-how, excellent problem-solving skills, user
insights, and proactive communication to protect users and our
partners from abuse across Google products like Search, Maps,
Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what's right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed, with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.

The Trust and Safety
Cloud AI team is the vanguard of Cloud AI security and safety. Our
mission is to safeguard users, protect the integrity of our
platforms, and ensure the responsible deployment of AI products at
global scale. We operate at the critical intersection of AI
research and real-world security, building the foundational
defenses that prevent the misuse of generative models and agents.
By pioneering proactive threat detection and robust mitigation
strategies, we empower Google Cloud to push the boundaries of AI
innovation safely, ethically, and securely.

At Google we work hard
to earn our users’ trust every day. Trust & Safety is Google’s team
of abuse fighting and user trust experts working daily to make the
internet a safer place. We partner with teams across Google to
deliver bold solutions in abuse areas such as malware, spam and
account hijacking. A team of Analysts, Policy Specialists,
Engineers, and Program Managers, we work to reduce risk and fight
abuse across all of Google’s products, protecting our users,
advertisers, and publishers across the globe in over 40 languages.

The US base salary range for this full-time position is
$132,000-$189,000 + bonus + equity + benefits. Our salary ranges are
determined by role, level, and location. Within the range,
individual pay is determined by work location and additional
factors, including job-related skills, experience, and relevant
education or training. Your recruiter can share more about the
specific salary range for your preferred location during the hiring
process. Please note that the compensation details listed in US
role postings reflect the base salary only, and do not include
bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities
- Monitor Google Cloud AI products for signs of abuse, including prompt injection, jailbreaking, data poisoning, distillation, and generation of policy-violating content.
- Perform in-depth analysis of risks associated with both generative and agentic AI. Measure these risks using benchmarking, evaluations, red teaming, and scaled usage monitoring.
- Develop, tune, and deploy rules, heuristics, and rate limits to proactively block abusive actors and mitigate automated attacks.
- Effectively collaborate with engineering, product, and legal teams to ensure that the risks of AI are understood and robust solutions are adopted.
- Educate cross-functional teams about Gen AI safety risks and advocate for secure design principles.
- Promote a culture of safety and user trust throughout the product development process.