Senior Product Security Engineer

LMArena

Bay Area Hybrid
Competitive
Full-time
Security Engineer

Job Description

Security Engineer - Fortify the Future of AI Evaluation

Location: SF Bay Area / Remote
Type: Full-Time

About the Role

Join LMArena as a Security Engineer and be at the forefront of protecting a critical platform shaping the future of AI. You'll be responsible for designing and implementing systems to defend against bots and adversarial manipulation, ensuring the integrity of real-time AI evaluations. This is your opportunity to build and break, identify vulnerabilities, and create resilient defenses that scale trust across a rapidly growing platform.

Your Mission:
Design and implement robust security frameworks to neutralize Sybil attacks and other vulnerabilities in distributed systems.
Conduct threat modeling and risk assessments, with a laser focus on Sybil resistance strategies.
Collaborate with engineering to architect Sybil-resistant protocols and mechanisms, including reputation systems, proof-of-work/proof-of-stake, and identity verification.
Develop real-time tools and systems to monitor, detect, and respond to Sybil attacks and security threats.
Stay ahead of the curve by researching the latest advancements in Sybil attack mitigation, distributed-systems security, and cryptographic techniques, and applying them to our platform.
Partner with product managers, engineers, and stakeholders to embed security into the product development lifecycle.
Lead incident response, perform root cause analysis, and implement effective remediation strategies.
Champion security best practices and cultivate a security-aware culture throughout the organization.
Develop and maintain a comprehensive threat model, including:
Identification of Sybil attack vectors such as vote spamming, leaderboard manipulation, and collusion.
Categorization of adversarial behaviors (e.g., botnets, automation, proxy/VPN abuse, human-assisted fraud).
Risk assessment for each attack type with severity ratings and projected impact.
Clearly defined mitigation strategies, detection mechanisms, and response protocols for each category.
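To make the detection work above concrete, here is a minimal TypeScript sketch of one of the simplest mechanisms in this space: a sliding-window rate check that flags vote spamming from a single client fingerprint. All names (`VoteEvent`, `SybilDetector`) and thresholds are illustrative assumptions, not part of LMArena's actual systems; a production detector would combine many more signals (proxy/VPN reputation, behavioral analytics, collusion graphs).

```typescript
// Hypothetical sketch: flag vote spamming, one of the Sybil attack
// vectors named above. Not LMArena's real implementation.

interface VoteEvent {
  fingerprint: string; // client fingerprint (assumed available upstream)
  modelPair: string;   // e.g. "model-a|model-b"
  timestamp: number;   // ms since epoch
}

class SybilDetector {
  // Per-fingerprint timestamps of recent votes.
  private history = new Map<string, number[]>();

  constructor(
    private windowMs = 60_000, // sliding-window length
    private maxVotes = 5       // votes allowed per window
  ) {}

  // Returns true when a fingerprint exceeds its per-window vote budget.
  isSuspicious(event: VoteEvent): boolean {
    const cutoff = event.timestamp - this.windowMs;
    const recent = (this.history.get(event.fingerprint) ?? []).filter(
      (t) => t > cutoff
    );
    recent.push(event.timestamp);
    this.history.set(event.fingerprint, recent);
    return recent.length > this.maxVotes;
  }
}
```

For example, with a budget of 2 votes per minute, a third vote from the same fingerprint inside the window would be flagged and routed to whatever response protocol the threat model defines for that category.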

Why LMArena?

LMArena, created by researchers from UC Berkeley’s SkyLab, is the leading open platform for exploring and interacting with cutting-edge AI models. Our community-driven leaderboard, fueled by side-by-side comparisons and user votes, brings transparency and real-world context to AI progress.

Impact You'll Make:
Join a platform trusted by Google, OpenAI, Meta, xAI, and more. LMArena is becoming the standard for transparent, human-centered AI evaluation at scale. Your work will directly impact millions of users and shape the next generation of safe, aligned AI systems.

Our work is recognized by industry leaders:
Sundar Pichai
Jeff Dean
Elon Musk
Sam Altman

Perks of Joining:
High Impact: Your contributions will be used daily by the world’s most advanced AI labs.
Global Reach: Develop data infrastructure that powers millions of real-world evaluations, influencing AI reliability across industries.
Exceptional Team: Join a small team of top talent from Google, DeepMind, Discord, Vercel, UC Berkeley, and Stanford.

What You'll Bring:
5+ years of experience in software or security engineering, building secure, scalable systems.
Expertise in designing and deploying infrastructure security measures across cloud environments, identity systems, and access controls.
Hands-on experience implementing user fingerprinting techniques, identity modeling, or behavioral analytics to prevent fraud or abuse.
Experience designing and/or defending against Sybil attacks, bot activity, or adversarial manipulation at the platform level.
Strong knowledge of threat modeling, risk assessment, and mitigation strategies for real-world attack scenarios.
Familiarity with common security tools and practices, including VPNs, MFA/2FA, intrusion detection, secrets management, and secure deployment pipelines.
Excellent communication skills to collaborate cross-functionally and articulate risks and tradeoffs to technical and non-technical audiences.
Bonus: Experience in adversarial ML, trust & safety systems, or securing user-driven platforms with voting or reputation systems.

Our Tech Stack:
Next.js
Tailwind CSS
shadcn/ui
Hono
PostgreSQL
Vitest

Compensation & Benefits:
$210k-$231k base salary + equity. Actual compensation depends on job-related knowledge, skills, experience, and candidate location.
Competitive salary and meaningful equity.
Comprehensive healthcare coverage (medical, dental, vision).
The opportunity to work on cutting-edge AI with a small, mission-driven team.
A culture that values transparency, trust, and community impact.

Help us build the future of AI evaluation. Apply now!