Security Engineer, Agent Security

OpenAI

San Francisco, CA (Hybrid)
$325,000 - $495,000
Full-time
Security Engineer

Job Description

Secure the Future of Agentic AI at OpenAI

About the Team
Join the Agent Security Team at OpenAI, where our mission is to accelerate the secure evolution of agentic AI systems. We're on the front lines, designing, implementing, and continuously refining security policies, frameworks, and controls that defend OpenAI's most critical assets – including the user and customer data embedded within them – against the unique risks introduced by agentic AI.

About the Role: Security Engineer, Agent Security Team
As a Security Engineer on the Agent Security Team, you'll be a key player in securing OpenAI’s cutting-edge agentic AI systems. This is your chance to shape the future of AI security! You will:
Design and implement robust security frameworks, policies, and controls.
Develop comprehensive threat models specific to agentic AI.
Partner closely with our Agent Infrastructure group to fortify the platforms powering OpenAI's most advanced agentic systems.
Lead efforts to enhance safety monitoring pipelines at scale.

We're looking for a versatile engineer who thrives in a dynamic environment and is ready to make a significant impact from day one. You'll ship quickly while maintaining the highest standards of quality and security, and drive innovative approaches that set the industry standard for agent security. If you're passionate about securing complex systems and designing robust isolation strategies for emerging AI technologies, this is the place for you.

Location: San Francisco, CA (Hybrid work model: 3 days in office)
Relocation assistance is available.

You'll Be Responsible For:
Architecting Security Controls: Design, implement, and iterate on identity, network, and runtime-level defenses (e.g., sandboxing, policy enforcement) that integrate directly with the Agent Infrastructure stack.
Building Production-Grade Security Tooling: Ship code that hardens safety monitoring pipelines across agent executions at scale.
Cross-Functional Collaboration: Work daily with Agent Infrastructure, product, research, safety, and security teams to balance security, performance, and usability.
Influencing Strategy & Standards: Shape the long-term Agent Security roadmap, publish best practices internally and externally, and help define industry standards for securing autonomous AI.

We're Looking For Someone With:
Strong Software Engineering Skills: Proficiency in Python or at least one systems language (Go, Rust, C/C++), plus a track record of shipping and operating secure, high-reliability services.
Deep Expertise in Modern Isolation Techniques: Experience with container security, kernel-level hardening, and other isolation methods.
Hands-On Network Security Experience: Implementing identity-based controls, policy enforcement, and secure large-scale telemetry pipelines.
Clear Communication: Ability to bridge engineering, research, and leadership audiences; comfort influencing roadmaps and driving consensus.
Bias for Action & Ownership: Thriving in ambiguity, moving quickly without sacrificing rigor, and elevating the security bar company-wide from day one.
Cloud Security Depth: Expertise in at least one major cloud provider (Azure, AWS, GCP), including identity federation, workload IAM, and infrastructure-as-code best practices.
Familiarity with AI/ML Security Challenges: Experience addressing risks associated with advanced AI systems (a plus!).

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other applicable legally protected characteristic.

For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.