Senior AI Security Engineer

Menlo Security

Full-time Remote
US - Distributed
Competitive
Security Engineer

Job Description

Senior AI Security Engineer

About Menlo Security
At Menlo Security, we're on a mission to empower the world to connect, communicate, and collaborate securely, without compromise. In a rapidly evolving digital landscape, our work has never been more critical – safeguarding operations for Fortune 500 companies, 9 of the 10 largest global banks, and the Department of Defense.

We are entering an exciting new phase of growth, expanding our team beyond 400 passionate innovators. We believe in agility, empathy, and a relentless commitment to our mission. Our team members are ethical, hyper-organized, driven to complete projects, service-oriented, and possess the humility to learn and the confidence to lead.

Our journey is well-funded, backed by an exceptional roster of investors including Vista Equity Partners, General Catalyst, JPMC, American Express, HSBC, and Ericsson Ventures, ensuring we have the resources to continue pushing the boundaries of cybersecurity.

The Opportunity: Shaping the Future of AI Security
The rise of autonomous AI agents presents a revolutionary, yet challenging, new frontier in cybersecurity. We are seeking a visionary Senior AI Security Engineer to join our pioneering team and spearhead the defense against emerging threats targeting these intelligent systems.

In this critical role, you will dive deep into the complexities of agentic AI security. You will research, design, and implement novel techniques to detect and mitigate sophisticated attacks such as prompt poisoning, context manipulation, malicious agent behaviors, and other adversarial threats. Your work will directly protect agents interacting with untrusted web content in real-world environments, translating cutting-edge security research into practical, deployable controls that define the future of secure AI.

Core Responsibilities

Research Emerging Agentic Threats: Investigate novel attack vectors against AI agents, including prompt injection, context poisoning, adversarial content embedding, and misuse of agent planning and reasoning mechanisms.
Architect Scalable Agentic Workflows: Design and implement robust, high-performance pipelines that secure agent-to-web interactions at scale.
Develop Novel Detection & Mitigation Techniques: Design and prototype new approaches for identifying malicious prompts, unsafe contextual signals, and adversarial behaviors in LLM-powered agents.
Implement Agent Security Controls: Translate these techniques into practical security controls within agentic runtimes, ensuring agents can safely reason over and act on external data sources.
Collaborate on Engineering: Partner closely with applied engineers to seamlessly integrate research-driven security mechanisms into production systems, balancing security effectiveness with agent performance.
Proactive Threat Modeling: Continuously evaluate the evolving AI threat landscape, anticipating future risks as agent capabilities and autonomy advance.
Build Adversarial Resilience: Develop defensive mechanisms within the browser surrogate to detect and neutralize complex context poisoning and injection attempts embedded in web content.

Qualifications

BSc in Computer Science or significant experience in high-scale cloud engineering; a relevant MSc or PhD is a strong advantage.
3+ years of experience in applied AI, with a proven track record of deploying high-scale AI systems in production. Hands-on experience with agentic systems in production is a strong advantage.
Expert-level Python proficiency; deep experience with Kubernetes (k8s) and cloud-native orchestration; proficiency with advanced data modeling and version control.
Significant experience in cybersecurity or browser-related technologies is highly preferred.
Deep understanding of prompt engineering techniques and how they can be exploited in agentic systems.
Ability to explore ambiguous problem spaces, experiment with new ideas, and iterate toward effective security solutions.

Nice to Have

Hands-on experience with orchestration frameworks (e.g., LangChain, AutoGen) and/or standardized communication protocols like MCP.
Experience building immutable event streams and high-speed data pipelines for real-time traffic analysis.
Understanding of how web pages are rendered and how to programmatically manipulate the DOM or Accessibility Tree to enhance security.
A "security-first" mindset with a bias toward building auditable, traceable, and fault-tolerant systems.

Why Menlo Security?
Our culture is collaborative, inclusive, and fun! We live by five core values: Stay Aligned, Get It Done, Customer Empathy, Think Creatively, and Help Each Other Out. We foster an environment of open communication, actively support new ideas, and share a mutual mindset of what we’re aiming to achieve together. This is a unique opportunity to take initiative, implement ground-breaking ideas, and build a lasting legacy at the forefront of AI and cybersecurity.

All qualified applicants will receive consideration for employment without regard to race, sex, color, religion, sexual orientation, gender identity, national origin, protected veteran status, or on the basis of disability.

TO ALL AGENCIES: Please, no phone calls or emails to any employee of Menlo Security outside of the Talent organization. Menlo Security’s policy is to only accept resumes from agencies via Ashby (ATS). Agencies must have a valid services agreement executed and must have been assigned by the Talent team to a specific requisition. Any resume submitted outside of this process will be deemed the sole property of Menlo Security. In the event a candidate submitted outside of this policy is hired, no fee or payment will be paid.