Senior AI Security Engineer
Location: [Specify Location - if remote, hybrid, or on-site, add details here]
About Menlo Security
At Menlo Security, our mission is clear: to enable the world to connect, communicate, and collaborate securely without compromise. In an ever-evolving digital landscape, this mission has never been more critical. We are trusted by some of the world's most demanding organizations, including Fortune 500 companies, 9 out of 10 of the largest global banks, and the U.S. Department of Defense.
We're not just growing; we're accelerating into the next phase of our journey, expanding beyond our current team of 400 employees and continuing to innovate at the forefront of cybersecurity. This growth is fueled by robust funding and unwavering support from world-class investors like Vista Equity Partners, General Catalyst, JPMC, American Express, HSBC, and Ericsson Ventures. We're seeking passionate, ethical, and agile professionals who are fanatical about seeing things through, service-oriented, and who embody both the humility to learn and the confidence to lead.
The Opportunity: Securing the Future of AI Agents
The rise of autonomous AI agents introduces a groundbreaking new frontier in cybersecurity. We are actively seeking a pioneering Senior AI Security Engineer to spearhead our efforts in addressing these emerging challenges head-on. This isn't just a role; it's a chance to define the security posture for the next generation of AI systems.
You will be at the forefront of research, design, and implementation, crafting novel techniques to detect and mitigate the most sophisticated threats targeting agentic AI. Your work will directly counter prompt poisoning, context manipulation, malicious agent behaviors, and other adversarial tactics. You will bridge cutting-edge security research with practical, deployable controls, ensuring agents can operate securely in real-world environments, especially when interacting with untrusted web content.
Core Responsibilities
Research Emerging Agentic Threats: Dive deep into new attack vectors against AI agents, including sophisticated prompt injection, context poisoning, adversarial content embedding, and the misuse of agent planning and reasoning mechanisms.
Architect Scalable Agentic Workflows: Design and implement robust, high-performance pipelines engineered to secure complex agent-to-web interactions at scale.
Develop Novel Detection & Mitigation Techniques: Design and prototype innovative approaches for identifying malicious prompts, unsafe contextual signals, and adversarial behaviors within LLM-powered agents.
Implement Agent Security Controls: Translate these research-driven techniques into tangible security controls within agentic runtimes, safeguarding agents as they reason over and act on external data sources.
Collaborative Engineering: Partner closely with applied engineering teams to seamlessly integrate advanced security mechanisms into production systems, striking a critical balance between security effectiveness and agent performance.
Proactive Threat Modeling: Continuously evaluate the evolving AI threat landscape, anticipate future risks, and develop defenses as agent capabilities and autonomy grow.
Enhance Adversarial Resilience: Build robust defensive mechanisms within our browser surrogate to detect and neutralize complex context poisoning and injection attempts embedded in web content.
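To give candidates a concrete flavor of the detection work described above, here is a minimal, illustrative sketch of screening untrusted web text for known prompt-injection phrasings before it enters an agent's context. The patterns and names here are hypothetical examples for this posting, not Menlo Security's actual detection logic:

```python
import re
from dataclasses import dataclass, field

# Illustrative patterns only; production detection would layer heuristics
# like these with model-based classifiers and runtime policy enforcement.
INJECTION_PATTERNS = [
    re.compile(r"ignore (?:all |any )?(?:previous|prior) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"reveal .{0,40}system prompt", re.I),
]

@dataclass
class ScanResult:
    suspicious: bool
    matches: list = field(default_factory=list)

def scan_untrusted_text(text: str) -> ScanResult:
    """Flag web content that resembles a prompt-injection attempt."""
    matches = [p.pattern for p in INJECTION_PATTERNS if p.search(text)]
    return ScanResult(suspicious=bool(matches), matches=matches)
```

In a real agentic pipeline, a scanner like this would be one early layer among several, feeding signals into broader context-poisoning and behavioral defenses rather than making allow/deny decisions on its own.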
Qualifications
Required Skills & Experience
BSc in Computer Science or significant, demonstrable experience in high-scale cloud engineering. A relevant MSc or PhD is a strong advantage.
3+ years of experience in applied AI, with a proven track record of deploying high-scale AI systems in production environments.
Expert-level proficiency in Python.
Deep experience with Kubernetes (k8s) and cloud-native orchestration.
Proficiency with advanced data modeling and version control systems.
A deep understanding of prompt engineering techniques and how they can be exploited within agentic systems.
Demonstrated ability to explore ambiguous problem spaces, experiment with new ideas, and iterate rapidly toward effective security solutions.
Highly Preferred
Significant experience in cybersecurity or browser-related technologies.
Prior agentic experience in production environments.
Nice to Have
Hands-on experience with AI orchestration frameworks (e.g., LangChain, AutoGen) and/or standardized communication protocols like MCP.
Experience building immutable event streams and high-speed data pipelines for real-time traffic analysis.
A strong understanding of how web pages are rendered and how to programmatically manipulate the Document Object Model (DOM) or Accessibility Tree to enhance security.
A "security-first" mindset with a bias toward building auditable, traceable, and fault-tolerant systems.
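As an illustration of the DOM-level work mentioned above, the sketch below (using only the standard library's HTMLParser; a production pipeline would operate on the fully rendered DOM or Accessibility Tree) drops text that is invisible to human viewers, a common hiding spot for instructions injected to target agents rather than people:

```python
from html.parser import HTMLParser

class HiddenTextStripper(HTMLParser):
    """Collect only human-visible text, skipping subtrees styled as
    invisible, where agent-targeted instructions are often hidden."""

    def __init__(self):
        super().__init__()
        self._hidden_depth = 0   # nesting depth inside a hidden subtree
        self.visible_text = []

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style", "")
        # Once inside a hidden subtree, track every nested open tag so the
        # matching close tags unwind the depth correctly.
        if self._hidden_depth or "display:none" in style.replace(" ", ""):
            self._hidden_depth += 1

    def handle_endtag(self, tag):
        if self._hidden_depth:
            self._hidden_depth -= 1

    def handle_data(self, data):
        if not self._hidden_depth and data.strip():
            self.visible_text.append(data.strip())
```

This toy version checks a single inline-style heuristic; real hidden-content detection would also consider computed styles, off-screen positioning, zero-size elements, and accessibility attributes.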
Why Join Menlo Security?
Our culture thrives on collaboration, inclusivity, and innovation. We live by five core values: Stay Aligned, Get It Done, Customer Empathy, Think Creatively, and Help Each Other Out. We foster an environment of open communication, where new ideas are encouraged, and every team member contributes to a shared vision. This is a unique opportunity to take initiative, implement groundbreaking ideas, and leave a lasting legacy on the future of AI security.
At Menlo Security, we are committed to building a diverse and inclusive workplace. All qualified applicants will receive consideration for employment without regard to race, sex, color, religion, sexual orientation, gender identity, national origin, protected veteran status, or on the basis of disability.
TO ALL AGENCIES: Please, no phone calls or emails to any employee of Menlo Security outside of the Talent organization. Menlo Security’s policy is to only accept resumes from agencies via Ashby (ATS). Agencies must have a valid services agreement executed and must have been assigned by the Talent team to a specific requisition. Any resume submitted outside of this process will be deemed the sole property of Menlo Security. In the event a candidate submitted outside of this policy is hired, no fee or payment will be paid.