AI Red Team Engineer - Dutch

LILT

4mo ago 2 views 0 applications
Contract Remote
Netherlands (Remote)
Competitive
Contract

Job Description

AI Red Team Expert (Freelance)

Join LILT in securing the next generation of AI. At LILT, we're revolutionizing how the world communicates by harnessing the power of AI to make information accessible to everyone, regardless of language. We're seeking skilled and passionate AI Red Team experts to help us identify and mitigate potential vulnerabilities in our AI systems.

As a freelance AI Red Team expert, you'll be on the front lines of adversarial testing, working to break our AI models (LLMs, multimodal models, inference services, RAG/embeddings, and product integrations) before malicious actors can. This is your chance to apply your offensive security skills to cutting-edge AI technology and directly impact the safety and reliability of our platform.

The Mission:

Think like an attacker. Your mission is to:

Craft creative prompts and scenarios to expose weaknesses in model guardrails.
Explore novel methods to bypass restrictions and uncover hidden vulnerabilities.
Systematically document findings and collaborate with engineers and safety researchers to improve system defenses.

What You'll Bring:

Generative AI Expertise: Deep understanding of generative AI models, including architectures, training processes (prompt engineering, fine-tuning, RLHF), and potential failure modes.
Cybersecurity & Threat Modeling: Proven experience in cybersecurity principles, including threat modeling, vulnerability assessment, and penetration testing. Ability to identify attack vectors, simulate real-world threats, and understand the potential impact of an attack.
Data Analysis & NLP: Strong analytical skills to dissect model outputs, identify biases or factual errors, and recognize patterns in how the model responds to different inputs. A background in Natural Language Processing (NLP) is a major plus.
Ethical Hacking Mindset: A commitment to using your skills for defensive and security-focused purposes, adhering to a strict ethical code, and understanding the importance of responsible disclosure.

Core

Requirements

  • Education: Bachelor's or Master’s Degree in Computer Science, Software Engineering, Cybersecurity, Digital Forensics, or a related field.
  • Language: Advanced English proficiency (C1 or above).
  • Mindset: Strong adversarial thinking.
  • Knowledge: Solid understanding of common model vulnerabilities (prompt injection, prompt-history leakage, data exfiltration via RAG).
  • Experience: Proven experience in AI/ML security, evaluation, and red teaming, particularly with LLMs, AI agents, and RAG pipelines.
  • Adaptability: Ability to quickly learn new methods, switch between tasks and topics, and work with complex guidelines.
  • Scripting: Proficient in scripting and automation using Python, Bash, or PowerShell.
  • Tools: Familiarity with AI red-teaming frameworks such as garak or PyRIT.
  • Bonus Points:
  • Physical-world adversarial testing experience.
  • Experience with containerization and CI/CD security tools (e.g., Docker).
  • Proficiency in offensive exploitation and exploit development.
  • Skilled in reverse engineering using tools like Ghidra or equivalents.
  • Expertise in network and application security, including web application security.
  • Knowledge of operating system security concepts (Linux privilege escalation, Windows internals).
  • Familiarity with secure coding practices for full-stack development.
  • Perks of Joining LILT:
  • Competitive Pay: Earn up to $55/hour based on your skills, experience, and project complexity.
  • Flexible Schedule: Part-time, remote freelance work that fits your commitments.
  • Impactful Work: Contribute to the security of cutting-edge AI technology and enhance your professional portfolio.
  • Real-World Influence: Directly impact how AI models understand and communicate across various industries.
  • About LILT: Powering Global Communication with AI
  • Founded by former Google Translate engineers, LILT is on a mission to break down language barriers and make information accessible to everyone. We're backed by Sequoia, Intel Capital, and Redpoint, and trusted by industry leaders like Intel, Canva, and the U.S. Department of Defense.
  • Our Tech Stack:
  • Brand-aware AI that learns your unique voice, tone, and terminology.
  • Agentic AI workflows that automate the entire translation process.
  • 100+ native integrations with systems like Adobe Experience Manager, Webflow, Salesforce, GitHub, and Google Drive.
  • Human-in-the-loop reviews via our global network of expert linguists.
  • Ready to put your cybersecurity skills to the test and help us build a more secure AI future? Apply now and join the LILT team!
  • ```
CyberJob.app

Your trusted source for cybersecurity job opportunities worldwide.


© 2026 CyberJob.app. All rights reserved.