AI Red Team Expert (Freelance)
Join LILT in securing the next generation of AI. At LILT, we're revolutionizing how the world communicates by harnessing the power of AI to make information accessible to everyone, regardless of language. We're seeking skilled and passionate AI Red Team experts to help us identify and mitigate potential vulnerabilities in our AI systems.
As a freelance AI Red Team expert, you'll be on the front lines of adversarial testing, working to break our AI models (LLMs, multimodal models, inference services, RAG/embeddings, and product integrations) before malicious actors can. This is your chance to apply your offensive security skills to cutting-edge AI technology and directly impact the safety and reliability of our platform.
The Mission:
Think like an attacker. Your mission is to:
Craft creative prompts and scenarios to expose weaknesses in model guardrails.
Explore novel methods to bypass restrictions and uncover hidden vulnerabilities.
Systematically document findings and collaborate with engineers and safety researchers to improve system defenses.
What You'll Bring:
Generative AI Expertise: Deep understanding of generative AI models, including architectures, training and adaptation techniques (fine-tuning, RLHF, prompt engineering), and potential failure modes.
Cybersecurity & Threat Modeling: Proven experience in cybersecurity principles, including threat modeling, vulnerability assessment, and penetration testing. Ability to identify attack vectors, simulate real-world threats, and understand the potential impact of an attack.
Data Analysis & NLP: Strong analytical skills to dissect model outputs, identify biases or factual errors, and recognize patterns in how the model responds to different inputs. A background in Natural Language Processing (NLP) is a major plus.
Ethical Hacking Mindset: A commitment to using your skills for defensive and security-focused purposes, adhering to a strict ethical code, and understanding the importance of responsible disclosure.