AI Red Team - Freelance Opportunity
Ready to break things for good? At LILT, we're pioneering the future of AI-powered language translation. We're seeking skilled and passionate AI Red Team experts to join us in a freelance capacity, pushing the limits of our AI systems and ensuring their robustness against evolving threats.
As a member of our Red Team, you'll be at the forefront of AI security, working on adversarial testing of cutting-edge technologies including Large Language Models (LLMs), multimodal models, inference services, RAG/embeddings, and critical product integrations. This is your chance to leverage your expertise to identify vulnerabilities, develop innovative attack strategies, and collaborate with our engineering and safety research teams to harden our defenses.
What You'll Do:
Craft sophisticated prompts and scenarios to rigorously test model guardrails.
Explore creative and unconventional methods to bypass restrictions and uncover hidden weaknesses.
Systematically document testing outcomes, providing actionable insights for system improvements.
Collaborate closely with engineers and safety researchers to share findings and contribute to robust security strategies.
Is This You?
We're looking for individuals with a blend of cybersecurity expertise and a deep understanding of AI.
Key Skills:
Generative AI Expertise: Deep understanding of generative AI models, particularly LLMs: their architectures, training processes (such as fine-tuning and RLHF), prompt engineering techniques, and potential failure modes.
Cybersecurity & Threat Modeling: Solid foundation in cybersecurity principles, threat modeling, vulnerability assessment, and penetration testing. Ability to identify attack vectors and simulate real-world threats.
Data Analysis & NLP: Strong analytical skills to dissect model outputs, identify biases, and recognize patterns. A background in Natural Language Processing (NLP) is a significant plus.
Ethical Hacking Mindset: A strong commitment to ethical hacking principles and responsible disclosure.