Become an AI Red Team Expert at LILT
Join LILT's mission to revolutionize how the world communicates! We're leveraging cutting-edge AI, machine translation, and human expertise to make information accessible to everyone, regardless of language. We're trusted by top organizations like Intel, Canva, and the U.S. Department of Defense, and backed by industry leaders like Sequoia and Intel Capital.
Are you a cybersecurity professional with a passion for breaking things – specifically, AI systems? If you thrive on finding vulnerabilities and pushing the limits of technology, LILT wants you! We're seeking talented freelance AI Red Team experts to help us fortify our AI systems against emerging threats.
The Challenge:
As an AI Red Team expert, you'll be on the front lines of AI security, tasked with:
Adversarial testing of AI systems: LLMs, multimodal models, inference services, RAG/embeddings, and product integrations.
Crafting ingenious prompts and scenarios to expose weaknesses in model guardrails.
Systematically documenting your findings to help us improve system defenses.
Collaborating with engineers and safety researchers to share insights and develop robust solutions.
Think like an attacker, find the flaws, and help us build more secure and reliable AI.
What You'll Need:
Deep Generative AI Knowledge: Comprehensive understanding of generative AI models, architectures, training processes, and potential failure modes, including prompt engineering, fine-tuning, and RLHF.
Cybersecurity Expertise: Solid grasp of cybersecurity principles, threat modeling, vulnerability assessment, and penetration testing. Experience identifying attack vectors, simulating real-world threats, and understanding potential impact.
Analytical Skills: Ability to analyze model outputs, identify biases, and recognize patterns in model responses. NLP background a plus.
Ethical Hacking Mindset: Commitment to ethical security practices and responsible disclosure.
Core