Mindgard

About Mindgard

Mindgard is a London-based startup specializing in AI security.

We spun out of a leading UK university after a decade of R&D, and our mission is to secure the future of AI against cyber attacks. This remains an unsolved challenge, and we are among the few companies globally offering solutions to this rapidly growing problem.

The Role

We’re seeking a Security Researcher who is passionate about finding AI vulnerabilities in the wild to join our R&D team.

You’ll join our collaborative and friendly AI security R&D team, where you will have the opportunity to work on cutting-edge AI security problems, mentor other researchers, and deliver presentations to the technical community.

We’ll encourage you to present your research at cyber security working groups and conferences, and in publications.

What you will be doing:

  • Identifying new and open research questions within AI security
  • Discovering vulnerabilities (0-days) in AI models, artifacts, and systems, building proof-of-concept exploits, and writing up findings in blog posts
  • Providing domain guidance and expertise to other teams, and collaborating with them to enhance the security offering provided by Mindgard
  • Working with providers, builders, and the open source community to disclose new security issues and help secure their AI solutions
  • Maintaining your knowledge in a specialized area of research by engaging with the broader security community, attending meetups and conferences, and reading widely

We’re looking for people who are:

  • Kind, to collaborate effectively towards the highest quality outcomes.
  • Passionate about our mission to help security teams manage AI security risks.
  • Curious, to deepen your understanding of AI security.
  • Pragmatic, helping our customers make the best security tradeoffs.

You’ll need:

  • Ability to lead and contribute to research projects.
  • Experience in the application security field, including a good understanding of major vulnerability classes such as XSS, RCE, SQL Injection, and Deserialization.
  • Hands-on software development experience.
  • Knowledge of a range of programming languages (Python, JavaScript, Java, etc.).
  • Practical familiarity with AI models, LLMs, ML frameworks, and MLDevOps, or a readiness to learn.
  • Good knowledge of static and dynamic application security testing (SAST / DAST).

You’ll stand out if you have:

  • Expertise in AI or AI security.
  • Experience working in a SaaS product startup.