Stop Prompt Injections. For Good.
- ✓ We protect your AI, we protect you from AI.
- ✓ We are a router to all things AI security.
- ✓ We turn AI transparent, auditable, and deterministic.
- ✓ We are your Blue Team.
We solve your AI Security problems.
- ✓ Only takes a few lines of code to integrate.
- ✓ Don't trust the model provider to enforce the rules. Take Control.
- ✓ Deploy on-prem, we don't need your data.
See Sequrity in Action
Watch Ilia explain why computers in the AI era cannot be trusted and how Sequrity protects AI applications from attackers.
Meet the Team
Our work is backed by leading researchers in the fields of AI, Computer Systems, and Security.
Dr. Ilia Shumailov
Dr. Ilia Shumailov was previously a Senior Research Scientist at Google DeepMind. His work regularly appears in premier publications across both computer security and machine learning, shaping critical conversations in industry and academia.
He holds a PhD in Computer Science from the University of Cambridge and a prestigious Junior Research Fellowship at Christ Church College, Oxford. Dr. Shumailov is a recipient of a competitive Bosch Research Foundation scholarship.
Prof. Yiren Zhao
Prof. Yiren Zhao is an Assistant Professor in Computer Engineering within the Department of Electrical and Electronic Engineering at Imperial College London, where he leads a research lab focused on AI, hardware, and security.
He holds a PhD in Computer Science from the University of Cambridge and a prestigious Junior Research Fellowship at St John's College, Cambridge. Prof. Zhao was named an Apple Scholar in AI and ML in 2020 and is a recipient of a grant from Microsoft's Accelerating Foundation Models Research program.
Cheng Zhang
Cheng Zhang is a PhD student supervised by Dr. Yiren Zhao and Prof. George A. Constantinides at the Department of Electrical and Electronic Engineering, Imperial College London.
His main research interests include efficient machine learning and AI acceleration. More recently, he has begun exploring security and privacy issues in machine learning. He has published several papers in top-tier conferences, including ICML, ICLR, and EMNLP.
Edoardo Debenedetti
Edoardo is a PhD student in Computer Science at ETH Zurich, advised by Prof. Florian Tramèr. His research focuses on the security of AI agents, working on evaluation frameworks and defenses.
Before joining the AI Sequrity Company, he was a Research Scientist Intern at Meta and a Student Researcher at Google. He has published several papers in top-tier conferences, including NeurIPS, ICML, ICLR, and USENIX Security, and is a recipient of the armasuisse CYD Fellowship.
Zehui Li
Zehui is a PhD student in Machine Learning at Imperial College London, supervised by Dr. Yiren Zhao and Dr. Guy-Bart Stan. His research spans diffusion and autoregressive generative models, with a focus on long-context sequence modeling on discrete data.
He has held research internships at Microsoft Research Asia (Rising Star Research Internship; Outstanding Award) and at the Vector Institute & University of Toronto. Zehui has authored publications at NeurIPS (2024, 2025) and ICLR (2025), and received a Best Paper Award at an ICML 2023 workshop.
Hanna Foerster
Hanna Foerster is a PhD student in Computer Science at the University of Cambridge, supervised by Prof. Robert Mullins and co-supervised by Dr. Yiren Zhao. Her research focuses on Machine Learning Security, spanning model extraction, supply chain attacks, adversarial perturbations, and the security of AI agents.
Her work has been published in top-tier venues including NeurIPS, ICML, and USENIX Security, and she is a recipient of the Tazaki Cambridge Studentship and a scholarship from the German Academic Scholarship Foundation. She has also held research internships at Google DeepMind, the Vector Institute, and TU Darmstadt's System Security lab.