![]()
As the technology industry rapidly deploys artificial intelligence across banking, healthcare, and critical infrastructure, one Silicon Valley engineer is documenting what could go wrong—and building tools to test it. There is a familiar pattern in cybersecurity: a new technology arrives, and security becomes an afterthought. It happened with the web in the 1990s and cloud computing in the 2000s. According to Nayan Goel, a Principal Application Security Engineer, it is happening again with AI—but at a much faster pace. “The systems being deployed today are different from anything we’ve had to secure before,” Goel has noted. “They don’t behave predictably.
They interpret language, infer intent, and take actions in ways their designers didn’t anticipate.” Goel is among a small group of professionals working both to secure real AI systems and to research their risks. He works on AI systems in production at a major fintech company while also publishing research on how these systems can fail. This dual role shapes his approach. While many researchers study AI systems in controlled settings, Goel works with systems that must function reliably in real-world environments, handling financial data and user activity.
His research reflects this practical exposure. His 2025 paper on federated learning highlights challenges in systems where AI models learn from distributed data without centralising it. He outlines risks such as model poisoning, where attackers inject harmful data; privacy leakage, where sensitive information may be exposed; and Sybil attacks, where fake identities manipulate the system. Instead of offering simple fixes, the research highlights trade-offs between security, accuracy, and system performance. Goel has also contributed to OWASP, which develops widely used security standards. He co-authored a report on AI agents capable of taking actions without constant human input and contributed to the OWASP LLM Top 10, a list of key vulnerabilities in large language model applications. Alongside his research, he has built tools to test AI systems. These include a GraphQL Security Tester that generates adversarial queries and a Prompt Injection Tester designed to simulate attacks on AI workflows.
The aim is to move beyond theory and understand whether such threats can be replicated in real systems. Taken together, his work points to a larger issue: AI is increasingly becoming part of critical systems, but the frameworks to secure it are still evolving. Current solutions often involve trade-offs rather than clear answers. What is emerging is not a complete solution, but a clearer understanding of the risks. Securing AI systems requires new ways of thinking, especially as these systems learn and adapt in unpredictable ways. For now, that work remains ongoing.

