Addressing Security Risks In LLM-Based Applications
Large Language Models continue to grow in popularity as people experiment with them, apply them to new problems, and push new code into production applications. Growing alongside this popularity is an engineering approach that advocates outsourcing more and more of an application’s functionality to these LLMs. But what looks like an advantage on the surface masks real costs and risks. You may end up with less reliable code that’s harder to troubleshoot and fix, accruing technical debt along the way. Integrating LLMs into your application can also expand its attack surface, giving attackers more vectors to explore. All is not lost, though. With the right approach, you can strike a balance that addresses these issues. In this presentation, we’ll look at the risks involved in engineering applications with LLM functionality and outline steps you can take to reduce your exposure.