Prompt injection: What’s the worst that can happen?

14th April 2023

Activity around building sophisticated applications on top of LLMs (Large Language Models) such as GPT-3/4/ChatGPT/etc is growing like wildfire right now. Many of these applications are potentially vulnerable to prompt injection. It’s not clear to me that this risk is being taken as seriously as it should be.

To quickly review: promp