We demonstrate our web app used for experimenting with different types of prompt injection attacks and mitigations on LLMs and how easy it can be to hack GPT through malicious prompts.
We demonstrate our web app used for experimenting with different types of prompt injection attacks and mitigations on LLMs and how easy it can be to hack GPT through malicious prompts.