Update README.md

This commit is contained in:
Elias Bachaalany
2024-03-13 20:44:57 -07:00
parent b18cbaab2d
commit 5f0b2634b3


@@ -78,7 +78,7 @@ Steps:
 In this section we list various protection techniques for various LLM systems:
-- [ChatGPT GPT Instructions protections](./Security/GPT-Protections/)
+- [ChatGPT GPT Instructions protections](https://github.com/0xeb/TheBigPromptLibrary/blob/main/Security/GPT-Protections/README.md)
 However, please note that without additional filter layers and with direct access to the LLM system it may be impossible to reliably protect system prompts or instructions.