Collection of extracted System Prompts from popular chatbots like ChatGPT, Claude & Gemini
-
Updated
Mar 22, 2026 - HTML
Collection of extracted System Prompts from popular chatbots like ChatGPT, Claude & Gemini
Superagent protects your AI applications against prompt injections, data leaks, and harmful outputs. Embed safety directly into your app and prove compliance to your customers.
ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.
The Security Toolkit for LLM Interactions
AI Red Teaming playground labs to run AI Red Teaming trainings including infrastructure.
A playground of highly experimental prompts, Jinja2 templates & scripts for machine intelligence models from OpenAI, Anthropic, DeepSeek, Meta, Mistral, Google, xAI & others. Alex Bilzerian (2022-2025).
LLM Prompt Injection Detector
Kernel-enforced agent sandbox and agent security CLI and SDKs. Capability-based isolation with secure key management, atomic rollback, cryptographic immutable audit chain of provenance. Run your agents in a zero-trust environment.
a security scanner for custom LLM applications
🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring safety & security. 🛡️ Features include text quality, relevance metrics, & sentiment analysis. 📊 A comprehensive tool for LLM observability. 👀
HacxGPT CLI — Open-source command-line interface for unrestricted AI model access with multi-provider support, prompt injection research capabilities, configurable API endpoints, Termux/Linux/Windows compatibility, and Rich terminal UI for security research and red-team evaluation
KawaiiGPT — Open-source LLM gateway accessing DeepSeek, Gemini, and Kimi-K2 through reverse-engineered Pollinations API with no API keys required, built-in prompt injection capabilities for security research, Termux/Linux native support, and Rich console interface
💼 another CV template for your job application, yet powered by Typst and more
setup openclaw: https://remoteopenclaw.com/
Every practical and proposed defense against prompt injection.
Arcjet JavaScript (JS) / TypeScript SDK. Stop bots and automated attacks from burning your AI budget, leaking data, or misusing tools with Arcjet's AI security building blocks.
A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents.
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
This repository provides a benchmark for prompt injection attacks and defenses in LLMs
Open-source Python, TypeScript, and Go SAST with dead code detection. Finds secrets, exploitable flows, and AI regressions. VS Code extension, GitHub Action, and MCP server for AI agents.
Add a description, image, and links to the prompt-injection topic page so that developers can more easily learn about it.
To associate your repository with the prompt-injection topic, visit your repo's landing page and select "manage topics."