SPL · Apache-2.0 · from splunk/security_content

Ollama Suspicious Prompt Injection Jailbreak

Detects potential prompt injection or jailbreak attempts against Ollama API endpoints by identifying requests with abnormally long response times. Attackers often craft complex, layered prompts designed to bypass AI safety controls, which typically result in extended processing times as the model attempts to parse and respond to these malicious inputs. This detection monitors /api/generate and /api/chat endpoints for requests exceeding 30 seconds, which may indicate sophisticated jailbreak techniques, multi-stage prompt injections, or attempts to extract sensitive information from the model.
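The actual rule source is locked, but the logic described above can be sketched in SPL. Everything here is an assumption: the index name (`ollama_logs`) and the field names (`uri_path`, `duration`, `src_ip`) are placeholders for whatever your Ollama access logs actually provide, and the real splunk/security_content rule may differ substantially:

```spl
index=ollama_logs uri_path IN ("/api/generate", "/api/chat")
| where duration > 30
| stats count AS slow_requests avg(duration) AS avg_duration max(duration) AS max_duration by src_ip, uri_path
| sort - slow_requests
```

A practical tuning note: legitimate long-running generations (large models, long contexts) will also exceed 30 seconds, so the threshold and any allowlisting of known clients should be calibrated against your environment's baseline before alerting on this.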

Quality: 43
FP risk:
Forks: 0
Views: 0
Rule source: locked
