ShieldCortex Now Speaks Python — Protect AI Memory from CrewAI, LangChain, and Beyond
AI agents built in Python are everywhere. CrewAI, LangChain, AutoGPT, LlamaIndex — the Python ecosystem dominates agent development. And the moment any of these agents adds persistent memory, it gains an attack surface.
Until today, ShieldCortex only supported Node.js. That left Python developers without a way to scan memory writes before they're stored.
Not any more. The official Python SDK is live on PyPI.
The Problem
When your agent stores a memory — a user preference, a tool output, a conversation summary — that content persists. It gets loaded into future sessions. It influences future decisions.
If an attacker can get a poisoned string into that memory store, they've compromised every future session. Prompt injection buried in memory. Credentials accidentally saved. Encoded payloads that reassemble on retrieval.
The fix is simple in concept: scan before you store. The Python SDK makes it simple in practice.
Three Lines of Code
from shieldcortex import ShieldCortex
client = ShieldCortex(api_key="sc_live_...")
result = client.scan("user input to remember")
if result.allowed:
save_to_memory(result)
else:
print(f"Blocked: {result.firewall.reason}") The scan() call sends content to the ShieldCortex Cloud API, which runs it through the full 6-layer defence pipeline: input sanitisation, pattern detection, semantic analysis, structural validation, behavioural scoring, and credential leak detection.
You get back a verdict — ALLOW, BLOCK, or QUARANTINE — along with trust scores, threat indicators, and sensitivity classification. Every scan is logged to your Cloud audit trail.
CrewAI: Guard Your Agent's Memory
CrewAI agents learn from every task they complete. That learning is stored in memory and shapes future behaviour. ShieldCortex sits between the agent and the memory store.
from shieldcortex import ShieldCortex
from shieldcortex.integrations.crewai import ShieldCortexMemoryGuard, MemoryBlockedError
client = ShieldCortex(api_key="sc_live_...")
guard = ShieldCortexMemoryGuard(client, mode="strict")
try:
guard.check("content to remember")
# Safe — save to memory store
except MemoryBlockedError as e:
print(f"Blocked: {e.result.firewall.reason}") The mode parameter controls sensitivity. strict blocks aggressively. balanced (the default) blocks clear threats and quarantines ambiguous content. permissive logs everything but rarely blocks.
LangChain: Scan the Entire Chain
The LangChain integration hooks into the callback system. It scans inputs on chain start, outputs on LLM end, and tool I/O — no changes to your existing code.
from shieldcortex import AsyncShieldCortex from shieldcortex.integrations.langchain import ShieldCortexCallbackHandler client = AsyncShieldCortex(api_key="sc_live_...") handler = ShieldCortexCallbackHandler(client, raise_on_block=True) llm = ChatOpenAI(callbacks=[handler])
Every scan's audit_id is tracked on the handler, so you can correlate chain runs with specific security events in your Cloud dashboard.
Async, Batch, and Full API Coverage
The SDK covers every ShieldCortex Cloud endpoint:
- Sync and async clients —
ShieldCortexandAsyncShieldCortexwith identical APIs - Batch scanning — scan up to 100 items in a single request
- Audit log queries — search, filter, export, and auto-paginate through your scan history
- Quarantine management — review and release quarantined content programmatically
- Team management — API keys, invites, usage stats
- Webhooks and alerts — get notified when threats are detected
- Custom firewall rules — create, update, and manage rules via the API
Get Started
See plans and pricing for API keys and cloud sync. Read the full docs at shieldcortex.ai/docs. Source code and examples on GitHub.
Python 3.9+. Zero config. Scan before you store.