Blog

security, ai-agents, memory, runtime

Why Runtime Security Isn't Enough — The Case for Memory Integrity

IronClaw secures the runtime. But what secures the memory? Even the most locked-down agent sandbox can't protect against poisoned persistent memory. Here's why you need both runtime security and memory integrity.

launch, iron-dome, security, ai-agents, behaviour

Introducing Iron Dome — Behaviour Protection for AI Agents

Your defence pipeline stops poisoned inputs. Iron Dome stops poisoned outputs. Control what your AI agent can do — not just what it remembers.

python, launch, crewai, langchain

ShieldCortex Now Speaks Python — Protect AI Memory from CrewAI, LangChain, and Beyond

The official Python SDK is live on PyPI. Scan AI agent memory for prompt injection and credential leaks — with built-in CrewAI and LangChain integrations.

openclaw, plugin, real-time, security

Real-time LLM Scanning for OpenClaw — Defence at the Pipeline Level

The new ShieldCortex plugin for OpenClaw scans every LLM input for threats and auto-extracts memories from outputs. Fire-and-forget — never blocks your agent.

mcp, claude-code, security

MCP Security: What Claude Code Users Need to Know

The Model Context Protocol is powerful, but MCP servers have direct access to your system. Here's how to protect yourself.

security, ai-agents, threats

The 5 Ways Hackers Can Poison Your AI Agent's Memory

AI agents with persistent memory are vulnerable to a new class of attacks. Here are the 5 techniques attackers use — direct injection, encoded payloads, fragmentation, context manipulation, and trust escalation.

ai, security, open-source

Introducing ShieldCortex: The Security Layer Your AI Agent is Missing

AI agents are everywhere. They're writing code, managing infrastructure, processing emails. And increasingly, they have persistent memory. This is powerful. It's also dangerous.