InclementSec Research

DACSA

Data Authorization Controls for
Securing Agentic AI Systems

A framework that extends authorization enforcement beyond the action boundary to the data layer — because in agentic AI, controlling what a system can do is no longer sufficient to control the risk it poses.

The Problem

Authorization models — RBAC, ABAC, Zero Trust — share a common assumption: constraining actions constrains risk. For decades, this worked. Behavior was deterministic, bounded, and auditable.

Agentic AI systems break this assumption. An agent that dynamically composes tool chains, transforms data across contexts, and pursues goals through non-deterministic paths renders action-based authorization structurally insufficient.

The evidence is accumulating: EchoLeak, Log-To-Leak, Claudy Day — in each case, every individual action was authorized. The data exposure was not.

The Four Pillars

DACSA operates on four pillars that together provide data-layer enforcement for agentic systems.

01

Sensitivity Classification

A graduated lattice structure that classifies data by sensitivity level, enabling policy decisions based on what the data is, not just who requested it.
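A graduated lattice can be sketched as a label carrying a level plus a set of compartments, with a join operation as its least upper bound. The level names and compartment tags below are illustrative assumptions, not DACSA's actual taxonomy:

```python
from dataclasses import dataclass

# Illustrative sensitivity levels (assumed, not DACSA's real taxonomy).
PUBLIC, INTERNAL, CONFIDENTIAL, RESTRICTED = range(4)

@dataclass(frozen=True)
class Label:
    level: int
    compartments: frozenset = frozenset()

def join(a: Label, b: Label) -> Label:
    """Least upper bound: derived data inherits the highest level
    and the union of compartments of its inputs."""
    return Label(max(a.level, b.level), a.compartments | b.compartments)

def flows_to(a: Label, b: Label) -> bool:
    """a may flow to b only if b dominates a in the lattice order."""
    return a.level <= b.level and a.compartments <= b.compartments

# Merging HR and finance data yields a label at least as restrictive
# as either input.
hr = Label(CONFIDENTIAL, frozenset({"hr"}))
fin = Label(INTERNAL, frozenset({"finance"}))
merged = join(hr, fin)
```

The lattice order is what lets a policy engine answer "what is this data?" independently of who asked: any derived datum sits at or above all of its inputs.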

02

Lineage Tracking

Directed acyclic provenance graphs that track how data flows through the system, maintaining a complete audit trail of transformations.
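A minimal sketch of such a provenance DAG, assuming each datum records the operation and inputs it was derived from (node and operation names here are hypothetical):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Datum:
    name: str
    op: str = "source"                      # transformation that produced it
    parents: Tuple["Datum", ...] = ()       # edges to its inputs

def provenance(d: Datum):
    """Walk the DAG to recover every ancestor — the full audit trail
    of transformations behind a single output."""
    seen, order, stack = set(), [], [d]
    while stack:
        n = stack.pop()
        if n.name not in seen:
            seen.add(n.name)
            order.append(n)
            stack.extend(n.parents)
    return order

# Example: a report derived from an LLM summary of two sources.
crm = Datum("crm_record")
email = Datum("email_thread")
summary = Datum("summary", op="llm_summarize", parents=(crm, email))
report = Datum("report", op="merge", parents=(summary,))
trail = [n.name for n in provenance(report)]
```

Because the graph is acyclic and edges point only to inputs, the walk terminates and yields every source that contributed to the output, however many transformation steps separate them.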

03

Delta Inspection

Detection of inference-based disclosure and aggregation attacks — catching the subtle accumulation of individually harmless data points into sensitive composites.
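One way to sketch this check: track which fields a session has already released, and flag any release whose delta completes a combination that is sensitive only in aggregate. The composite definitions below are illustrative assumptions, not a DACSA policy:

```python
# No single field here is sensitive on its own; the combinations are.
# These composites are hypothetical examples.
SENSITIVE_COMPOSITES = [
    frozenset({"zip_code", "birth_date", "gender"}),  # re-identification risk
    frozenset({"salary_band", "employee_id"}),
]

def inspect_delta(already_released: frozenset, new_fields: frozenset):
    """Return the composites this release would newly complete."""
    after = already_released | new_fields
    return [c for c in SENSITIVE_COMPOSITES
            if c <= after and not c <= already_released]

# Two harmless releases so far; the third field completes a composite.
violations = inspect_delta(frozenset({"zip_code", "birth_date"}),
                           frozenset({"gender"}))
```

The key property is that the decision depends on the accumulated state of the session, not on the new fields alone, which is exactly what per-action authorization cannot see.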

04

Output-Bound Enforcement

Constraints on what the system can emit regardless of the path it took to get there — the last line of defense at the output boundary.
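A path-independent gate of this kind can be sketched as a final check comparing the labels attached to the emitted data against the caller's authorized ceiling; the level names are illustrative assumptions:

```python
# Illustrative sensitivity levels (assumed ordering, lowest to highest).
PUBLIC, INTERNAL, CONFIDENTIAL, RESTRICTED = range(4)

class OutputViolation(Exception):
    pass

def enforce_output(output_labels, ceiling):
    """Last line of defense: whatever tool chain produced the output,
    block emission if any contributing datum carries a label above
    the authorized ceiling."""
    worst = max(output_labels, default=PUBLIC)
    if worst > ceiling:
        raise OutputViolation(f"label {worst} exceeds ceiling {ceiling}")
    return True

# An output built only from PUBLIC and INTERNAL data passes an
# INTERNAL ceiling; anything touching RESTRICTED data does not.
enforce_output([PUBLIC, INTERNAL], ceiling=INTERNAL)
```

Because the gate inspects only the output's labels, it holds even when the agent reached the output through a tool chain no policy author anticipated.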

Read the Full Paper

DACSA: Data Authorization Controls for Securing Agentic AI Systems

View Publications