2 posts tagged with "autonomy"

The Productive Half-Life of AI Agents

· 6 min read
Dave Hulbert
Builder and maintainer of Wardley Leadership Strategies

Every executive asks the same question in different words: how long can we let the agent run before someone has to look over its shoulder? Call that interval the productive half-life—the span of time where an AI agent remains helpful, safe, and aligned without human intervention. There is real research to stand on here: METR’s time-horizon studies measure how long agents can pursue a task at 50% reliability and how that varies by domain; OSWorld, RealWebAssist, TOOLATHLON, and SWE-bench Verified all probe long, messy task chains rather than single steps. Human–automation trust work (Lee & See; CHI 2019 Guidelines for Human–AI Interaction) and decades of mixed-initiative research (Horvitz; Hearst; Bradshaw) show when people should re-enter the loop. The longer the half-life, the more the team can focus on higher-order choices instead of mechanical supervision—and the clearer the protocol for when to step back in.
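The half-life framing can be made concrete with a toy model. The sketch below assumes each step of a task succeeds independently with a fixed probability, so cumulative reliability decays geometrically; the "productive half-life" is then the horizon at which success probability crosses 50%. This is an illustrative simplification, not METR's actual methodology, and the function name and parameters are hypothetical.

```python
import math

def productive_half_life(p_success_per_step: float, step_minutes: float = 1.0) -> float:
    """Minutes until cumulative success probability falls below 50%,
    assuming each step succeeds independently with a fixed probability.
    Illustrative toy model only -- not METR's time-horizon methodology."""
    if not 0 < p_success_per_step < 1:
        raise ValueError("per-step success probability must be strictly between 0 and 1")
    # Solve p^n = 0.5 for n, the number of steps until reliability halves.
    steps_to_half = math.log(0.5) / math.log(p_success_per_step)
    return steps_to_half * step_minutes

# An agent completing one-minute steps at 99% per-step reliability
# stays above 50% overall reliability for roughly 69 minutes.
print(round(productive_half_life(0.99), 1))
```

The point of the toy model is the intuition it encodes: small per-step reliability gains compound into much longer unsupervised runs, which is why time-horizon benchmarks track the interval rather than single-step accuracy.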

Autonomy Gradient Maps

· 5 min read
Dave Hulbert
Builder and maintainer of Wardley Leadership Strategies

AI is compounding faster than governance. Leaders need a tool that lets them accelerate delegation without drifting into risk. Autonomy Gradient Maps extend Wardley Maps with explicit bands of delegated authority, showing how much control a component should have at each stage of evolution. The gradient creates an operational contract between human teams and AI agents: what they may decide, what they must escalate, and how that posture should change as the landscape shifts.

[Figure: Autonomy Gradient Map bands]
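The operational contract described above can be sketched as a small data structure: each mapped component carries an evolution stage, an autonomy band, and an escalation trigger. The band names, fields, and example policies below are hypothetical illustrations of the idea, not a canonical schema from the post.

```python
from dataclasses import dataclass
from enum import Enum

class Band(Enum):
    """Hypothetical autonomy bands -- names are illustrative, not canonical."""
    HUMAN_LED = 1        # agent may only suggest
    APPROVE_FIRST = 2    # agent acts after explicit sign-off
    ACT_AND_REPORT = 3   # agent acts, then logs for human review
    FULLY_DELEGATED = 4  # agent acts within declared guardrails

@dataclass
class ComponentPolicy:
    component: str
    evolution: str      # Wardley stage: genesis / custom / product / commodity
    band: Band
    escalate_when: str  # the evidence gate that forces a human decision

# Illustrative policies: more-evolved components tolerate more delegation.
policies = [
    ComponentPolicy("pricing experiments", "custom", Band.APPROVE_FIRST,
                    "projected revenue impact exceeds agreed threshold"),
    ComponentPolicy("dependency upgrades", "commodity", Band.ACT_AND_REPORT,
                    "build or test suite fails after upgrade"),
]

for p in policies:
    print(f"{p.component} ({p.evolution}): {p.band.name}; escalate when {p.escalate_when}")
```

Encoding the gradient as data rather than prose is what makes it an operational contract: the same record can drive agent permissions, audit logs, and the review cadence as a component evolves.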

This model sits alongside the other AI-era operating patterns on this site. Where Cybernetic AI Leadership with the Viable System Model wires recursive governance, Autonomy Gradient Maps provide the map-level annotations that tell each System 1–5 node how much freedom to grant. They also complement Background AI for Continual Improvement by declaring where background agents can act without approval, and Autonomously Executed Strategy by defining the evidence gates that convert intent into safe machine-led execution. Together they form a choreography: recursive cybernetic loops, background AI improving the organism, and autonomy bands deciding how boldly the system acts.