The Multi-Model Mind: Meta-Rationality for Wardley Leaders

· 4 min read
Dave Hulbert
Builder and maintainer of Wardley Leadership Strategies

Your AI safety team wants to pause deployment. Your product team sees only competitive risk. The map shows the component is custom-built and evolving fast. Which model wins? If you choose just one, you’ve already lost.

Wardley Doctrine already warns us to Use Appropriate Methods—avoid one-size-fits-all approaches to delivery, governance, or even mapping itself. That doctrine is a gateway into meta-rationality: the ability to notice when a formal method has hit its limits and to fluidly swap in different lenses without abandoning rigour. Charlie Munger called it a "latticework of models"; David Chapman calls it meta-rationality—the pragmatism of choosing and combining frames instead of worshipping one.

Why meta-rationality matters now

  • AI multiplies frames. Product, legal, ethics, and safety teams see the same AI capability through different models. Meta-rational leaders can hold these models together without forcing premature convergence.
  • Wardley Maps can ossify. Maps are situational, not sacred. Meta-rationality keeps leaders from mistaking a neat map for the territory.
  • Doctrine is context-sensitive. Principles like Use Appropriate Methods and Think Small Teams are invitations to pick the right play for the landscape, not to apply a fixed ritual.

A meta-rational stack for mappers

1) Name the frame

Before mapping, declare which lens you’re treating as primary (e.g., evolution and user need), and admit which perspectives you’re temporarily ignoring (e.g., power dynamics, regulatory posture).
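To make step 1 tangible, a frame declaration can be as lightweight as a record attached to the map. Here is a minimal Python sketch; the structure and field names (`FrameDeclaration`, `primary_lens`, `ignored_perspectives`) are illustrative, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class FrameDeclaration:
    """Declares the lens a map is drawn through, and what it deliberately omits."""
    map_name: str
    primary_lens: str
    ignored_perspectives: list[str] = field(default_factory=list)

frame = FrameDeclaration(
    map_name="AI inference platform",
    primary_lens="evolution and user need",
    ignored_perspectives=["power dynamics", "regulatory posture"],
)
print(frame)
```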

2) Carry a latticework

Pair the map with at least two other models that stress different dimensions. Examples: Cynefin for complexity class, OODA for tempo, Viable System Model for governance recursion, Panarchy for cross-scale shocks.
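One lightweight way to carry the latticework is to record, alongside the map, which dimension each companion model stresses. A hypothetical sketch; the pairings simply mirror the examples above:

```python
# Each companion model and the dimension it stresses.
# Pairings mirror the examples above; extend to suit your landscape.
lattice = {
    "Wardley Map": "evolution and user need",
    "Cynefin": "complexity class",
    "OODA": "tempo",
    "Viable System Model": "governance recursion",
    "Panarchy": "cross-scale shocks",
}

for model, dimension in lattice.items():
    print(f"{model}: stresses {dimension}")
```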

3) Switch modes deliberately

Use experiments or incidents as prompts to change frame: if the map stalls, view the situation as a Panarchy adaptive cycle; if coordination fails, run a VSM audit; if risk spooks delivery, move to Cynefin’s exploratory probes.
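Writing the triggers down as explicit rules keeps frame shifts deliberate rather than ad hoc. A minimal sketch, assuming the signal names are agreed in advance (they are illustrative here):

```python
# Map observed signals to the frame worth switching into.
# Signal names are illustrative; the pairings follow the prompts above.
FRAME_SHIFTS = {
    "map stalled": "Panarchy adaptive cycle",
    "coordination failing": "VSM audit",
    "risk spooking delivery": "Cynefin exploratory probes",
}

def next_frame(signal: str, current: str) -> str:
    """Return the frame a signal suggests, else keep the current one."""
    return FRAME_SHIFTS.get(signal, current)

print(next_frame("map stalled", current="Wardley Map"))
# -> Panarchy adaptive cycle
```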

4) Tolerate overlap

Accept that models will disagree. Capture the contradictions in the map notes instead of hiding them. When models contradict each other, you’ve found leverage, not confusion.

5) Retire brittle models

Chapman’s meta-rationality stresses knowing when a model no longer fits. If a component’s behaviour routinely violates your chosen frame, change the model rather than forcing the data to fit.
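Steps 4 and 5 combine naturally: capture each contradiction as a map note, and flag a model for retirement once violations accumulate. A sketch under assumed bookkeeping; the threshold of three is arbitrary:

```python
from collections import Counter

violations: Counter[str] = Counter()

def record_contradiction(model: str, observation: str) -> None:
    """Capture a disagreement in the map notes instead of hiding it."""
    violations[model] += 1
    print(f"[map note] {model} contradicted by: {observation}")

def should_retire(model: str, threshold: int = 3) -> bool:
    """Flag a model whose frame the data routinely violates."""
    return violations[model] >= threshold

for obs in ["component evolved backwards",
            "users bypassed the platform",
            "vendor re-internalised a commodity"]:
    record_contradiction("linear evolution assumption", obs)

print(should_retire("linear evolution assumption"))  # -> True
```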

Applying the latticework to doctrine

  • Use Appropriate Methods → Frame checks. In cadence reviews, ask: Which model dominated our last decision? What would change if we applied a different frame? Rotate the dominant model based on signals, not habit.
  • Optimise Flow → Tempo-aware switching. Use OODA loops to decide when to tighten or loosen control on AI components, while Wardley Maps show which components can safely accelerate.
  • Be Transparent → Annotated maps. Record which models shaped each strategic play (see the sketch after this list). Transparency makes it easier to revisit assumptions when signals shift.
  • Design for Evolution → Model sunsets. As components commoditise, retire bespoke models and move to simpler governance. Genesis work may need rich narrative models; utilities can rely on checklists.
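For the annotated-maps point above, one way to make model provenance visible is to attach a note to each strategic play recording which frames shaped it and what signal should reopen the decision. The note structure here is hypothetical:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PlayAnnotation:
    """A map note recording which models shaped a strategic play."""
    play: str
    models_used: list[str]
    decided_on: date
    revisit_when: str  # the signal that should reopen the decision

note = PlayAnnotation(
    play="accelerate commoditisation of inference serving",
    models_used=["Wardley Map", "OODA"],
    decided_on=date.today(),
    revisit_when="regulator signals a deployment pause",
)
print(note)
```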

Leadership moves

  • Teach model pluralism. Run short sessions where teams map the same landscape through different frames. Highlight how each reveals different failure modes.
  • Attach models to risks. When declaring a risk, specify the model that surfaces it (e.g., VSM highlights missing recursion; Cynefin spots premature standardisation). This keeps risk debates concrete.
  • Use AI to broker frames. Fine-tuned agents can summarise how each model interprets new signals—drift, regulation, user behaviour—and propose frame shifts before humans polarise.
  • Measure meta-skill, not model count. Track how often teams switch frames when evidence demands it, not how many frameworks they can name; a concrete version of this metric is sketched below.
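The meta-skill metric can be made concrete: of the decisions where evidence demanded a frame change, count how often the team actually switched. A minimal sketch, assuming a simple decision log of this shape:

```python
# Each decision: did the evidence demand a frame switch, and did one happen?
# The log format is assumed for illustration.
decisions = [
    {"evidence_demanded_switch": True,  "switched": True},
    {"evidence_demanded_switch": True,  "switched": False},
    {"evidence_demanded_switch": False, "switched": False},
    {"evidence_demanded_switch": True,  "switched": True},
]

demanded = [d for d in decisions if d["evidence_demanded_switch"]]
switch_rate = sum(d["switched"] for d in demanded) / len(demanded)
print(f"Frame-switch rate when evidence demanded it: {switch_rate:.0%}")
# -> 67%
```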

What goes wrong when you worship the map

  • Regulatory whiplash. Single-frame thinking on compliance can blindside deployment when the map says “fast” but lawmakers say “halt”.
  • Safety incidents. Over-trusting the map’s evolution guesses can ignore ethical signals that don’t fit the neat component boundaries.
  • Competitive myopia. Optimising for tempo alone can miss the cultural or political moves competitors are making off-map.

Connections and further reading