Dynamical intelligence is the capacity of a system to sustain its viability over time by adaptively regulating its state through feedback-driven perception, action, and learning within a dynamic environment.
Formulation
A system $S$ exhibiting dynamical intelligence is characterized by:
State Space
- A set of internal states $X \subseteq \mathbb{R}^n$.
- The system evolves over time according to dynamics: $\dot x(t) = f\!\bigl(x(t), u(t), e(t)\bigr)$ where $x(t)\in X$, $u(t)$ are actions, and $e(t)$ are environmental inputs.
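As a concrete sketch, the continuous dynamics above can be simulated by forward-Euler integration. The linear $f$ below is an assumption chosen for simplicity, not part of the definition:

```python
import numpy as np

def f(x, u, e):
    # Hypothetical linear dynamics: the state decays toward the origin
    # while being driven by the action u and the environmental input e.
    return -x + u + e

def simulate(x0, u_seq, e_seq, dt=0.01):
    """Forward-Euler integration of x_dot = f(x, u, e)."""
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for u, e in zip(u_seq, e_seq):
        x = x + dt * f(x, u, e)  # one Euler step
        traj.append(x.copy())
    return np.array(traj)

# With u = e = 0 the state simply relaxes toward the origin.
traj = simulate(x0=[1.0], u_seq=[0.0] * 100, e_seq=[0.0] * 100)
```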
Perception (Information Flow)
- A mapping from environment to internal states: $y(t) = h\!\bigl(x(t), e(t)\bigr)$ giving the system partial, noisy, or uncertain information about the environment.
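A minimal sketch of such an observation map, with Gaussian sensor noise as the (assumed) source of partial, uncertain information:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def h(x, e, noise_std=0.1):
    # Partial observation: only the first state coordinate is visible,
    # offset by the environmental input and corrupted by Gaussian noise.
    # This specific form is illustrative, not prescribed by the text.
    return x[0] + e + noise_std * rng.normal()

y = h(np.array([1.0, 2.0]), e=0.0)  # observes x[0] plus noise
```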
Action (Regulation)
- A policy or control law that selects actions based on states, observations, or beliefs: $u(t) = \pi\!\bigl(x(t), y(t)\bigr)$
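Closing the loop, a proportional feedback law is one of the simplest instances of $\pi$. Everything below (identity observation, pure actuation) is an illustrative assumption:

```python
def step(x, e, dt=0.1, k=2.0):
    # One closed-loop step: observe, act, evolve.
    y = x + e            # h: identity observation plus disturbance
    u = -k * y           # pi: proportional feedback toward setpoint 0
    x_next = x + dt * u  # f: dynamics reduced to pure actuation
    return x_next, u

x = 1.0
for _ in range(50):
    x, _ = step(x, e=0.0)
# The regulated state contracts toward 0 by a factor (1 - dt*k) per step.
```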
Objective (Criterion)
- The system has an implicit or explicit function it tries to minimize or maximize, e.g.:
  - Control theory: stability, error minimization.
  - Active inference: free-energy minimization.
  - Reinforcement learning: expected cumulative reward.
  - Biological systems: viability, survival, reproduction.
- Abstractly: $J = \int_0^\infty L\!\bigl(x(t), u(t), e(t)\bigr)\,dt$, where the running cost $L$ encodes costs, rewards, or divergences from expectations: the local rate at which the system gains or loses alignment with its objective, given its state, action, and environment at that moment.
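On a finite horizon, the integral $J$ can be approximated by a Riemann sum. The quadratic $L$ below is a standard but assumed choice:

```python
def running_cost(x, u, e, q=1.0, r=0.1):
    # Quadratic running cost L: penalize state deviation and control effort.
    # (Illustrative; the definition only requires some cost rate L.)
    return q * x**2 + r * u**2

def total_cost(xs, us, es, dt):
    # Riemann-sum approximation of J = integral of L(x, u, e) dt.
    return dt * sum(running_cost(x, u, e) for x, u, e in zip(xs, us, es))

J = total_cost(xs=[1.0, 0.5], us=[0.0, 0.0], es=[0.0, 0.0], dt=0.5)
```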
Adaptation (Learning/Updating)
- The system modifies parameters $\theta$ of $f, h, \pi$ over time, in response to feedback, to maintain or improve performance: $\theta_{t+1} = \mathcal{U}\!\bigl(\theta_t, x(t), y(t), u(t), e(t)\bigr)$
This update law $\mathcal{U}$ can be Bayesian (belief updating), gradient-based (as in reinforcement learning or adaptive control), evolutionary (population dynamics), or heuristic.
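As one concrete instance of $\mathcal{U}$, a gradient step on a squared prediction error. The surrogate loss is a stand-in assumed for illustration:

```python
def update(theta, x, y, lr=0.1):
    # Gradient-based U: descend the surrogate loss (y - theta * x)**2.
    # Here x plays the role of the state and y the observation; the
    # action u and environment e are omitted purely for brevity.
    grad = -2.0 * (y - theta * x) * x
    return theta - lr * grad

theta = 0.0
for _ in range(100):
    theta = update(theta, x=1.0, y=2.0)
# theta converges toward the value that makes theta * x match y.
```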
Closure (Self-Maintenance)
- The system must preserve its operational coherence over time, i.e. remain viable: its state must stay within a set of admissible (viable) states $V \subseteq X$ for all $t \ge 0$.
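A toy formalization: closure as membership in a fixed set of admissible states (the interval bounds are an assumption):

```python
def viable(x, bounds=(-1.0, 1.0)):
    # Closure as a constraint: the system remains operationally coherent
    # only while its state stays inside the viability set V = [lo, hi].
    lo, hi = bounds
    return lo <= x <= hi

# A regulator that keeps x inside V maintains closure; one that lets
# x escape V has lost it.
```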