State Space Representation

State space is the modern approach to control: model a system as matrices operating on state vectors. Unlike transfer functions (single-input, single-output), state space naturally handles multiple inputs, multiple outputs, and internal states.

Why It Matters

Transfer functions work for simple SISO systems. But a quadcopter has 4 inputs (motor speeds) and 6+ outputs (position, orientation). State space handles this naturally. It’s also the framework for the Kalman Filter, optimal control (LQR), and model-predictive control.

The State Space Equations

ẋ = Ax + Bu    (state equation — how the system evolves)
y = Cx + Du    (output equation — what you can measure)
Symbol | Size | Meaning
-------|------|--------
x      | n×1  | State vector (all internal variables)
u      | m×1  | Input vector (control signals)
y      | p×1  | Output vector (measurements)
A      | n×n  | System matrix — internal dynamics
B      | n×m  | Input matrix — how inputs affect states
C      | p×n  | Output matrix — which states are measured
D      | p×m  | Feedthrough matrix — direct input→output (usually zero)

Example: Mass-Spring-Damper

A mass m on a spring (stiffness k) with damper (coefficient c), driven by force F:

m·x'' + c·x' + k·x = F

Define the states x₁ = position and x₂ = velocity:

x₁' = x₂                        (velocity)
x₂' = (-k/m)·x₁ + (-c/m)·x₂ + (1/m)·F   (Newton's second law)

In matrix form:

import numpy as np
 
m, c, k = 1.0, 0.5, 2.0
 
A = np.array([[0,    1   ],
              [-k/m, -c/m]])   # [[0, 1], [-2, -0.5]]
 
B = np.array([[0   ],
              [1/m ]])         # [[0], [1]]
 
C = np.array([[1, 0]])         # measure position only
D = np.array([[0]])            # no direct feedthrough
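Before simulating, it is worth checking the open-loop poles: the eigenvalues of A are the system's poles, and the system is stable when they all have negative real parts. A quick check using the matrices above:

```python
import numpy as np

m, c, k = 1.0, 0.5, 2.0
A = np.array([[0.0,  1.0 ],
              [-k/m, -c/m]])

# The eigenvalues of A are the open-loop poles
eigs = np.linalg.eigvals(A)
print(eigs)                            # complex-conjugate pair, real part -0.25
print(all(e.real < 0 for e in eigs))   # True: the open loop is stable
```

These match the roots of the characteristic polynomial s² + (c/m)s + (k/m) = s² + 0.5s + 2, so the damped oscillation seen in the step response is expected.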

Simulation

import matplotlib.pyplot as plt
 
dt = 0.01
steps = 1000
x = np.array([[0.0], [0.0]])  # initial state: at rest
history = []
 
for i in range(steps):
    u = np.array([[1.0]]) if i > 50 else np.array([[0.0]])  # step input at t=0.5s
    x_dot = A @ x + B @ u
    x = x + x_dot * dt        # Euler integration
    y = C @ x + D @ u
    history.append(y[0, 0])
 
t = np.arange(steps) * dt
plt.plot(t, history)
plt.xlabel('Time (s)'); plt.ylabel('Position'); plt.grid(); plt.show()
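One way to sanity-check the simulation: for a constant input u, the steady state satisfies 0 = Ax + Bu, so x_ss = -A⁻¹Bu. For our unit step force the position should settle at F/k = 0.5 with zero velocity:

```python
import numpy as np

m, c, k = 1.0, 0.5, 2.0
A = np.array([[0.0, 1.0], [-k/m, -c/m]])
B = np.array([[0.0], [1.0/m]])

# Steady state under constant input: 0 = A x_ss + B u  =>  x_ss = -A^{-1} B u
u = np.array([[1.0]])
x_ss = -np.linalg.inv(A) @ B @ u
print(x_ss.ravel())   # position -> F/k = 0.5, velocity -> 0
```

The Euler simulation above should converge toward this value as the transient dies out.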

Controllability

Can the system reach any state from any initial condition using the available inputs?

Controllability matrix: [B, AB, A²B, ..., A^(n-1)B]

If this matrix has full rank (rank = n), the system is controllable.

ctrl = np.hstack([B, A @ B])
rank = np.linalg.matrix_rank(ctrl)
print(f"Controllable: {rank == A.shape[0]}")  # True for our system

Why it matters: if a system isn’t controllable, there are internal states that no input can affect. You can’t control what you can’t reach.
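To see the failure mode, here is a sketch with a hypothetical system (not the mass-spring-damper): two decoupled states where the input only drives the first. No input can move the second state, and the rank test catches it:

```python
import numpy as np

# Hypothetical decoupled system: input enters x1 only, x2 evolves on its own
A = np.array([[-1.0,  0.0],
              [ 0.0, -2.0]])
B = np.array([[1.0],
              [0.0]])

ctrl = np.hstack([B, A @ B])           # [B, AB] for n = 2
rank = np.linalg.matrix_rank(ctrl)
print(rank)                            # 1 < n = 2: not controllable
```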

Observability

Can you determine all internal states from the output measurements alone?

Observability matrix: [C; CA; CA²; ...; CA^(n-1)] (stacked vertically)

obs = np.vstack([C, C @ A])
rank = np.linalg.matrix_rank(obs)
print(f"Observable: {rank == A.shape[0]}")  # True if we can reconstruct all states

Why it matters: the Kalman Filter requires observability — you can’t estimate states you can’t infer from measurements.
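The dual failure mode can be sketched the same way (again a hypothetical decoupled system, not the mass-spring-damper): if we measure only the first state and the second never influences it, the second state is invisible to any estimator:

```python
import numpy as np

# Hypothetical decoupled system; we measure x1 only, so x2 leaves no trace in y
A = np.array([[-1.0,  0.0],
              [ 0.0, -2.0]])
C = np.array([[1.0, 0.0]])

obs = np.vstack([C, C @ A])            # [C; CA] for n = 2
rank = np.linalg.matrix_rank(obs)
print(rank)                            # 1 < n = 2: not observable
```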

State Feedback Control

If the system is controllable, you can place the closed-loop poles anywhere with state feedback:

u = -Kx + r    (K is a gain matrix, r is the reference)

Closed loop: ẋ = (A - BK)x + Br. The eigenvalues of (A-BK) are the closed-loop poles.

from scipy.signal import place_poles
 
# Place poles at s = -3 ± 2j (faster, well-damped)
desired_poles = [-3 + 2j, -3 - 2j]
result = place_poles(A, B, desired_poles)
K = result.gain_matrix
print(f"Gain matrix K: {K}")
 
# Closed-loop A matrix
A_cl = A - B @ K
print(f"Closed-loop eigenvalues: {np.linalg.eigvals(A_cl)}")
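To see the feedback in action, the same Euler loop can be rerun with u = -Kx. This sketch regulates the mass back to zero from an initial displacement (r = 0, so it is a regulator rather than a tracker), reusing the mass-spring-damper matrices and the pole-placement gain:

```python
import numpy as np
from scipy.signal import place_poles

m, c, k = 1.0, 0.5, 2.0
A = np.array([[0.0, 1.0], [-k/m, -c/m]])
B = np.array([[0.0], [1.0/m]])
K = place_poles(A, B, [-3 + 2j, -3 - 2j]).gain_matrix

# Regulate to zero from an initial displacement with u = -Kx
dt, steps = 0.01, 500
x = np.array([[1.0], [0.0]])       # start displaced 1 m, at rest
for _ in range(steps):
    u = -K @ x
    x = x + (A @ x + B @ u) * dt   # Euler integration
print(x.ravel())                   # both states decay toward zero
```

With closed-loop poles at -3 ± 2j the transient dies out in roughly a second, much faster than the open-loop poles at -0.25 ± 1.39j would allow.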

Converting Transfer Function ↔ State Space

from scipy.signal import tf2ss, ss2tf
 
# Transfer function: G(s) = 1/(s² + 0.5s + 2)
num = [1.0]
den = [1.0, 0.5, 2.0]
 
A, B, C, D = tf2ss(num, den)   # TF → state space
num2, den2 = ss2tf(A, B, C, D)  # state space → TF

A state-space realization is not unique — many different (A, B, C, D) quadruples give the same input-output behavior. The transfer function, by contrast, is unique (up to a common scaling of numerator and denominator).
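This non-uniqueness can be demonstrated with a similarity transform: for any invertible matrix T, the realization (TAT⁻¹, TB, CT⁻¹, D) has exactly the same transfer function. A sketch with an arbitrarily chosen T:

```python
import numpy as np
from scipy.signal import tf2ss, ss2tf

num, den = [1.0], [1.0, 0.5, 2.0]
A, B, C, D = tf2ss(num, den)

# Similarity transform with an arbitrary invertible T
T = np.array([[2.0, 1.0], [0.0, 1.0]])
Ti = np.linalg.inv(T)
A2, B2, C2, D2 = T @ A @ Ti, T @ B, C @ Ti, D

num1, den1 = ss2tf(A, B, C, D)
num2, den2 = ss2tf(A2, B2, C2, D2)
print(np.allclose(num1, num2), np.allclose(den1, den2))  # both True: same transfer function
```

The state coordinates differ, but every input-output experiment gives identical results — which is why controllability and observability are defined in terms of the matrices, not the particular coordinates.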