A lightweight Linear Quadratic Regulator (LQR) controller library for Arduino and compatible microcontrollers. AutoLQR provides an easy-to-use implementation of LQR control theory for small embedded systems.
Linear Quadratic Regulators are a form of optimal control that balances system performance against control effort. The AutoLQR library handles the complex matrix operations required to compute optimal feedback gains, allowing you to implement sophisticated control systems on resource-constrained platforms.
- Minimal Dependencies: Designed to work with the Arduino core libraries only
- Memory Efficient: Dynamic allocation of matrices based on system size
- Flexible System Sizing: Support for systems with multiple states and inputs
- Simplified API: Easy-to-use interface for setting up system dynamics and cost matrices
- Pre-computed Gains: Option to use offline-computed gains for complex systems
- Optimal Control: Implementation of discrete-time LQR control theory
- Matrix Operations: Built-in matrix utilities for embedded control applications
- Download this repository as a ZIP file
- In the Arduino IDE, navigate to Sketch > Include Library > Add .ZIP Library
- Select the downloaded ZIP file
- After installation, go to File > Examples > AutoLQR to view the examples
- Download or clone this repository
- Copy the contents to your Arduino libraries directory (usually `Documents/Arduino/libraries/`)
The Linear Quadratic Regulator solves the optimal control problem by minimizing the infinite-horizon cost function:

$$J = \sum_{k=0}^{\infty} \left( x_k^T Q x_k + u_k^T R u_k \right)$$

where:

- $x_k$ is the state vector at time $k$
- $u_k$ is the control input at time $k$
- $Q$ is the state cost matrix (penalizes state error)
- $R$ is the control cost matrix (penalizes control effort)
For a discrete-time linear system represented by:

$$x_{k+1} = A x_k + B u_k$$

the LQR controller produces a control law:

$$u_k = -K x_k$$

where $K$ is the optimal feedback gain matrix computed by solving the Discrete Algebraic Riccati Equation (DARE).

The AutoLQR library solves the DARE iteratively:

$$P_{k+1} = Q + A^T P_k A - A^T P_k B \left(R + B^T P_k B\right)^{-1} B^T P_k A$$

Once $P$ converges, the optimal gain matrix $K$ is calculated as:

$$K = \left(R + B^T P B\right)^{-1} B^T P A$$
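This fixed-point iteration is simple enough to reproduce offline when you want to pre-compute gains on a PC rather than on the microcontroller. The following is an illustrative pure-Python sketch (not part of the AutoLQR library) using the 2-state, 1-input example from the usage section below:

```python
# Illustrative offline DARE solver in plain Python (not part of AutoLQR).
# Matrices are stored as lists of rows.

def matmul(X, Y):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def solve_dare(A, B, Q, R, iters=200):
    """Iterate P <- Q + A^T P A - A^T P B (R + B^T P B)^-1 B^T P A."""
    n = len(A)
    P = [row[:] for row in Q]            # start the iteration from Q
    At, Bt = transpose(A), transpose(B)
    K = None
    for _ in range(iters):
        BtP = matmul(Bt, P)
        s = R[0][0] + matmul(BtP, B)[0][0]        # R + B^T P B (scalar: 1 input)
        K = [[v / s for v in matmul(BtP, A)[0]]]  # (R + B^T P B)^-1 B^T P A
        AtP = matmul(At, P)
        AtPA = matmul(AtP, A)
        AtPBK = matmul(matmul(AtP, B), K)
        P = [[Q[i][j] + AtPA[i][j] - AtPBK[i][j] for j in range(n)]
             for i in range(n)]
    return P, K

A = [[1.0, 0.1], [0.0, 0.95]]
B = [[0.005], [0.1]]
Q = [[10.0, 0.0], [0.0, 1.0]]
R = [[0.1]]
P, K = solve_dare(A, B, Q, R)
```

The resulting 1×2 row `K` can then be passed to `controller.setGains()` on the target board.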
```cpp
// Define system size
#define STATE_SIZE 2    // Number of state variables
#define CONTROL_SIZE 1  // Number of control inputs

// Create controller instance
AutoLQR controller(STATE_SIZE, CONTROL_SIZE);
```
The system dynamics are defined by two matrices:
- State matrix A describes how the system evolves on its own
- Input matrix B describes how control inputs affect the system
For example, a simple position-velocity system with discretized dynamics:
```cpp
// For a system with dt = 0.1s
float A[STATE_SIZE * STATE_SIZE] = {
  1.0, 0.1,   // Position += velocity * dt
  0.0, 0.95   // Velocity with damping factor
};

float B[STATE_SIZE * CONTROL_SIZE] = {
  0.005,  // Effect of control on position (0.5*dt^2)
  0.1     // Effect of control on velocity (dt)
};

// Set matrices in controller
controller.setStateMatrix(A);
controller.setInputMatrix(B);
```
The cost matrices Q and R define the trade-off between performance and control effort:
- Q penalizes state errors (higher values = faster response)
- R penalizes control effort (higher values = smoother control)
```cpp
// Q matrix - state cost
float Q[STATE_SIZE * STATE_SIZE] = {
  10.0, 0.0,  // Position error cost
  0.0,  1.0   // Velocity error cost
};

// R matrix - control cost
float R[CONTROL_SIZE * CONTROL_SIZE] = {
  0.1  // Control effort cost
};

// Set cost matrices
controller.setCostMatrices(Q, R);
```
Once the system is defined, compute the optimal gains:
```cpp
if (controller.computeGains()) {
  Serial.println("Gain computation successful");
} else {
  Serial.println("Error: Failed to compute gains");
}
```
In each control loop:
```cpp
// 1. Calculate state error (current state - desired state)
float state_error[STATE_SIZE] = {
  current_position - desired_position,
  current_velocity - desired_velocity
};

// 2. Update the controller with current state error
controller.updateState(state_error);

// 3. Calculate optimal control input
float control[CONTROL_SIZE];
controller.calculateControl(control);

// 4. Apply control to your system
motor_power = control[0];
```
For complex systems or microcontrollers with limited computational resources:
```cpp
// Pre-computed optimal gain matrix (can be calculated offline)
float K[CONTROL_SIZE * STATE_SIZE] = {
  4.47, 2.28  // Example values
};

// Set pre-computed gains
controller.setGains(K);
```
For setpoint tracking, you can calculate feedforward gains:
```cpp
float desired_state[STATE_SIZE] = {target_position, 0.0};
float feedforward[CONTROL_SIZE];
controller.estimateFeedforwardGain(feedforward, desired_state);

// Apply both feedback and feedforward control
float u_total = control[0] + feedforward[0];
```
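One common way to obtain such a feedforward term is from the steady-state condition $x_{ss} = A x_{ss} + B u_{ss}$, solved for $u$ in a least-squares sense. The sketch below is illustrative only; the library's `estimateFeedforwardGain` may compute the term differently internally.

```python
# Steady-state feedforward sketch (illustrative, not AutoLQR's internal code).
# From x_ss = A x_ss + B u_ss we get (I - A) x_des = B u_ff; with more
# states than inputs this is solved in a least-squares sense:
#   u_ff = (B^T B)^-1 B^T (I - A) x_des   (single-input case)

def feedforward(A, B, x_des):
    n = len(A)
    # r = (I - A) x_des
    r = [x_des[i] - sum(A[i][j] * x_des[j] for j in range(n)) for i in range(n)]
    # least-squares solution for a single input: u = (B^T r) / (B^T B)
    btb = sum(B[i][0] ** 2 for i in range(n))
    btr = sum(B[i][0] * r[i] for i in range(n))
    return btr / btb

A = [[1.0, 0.1], [0.0, 0.95]]
B = [[0.005], [0.1]]
u_ff = feedforward(A, B, [0.0, 1.0])  # input needed to hold a velocity of 1.0
```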
Estimate how long the system will take to converge:
```cpp
float convergence_time = controller.estimateConvergenceTime(0.05);  // 5% threshold
if (convergence_time > 0) {
  Serial.print("Estimated convergence time: ");
  Serial.println(convergence_time);
}
```
Calculate the expected cost from the current state:
```cpp
float expected_cost = controller.calculateExpectedCost();
if (expected_cost >= 0) {
  Serial.print("Expected cost: ");
  Serial.println(expected_cost);
}
```
Verify if your system is controllable:
```cpp
if (controller.isSystemControllable()) {
  Serial.println("System is controllable");
} else {
  Serial.println("Warning: System may not be controllable");
}
```
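For reference, a discrete-time system is controllable when the controllability matrix $[B \; AB \; \cdots \; A^{n-1}B]$ has full rank. For the 2-state example this reduces to a determinant check, sketched below in plain Python (illustrative; AutoLQR's internal test may differ):

```python
# Controllability check for the 2-state example: the system is controllable
# iff C = [B, A B] has full rank, i.e. a nonzero determinant here.
# Illustrative sketch, not AutoLQR's internal implementation.

A = [[1.0, 0.1], [0.0, 0.95]]
B = [[0.005], [0.1]]

AB = [sum(A[i][j] * B[j][0] for j in range(2)) for i in range(2)]
# C = [[B[0], AB[0]], [B[1], AB[1]]]
det_C = B[0][0] * AB[1] - B[1][0] * AB[0]
controllable = abs(det_C) > 1e-9
```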
All matrices are represented as flattened 1D arrays in row-major order:
```
[0,0] [0,1] [0,2]       A[0] A[1] A[2]
[1,0] [1,1] [1,2]   →   A[3] A[4] A[5]
[2,0] [2,1] [2,2]       A[6] A[7] A[8]
```
For example, a 3×3 state matrix A would be stored in a 9-element array.
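In practice this means element (i, j) of a matrix with `cols` columns lives at flat index `i * cols + j`. A tiny sketch of the index arithmetic (the `flat_index` helper is hypothetical, not part of the AutoLQR API):

```python
# Row-major indexing: element (i, j) of a matrix with `cols` columns sits
# at flat index i * cols + j. Helper shown for illustration only.

def flat_index(i, j, cols):
    return i * cols + j

A = [1.0, 0.1,
     0.0, 0.95]  # the 2x2 state matrix from the usage example, flattened
a01 = A[flat_index(0, 1, 2)]  # element (0, 1) of A
```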
Memory allocation is proportional to:
- State matrix A: stateSize² floats
- Input matrix B: stateSize × controlSize floats
- Cost matrices Q and R: stateSize² and controlSize² floats
- Internal buffers for matrix operations
For a 2-state, 1-control system, approximately 72 bytes of RAM are used.
- Matrix multiplication: O(n³) for n×n matrices
- DARE solver: O(k×n³) where k is the number of iterations
- Systems with more than 3 states may be computationally intensive
- Pre-compute gains for larger systems
- Use fixed-point arithmetic for memory-constrained systems
- Consider decreasing DARE solver iteration count for faster computation
- Tune Q and R matrices to achieve desired control performance
The AutoLQR library is suitable for a variety of control applications:
- Motor Position Control: Precisely control motor position with minimal oscillation
- Self-balancing Robots: Maintain stability using multiple sensor inputs
- Temperature Control Systems: Optimize heating/cooling with minimal energy usage
- Drone Flight Control: Stabilize attitude with optimal control authority
- Servo Control: Smooth and precise control of servo mechanisms
See the examples folder for detailed implementation examples.
- Ensure matrices A, B, Q, and R have appropriate dimensions
- Check that Q and R are symmetric and positive definite
- Verify the system is controllable
- Try adjusting Q and R values (start with diagonal matrices)
- Check sign conventions in control output application
- Ensure state error is properly calculated (current - desired)
- Verify system model (A and B matrices) accurately represents the physical system
- Gradually increase Q values for more aggressive control
- Gradually increase R values for smoother control action
- Reduce system size if possible
- Use pre-computed gains instead of on-board computation
- Consider static allocation for very constrained systems
`AutoLQR(int stateSize, int controlSize)`

- `stateSize`: Number of state variables
- `controlSize`: Number of control inputs
bool setStateMatrix(const float* A)
Sets the system dynamics matrix A. Returns success/failure.
bool setInputMatrix(const float* B)
Sets the input matrix B. Returns success/failure.
bool setCostMatrices(const float* Q, const float* R)
Sets the state cost matrix Q and control cost matrix R. Returns success/failure.
bool computeGains()
Computes the optimal feedback gains by solving the discrete algebraic Riccati equation. Returns success/failure.
void updateState(const float* currentState)
Updates the controller with the current system state or state error.
void calculateControl(float* controlOutput)
Calculates the optimal control inputs for the current state.
void setGains(const float* K)
Sets pre-computed gain values directly.
bool isSystemControllable()
Checks if the system is controllable. Returns true/false.
const float* getRicattiSolution() const
Gets the solution of the Riccati equation (P matrix).
void estimateFeedforwardGain(float* ffGain, const float* desiredState)
Estimates feedforward gain for steady-state tracking.
float estimateConvergenceTime(float convergenceThreshold = 0.05f)
Estimates time to convergence. Returns time in seconds or -1 if estimation fails.
bool exportGains(float* exportedK)
Exports computed gains to an external array. Returns success/failure.
float calculateExpectedCost()
Calculates expected cost from current state. Returns cost value or -1 if calculation fails.
Contributions to improve AutoLQR are welcome! Please follow these steps:
- Fork the repository
- Create a feature branch
- Implement your changes
- Submit a pull request
Please include tests and example code when adding new features.
This library is released under the MIT License. See LICENSE for details.
Created by 1999AZZAR.
For questions or support, please contact: azzar.mr.zs@gmail.com