is my robot skidding?
Background
In my free time, I mentor FIRST Robotics, where part of my job is teaching advanced controls theory. It turns out it's moderately hard to get the average high schooler to have fun doing math, so I decided a more visual, self-paced format for absorbing the information would (hopefully) prove fruitful. This is the first installment in a series of incredibly niche concepts, but the problem solving and the ideas behind it are universally applicable.
on to the lecture,,,
how to tell if your robot is lying
Based on Team 1690 Orbit's Software Session
the problem with trusting your wheels
Ordinarily, we fuse readings from various sensors on the robot to calculate a representation of the robot's position in space. This is usually a combination of a locally-derived position based on the robot's motor readings, and an objectively-derived position based on vision. A significant input to the locally calculated (dead-reckoned) odometry is the wheels' encoder values. By seeing how much each wheel has turned, and multiplying that by pi times the wheel's diameter (its circumference), we can calculate the linear displacement contributed at any moment, and sum over these to obtain a position. This is mostly fine, with a caveat.
Wheel encoders measure velocity by counting how fast the wheel spins. But if a wheel is not maintaining constant and perfect traction with the floor, the encoder reports more or less motion than is actually happening. Feed that bad data into odometry, and the local position estimate drifts.
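To make the failure mode concrete, here's a rough sketch of the displacement calculation and how a slipping wheel corrupts it — the numbers and variable names are made up for illustration:

double wheelDiameterMeters = 0.1016;   // a typical 4-inch wheel
double reportedRotations = 0.5;        // what the encoder counted this loop
double reportedMeters = reportedRotations * Math.PI * wheelDiameterMeters; // ≈ 0.16 m
// If the wheel spun freely for part of that loop, the robot may have actually
// traveled only 0.10 m — but odometry happily integrates the full 0.16 m.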
The question is: how do we determine when this is happening?
the redundancy insight
A swerve drive has 4 independent modules, each reporting:
- speed — how fast the wheel is spinning (m/s)
- angle — which direction the wheel is pointing (radians)
That's 8 measurements total.
Robot motion only has 3 degrees of freedom:
- $v_x$ — forward/backward velocity
- $v_y$ — left/right velocity
- $\omega$ — rotational velocity
We have 8 numbers describing something that only needs 3. This system is overdetermined.
the rigid body constraint
Before we can detect lies, we need to understand what truth looks like.
A swerve robot is a rigid body: every point on it moves together as one unit, with no stretching or deformation.
When a rigid body moves, any point's velocity is the sum of two components:

$$\vec{v}_{\text{point}} = \vec{v}_{\text{translation}} + \vec{v}_{\text{rotation}}$$
The translational component is the velocity of the body's center of mass
Translation is identical for every point on a rigid body. If the robot's center moves at $(v_x, v_y)$, then every point on the robot — including all four wheel modules — shares that same translational velocity.
This is the foundation of the entire algorithm.
Pure translation: all four modules have identical velocity vectors.
the geometry of rotation
Rotation is different. When a rigid body spins around its center, points farther from the center move faster. A point at position $\vec{r} = (r_x, r_y)$ relative to the center has rotational velocity:

$$\vec{v}_{\text{rotation}} = \vec{\omega} \times \vec{r}$$

In 2D, this cross product simplifies beautifully. If we define $\omega$ as a scalar (positive = counterclockwise), then:

$$\vec{v}_{\text{rotation}} = \omega \, (-r_y, \; r_x)$$

Why $(-r_y, r_x)$? This is the vector perpendicular to $\vec{r}$, pointing in the direction of rotation. You can verify: if a point is at $\vec{r} = (1, 0)$ — directly to the right of center — and we rotate counterclockwise, it should move upward. Indeed: $\omega \, (-0, 1) = (0, \omega)$, which points straight up.

The magnitude follows directly:

$$|\vec{v}_{\text{rotation}}| = |\omega| \, |\vec{r}|$$
Points farther from center move faster. This is why the rotational component differs for each module based on its position.
Pure rotation: each module's velocity is perpendicular to its radius, with magnitude proportional to distance from center.
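As a quick numerical check (the module position and $\omega$ here are made-up values): a module 0.3 m forward and 0.3 m to the side of center, on a robot spinning at 2 rad/s counterclockwise, moves perpendicular to its radius:

double omega = 2.0;                       // rad/s, counterclockwise (illustrative)
double rx = 0.3, ry = 0.3;                // module position relative to center, meters (illustrative)
double vRotX = omega * -ry;               // -0.6 m/s
double vRotY = omega * rx;                //  0.6 m/s
double speed = Math.hypot(vRotX, vRotY);  // ≈ 0.85 m/s = |omega| * |r|, perpendicular to r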
superposition
Practical motion on a swerve combines both. Each module's measured velocity is the sum of the translational and rotational components. For module $i$ at position $\vec{r}_i = (r_{x,i}, r_{y,i})$ relative to the center:

$$\vec{v}_i = \vec{v}_{\text{translation}} + \omega \, (-r_{y,i}, \; r_{x,i})$$

Expanding into components:

$$v_{x,i} = v_x - \omega \, r_{y,i}, \qquad v_{y,i} = v_y + \omega \, r_{x,i}$$
The measured velocities differ between modules, but the underlying translation is the same for all of them. The differences come entirely from the rotational component.
Combined motion: measured velocities (blue) differ, but translational components (orange) are identical.
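In code, the superposition for a single module is just a vector sum — all values here are illustrative:

double vx = 1.0, vy = 0.0, omega = 2.0;  // chassis state: 1 m/s forward while spinning at 2 rad/s
double rx = 0.3, ry = 0.3;               // this module's position relative to center, meters
double moduleVx = vx - omega * ry;       // 1.0 - 0.6 = 0.4 m/s
double moduleVy = vy + omega * rx;       // 0.0 + 0.6 = 0.6 m/s
// Every module shares the same (vx, vy); only the omega-times-r term differs between them.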
the inverse problem
Now we can frame the detection algorithm. We have 4 modules reporting measured velocities. We want to:
- Estimate the robot's motion
- Extract each module's implied translational component
- Check if they agree
Forward kinematics: from measurements to motion
Each module gives us two equations (x and y components). With 4 modules, that's 8 equations for 3 unknowns. In matrix form:

$$\begin{pmatrix} v_{x,1} \\ v_{y,1} \\ \vdots \\ v_{x,4} \\ v_{y,4} \end{pmatrix} = \begin{pmatrix} 1 & 0 & -r_{y,1} \\ 0 & 1 & r_{x,1} \\ \vdots & \vdots & \vdots \\ 1 & 0 & -r_{y,4} \\ 0 & 1 & r_{x,4} \end{pmatrix} \begin{pmatrix} v_x \\ v_y \\ \omega \end{pmatrix}$$

Or more compactly: $\vec{m} = K \, \vec{c}$, where $\vec{m}$ is the measurement vector, $K$ is the kinematics matrix, and $\vec{c}$ is the chassis state.

This is overdetermined — 8 equations, 3 unknowns. We solve it with least squares regression:

$$\vec{c} = (K^T K)^{-1} K^T \, \vec{m}$$
This finds the chassis state that best explains all 4 module measurements, minimizing squared error. We have to use least squares, instead of solving the system directly, because with 8 equations and only 3 unknowns there is generally no chassis state that satisfies every equation exactly — sensor noise and real-world imperfections make the measurements slightly inconsistent with each other. So we find the closest thing instead.
// WPILib's SwerveDriveKinematics performs this least-squares solve internally
ChassisSpeeds speeds = kinematics.toChassisSpeeds(moduleStates);
double omega = speeds.omegaRadiansPerSecond;
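For reference, the kinematics object used above is built from the module positions — this is where the $\vec{r}_i$ values enter the matrix. A minimal setup with made-up module offsets (meters, using WPILib's constructor):

SwerveDriveKinematics kinematics = new SwerveDriveKinematics(
    new Translation2d(0.3, 0.3),    // front-left module position
    new Translation2d(0.3, -0.3),   // front-right
    new Translation2d(-0.3, 0.3),   // back-left
    new Translation2d(-0.3, -0.3)   // back-right
);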
Inverse kinematics: from motion to expected rotation
Now we compute what each module's velocity would be if the robot were only rotating (no translation):
SwerveModuleState[] rotationalOnly = kinematics.toSwerveModuleStates(
new ChassisSpeeds(0, 0, omega)
);
Extraction: isolating translation
If $\vec{v}_i = \vec{v}_{\text{translation}} + \vec{v}_{\text{rotation},i}$, then $\vec{v}_{\text{translation}} = \vec{v}_i - \vec{v}_{\text{rotation},i}$.
For each module, subtract the rotational component to get the implied translational velocity:
// Translation2d(distance, angle) converts each polar module state into a vector
Translation2d measured = new Translation2d(measuredState.speedMetersPerSecond, measuredState.angle);
Translation2d rotational = new Translation2d(rotationalState.speedMetersPerSecond, rotationalState.angle);
Translation2d translational = measured.minus(rotational);
Decomposition: after subtracting rotation (purple) from measured (blue), all four translational components (orange) should match.
the consistency check
Now we can finally do something useful. In a perfect world with no skidding:

$$\vec{v}_{\text{translation},1} = \vec{v}_{\text{translation},2} = \vec{v}_{\text{translation},3} = \vec{v}_{\text{translation},4}$$
Why? Because translation is the same for all points on a rigid body. After removing rotation, what's left must be identical. If the extracted translations disagree, the measurements are inconsistent.
We quantify disagreement with the skid ratio:

$$\text{skid ratio} = \frac{\max_i |\vec{v}_{\text{translation},i}|}{\min_i |\vec{v}_{\text{translation},i}|}$$
| Skid Ratio | Meaning |
|---|---|
| ≈ 1.0 | All 4 extracted translations agree — the wheels are telling a consistent story |
| > 1 | something funny is happening — at least one wheel disagrees with the others |
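A sketch of that ratio in code, assuming the four extracted translational vectors from the previous step have been collected into an array called translationalComponents (a name made up for this example):

double max = 0;
double min = Double.MAX_VALUE;
for (Translation2d translational : translationalComponents) {
    double magnitude = translational.getNorm();
    max = Math.max(max, magnitude);
    min = Math.min(min, magnitude);
}
double skidRatio = max / min; // ≈ 1.0 is consistent; larger is suspicious (see the edge cases below)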
what skidding looks like
There are two kinds of wheel slip, and they produce opposite encoder errors:
Wheel spin — the wheel loses traction and spins freely. The encoder reports more velocity than the robot is actually achieving. Less common in FRC, but happens on worn carpet or when accelerating aggressively.
Wheel drag — the wheel locks up or drags across the surface. The robot is still moving, but the wheel isn't rotating as fast as it should. The encoder reports less velocity than the robot's actual motion. This is the classic scenario where you suddenly brake, and your locked wheels skid across the carpet while momentum carries you forward.
Both cases cause the robot to yaw toward the affected wheel. With wheel drag, the front-right (FR) wheel locks up and creates more resistance, so the robot pivots around that corner — like slamming one brake on a car. With wheel spin, FR provides less thrust, so the left side overpowers it.
The detection algorithm catches both, since it only cares about the magnitude of the disagreement rather than its direction.
Wheel drag: FR locks up and reports low velocity (short red vector) while the robot curves toward the dragging wheel. The extracted translations disagree.
edge cases
Division by zero
When the robot is stationary, all translational magnitudes approach zero. The ratio becomes $\frac{0}{0}$, which is undefined.
if (minTranslational < 1e-4) {
    return 1.0; // Can't skid if you're not moving
}
Pure rotation
When the robot spins in place with no translation, all extracted translations should be zero. In practice, sensor noise gives small random values. The ratio might be one tiny number divided by an even tinier one — a huge value that looks like skidding.
But notice: the magnitudes are tiny. We can either threshold on absolute magnitude (ignore small translations) or accept that the ratio is unstable when translation is near-zero. In practice, pure rotation without any translation is rare during actual driving.
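Putting the whole pipeline together — forward kinematics, rotation-only inverse kinematics, extraction, and the ratio with its guard — might look something like the sketch below. It targets WPILib's SwerveDriveKinematics API; the method name computeSkidRatio and the 1e-4 threshold are illustrative choices, not canon:

double computeSkidRatio(SwerveDriveKinematics kinematics, SwerveModuleState[] measuredStates) {
    // Forward kinematics: least-squares best fit of the chassis state to all 8 measurements
    ChassisSpeeds speeds = kinematics.toChassisSpeeds(measuredStates);

    // Inverse kinematics: what each module would report under pure rotation at that omega
    SwerveModuleState[] rotationalOnly = kinematics.toSwerveModuleStates(
        new ChassisSpeeds(0, 0, speeds.omegaRadiansPerSecond));

    double max = 0;
    double min = Double.MAX_VALUE;
    for (int i = 0; i < measuredStates.length; i++) {
        // Convert each (speed, angle) pair to a vector and strip out the rotational part
        Translation2d measured = new Translation2d(
            measuredStates[i].speedMetersPerSecond, measuredStates[i].angle);
        Translation2d rotational = new Translation2d(
            rotationalOnly[i].speedMetersPerSecond, rotationalOnly[i].angle);
        double translationalMag = measured.minus(rotational).getNorm();

        max = Math.max(max, translationalMag);
        min = Math.min(min, translationalMag);
    }

    if (min < 1e-4) {
        return 1.0; // can't skid if you're not translating
    }
    return max / min;
}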
why this works
this solution is actually quite elegant; four wheels reporting eight measurements for only three degrees of freedom means we have more information than strictly necessary, which lets us check the system against itself. Internal consistency checking through redundancy is a very common theme across engineering! if you have a 4-legged robot, for example, you're overconstrained by one leg, so you can easily tell if your robot is slipping or falling.
sandbox
if you just wanna play around with the simulation made for this article
references
- Team 1690 Orbit Software Session — Original explanation
- WPILib Swerve Kinematics
- 254 Cheesy Poofs