Estimating a mobile robot’s position and orientation (commonly referred to as its pose) is an important task in robotics. Wheeled robots often do this using a process known as odometry: rotary encoders measure how fast each drive motor turns, and each wheel’s velocity can be calculated from the motor’s rotational speed, the gearbox ratio and the wheel diameter. By integrating the wheel velocities over time, the robot can estimate its current pose.
Legged robots do not lend themselves to this kind of odometry: there are no wheels that conveniently translate a rotational into a linear velocity. However, for statically stable robots such as LXRobotics’ six-legged hexapod L4XZ a similar approach exists, as described in “A Leg Proprioception Based 6 DOF Odometry for Statically Stable Walking Robots” by M. Görner and A. Stelzer.
Their approach is based on two facts. One, the position of a leg’s endpoint relative to the robot’s center can be derived from the leg’s joint angles; this process is known as forward kinematics. Two, there are always at least three legs in contact with the ground, spanning an area from which the displacement of the robot’s center can be calculated.
The first step is to determine which legs are currently on the ground:
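A minimal sketch of this filtering step, assuming each leg exposes a boolean ground-contact flag. The leg names and the contact-sensor interface are illustrative assumptions, not the actual L4XZ API:

```python
# Hypothetical leg names; the real robot's naming scheme may differ.
LEG_NAMES = ["left_front", "left_middle", "left_back",
             "right_front", "right_middle", "right_back"]

def ground_contact_legs(contact_flags):
    """Return the names of all legs whose contact sensor reports ground contact.

    contact_flags: dict mapping leg name -> bool (True = touching ground).
    """
    return [leg for leg in LEG_NAMES if contact_flags.get(leg, False)]

# Example: tripod gait with one leg group in stance.
flags = {"left_front": True, "left_back": True, "right_middle": True,
         "left_middle": False, "right_front": False, "right_back": False}
```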

Secondly, a sanity check of the filtered data is necessary. The sanity check verifies two key requirements: one, that there are at least three legs on the ground; two, that legs from both sides of the hexapod are in the filtered list. Depending on the sensor update rate, one or both requirements can be violated in a tripod gait at the moment the two leg groups transition from swing to stance (and vice versa).
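The two requirements can be sketched as a small predicate, assuming (for illustration only) that leg names carry a left_/right_ prefix:

```python
def contacts_are_sane(contact_legs):
    """Verify the two requirements from the text: at least three legs on the
    ground, and legs from both sides of the hexapod in the filtered list."""
    if len(contact_legs) < 3:
        return False
    has_left = any(leg.startswith("left_") for leg in contact_legs)
    has_right = any(leg.startswith("right_") for leg in contact_legs)
    return has_left and has_right
```

When the check fails, the current sample is simply skipped and the previous pose estimate is carried over until the next valid set of contacts arrives.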

Thirdly, using forward kinematics, the position of each ground contacting leg’s endpoint is calculated relative to the robot’s center (base_link).
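A sketch of forward kinematics for a typical 3-DOF hexapod leg (coxa, femur, tibia joints). The link lengths, mounting offsets and angle conventions below are illustrative assumptions, not L4XZ values:

```python
import math

def leg_forward_kinematics(alpha, beta, gamma,
                           l_coxa=0.04, l_femur=0.08, l_tibia=0.12,
                           mount_xy=(0.10, 0.06), mount_yaw=0.0):
    """Leg endpoint position relative to base_link.

    alpha: coxa yaw, beta: femur pitch, gamma: tibia pitch (radians).
    Link lengths in meters; mount_xy/mount_yaw locate the coxa joint
    on the robot body.
    """
    # Horizontal reach and height within the leg's vertical plane.
    r = l_coxa + l_femur * math.cos(beta) + l_tibia * math.cos(beta + gamma)
    z = l_femur * math.sin(beta) + l_tibia * math.sin(beta + gamma)
    # Rotate the reach by the total yaw and translate to the mounting point.
    yaw = mount_yaw + alpha
    x = mount_xy[0] + r * math.cos(yaw)
    y = mount_xy[1] + r * math.sin(yaw)
    return (x, y, z)
```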

The positions of all ground contacting leg endpoints form a contact point cloud whose centroid is their arithmetic mean.
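For illustration, the centroid calculation in Python:

```python
def centroid(points):
    """Centroid (arithmetic mean) of the ground contact point cloud.

    points: list of (x, y, z) tuples in the base_link frame.
    """
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))
```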
An estimate of the translation and rotation of the robot’s center between two consecutive contact point clouds can be calculated using singular value decomposition.
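One standard way to recover a rigid transform from two point clouds via SVD is the Kabsch algorithm; whether the paper uses exactly this formulation is not stated here, so treat this as a sketch. It assumes row-wise correspondence, i.e. both arrays hold the same stance legs’ contact points at consecutive time stamps:

```python
import numpy as np

def estimate_motion(prev_pts, curr_pts):
    """Estimate the rigid transform (R, t) mapping prev_pts onto curr_pts.

    prev_pts, curr_pts: (N, 3) arrays of corresponding contact points.
    Returns rotation matrix R and translation vector t such that
    curr ~= R @ prev + t.
    """
    p0 = prev_pts.mean(axis=0)
    p1 = curr_pts.mean(axis=0)
    H = (prev_pts - p0).T @ (curr_pts - p1)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = p1 - R @ p0
    return R, t
```

Note that the stance feet move backwards relative to the body as the robot walks forward, so depending on the chosen frames the body motion is the inverse of the transform estimated from the contact points.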
Similarly to a wheeled robot, a legged robot can then estimate its current pose by integrating these translations and rotations over time.
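A minimal sketch of the integration step, assuming each motion increment (dR, dt_vec) is expressed in the robot’s current body frame:

```python
import numpy as np

def integrate_pose(pose_R, pose_t, dR, dt_vec):
    """Accumulate an incremental body motion into the odom-frame pose.

    pose_R, pose_t: current orientation (3x3) and position (3,) in odom.
    dR, dt_vec: incremental rotation and translation in the body frame.
    """
    return pose_R @ dR, pose_t + pose_R @ dt_vec
```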
The robot’s linear and angular velocity can be estimated by dividing the translation and rotation by the time difference Δt between two consecutive time stamps. Since the translation, the rotation and Δt are all numerically small, the resulting velocity estimate is quite noisy. This noise is reduced by applying a Savitzky-Golay filter.
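SciPy provides this filter as scipy.signal.savgol_filter. For illustration, a self-contained NumPy sketch that derives the smoothing weights by least squares, which is the defining idea of the filter (fit a low-order polynomial to each window and evaluate it at the center):

```python
import numpy as np

def savgol_coeffs(window, polyorder):
    """Savitzky-Golay smoothing weights for the window center."""
    half = window // 2
    i = np.arange(-half, half + 1)
    A = np.vander(i, polyorder + 1, increasing=True)  # columns: 1, i, i^2, ...
    # The first row of the pseudo-inverse evaluates the polynomial fit at i = 0.
    return np.linalg.pinv(A)[0]

def savgol_smooth(signal, window=5, polyorder=2):
    """Smooth a 1-D velocity signal with a Savitzky-Golay filter.

    Edges are handled by 'same'-mode convolution, which is adequate
    for a streaming velocity estimate.
    """
    c = savgol_coeffs(window, polyorder)
    return np.convolve(signal, c[::-1], mode="same")
```

In the interior of the signal the filter reproduces any polynomial up to the chosen order exactly, which is why it preserves velocity peaks better than a plain moving average.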
The L4XZ software stack uses this pose estimation to publish transforms from base_link to odom, allowing visualization in RViz. In the screenshot, the left window shows the simulation running in Gazebo, while the right window shows an abstract robot representation in RViz.

Caveat: Odometry based pose estimation is subject to cumulative errors that grow over time and distance travelled. These errors need to be mitigated via sensor fusion with pose estimates generated from different sensor types, such as an IMU or a GNSS device.
