r"""
\file se2_localization.py.
Created on: Jan 11, 2021
\author: Jeremie Deray
---------------------------------------------------------
This file is:
(c) 2021 Jeremie Deray
This file is part of `manif`, a C++ template-only library
for Lie theory targeted at estimation for robotics.
Manif is:
(c) 2021 Jeremie Deray
---------------------------------------------------------
---------------------------------------------------------
Demonstration example:
2D Robot localization based on fixed beacons.
See se3_localization.py for the 3D equivalent.
See se3_sam.py for a more advanced example performing smoothing and mapping.
---------------------------------------------------------
This demo corresponds to the application in chapter V, section A,
in the paper Sola-18, [https://arxiv.org/abs/1812.01537].
The following is an abstract of the content of the paper.
Please consult the paper for better reference.
We consider a robot in the plane surrounded by a small
number of punctual landmarks or _beacons_.
The robot receives control actions in the form of axial
and angular velocities, and is able to measure the location
of the beacons w.r.t. its own reference frame.
The robot pose X is in SE(2) and the beacon positions b_k in R^2,
| cos th -sin th x |
X = | sin th cos th y | // position and orientation
| 0 0 1 |
b_k = (bx_k, by_k) // lmk coordinates in world frame
The control signal u is a twist in se(2) comprising longitudinal
velocity v and angular velocity w, with no lateral velocity
component, integrated over the sampling time dt.
u = (v*dt, 0, w*dt)
The control is corrupted by additive Gaussian noise u_noise,
with covariance
Q = diagonal(sigma_v^2, sigma_s^2, sigma_w^2).
This noise accounts for possible lateral slippage u_s
through a non-zero value of sigma_s.
At the arrival of a control u, the robot pose is updated
with X <-- X * Exp(u) = X + u.
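As a plain-NumPy illustration of this update (the helper `se2_exp` below is a
hypothetical name written out for this sketch, not part of `manifpy`):

```python
import numpy as np

def se2_exp(u):
    # Exponential map se(2) -> SE(2), for u = (v1*dt, v2*dt, w*dt).
    v1, v2, w = u
    if abs(w) < 1e-9:
        V = np.eye(2)  # small-angle limit of V(w)
    else:
        V = np.array([[np.sin(w),       -(1.0 - np.cos(w))],
                      [1.0 - np.cos(w),  np.sin(w)]]) / w
    c, s = np.cos(w), np.sin(w)
    T = np.eye(3)
    T[:2, :2] = [[c, -s], [s, c]]          # rotation block
    T[:2, 2] = V @ np.array([v1, v2])      # translation block
    return T

X = np.eye(3)                   # robot starts at the origin
u = np.array([0.1, 0.0, 0.05])  # (v*dt, 0, w*dt)
X = X @ se2_exp(u)              # X <-- X * Exp(u)
```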
Landmark measurements are of the range and bearing type,
though they are put in Cartesian form for simplicity.
Their noise n is zero-mean Gaussian, and is specified
with a covariance matrix R.
We notice the rigid motion action y = h(X,b) = X^-1 * b
(see appendix C),
y_k = (brx_k, bry_k) // lmk coordinates in robot frame
We consider the beacons b_k situated at known positions.
We define the pose to estimate as X in SE(2).
The estimation error dx and its covariance P are expressed
in the tangent space at X.
All these variables are summarized again as follows:
X : robot pose, SE(2)
u : robot control, (v*dt ; 0 ; w*dt) in se(2)
Q : control perturbation covariance
b_k : k-th landmark position, R^2
y : Cartesian landmark measurement in robot frame, R^2
R : covariance of the measurement noise
The motion and measurement models are
X_(t+1) = f(X_t, u) = X_t * Exp ( u ) // motion equation
y_k = h(X, b_k) = X^-1 * b_k // measurement equation
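For X = (R, t) the measurement expectation expands to e = R.tr * (b - t);
a quick NumPy check of this identity with made-up numbers (here R denotes
the rotation matrix of the pose, not the measurement covariance):

```python
import numpy as np

theta = 0.05                     # assumed heading
t = np.array([0.1, 0.0])         # assumed position
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
b = np.array([2.0, 0.0])         # landmark in world frame
y = R.T @ (b - t)                # landmark expressed in robot frame
```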
The algorithm below comprises first a simulator to
produce measurements, then uses these measurements
to estimate the state, using a Lie-based error-state Kalman filter.
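The correction step of such an error-state filter reduces to the standard
Kalman algebra in the tangent space; a minimal sketch with illustrative
(assumed) numbers:

```python
import numpy as np

P = np.diag([0.01, 0.01, 0.001])   # tangent-space covariance (illustrative)
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])    # measurement Jacobian (illustrative)
R = np.diag([1e-4, 1e-4])          # measurement noise covariance
z = np.array([0.02, -0.01])        # innovation y - e

Z = H @ P @ H.T + R                # innovation covariance
K = P @ H.T @ np.linalg.inv(Z)     # Kalman gain
dx = K @ z                         # correction, lives in the tangent space
P = P - K @ Z @ K.T                # covariance update
```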
This file has plain code with only one main() function.
There are no function calls other than those involving `manif`.
Printing simulated state and estimated state together
with an unfiltered state (i.e. without Kalman corrections)
allows for evaluating the quality of the estimates.
"""
from manifpy import SE2, SE2Tangent
import numpy as np
from numpy.linalg import inv
Vector = np.array

def Covariance():
    return np.zeros((SE2.DoF, SE2.DoF))


def Jacobian():
    return np.zeros((SE2.DoF, SE2.DoF))

if __name__ == '__main__':

    # START CONFIGURATION
    NUMBER_OF_LMKS_TO_MEASURE = 3

    # Define the robot pose element and its covariance
    X_simulation = SE2.Identity()
    X = SE2.Identity()
    X_unfiltered = SE2.Identity()
    P = Covariance()

    u_nom = Vector([0.1, 0.0, 0.05])
    u_sigmas = Vector([0.1, 0.1, 0.1])
    U = np.diagflat(np.square(u_sigmas))

    # Declare the Jacobians of the motion wrt robot and control
    J_x = Jacobian()
    J_u = Jacobian()

    # Define three landmarks in R^2
    landmarks = []
    landmarks.append(Vector([2.0, 0.0]))
    landmarks.append(Vector([2.0, 1.0]))
    landmarks.append(Vector([2.0, -1.0]))

    # Define the beacon's measurements
    measurements = [Vector([0, 0])] * NUMBER_OF_LMKS_TO_MEASURE

    y_sigmas = Vector([0.01, 0.01])
    R = np.diagflat(np.square(y_sigmas))

    # Declare some temporaries
    J_xi_x = Jacobian()
    J_e_xi = np.zeros((SE2.Dim, SE2.DoF))

    # CONFIGURATION DONE

    # pretty print
    np.set_printoptions(precision=3, suppress=True)

    # DEBUG
    print('X STATE     :    X      Y     THETA')
    print('-------------------------------------------------------')
    print('X initial   : ', X_simulation.log().coeffs())
    print('-------------------------------------------------------')
    # END DEBUG
    # START TEMPORAL LOOP

    # Make 10 steps. Measure up to three landmarks each time.
    for t in range(10):
        # I. Simulation

        # simulate noise
        u_noise = u_sigmas * np.random.randn(SE2.DoF)  # zero-mean Gaussian control noise
        u_noisy = u_nom + u_noise                      # noisy control

        u_simu = SE2Tangent(u_nom)
        u_est = SE2Tangent(u_noisy)
        u_unfilt = SE2Tangent(u_noisy)

        # first we move
        X_simulation = X_simulation + u_simu  # overloaded X.rplus(u) = X * exp(u)

        # then we measure all landmarks
        for i in range(NUMBER_OF_LMKS_TO_MEASURE):
            b = landmarks[i]  # lmk coordinates in world frame

            # simulate noise
            y_noise = y_sigmas * np.random.randn(SE2.Dim)  # zero-mean Gaussian measurement noise

            y = X_simulation.inverse().act(b)  # landmark measurement, before adding noise
            y = y + y_noise                    # landmark measurement, noisy
            measurements[i] = y                # store for the estimator just below

        # II. Estimation

        # First we move
        X = X.plus(u_est, J_x, J_u)  # X * exp(u), with Jacobians
        P = J_x @ P @ J_x.transpose() + J_u @ U @ J_u.transpose()

        # Then we correct using the measurements of each lmk
        for i in range(NUMBER_OF_LMKS_TO_MEASURE):
            # landmark
            b = landmarks[i]  # lmk coordinates in world frame

            # measurement
            y = measurements[i]  # lmk measurement, noisy

            # expectation
            e = X.inverse(J_xi_x).act(b, J_e_xi)  # note: e = R.tr * ( b - t ), for X = (R, t)
            H = J_e_xi @ J_xi_x  # Jacobian of the measurement wrt the robot pose. note: H = J_e_x = J_e_xi * J_xi_x
            E = H @ P @ H.transpose()

            # innovation
            z = y - e
            Z = E + R

            # Kalman gain
            K = P @ H.transpose() @ inv(Z)  # K = P * H.tr * ( H * P * H.tr + R ).inv

            # Correction step
            dx = K @ z  # dx is in the tangent space at X

            # Update
            X = X + SE2Tangent(dx)  # overloaded X.rplus(dx) = X * exp(dx)
            P = P - K @ Z @ K.transpose()

        # III. Unfiltered

        # move also an unfiltered version for comparison purposes
        X_unfiltered = X_unfiltered + u_unfilt

        # IV. Results

        # DEBUG
        print('X simulated : ', X_simulation.log().coeffs().transpose())
        print('X estimated : ', X.log().coeffs().transpose())
        print('X unfiltered: ', X_unfiltered.log().coeffs().transpose())
        print('-------------------------------------------------------')
        # END DEBUG