route rebuilding with NICLA SENSE ME

We are trying to use the NICLA SENSE ME to rebuild a route. We use the orientation data from SENSOR_ID_ORI to build the rotation matrix. After some experiments, the rotation sequence should be ZYX, and we validated it against the gravity data from SENSOR_ID_GRA.
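
For reference, this is roughly the check we used to validate the sequence (a minimal, self-contained sketch: rot_zyx mirrors the eularAngles2rotationMat function in the code below, and the angle values here are made up):

import numpy as np

def rot_zyx(rx, ry, rz):
    # Same construction as eularAngles2rotationMat below: R = Rx @ Ry @ Rz
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

# If the sequence is right, rotating the measured gravity vector back with
# R^T should recover world gravity [0, 0, 9.81]
g_world = np.array([0.0, 0.0, 9.81])
R = rot_zyx(*np.radians([10.0, -5.0, 120.0]))  # made-up pitch/roll/heading
g_sensor = R @ g_world                         # what SENSOR_ID_GRA should read, scaled to m/s^2
print(R.T @ g_sensor)                          # ~[0, 0, 9.81], so the sequence checks out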

However, after we transform the acceleration (the SENSOR_ID_ACC data minus the SENSOR_ID_GRA data) into the world frame with the inverse of the rotation matrix mentioned above, and integrate it to get velocity and position, the drift is too large to ignore, as shown in the attached figure (calculated position). The sensor did not move while we collected the data for this figure.

Is there any way to correct it? Or are there any problems with my code?

import math
import numpy as np
import csv
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d

def eularAngles2rotationMat(theta, format='degree'):
    """
    Calculates Rotation Matrix given eular angles.
    :param theta: 1-by-3 list [rx, ry, rz] angle in degree
    :param format:
    :return:
    ZYX eular angle
    """
    if format == 'degree':
        theta = [i * math.pi / 180.0 for i in theta]

    R_x = np.array([[1, 0, 0],
                    [0, math.cos(theta[0]), -math.sin(theta[0])],
                    [0, math.sin(theta[0]), math.cos(theta[0])]
                    ])

    R_y = np.array([[math.cos(theta[1]), 0, math.sin(theta[1])],
                    [0, 1, 0],
                    [-math.sin(theta[1]), 0, math.cos(theta[1])]
                    ])

    R_z = np.array([[math.cos(theta[2]), -math.sin(theta[2]), 0],
                    [math.sin(theta[2]), math.cos(theta[2]), 0],
                    [0, 0, 1]
                    ])

    # Extrinsic ZYX sequence: the z rotation is applied first, then y, then x
    R = np.dot(R_x, np.dot(R_y, R_z))
    return R

if __name__ == '__main__':

    with open("testAll_2023020601.csv", 'r') as f:
        sample_data = list(csv.reader(f, delimiter=","))

    sample_data = np.array(sample_data)

    ts = np.array([float(x[0]) for x in sample_data[1:]])
    accX = np.array([float(x[1]) for x in sample_data[1:]])
    accY = np.array([float(x[2]) for x in sample_data[1:]])
    accZ = np.array([float(x[3]) for x in sample_data[1:]])
    graX = np.array([float(x[10]) for x in sample_data[1:]])
    graY = np.array([float(x[11]) for x in sample_data[1:]])
    graZ = np.array([float(x[12]) for x in sample_data[1:]])
    pitch = np.array([float(x[13]) for x in sample_data[1:]])
    roll = np.array([float(x[14]) for x in sample_data[1:]])
    heading = np.array([float(x[15]) for x in sample_data[1:]])

    # Bosch convention: pitch is about the x-axis, roll about y, heading about z,
    # so [pitch, roll, heading] maps onto [rx, ry, rz]
    theta = np.array([[pitch[x], roll[x], heading[x]] for x in range(np.size(pitch))])
    trans = np.array([eularAngles2rotationMat(x) for x in theta])

    # Scale factor from raw accelerometer counts to m/s^2 (4096 LSB per g)
    s = 9.81 / 4096
    acc = np.array(
        [[(accX[x] - graX[x]) * s, (accY[x] - graY[x]) * s, (accZ[x] - graZ[x]) * s] for x in range(np.size(accX))])
    # A rotation matrix is orthogonal, so its inverse is just its transpose;
    # this rotates the sensor-frame linear acceleration into the world frame
    accCorr = np.array([trans[x].T @ acc[x] for x in range(np.shape(acc)[0])])

    vX = 0
    vY = 0
    vZ = 0
    posX = 0
    posY = 0
    posZ = 0
    vXarr = [0]
    vYarr = [0]
    vZarr = [0]
    posXarr = [0]
    posYarr = [0]
    posZarr = [0]
    # First integration: accumulate acceleration into velocity (ts is in ms)
    for i in range(1, np.shape(accCorr)[0]):
        vX += accCorr[i][0] * (ts[i] - ts[i - 1]) / 1000
        vY += accCorr[i][1] * (ts[i] - ts[i - 1]) / 1000
        vZ += accCorr[i][2] * (ts[i] - ts[i - 1]) / 1000
        vXarr.append(vX)
        vYarr.append(vY)
        vZarr.append(vZ)
    # Second integration: accumulate velocity (plus the 1/2*a*dt^2 term) into position
    for i in range(1, np.shape(accCorr)[0]):
        posX += vXarr[i - 1] * (ts[i] - ts[i - 1]) / 1000 + 1 / 2 * accCorr[i][0] * (((ts[i] - ts[i - 1]) / 1000) ** 2)
        posY += vYarr[i - 1] * (ts[i] - ts[i - 1]) / 1000 + 1 / 2 * accCorr[i][1] * (((ts[i] - ts[i - 1]) / 1000) ** 2)
        posZ += vZarr[i - 1] * (ts[i] - ts[i - 1]) / 1000 + 1 / 2 * accCorr[i][2] * (((ts[i] - ts[i - 1]) / 1000) ** 2)
        posXarr.append(posX)
        posYarr.append(posY)
        posZarr.append(posZ)

    ax = plt.axes(projection='3d')

    ax.plot3D(posXarr, posYarr, posZarr)
    ax.scatter3D(posXarr, posYarr, posZarr)

    plt.show()

 

[Attached figure: testAll_2023020601.png (calculated position while the sensor was stationary)]
Best reply by BSTRobin

Hi FBSevnt,

From the information provided, it seems that you are trying to estimate the position and velocity of a device from its acceleration data and the orientation data of a sensor. Your approach integrates the acceleration twice to obtain position, after using the orientation data to transform the acceleration into a common reference frame.
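
For reference, that double integration can also be written compactly (a sketch only; the ts and accCorr arrays stand in for the ones in your script, filled with made-up values so it runs on its own):

import numpy as np

rng = np.random.default_rng(0)
ts = np.arange(0.0, 10_000.0, 10.0)          # timestamps in ms (stand-in data)
accCorr = rng.normal(0, 0.1, (ts.size, 3))   # world-frame acceleration in m/s^2 (stand-in)

dt = np.diff(ts)[:, None] / 1000.0           # per-sample interval in seconds
vel = np.vstack([np.zeros(3), np.cumsum(accCorr[1:] * dt, axis=0)])
pos = np.vstack([np.zeros(3), np.cumsum(vel[:-1] * dt + 0.5 * accCorr[1:] * dt**2, axis=0)])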

The approach is reasonable in principle, but note that integrating acceleration data twice lets any small error in the acceleration accumulate: a constant error grows linearly in velocity and quadratically in position. Errors in the orientation data also contribute to the drift, since they leak a component of gravity into the computed linear acceleration.
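
As a toy illustration (all numbers made up): a constant accelerometer bias of 0.05 m/s^2 on a stationary sensor, double-integrated exactly like in your script, already drifts by 1/2 * 0.05 * 10^2 = 2.5 m after 10 seconds:

import numpy as np

dt = 0.01                            # 100 Hz sampling
t = np.arange(0.0, 10.0, dt)
acc = np.full_like(t, 0.05)          # bias only; the true acceleration is zero

vel = np.cumsum(acc) * dt            # first integration: velocity drifts linearly
pos = np.cumsum(vel) * dt            # second integration: position drifts quadratically

print(pos[-1])                       # ~2.5 m of spurious displacement after 10 s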

Here are some suggestions to help correct the drift:

1. Use a sensor fusion algorithm: a sensor fusion algorithm combines data from multiple sensors to obtain a more accurate estimate of orientation and position. This can help reduce the errors caused by individual sensors.
2. Use a Kalman filter: a Kalman filter is a widely used algorithm for estimating the state of a system from noisy sensor data. It can reduce the effect of sensor noise and drift on the estimated position and velocity (see the sketch after this list).
3. Add external information: external references such as GPS or visual odometry can be used to correct the drift in the estimated position and velocity.
4. Tune the integration parameters: the time step and the initial conditions also affect the accuracy of the estimated position and velocity. You can try tuning these to see if the results improve.
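
To make suggestions 2 and 3 concrete, here is a minimal 1-D sketch (all noise values are made up, purely to illustrate the mechanics): a Kalman filter whose prediction step uses the accelerometer and whose update step applies a zero-velocity pseudo-measurement whenever an external cue says the sensor is stationary:

import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition for x = [position, velocity]
B = np.array([0.5 * dt**2, dt])         # how an acceleration input enters the state
H = np.array([[0.0, 1.0]])              # we "measure" velocity only
Q = np.diag([1e-6, 1e-4])               # process noise (assumed values)
Rm = np.array([[1e-4]])                 # measurement noise (assumed values)

x = np.zeros(2)
P = np.eye(2)
rng = np.random.default_rng(0)

for _ in range(1000):                   # 10 s at 100 Hz, sensor actually stationary
    a = 0.05 + rng.normal(0.0, 0.1)     # biased, noisy accelerometer reading
    # Predict
    x = F @ x + B * a
    P = F @ P @ F.T + Q
    # Update with the zero-velocity pseudo-measurement z = 0
    y = 0.0 - H @ x                     # innovation
    S = H @ P @ H.T + Rm
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print(x)   # velocity is pinned near zero, so the position drift stays bounded

Without the update step this collapses to exactly the open-loop double integration above and drifts the same way; the external pseudo-measurement is what bounds the error.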

Regarding your code, it seems to be correct in principle. One thing worth mentioning: in our convention, roll is around the y-axis and pitch is around the x-axis. You may want to try some of the suggestions above to see if they help correct the drift.
