I am planning to acquire position in 3D Cartesian coordinates from an IMU (Inertial Measurement Unit) containing an accelerometer and a gyroscope. I'm using this to track an object's position and trajectory in 3D.
1 - From my limited knowledge I was under the assumption that an accelerometer alone would be enough: it gives acceleration along the x, y, and z axes, A(Ax, Ay, Az), which would need to be integrated twice to get velocity and then position. But integrating adds an unknown constant, and this error, called drift, grows with time. How do I remove this error?
2 - Furthermore, why is there a need for a gyroscope in the first place? Can't we just translate the x-y-z acceleration into displacement? If the accelerometer tells us the axis of motion, why check orientation with a gyroscope? Sorry, this is a very basic question; everywhere I checked, both gyro and accelerometer were used, but I don't know why.
3 - Even when the sensor is stationary and not in any motion, Earth's gravity acts on it, so the readings are always larger than what the sensor's motion alone would produce. How do you remove the gravity?
Once this has been done, I'll apply a Kalman filter to fuse the sensors and smooth the values. How accurate is this method for trajectory estimation of an object in environments where GPS is not an option? I'm getting the accelerometer and gyroscope values from an Arduino and then importing them into Python, where they will be plotted on a 3D graph updating in real time. Any help would be highly appreciated, especially links to similar code.
1 - An accelerometer can be calibrated to account for some of this drift, but in the end no sensor is perfect and its inaccuracies will inevitably cause drift. To fix this you need a filter such as the Kalman filter that uses the accelerometer for short-term, high-frequency data and a secondary sensor, such as a camera, to periodically get the absolute position and correct the internal position estimate. This is the fundamental idea behind the Kalman filter.
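To make that idea concrete, here is a minimal 1D sketch in Python/numpy (the 100 Hz rate and all noise values are assumptions, not tuned numbers): the accelerometer drives the prediction step at a high rate, and an occasional absolute position fix, e.g. from a camera, corrects the accumulated drift in the update step.

import numpy as np

# State: [position, velocity]; the accelerometer acts as the control input.
dt = 0.01                                   # 100 Hz accelerometer samples (assumed)
F = np.array([[1, dt], [0, 1]])             # constant-velocity motion model
B = np.array([[0.5 * dt**2], [dt]])         # effect of acceleration on the state
H = np.array([[1.0, 0.0]])                  # we can only measure position directly
Q = np.eye(2) * 1e-4                        # process noise (tune for your IMU)
R = np.array([[0.05]])                      # position-fix noise (tune for your camera)

x = np.zeros((2, 1))                        # initial state estimate
P = np.eye(2)                               # initial covariance

def predict(accel):
    """Integrate one accelerometer sample (prediction step)."""
    global x, P
    x = F @ x + B * accel
    P = F @ P @ F.T + Q

def correct(position_fix):
    """Fuse an occasional absolute position measurement (update step)."""
    global x, P
    y = np.array([[position_fix]]) - H @ x  # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P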
2 - Accelerometers aren't very good for high-frequency rotational data. Using accelerometer data alone, the system cannot differentiate between a horizontal linear acceleration and a tilt of the sensor relative to gravity. The gyroscope provides the high-frequency rotational data, while the accelerometer provides low-frequency data used to correct the gyroscope's rotational drift. A Kalman filter is one possible solution to this problem, and there are many great online resources explaining it.
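A complementary filter is the simplest form of that split between high-frequency gyro data and low-frequency accelerometer data; a rough sketch for one axis (the axis names and signs are assumptions and depend on your sensor's conventions):

import math

ALPHA = 0.98   # trust the gyro for fast changes, the accelerometer in the long term
pitch = 0.0    # running pitch estimate in degrees

def update_pitch(gyro_pitch_rate, ax, ay, az, dt):
    """Blend the integrated gyro rate (high frequency) with the accelerometer
    tilt angle derived from gravity (low frequency)."""
    global pitch
    accel_pitch = math.degrees(math.atan2(-ax, math.sqrt(ay**2 + az**2)))
    pitch = ALPHA * (pitch + gyro_pitch_rate * dt) + (1 - ALPHA) * accel_pitch
    return pitch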
3 - You would have to use gyro/accelerometer sensor fusion to get the 3D orientation of the sensor and then use vector math to subtract 1 g along that orientation.
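As a rough sketch of that vector math, assuming you already have roll and pitch estimates in radians (for example from a complementary or Kalman filter) and accelerometer readings in g; the exact signs depend on your axis and Euler-angle conventions:

import numpy as np

def remove_gravity(accel, roll, pitch):
    """Subtract gravity from a raw accelerometer sample.
    accel: [ax, ay, az] in g; roll/pitch in radians. Yaw is not needed,
    because gravity is symmetric around the vertical axis."""
    # Gravity (0, 0, 1 g in the world frame) rotated into the sensor frame.
    g_sensor = np.array([
        -np.sin(pitch),
        np.sin(roll) * np.cos(pitch),
        np.cos(roll) * np.cos(pitch),
    ])
    return np.asarray(accel) - g_sensor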
You would most likely be better off looking at some online resources to get the gist of it and then using a pre-built sensor fusion system, whether that is a library or the fusion engine built into the IMU itself (available on most parts today, including the MPU6050). These onboard systems typically do a better job than a simple Kalman filter and can combine other sensors, such as magnetometers, to gain even more accuracy.
I'm training a neural network on stimuli that mimic a sensory neuroscience task, so that its performance can be compared to human results.
The task is based on spatial localization of audio. I need to generate white-noise audio in Python to present to the neural network, but I also need to alter the audio as if it were presented from different locations. I understand how I'd generate the audio, but I'm not sure how to make the white noise sound as if it comes from different theoretical locations.
You can add a delay to the right or left track to account for the different arrival times at the two ears. This interaural time difference is small, up to roughly 0.6-0.7 milliseconds, depending on the angle. The travel-distance disparity from the source to the two ears can be calculated with basic trigonometry and then divided by the speed of sound in air to get the delay length. (I don't know what Python has for controlling delays or to what granularity delay lengths can be specified.)
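As a rough sketch of that delay idea in Python/numpy (the head radius, the spherical-head approximation, and the output amplitude are my assumptions):

import numpy as np
from scipy.io import wavfile

FS = 44100            # sample rate in Hz
HEAD_RADIUS = 0.0875  # metres, rough average head radius (assumption)
SPEED_OF_SOUND = 343  # m/s in air

def itd_seconds(azimuth_deg):
    """Approximate interaural time difference for a source at the given azimuth
    (0 = straight ahead, positive = to the right), using the simple
    spherical-head (Woodworth) approximation."""
    az = np.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (az + np.sin(az))

def spatial_white_noise(duration_s, azimuth_deg, rng=np.random.default_rng()):
    """Generate stereo white noise with the far ear delayed by the ITD."""
    n = int(duration_s * FS)
    mono = rng.standard_normal(n).astype(np.float32)
    mono *= 0.1                        # keep the amplitude well inside [-1, 1]
    delay = int(round(abs(itd_seconds(azimuth_deg)) * FS))
    delayed = np.concatenate([np.zeros(delay, dtype=np.float32), mono])[:n]
    # Source on the right -> the left ear hears it later, and vice versa.
    left, right = (delayed, mono) if azimuth_deg > 0 else (mono, delayed)
    return np.stack([left, right], axis=1)

stereo = spatial_white_noise(1.0, azimuth_deg=45)
wavfile.write("noise_45deg.wav", FS, stereo)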
Most of the other cues we have for spatial location are a lot harder to quantify. Most commonly we use volume, of course. Especially for higher-pitched content (wavelengths smaller than the width of the head), the head itself can block the sound and cause some volume differences, depending on the angle.
But a lot comes from reverberation for environmental cues, from timbral roll-off as a function of distance (a quiet sound with lots of highs in the mix can really sound like it is right next to your ear), from moving the head to capture the sound from different angles, and from the filtering effects of the pinna of the ear. Because everyone's ear shape is different, I don't know that there is a universal rule-of-thumb algorithm for what causes a sound to be perceived as originating from a particular elevation at a given angle. I think to some extent we all just learn by experiencing sounds with our own particular ears while observing the sound source visually.
I have a quadcopter drone with four motors and want to make it fly between two straight lines.
The first problem:
Its initial position will be in the middle at a certain height, but because of air disturbances it may deviate up or down, or left or right. I have calculated the error when it deviates left or right using the camera, but I still don't know how to calculate the height error (using the camera as well, without a pressure sensor).
The second problem:
After calculating these errors, how do I convert them from a number into an actual movement?
Sorry, I couldn't provide my code; it is too large and complicated.
1) Using a single camera to calculate distance is not enough.
However, if you're using a stereo camera, you can get distance data pretty easily. If you want to avoid using a pressure sensor, you may want to consider a distance sensor (LIDAR or ultrasonic; check the maximum range on these) to measure the height at which your drone will fly. In addition, you'll need an error-control algorithm, e.g. a PID controller, to make your drone fly at a constant height.
This is a fantastic source for understanding the fundamentals of PID.
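For reference, a generic PID step is only a few lines of Python; the gains below are placeholders you would have to tune, and the height error has to come from your own camera or distance-sensor measurement:

class PID:
    """Minimal PID controller; error is (target_height - measured_height)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # The output would be mapped to a throttle/thrust adjustment.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

height_pid = PID(kp=1.2, ki=0.05, kd=0.4)     # placeholder gains
correction = height_pid.step(error=0.3, dt=0.02)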
2) For implementation:
In my opinion, this video is excellent for understanding how your sensor data gets converted into an actual movement and will help you create an analogy. You'll also get a head start from the code provided.
I have a sensor that is continually collecting data (shown in blue) every minute and outputs a voltage. I have a reference sensor collecting data (shown in red) that outputs in the units I am interested in. I want to determine a scaling factor so that I can scale the blue sensor's data to match the red sensor's data.
Normally, I would do a simple linear regression between the values of the two sensors at any given time, which would give me a scaling factor based on the slope of the regression. I have noticed, however, that the red sensor is slower at sensing a change in the environment and can be anywhere from 6 to 15 minutes behind. This makes a regression difficult, because at any given time the two sensors may be measuring different things.
I was wondering if there is any sort of curve fitting that can be performed such that I can extract a scaling factor to scale the blue sensor's data to match the red sensor's.
I typically work in Python, so any Python packages (e.g. Numpy/Scipy) that would help with this would be especially helpful.
Thanks for the help. What I ended up doing was finding all the local maxima and minima on the reference curve, then using those peak locations to search for the corresponding maxima or minima on the sample curve. I basically used each of the reference curve's maxima/minima as the center of a "window" and searched for the highest/lowest point on the sample curve within a few minutes of that center point.
Once I had found all the matched maxima/minima on the sample curve, I then could perform a linear regression between these points to determine a scaling factor.
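A rough Python/scipy sketch of that peak-matching approach (the window width and the assumption that both series sit on the same one-minute time grid are mine):

import numpy as np
from scipy.signal import find_peaks
from scipy.stats import linregress

def match_and_scale(reference, sample, window=15):
    """Pair each peak in the reference series with the nearest extremum in the
    sample series within +/- `window` samples, then regress the paired values
    to get a scaling factor (the slope)."""
    ref_vals, smp_vals = [], []
    for sign in (1, -1):                            # maxima first, then minima
        ref_peaks, _ = find_peaks(sign * reference)
        for i in ref_peaks:
            lo, hi = max(0, i - window), min(len(sample), i + window + 1)
            j = lo + np.argmax(sign * sample[lo:hi]) # extremum inside the window
            ref_vals.append(reference[i])
            smp_vals.append(sample[j])
    result = linregress(smp_vals, ref_vals)
    return result.slope, result.intercept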
I am working with GPS data that has latitude/longitude, speed, vehicle IDs, and so on.
Each day, at different times, vehicle speeds differ for each side of the road.
I created this graph with Plotly Mapbox, and the color differences correspond to vehicle speed.
So my question is: can I use any clustering algorithm to find which side of the road a vehicle is on? I tried DBSCAN but could not find a clear answer.
It depends on the data you have about the different points. If you know the time and speed at each point, you can estimate the range within which the next point should fall, and afterwards order the points as a function of distance. Otherwise it is going to be complicated without more information than position and speed for all those points.
P.S. There is a computationally heavy method: try to estimate the route by using tangents to determine the angle between segments of consecutive points.
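A hedged sketch of that direction idea in Python: compute the initial bearing between consecutive fixes for each vehicle with the standard great-circle bearing formula; the two travel directions should cluster around bearings roughly 180 degrees apart.

import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from point 1 to point 2 in degrees (0 = North, 90 = East)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360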
Many GPS hardware sets will compute direction and insert this information into the standard output data alongside latitude, longitude, speed, etc. You might check whether your source data contains information about direction or "heading", often specified in degrees where zero degrees is North, 90 degrees is East, and so on. You may need to parse the data and convert from binary or ASCII hexadecimal values to integer values, depending on the data-structure specifications, which vary between hardware designs. If such data exists in your source, this may be a simpler and more reliable approach to determining direction.
Using my phone held in my hand to represent a toy gun, I move it about ("aim", if you will) and transfer the orientation data (pitch, yaw, roll) to my laptop, where I render a moving crosshair over a webcam feed pointed forwards.
I start the application by getting the user to press Enter on the laptop while holding the phone straight in front of them. This is the initial calibration step, and I use the initial yaw/pitch as the reference for aiming at the center.
Then, during the camera-feed loop where I draw the crosshair, I measure changes in pitch/yaw relative to those initial calibrated values and use them to redraw the crosshair left/right/up/down.
This is my current code:
# Angle deltas relative to the calibration pose
pitchDiff = initPitch - newPitch        # corresponds to Y axis
yawDiff = -(initYaw - newYaw)           # corresponds to X axis

# Pixels of crosshair movement per degree of rotation
pitchChangeFactor = 10
yawChangeFactor = 10

# Map the angle deltas to screen coordinates, centered on the image
xC = int((imgWidth / 2) + yawDiff * yawChangeFactor)
yC = int((imgHeight / 2) - pitchDiff * pitchChangeFactor)

## THE TARGETING GUI
cR = 40  # circle radius
cv2.circle(img, (xC, yC), cR, (20, 20, 255), 3)   # draw the crosshair
What I'm asking in this question is how I can do this more accurately and smoothly. The gyro data is noisy, so at each frame of my roughly 30 frames-per-second loop I actually take the median of about 60 gyro readings for pitch/yaw.
Also, I believe my dynamical model is wrong: I'm simply moving the targeting crosshair a constant number of pixels per degree of change in the angles. I would think trigonometric functions are needed, but it's not clear to me what I should try. Using only one camera, I clearly lack depth data; however, I'm OK with assuming that the target I would like to aim at is, say, 3-4 meters in front of me.
Thank you for any help.
How can I do this more accurately and smoothly?
Gyroscopes measure yaw rate, roll rate, and pitch rate but can't measure roll, pitch, and yaw directly. When you request the pitch and yaw angles from your phone it combines gyro and accelerometer data (and possibly magnetometer data) to give you an estimate of pitch and yaw.
By gyro data I assume that you mean the estimated yaw and pitch that your phone provides.
Make sure that you are using the yaw and pitch angles and not the yaw rate, pitch rate etc.
If you are using the estimated angles, then you can look at different methods of filtering those signals before you do your calculations. You mentioned a 60-sample median filter. Have you tried other filters? Median filters are good if you get large, sudden spikes in your signals, but a simple low-pass filter or moving average may perform better for your situation. The proper way to do this would be to fix the position of your phone while acquiring data for some time, then analyze the frequency content of the noise and choose a filter with an appropriate cut-off frequency to remove as much noise as you can, though this may be overkill for what you're trying to do.
I would suggest experimenting with a simple moving average or low-pass filter to start. There are a lot of resources on the web showing how to implement filters in software (http://en.wikipedia.org/wiki/Low-pass_filter).
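For example, a first-order low-pass filter (exponential moving average) is only a couple of lines of Python; the smoothing factor below is a placeholder you would tune against your own noise:

class LowPass:
    """First-order low-pass filter: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    def __init__(self, alpha=0.2):      # smaller alpha = smoother, but more lag
        self.alpha = alpha
        self.value = None

    def update(self, sample):
        if self.value is None:
            self.value = sample          # seed the filter with the first reading
        else:
            self.value += self.alpha * (sample - self.value)
        return self.value

pitch_filter = LowPass(alpha=0.2)
# smoothed_pitch = pitch_filter.update(raw_pitch)   # call on each new reading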
Dynamical Model
As far as your calculations go, they seem fine to me as long as your phone is providing angles and not rates, as discussed above.
You do not need trigonometric functions. If you are just trying to map the angle to a screen position, then a simple linear mapping like you've done is all you need, so that a change in angle of x degrees corresponds to a change in screen position of M*x pixels.
Another thing that could help is if your phone can provide data faster than 30 Hz. You can still update the screen at 30 Hz, but your filtering may be more effective when sampling faster. This all depends on the nature of the noise, though; you should experiment with the sampling rate if you can.
Good luck with your project.