Following trajectory with timestamps in CARLA - python

I'm trying to simulate a multi-agent intersection scenario in CARLA. I've generated a trajectory for each vehicle in the scenario using Python. A trajectory is defined discretely by a sequence of waypoints, each with components (x, y, t).
In the CARLA documentation, I've seen two ways that vehicles can be manually controlled:
1. Define a set of waypoints (without time components) on the road for a vehicle to follow using feedback control.
2. Use VehicleControl on a vehicle to define motion using throttle and steering angle values.
I'm not sure how to use either of these strategies to execute the trajectories, since I need the timestamps to be preserved and I want to generate actions online. How can I do this?
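Not an official recipe, but one way to preserve the timestamps is to run the simulator in synchronous mode with a fixed time step and, at every tick, move each vehicle to the pose interpolated from its (x, y, t) waypoints at the current simulation time. A minimal sketch follows; `vehicles_and_trajectories` is a placeholder for your actors and trajectories, and note that `set_transform` teleports the actor and bypasses vehicle physics, so for dynamically plausible motion you would instead feed `VehicleControl` from a tracking controller (e.g., PID) driven by the same interpolated targets:

```python
import math
import carla

def pose_at(traj, t):
    """Linearly interpolate position and yaw from waypoints [(x, y, t), ...]."""
    for (x0, y0, t0), (x1, y1, t1) in zip(traj, traj[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            x, y = x0 + a * (x1 - x0), y0 + a * (y1 - y0)
            yaw = math.degrees(math.atan2(y1 - y0, x1 - x0))
            return carla.Transform(carla.Location(x=x, y=y, z=0.1),
                                   carla.Rotation(yaw=yaw))
    return None  # outside this trajectory's time range

client = carla.Client('localhost', 2000)
world = client.get_world()

settings = world.get_settings()
settings.synchronous_mode = True       # the server waits for world.tick()
settings.fixed_delta_seconds = 0.05    # fixed simulation step, in seconds
world.apply_settings(settings)

# vehicles_and_trajectories is assumed: a list of (actor, [(x, y, t), ...]) pairs
sim_t = 0.0
t_end = max(traj[-1][2] for _, traj in vehicles_and_trajectories)
while sim_t <= t_end:
    for vehicle, traj in vehicles_and_trajectories:
        tf = pose_at(traj, sim_t)
        if tf is not None:
            vehicle.set_transform(tf)  # teleport: bypasses the physics engine
    world.tick()                       # advance the simulation by one fixed step
    sim_t += settings.fixed_delta_seconds
```

Because the step is fixed and the server only advances on tick(), the simulation clock stays aligned with your waypoint timestamps, and you can still compute or modify targets online inside the loop.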

Related

Efficient method of propagating N satellites over some time

I'm dealing with a problem consisting of propagating N satellites (around 6k) over some period of time (usually 2 weeks) with a time step of 15 s. Additionally, I have a few observers with known positions on the Earth in ITRF and known fields of view (in Az/Alt). What I want to accomplish is to check when, and which, satellites are visible to said observers. Right now I'm using a combination of skyfield and pyephem to do the job. Skyfield gives me the ITRF coordinates of my observer and of each satellite (which I need in order to determine whether the satellite is visible to the observer); unfortunately, I also need to check whether a satellite is eclipsed, and I can't handle that efficiently. For that I use pyephem, just to check whether a satellite is eclipsed, but this requires additional computation. Maybe someone has had a similar problem and knows a better method?
TLDR:
I need to find every observed satellite over some period of time, given the observer's position on the Earth, its field of view (in Az/Alt), and a TLE catalog of satellites.
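Not a definitive answer, but Skyfield alone may cover the eclipse test: satellite positions have a vectorized is_sunlit() method that takes a planetary ephemeris, which would avoid the round-trip through pyephem. A rough sketch (the file names, observer location, date range, and the 10-degree altitude cutoff are placeholders):

```python
from skyfield.api import load, wgs84

ts = load.timescale()
eph = load('de421.bsp')                   # solar-system ephemeris, needed for the eclipse test
sats = load.tle_file('catalog.tle')       # your TLE catalog (~6k satellites)

observer = wgs84.latlon(52.2297, 21.0122)  # example observer position

# 2 weeks at 15 s steps (chunk this array if memory becomes a problem)
t = ts.utc(2024, 1, 1, 0, 0, range(0, 14 * 24 * 3600, 15))

for sat in sats:
    topocentric = (sat - observer).at(t)   # vectorized over all time steps
    alt, az, _ = topocentric.altaz()
    sunlit = sat.at(t).is_sunlit(eph)      # vectorized eclipse test, no pyephem needed
    visible = (alt.degrees > 10.0) & sunlit  # combine with your Az/Alt FOV mask here
```

Since everything is evaluated as NumPy arrays over the whole time grid, the per-satellite cost is one vectorized pass rather than ~80k individual propagations.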

Getting 3D Position Coordinates from an IMU Sensor on Python

I am planning to acquire position in 3D Cartesian coordinates from an IMU (inertial measurement unit) containing an accelerometer and a gyroscope. I'm using this to track the object's position and trajectory in 3D.
1- From my limited knowledge, I was under the assumption that the accelerometer alone would be enough: it gives acceleration along the x, y, and z axes, A(Ax, Ay, Az), which would need to be integrated twice to get velocity and then position. But integration adds an unknown constant, and this error, called drift, increases with time. How do I remove this error?
2- Furthermore, why is there a need for a gyroscope in the first place? Can't we just translate the x-y-z acceleration into displacement? If the accelerometer tells us the axis of motion, why check orientation with a gyroscope? Sorry, this is a very basic question; everywhere I checked, both gyro and accelerometer were used together, but I don't know why.
3- Even when the sensor is stationary and not in any motion, Earth's gravity acts on it, so the readings will always be larger than what can be attributed to the sensor's motion. How do you remove gravity?
Once this is done, I'll apply a Kalman filter to fuse the sensors and smooth the values. How accurate is this method for estimating an object's trajectory in environments where GPS is not an option? I'm getting the accelerometer and gyroscope values from an Arduino and importing them into Python, where they will be plotted on a 3D graph updating in real time. Any help would be highly appreciated, especially links to similar code.
1 - An accelerometer can be calibrated to account for some of this drift, but in the end no sensor is perfect, and inaccuracy will inevitably cause drift. To fix this you would need a filter such as the Kalman filter, using the accelerometer for short-term, high-frequency data and a secondary sensor such as a camera to periodically obtain the absolute position and update the internal position estimate. This is the fundamental idea behind the Kalman filter.
2 - Accelerometers aren't very good for high-frequency rotational data. Using the accelerometer's data alone would mean the system could not differentiate between a horizontal linear acceleration and a tilted orientation. The gyroscope is used for the high-frequency data, while the accelerometer is used for low-frequency data to adjust for and counteract the rotational drift. A Kalman filter is one possible solution to this problem, and there are many great online resources explaining it.
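As an illustration of that high-frequency/low-frequency split, here is a minimal complementary-filter sketch for a single axis (pitch); the blend factor alpha and the axis conventions are assumptions, and a Kalman filter generalizes the same idea:

```python
import math

ALPHA = 0.98  # trust placed in the integrated gyro angle

def update_pitch(pitch_deg, gyro_y_dps, ax, az, dt):
    """Blend the gyro's short-term angle with the accelerometer's gravity reference."""
    gyro_pitch = pitch_deg + gyro_y_dps * dt            # high-frequency term: integrate angular rate
    accel_pitch = math.degrees(math.atan2(ax, az))      # low-frequency reference from gravity direction
    return ALPHA * gyro_pitch + (1 - ALPHA) * accel_pitch
```

Called once per sample, the gyro term tracks fast rotations while the accelerometer term slowly pulls the estimate back toward the true tilt, cancelling the gyro's drift.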
3 - You would have to use the gyro/accel sensor fusion described above to get the 3D orientation of the sensor, and then use vector math to subtract 1 g along that orientation.
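For point 3, a sketch of that vector math, assuming the fusion step gives you the sensor's orientation as a quaternion (SciPy's Rotation is used for the frame change; the sign convention assumes the accelerometer reads about +1 g upward when stationary):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

G_WORLD = np.array([0.0, 0.0, 9.81])  # gravity reaction in the world frame (m/s^2)

def linear_acceleration(accel_sensor, quat_sensor_to_world):
    """Subtract gravity from a raw accelerometer reading, both in the sensor frame."""
    r = R.from_quat(quat_sensor_to_world)   # quaternion as [x, y, z, w]
    g_sensor = r.inv().apply(G_WORLD)       # rotate gravity into the sensor frame
    return accel_sensor - g_sensor          # what remains is motion-induced acceleration
```

When the sensor is level and stationary, the reading is roughly [0, 0, 9.81], g_sensor matches it, and the result is near zero, as expected.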
You would most likely be better off looking at some online resources to get the gist of it and then using a pre-built sensor-fusion system, whether a library or a fusion engine onboard the IMU itself (present on most parts today, including the MPU6050). These onboard systems typically do a better job than a simple Kalman filter and can combine other sensors, such as magnetometers, to gain even more accuracy.

Find direction of vehicle on road

I am working with GPS data that has latitude/longitude, vehicle speed, vehicle IDs, and so on.
At different times of day, vehicle speeds differ for each side of the road.
I created this graph with Plotly Mapbox; the color differences correspond to vehicle speed.
So my question is: can I use a clustering algorithm to find which side of the road a vehicle is on? I tried DBSCAN but could not get a clear answer.
It depends on the data you have about the different spots: if you know the time and speed at each point, you can estimate the range within which the next point should fall, and afterwards order the points as a function of distance. Otherwise it is going to be complicated without more information than position and speed for all those points.
P.S. There is a computationally heavy method: try to estimate the route by using tangents to determine the angle between segments of consecutive points.
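As a sketch of that tangent idea (column names and conventions are assumptions about your data): the heading between consecutive fixes follows from the standard forward-azimuth formula, and points on opposite sides of the road then differ by roughly 180 degrees, which is easy to threshold or cluster:

```python
import numpy as np

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from point 1 to point 2, in degrees (0 = North, 90 = East)."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    dlon = lon2 - lon1
    x = np.sin(dlon) * np.cos(lat2)
    y = np.cos(lat1) * np.sin(lat2) - np.sin(lat1) * np.cos(lat2) * np.cos(dlon)
    return (np.degrees(np.arctan2(x, y)) + 360) % 360

# per vehicle, sorted by time: heading of each segment between consecutive fixes
# headings = bearing_deg(lats[:-1], lons[:-1], lats[1:], lons[1:])
```

Since it accepts NumPy arrays, one call yields all segment headings for a vehicle's track at once.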
Many GPS hardware sets will compute direction and insert this information into the standard output data alongside latitude, longitude, speed, etc. You might check whether your source data contains information about direction or "heading", often specified in degrees where zero degrees is North, 90 degrees is East, and so on. You may need to parse the data and convert from binary or ASCII hexadecimal values to integer values, depending on the data-structure specifications, which vary between hardware designs. If such data exists in your source, this may be a simpler and more reliable approach to determining direction.
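For instance, if the source happens to be raw NMEA text, the heading is the "track made good" field of an RMC sentence; a minimal, hypothetical parse:

```python
def rmc_heading(sentence):
    """Return the track-made-good heading (degrees true) from an NMEA RMC sentence."""
    fields = sentence.split(',')
    if fields[0].endswith('RMC') and fields[8]:
        return float(fields[8])  # 0 = North, 90 = East
    return None

# e.g. rmc_heading("$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A")
# -> 84.4
```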

Arms motion with coordinate argument with NAO?

I am a beginner with NAO programming, and I am now working on a project involving arm motion.
I must program a game in which NAO first stands and points at one of three differently colored squares displayed on the ground.
I think I can "simply" make NAO move its arm so that it points toward one of three pre-defined coordinates.
However, animation mode and the motion widget do not seem usable for movements with parameters, such as one of the three coordinates.
How do I perform such a move?
Have you looked at the ALMotion.setPositions type of method?
These methods work in Cartesian space: you position an end effector (e.g., the hand) at a specific position relative to an origin such as the chest.
You can think of that as a vector pointing in a direction...
The solver used for this could be enhanced, but it's a nice way to achieve what you need to do.
More info there:
http://doc.aldebaran.com/2-1/naoqi/motion/control-cartesian-api.html#ALMotionProxy::setPositions__AL::ALValueCR.AL::ALValueCR.AL::ALValueCR.floatCR.AL::ALValueCR
You could take a look at the pointAt method, which takes as a parameter the position you would like to point at. If the positions of your three objects are known in advance, that will do the job. You can find more here:
http://doc.aldebaran.com/2-1/naoqi/trackers/altracker-api.html#ALTrackerProxy::pointAt__ssCR.std::vector:float:CR.iCR.floatCR
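A minimal sketch of the pointAt approach, assuming NAOqi 2.1 with the Python SDK and that the three squares' positions are known in FRAME_ROBOT coordinates (the IP, port, and coordinates below are placeholders):

```python
from naoqi import ALProxy

NAO_IP, NAO_PORT = "nao.local", 9559   # placeholders for your robot
FRAME_ROBOT = 2                        # 0 = FRAME_TORSO, 1 = FRAME_WORLD, 2 = FRAME_ROBOT

tracker = ALProxy("ALTracker", NAO_IP, NAO_PORT)
posture = ALProxy("ALRobotPosture", NAO_IP, NAO_PORT)

# assumed (x, y, z) positions of the squares, in meters, in the robot frame
squares = {"red":   [0.5,  0.3, 0.0],
           "green": [0.5,  0.0, 0.0],
           "blue":  [0.5, -0.3, 0.0]}

posture.goToPosture("StandInit", 0.5)                        # stand up first
tracker.pointAt("RArm", squares["red"], FRAME_ROBOT, 0.3)    # effector, position, frame, speed fraction
```

Since the target position is an ordinary argument, the game logic can pick any of the three coordinates at runtime, which is exactly what the animation tools don't allow.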

Convolve a sector along a trajectory to make a heat map in python

I have built a VR arena for a fly. The fly flies inside a VR world, which contains objects, built with the Panda3D game engine. I record the trajectory of the fly, obtaining position (x, y) and heading (theta) as functions of time.
Briefly, what I am trying to do is get a heatmap of what the fly saw as it flew. The heatmap is a representation of the world whose intensities correspond to how often each location was visible to the fly. The more often the fly stared at a particular place by flying toward it, the hotter that region gets. The trajectory plot only shows the path as if the fly were a point; it doesn't convey what the fly saw as it flew. The fly sees the virtual world through the VR camera's parameters: it has a field of view (FOV), a maximum draw distance, and an orientation (heading). So as it traverses the world, it sees a sector (a pie-shaped wedge) of the entire world at each instant.
What I want to implement is a method to count, for each point on the map, how many frames it fell inside the camera's FOV (the sector).
Naively, this amounts to sweeping the sector along the trajectory and incrementing a counter at every point inside the sector at each frame. After the traversal, all I need is to plot this matrix as a heatmap. I think this is a fair representation of what the fly saw most during its trajectory.
Conceptually, the task is doable with multiple for loops, but I am not sure how to implement it in a vectorized form so that it finishes within a lifetime. I am using Python.
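One possible vectorized formulation, in case it helps (a sketch, assuming the trajectory arrays xs, ys, thetas are already in grid units and radians, and fov and R are the camera's angular width and draw distance): keep the per-frame loop but test every map cell against the sector at once with NumPy broadcasting:

```python
import numpy as np

def fov_heatmap(xs, ys, thetas, H, W, fov, R):
    """Count, per map cell, the frames in which it fell inside the camera's sector."""
    gy, gx = np.mgrid[0:H, 0:W]                    # coordinates of every map cell
    heat = np.zeros((H, W))
    for x, y, th in zip(xs, ys, thetas):           # one pass per frame; the sector test is vectorized
        dx, dy = gx - x, gy - y
        dist = np.hypot(dx, dy)
        ang = np.arctan2(dy, dx)
        ddiff = np.angle(np.exp(1j * (ang - th)))  # angular difference wrapped to [-pi, pi]
        heat += (dist <= R) & (np.abs(ddiff) <= fov / 2)
    return heat
```

This replaces the two inner spatial loops with array operations, so only the loop over frames remains; the resulting matrix can be displayed directly, e.g. with matplotlib's imshow.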
