I'm dealing with a problem that consists of propagating N satellites (around 6k) over some period of time (usually 2 weeks) with a time step of 15 s. Additionally, I have a few observers at known positions on the Earth in ITRF and with known fields of view (in Az/Alt). What I want to accomplish is to check when, and which, satellites are visible to those observers. Right now I'm using a combination of skyfield and pyephem to do the job. Skyfield gives me the ITRF coordinates of my observer and the ITRF coordinates of a satellite (which I need in order to decide whether that satellite is visible to the observer). Unfortunately, I also need to check whether the satellite is eclipsed, and I can't handle that efficiently: for that I use pyephem, just to check if the satellite is eclipsed, but this requires additional computation. Maybe someone has had a similar problem and knows a better method?
TLDR:
I need to find every observed satellite over some period of time, given the observer's position on the Earth, its field of view (in Az/Alt), and a TLE catalog of satellites.
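In case it helps, recent skyfield versions expose an is_sunlit() check on satellite positions, so the eclipse test can stay inside skyfield instead of going through pyephem. A minimal sketch under that assumption (the TLE file name, observer location, and the simple altitude cut are placeholders):

```python
from skyfield.api import load, wgs84

ts = load.timescale()
eph = load('de421.bsp')                    # solar-system ephemeris, needed for the sunlit test

# Load your TLE catalog (file path or URL); here just the first satellite as an example.
satellites = load.tle_file('catalog.tle')  # placeholder file name
sat = satellites[0]

# Observer given geodetically (convert your ITRF position once beforehand).
observer = wgs84.latlon(52.0, 21.0, elevation_m=100.0)

# One day of 15 s steps as an example; extend the range for two weeks.
t = ts.utc(2024, 1, 1, 0, 0, range(0, 86400, 15))

alt, az, _ = (sat - observer).at(t).altaz()
sunlit = sat.at(t).is_sunlit(eph)          # True where the satellite is not in Earth's shadow

# Replace the simple altitude cut with your real Az/Alt field-of-view mask.
visible = (alt.degrees > 10.0) & sunlit
```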
I'm trying to simulate a multi-agent intersection scenario in CARLA. I've generated a trajectory for each vehicle in the scenario using Python. A trajectory is defined discretely by a sequence of waypoints, each with components (x, y, t).
In the CARLA documentation, I've seen two ways that vehicles can be manually controlled:
Define a set of waypoints (without time components) on the road for a vehicle to follow using feedback control.
Use VehicleControl on a vehicle to define motion using throttle and steering angle values.
I'm not sure how to use either of these strategies to execute the trajectories, since I need the timestamps to be preserved and I want to generate actions online. How can I do this?
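Not a full answer, but one common pattern is to run CARLA in synchronous mode with a fixed time step that matches the spacing of the (x, y, t) waypoints, and apply a VehicleControl every tick. A rough sketch under those assumptions (the controller, the trajectory container, and the step count are placeholders):

```python
import carla

client = carla.Client('localhost', 2000)
world = client.get_world()

# Synchronous mode with a fixed step keeps simulation time aligned with the
# timestamps of the pre-generated trajectories.
settings = world.get_settings()
settings.synchronous_mode = True
settings.fixed_delta_seconds = 0.05   # assumption: waypoints spaced 50 ms apart
world.apply_settings(settings)

# Placeholder: map each spawned vehicle actor to its list of (x, y, t) waypoints.
trajectories = {}
n_steps = 0

def control_towards(vehicle, waypoint):
    """Placeholder controller: turn the error between the vehicle's current
    transform and the next (x, y) waypoint into throttle/steer values."""
    return carla.VehicleControl(throttle=0.5, steer=0.0)

for step in range(n_steps):
    for vehicle, waypoints in trajectories.items():
        vehicle.apply_control(control_towards(vehicle, waypoints[step]))
    world.tick()                      # advances the world by exactly fixed_delta_seconds
```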
Maybe somebody knows something, since I am not able to find anything that makes sense to me.
I have a dataset of positions (lon, lat) and I want to snap them to the nearest road and calculate the distance between them.
So far I have discovered OSM; however, I can't find a working example of how to use the API from Python.
If any of you could help, I would be thankful for every little detail.
I'll try to figure it out by myself in the meantime and publish the answer if successful (I couldn't find any similar question, so maybe it will help someone in the future).
Welcome! OSM is a wonderful resource, but it is essentially a raw dataset that you have to download and process yourself. There are a number of ways to do this; if you need a relatively small extract of the data (as opposed to the full planet file), the Overpass API is the place to look. Overpass turbo (docs) is a useful tool to help with this API.
Once you have the road network data you need, you can use a library like Shapely to snap your points to the road network geometry, and then either calculate the distance between them (if you need "as the crow flies" distance), or split the road geometry by the snapped points and calculate the length of the line. If you need real-world distance that takes the curvature of the earth into consideration (as opposed to the distance as it appears on a projected map), you can use something like Geopy.
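For the snapping step, a minimal Shapely sketch (the coordinates are made up; for real-world metres reproject to a metric CRS first or hand the snapped points to Geopy):

```python
from shapely.geometry import Point, LineString

# Hypothetical road geometry built from OSM way node coordinates (lon, lat).
road = LineString([(13.400, 52.520), (13.410, 52.522), (13.420, 52.527)])

gps_fix = Point(13.405, 52.523)

# Snap: the nearest point on the road to the GPS fix.
snapped = road.interpolate(road.project(gps_fix))

# Distance along the road between two snapped fixes (in the line's units,
# i.e. degrees here; reproject to a metric CRS for metres).
d1 = road.project(Point(13.402, 52.521))
d2 = road.project(Point(13.418, 52.526))
along_road = abs(d2 - d1)

print(snapped.wkt, along_road)
```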
You may also want to look into the Map Matching API from Mapbox (full disclosure, I work there), which takes a set of coordinates, snaps them to the road network, and returns the snapped geometry as well as information about the route, including distance.
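If the Map Matching route looks attractive, a request can be as short as the sketch below; the endpoint shape and response fields are written from memory, so double-check them against the current Mapbox docs, and the token and coordinates are placeholders:

```python
import requests

coords = "13.405,52.523;13.418,52.526"   # lon,lat pairs separated by ';'
url = f"https://api.mapbox.com/matching/v5/mapbox/driving/{coords}"
params = {"access_token": "YOUR_MAPBOX_TOKEN", "geometries": "geojson"}

resp = requests.get(url, params=params)
resp.raise_for_status()
match = resp.json()["matchings"][0]

print(match["distance"], "metres along the matched road")
print(match["geometry"])                 # snapped geometry as GeoJSON
```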
You might use sklearn's KDTree for this. Fill an array with the coordinates of candidate roads (I downloaded these from OpenStreetMap). Then use KDTree to build a tree from this array. Finally, use KDTree.query(your_point, k=1) to get the nearest point in the tree (which is the nearest node of the road coordinates). Since searching the tree is very fast (essentially log(N) for the N points that form the tree), you can query lots of points.
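Sketching that idea (the road-node coordinates are placeholders; for geographic coordinates you may prefer sklearn's BallTree with a haversine metric):

```python
import numpy as np
from sklearn.neighbors import KDTree

# Placeholder array of road-node coordinates (lon, lat) extracted from OSM.
road_nodes = np.array([
    [13.400, 52.520],
    [13.410, 52.522],
    [13.420, 52.527],
])

tree = KDTree(road_nodes)

# Query many GPS fixes at once; k=1 returns the nearest road node for each.
gps_fixes = np.array([[13.405, 52.523], [13.418, 52.526]])
dist, idx = tree.query(gps_fixes, k=1)

nearest_nodes = road_nodes[idx[:, 0]]
print(nearest_nodes, dist)
```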
My goal is to compute the number of hours for a given day that the sun shines on a given location using Python, under the assumption of a clear sky. The problem arises from a search for real estate. I would like to know how much sun I will actually get on some property, so that I do not have to rely on the statements made by the real estate salesperson and can judge a property solely based on the address. I am searching in an area with several nearby mountains, which should be considered in the calculation.
The approach that I would like to use is the following:
For a whole year (1.1.2020 till 31.12.2020), with a given temporal resolution (e.g. minutes), compute the altitude and azimuth angles of the sun for the defined location.
Find the angular height of nearby obstacles as seen from the location, yielding value pairs of azimuth and altitude angles. This can be trees, buildings or the already mentioned mountains. Let's assume that trees and buildings are negligible and it's mainly the mountains that take away the sun.
For each day, check for each time with the given resolution, whether the altitude angle of the sun is higher than the altitude angle of the obstacle, at the azimuthal location of the sun. For each day of the year, I can then know at which times the sun is visible.
Steps 1 and 3 are easy. For step 1, I can use, for example, the Python module pysolar.
Step 2 is more complicated. If I were to stand on a perfectly flat plane that extends far to the horizon, the obstacle altitude would be 0 for all azimuth angles. If there were a mountain nearby, I would need to know the shape of the mountain as seen from the location. Unfortunately, I do not even know where to start solving step 2, and I do not know what this problem is called, i.e. I do not know how to google for a solution. In the best case, there would be a Python module that does the calculation for me, e.g. by connecting to topography data based on OpenStreetMap or other services. If such a module does not exist, I would have to manually program access to topography data and then do some type of grid search: divide the landscape into a fine grid (possibly spherical coordinates), compute the altitude and azimuth angle of the grid points as seen from the location (taking into account the Earth's curvature and the elevation at the location) and find the maximum for a given azimuth angle. Are there easier ways to do this? Are there Python modules that do this?
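For illustration, step 1 plus the step 3 comparison could look roughly like this with pysolar (the location and the obstacle profile are made-up placeholders, and the azimuth convention should be checked against your pysolar version):

```python
from datetime import datetime, timedelta, timezone
from pysolar.solar import get_altitude, get_azimuth

lat, lon = 47.26, 11.39          # placeholder location
day = datetime(2020, 6, 21, tzinfo=timezone.utc)

def obstacle_altitude(azimuth_deg):
    """Placeholder for step 2: angular height (degrees) of the horizon at a
    given azimuth, e.g. a mountain ridge towards the south-west."""
    return 15.0 if 200.0 <= azimuth_deg <= 280.0 else 2.0

sunny_minutes = 0
for minute in range(24 * 60):
    when = day + timedelta(minutes=minute)
    sun_alt = get_altitude(lat, lon, when)
    sun_az = get_azimuth(lat, lon, when)   # verify the azimuth convention of your pysolar version
    if sun_alt > 0 and sun_alt > obstacle_altitude(sun_az % 360.0):
        sunny_minutes += 1

print(f"hours of direct sun on {day.date()}: {sunny_minutes / 60:.1f}")
```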
Step 2 isn't that hard if you have a height map and I'm sure there are height maps available.
Cast n rays in an evenly spread 360-degree pattern out from your location and visit the map positions every delta meters along each ray, starting from your position. If d is the distance of a point from you, h is its height, and h0 is your height on the height map, keep the maximum value of (h - h0)/d along the path (if something is twice as far away, it needs to be twice as high to cast a shadow of the same length). The horizon altitude at that azimuth is then the arctangent of that maximum.
You can pretty much ignore earth's curvature - its effect on sunlight occlusion is negligible.
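A sketch of that ray casting over a regular elevation grid (the DEM array, its cell size, and the chosen ray and step counts are all assumptions):

```python
import numpy as np

def horizon_profile(dem, cell_size, i0, j0, n_rays=360, max_dist=20000.0, step=30.0):
    """Angular height of the terrain horizon around DEM cell (i0, j0).

    dem       -- 2D array of elevations in metres (your height map)
    cell_size -- grid spacing in metres
    Returns an array of horizon altitudes in degrees, one per azimuth ray.
    """
    h0 = dem[i0, j0]
    horizon = np.zeros(n_rays)
    for k in range(n_rays):
        az = np.deg2rad(k * 360.0 / n_rays)                   # 0 = north, clockwise
        best = 0.0
        d = step
        while d <= max_dist:
            i = int(round(i0 - d * np.cos(az) / cell_size))   # rows decrease northwards
            j = int(round(j0 + d * np.sin(az) / cell_size))
            if not (0 <= i < dem.shape[0] and 0 <= j < dem.shape[1]):
                break
            best = max(best, (dem[i, j] - h0) / d)            # tangent of the blocking angle
            d += step
        horizon[k] = np.degrees(np.arctan(best))
    return horizon
```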
I am working on GPS data that has latitude/longitude, vehicle speed, vehicle IDs, and so on.
At different times of day, vehicle speeds are different for each side of the road.
I created this graph with Plotly's Mapbox support; the color differences correspond to vehicle speed.
So my question is: can I use a clustering algorithm to find which side of the road a vehicle is on? I tried DBSCAN, but I could not get a clear answer.
It depends on the data you have about the different points. If you know the time and speed at each point, you can estimate the range within which the next point should fall, and afterwards order them as a function of distance. Otherwise it is going to be complicated without more information than position and speed for all those points.
PS: there is a computationally heavy method to try to estimate the route, by using the tangent to determine the angle between segments of consecutive points, as sketched below.
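To make that concrete, the heading of each GPS segment can be estimated with the standard initial-bearing formula; vehicles travelling in opposite directions come out roughly 180 degrees apart, which separates the two sides of the road (the sample fixes below are placeholders):

```python
import math

def bearing(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees (0 = north, clockwise) from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

# Placeholder consecutive fixes of one vehicle.
track = [(41.015, 28.979), (41.016, 28.980), (41.017, 28.981)]
headings = [bearing(*p1, *p2) for p1, p2 in zip(track, track[1:])]
print(headings)
```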
Many GPS hardware sets will compute direction and insert this information into the standard output data alongside latitude, longitude, speed, etc. You might check whether your source data contains information about direction or "heading", often specified in degrees, where zero degrees is regarded as North, 90 degrees is East, etc. You may need to parse the data and convert from binary or ASCII hexadecimal values to integer values, depending on the data structure specifications, which vary between hardware designs. If such data exists in your source, this may be a simpler and more reliable approach to determining direction.
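As a hedged example, if the source logs raw NMEA sentences, the course-over-ground field of an RMC sentence already carries that heading (the sample sentence below is a common documentation example, not your data):

```python
def rmc_heading(sentence):
    """Course over ground (degrees true) from an NMEA RMC sentence, or None."""
    fields = sentence.split(',')
    if not fields[0].endswith('RMC') or len(fields) < 9 or not fields[8]:
        return None
    return float(fields[8])

# Field 8 (084.4 here) is the track angle in degrees.
print(rmc_heading("$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A"))
```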
How can I find the actual real world velocity of an object using the optical flow information obtained from two images? Can anyone help me out?
As the commenters have already said, we need some more information on your problem.
Basically: yes, it is possible to calculate real-world velocity from images.
But all of this depends on the following things:
Is your camera fixed, or is it possibly moving?
Are you trying to calculate the velocity of any object moving anywhere in the scene, or do you have a fixed lane, like a street filmed with a mounted camera, where objects (cars) will always move along one lane?
If the latter, can you take measurements on the street in the real world? For example, marking points on the boardwalk (permanently, or simply to find out how long a distance of x meters in the real world appears in your camera image, in pixels).
If you cannot take those measurements in the real-world scene, you will need to provide information on the angle of the camera relative to the scene/ground level, the distance of the camera to the scene, and the parameters of your camera.
For calculating the velocity of an arbitrary tracked object in the scene, you'd probably need all of the latter to really calculate distances in the scene. But this is much more difficult.
If you have the case of a fixed lane where you want to measure, say, a car's velocity, I would prefer the method of measuring or marking points in the real world.
Because if you have that information:
x m = y px
and an object has moved z px in time t (you get that time from the refresh rate of your calculation), you can calculate how many pixels it will have moved in 1 second, and since you know how many pixels correspond to one meter, you know its speed in meters per second (or any other unit you prefer).
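In code, that conversion is just a couple of lines (all numbers below are made-up calibration values):

```python
# Calibration measured in the scene: how many pixels one metre spans in the image.
PIXELS_PER_METRE = 40.0        # i.e. "x m = y px" with x = 1, y = 40

pixels_moved = 120.0           # displacement of the tracked object between two frames
elapsed = 0.5                  # seconds between those frames (from the frame rate)

speed_m_per_s = (pixels_moved / PIXELS_PER_METRE) / elapsed
print(f"{speed_m_per_s:.1f} m/s")   # 6.0 m/s with these placeholder numbers
```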
You could also just set two marks in the scene and simply measure how many frames (and therefore how much time) the object needed to move from one mark to the other. This would give you a more averaged velocity, since if you do the calculation in small time steps you might get a noisy result, due to segmentation problems or simply because the changes between frames get smaller the shorter the measured timespan is.
And for segmentation you could simply try a subtraction method: subtract two or three consecutive frames from each other. Moving objects (and therefore image parts that have changed) will result in non-zero values, whereas the color values of a static image part should subtract to roughly 0.
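For instance, with OpenCV the subtraction step might look like this (the file names and the threshold value are placeholders):

```python
import cv2

# Two consecutive frames of the same scene (placeholder file names).
prev = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_0002.png", cv2.IMREAD_GRAYSCALE)

# Pixels that changed between the frames; static background differences stay near 0.
diff = cv2.absdiff(curr, prev)

# Keep only clearly changed pixels as the moving-object mask.
_, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

cv2.imwrite("motion_mask.png", motion_mask)
```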
Maybe that helps you with your problem, but of course this depends on your setting and your desired goal. You'll need to provide more information, then.
This method is quite long, but in short:
What you can do is set a value that specifies the distance of the object from the camera.
Then capture first frame and save it somewhere.
Capture last frame and save it somewhere.
Apply a threshold to both frames.
Trim all the pixels from the left of the first frame, then do the same for the second frame.
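A rough sketch of those steps, assuming the "trim from the left" part means locating the leftmost object column in each thresholded frame (file names, the threshold, and the calibration values are placeholders, and the threshold direction depends on your scene):

```python
import cv2
import numpy as np

METRES_PER_PIXEL = 0.004           # placeholder calibration from the assumed object distance
TIME_BETWEEN_FRAMES = 1.0          # seconds between the first and last frame

first = cv2.imread("first_frame.png", cv2.IMREAD_GRAYSCALE)
last = cv2.imread("last_frame.png", cv2.IMREAD_GRAYSCALE)

# Threshold both frames so the object stands out from the background.
_, first_bin = cv2.threshold(first, 127, 255, cv2.THRESH_BINARY)
_, last_bin = cv2.threshold(last, 127, 255, cv2.THRESH_BINARY)

def leftmost_object_column(binary):
    """First column (from the left) containing object pixels, or None."""
    cols = np.where(binary.max(axis=0) > 0)[0]
    return int(cols[0]) if cols.size else None

dx_px = abs(leftmost_object_column(last_bin) - leftmost_object_column(first_bin))
speed = dx_px * METRES_PER_PIXEL / TIME_BETWEEN_FRAMES
print(f"estimated speed: {speed:.2f} m/s")
```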
For a detailed tutorial, I think this article may help you a bit:
http://morefunscience.blogspot.in/2012/05/calculating-speed-using-webcam.html