Is it possible to rotate an image layer in OpenLayers? - python

I have a time-series point cloud which I wish to animate in OpenLayers. The caveat is that the data is not georeferenced (i.e. my x and y run from 0-10 km and 0-5 km respectively), so the user will need to define the starting point (the lon/lat where x = y = 0) as well as an angle (heading) which pivots around that starting point.
Thus far my solution has been to:
1. Rasterise the point cloud frame by frame into a folder of PNG files (using matplotlib's savefig).
2. Rotate each PNG file by the angle provided.
3. Expand/contract the bounds based on the new dimensions (this and step 2 are done using PIL's rotate, as sketched below).
4. Mathematically calculate the new lon/lat of where x = y = 0 in the new image (geopy).
5. Create a REST API so that I can load each image frame as an ImageSource in OpenLayers (Flask).
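For reference, a minimal sketch of steps 2-3 with PIL (the angle and filenames are placeholders):
from PIL import Image

angle = 30.0                              # placeholder heading in degrees (PIL rotates counter-clockwise for positive angles)
img = Image.open('frame_0001.png')        # placeholder filename
rotated = img.rotate(angle, expand=True)  # expand=True grows the canvas to fit the rotated bounds
rotated.save('frame_0001_rot.png')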
Here are my questions:
1. Is it possible to rotate an image layer in OpenLayers? (This would remove the need for me to rotate it server-side.)
2. Step 4 seems rather tedious, as I need to calculate the shift in x and y and add the difference to the original lon/lat to get a new starting point. Is there a library or function that is normally used for this? (See the geopy sketch after this list.)
3. Otherwise, is there a better way to achieve what I want? (Did I miss a simpler solution?)
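Regarding question 2, a minimal sketch of the kind of geodesic destination calculation geopy provides (the coordinates, heading and shift below are made-up placeholders):
from geopy import Point
from geopy.distance import geodesic

origin = Point(51.0, 4.0)    # placeholder lat/lon of the user-defined starting point
heading = 30.0               # placeholder heading, degrees clockwise from north
shift_km = 1.2               # placeholder shift of the image corner after rotation

new_origin = geodesic(kilometers=shift_km).destination(origin, bearing=heading)
print(new_origin.latitude, new_origin.longitude)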
Some other methods I have tried:
- Creating an animated GIF instead. (I did not like this solution since it would introduce more complexity if the user needs to be able to pause or jump to a specific time.)
- Rasterising into a folder of GeoTIFF images instead of PNGs. (GeoTIFFs are significantly larger than PNGs and achieve essentially the same functionality, with the added step of having to set up a WMS server or GeoServer.)

You can try to use a GeoImage layer to place an image on a map with a rotation.
See https://viglino.github.io/ol-ext/examples/layer/map.geoimage.html

Related

How to get Z-distance for every pixel in the frame (Computer Vision)?

I am currently doing a computer vision task and got stuck on the following problem. I need ground-truth values for my sequence. I have a nice sequence where the camera moves through my scene and captures the RGB frames. Now I need a corresponding frame for every RGB frame, but instead of the RGB values it should store the distance. I know that you can get the total depth (Euclidean distance from camera to real-world object) by connecting the 'Depth' output of the Render Layers node to a File Output node in the Compositing workspace, using the EXR file format. But I just need the Z-component of the distance to the camera, and I don't want to convert it afterwards with the camera parameters (I already did that, but I need a cleaner workflow).
I stumbled upon this function: bpy_extras.view3d_utils.region_2d_to_location_3d, but I could find almost nothing about how to use it properly. I don't know what I should give it as input.
Has anyone a solution, or has maybe already used the function and can explain to me how I would use it in the default Blender setup (just a Cube, Camera and Light), and whether it does what I expect it to do (giving me x, y, z so that I can strip the unnecessary information)?
I already tried using the world_to_camera_view function, but this only works for object parameters like vertices and not for the whole surface, so it cannot produce a dense map.
I also know that you can render the scene with a Cycles material to store x, y, z in the RGB channels, but then you only get world coordinates and you have to change the materials after you have rendered the real sequence.
I really just need a frame with the z-distance to the camera for every pixel in the frame.
I would be really grateful if someone could help me, because I've been trying to do this for days now.
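For reference, the Depth-to-EXR compositor hookup described above can be scripted roughly like this (the output folder is a placeholder, and note this still writes the total depth, not the Z-component being asked for):
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

rl = tree.nodes.new('CompositorNodeRLayers')        # Render Layers node
out = tree.nodes.new('CompositorNodeOutputFile')    # File Output node
out.base_path = '//depth_frames/'                   # placeholder output folder
out.format.file_format = 'OPEN_EXR'
tree.links.new(rl.outputs['Depth'], out.inputs[0])  # write the Depth pass for every rendered frame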

Compressing big GeoJSON/Shapefile datasets for viewing in a web browser

So I have a shapefile that is 3GB in size and as you can imagine my browser doesn't like it. How can I compress the data I have which is either in lon/lat coordinates or points on an X,Y grid?
I saw a video on Computerphile about Discrete Cosine Transforms for reducing high-dimensionality data, but being a programmer and not a mathematician I don't know if this is even possible. I have tried taking a point every 10 steps in the file, like so: map[0:100000:10], but this had an undesirable and very lossy effect.
I would ideally like my data to work like Google Earth, in which the resolution adjusts to your viewport altitude. So when you zoom in to the map, higher-frequency data is presented in the viewport, limiting the number of points, but I don't know how they do this and Google returns nothing of value.
The last point is that, since these are just vectors, is there any type of vector compression I could use? I'm not too great at math, so as you can imagine, when I look into this I get confused fairly quickly. I understand SciPy has some DCT built in, and I know it has a whole bunch of other features which I don't understand; perhaps I could use this?
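For what it's worth, the SciPy DCT mentioned above can be applied to a coordinate array along these lines (a sketch only; whether truncating coefficients is an acceptable loss for vector geometry is exactly the open question):
import numpy as np
from scipy.fftpack import dct, idct

xs = np.random.rand(100000)            # stand-in for one coordinate column of the dataset
coeffs = dct(xs, norm='ortho')         # forward DCT
coeffs[1000:] = 0                      # keep only the first 1000 coefficients (arbitrary cutoff)
approx = idct(coeffs, norm='ortho')    # lossy reconstruction of the coordinates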
I can answer the "level of detail" part: you can experiment with Leaflet (a JavaScript mapping library). You could then define a "coarse" layer which is displayed at low zoom levels and "high detail" layers that are only displayed at higher zoom levels. You probably need to capture the map's zoomend event and load/unload your layers from there.
One solution to this problem is to use a Web Map Server (WMS) like GeoServer or MapServer, which stores your shapefile (though a spatial database like PostGIS would be better) on the server and sends a rendered image (often broken down into cacheable tiles) to the browser.

Presenting parts of a pre-prepared image array in Shady

I'm interested in migrating from Psychtoolbox to Shady for my stimulus presentation. I looked through the online docs, but it is not very clear to me how to replicate in Shady what I'm currently doing in Matlab.
What I do is actually very simple. For each trial:
1. I load from disk a single image (I do luminance linearization off-line) which contains all the frames I plan to display in that trial. The stimulus is 1000x1000 px and I present 25 frames, hence the image is 5000x5000 px. I only use B/W images, so I have a single int8 value per pixel.
2. I transfer the entire image from the CPU to the GPU.
3. At some point (externally controlled) I copy the first frame to the video buffer and present it.
4. At some other point (externally controlled) I trigger the presentation of the remaining 24 frames (copying the relevant part of the image to the video buffer for each video frame, and then calling flip()).
The external control happens by having another machine communicate with the stimulus presentation code over TCP/IP. After the control PC sends a command to the presentation PC and this is executed, the presentation PC needs to send back an acknowledgement message to the control PC. I need to send three ACK messages: one when the first frame appears on screen, one when the 2nd frame appears on screen, and one when the 25th frame appears on screen (this way the control PC can easily verify whether a frame has been dropped).
In Matlab I do this by calling the blocking method flip() to present a frame, and when it returns I send the ACK to the control PC.
That's it. How would I do that in Shady? Is there an example that I should look at?
The places to look for this information are the docstrings of Shady.Stimulus and Shady.Stimulus.LoadTexture, as well as the included example script animated-textures.py.
Like most things Python, there are multiple ways to do what you want. Here's how I would do it:
w = Shady.World()
s = w.Stimulus( [frame00, frame01, frame02, ...], multipage=True )
where each frameNN is a 1000x1000-pixel numpy array (either floating-point or uint8).
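If you keep saving each trial as a single 5000x5000 image as described in the question, a minimal numpy sketch for cutting it into those 25 frames might look like the following; whether row-major order matches how you concatenated the frames is an assumption you would need to check:
import numpy as np

big = np.zeros((5000, 5000), dtype=np.uint8)      # stand-in for your pre-prepared 5000x5000 image
frames = [big[r * 1000:(r + 1) * 1000, c * 1000:(c + 1) * 1000]
          for r in range(5) for c in range(5)]    # 25 slices of 1000x1000, row by row
# frames can then be passed as the list in w.Stimulus(frames, multipage=True)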
Alternatively you can ask Shady to load directly from disk:
s = w.Stimulus('trial01/*.png', multipage=True)
where directory trial01 contains twenty-five 1000x1000-pixel image files, named (say) 00.png through 24.png so that they get sorted correctly. Or you could supply an explicit list of filenames.
Either way, whether you loaded from memory or from disk, the frames are all transferred to the graphics card in that call. You can then (time-critically) switch between them with:
s.page = 0 # or any number up to 24 in your case
Note that, due to our use of the multipage option, we're using the "page" animation mechanism (create one OpenGL texture per frame) instead of the default "frame" mechanism (create one 1000x25000 OpenGL texture) because the latter would exceed the maximum allowable dimensions for a single texture on many graphics cards. The distinction between these mechanisms is discussed in the docstring for the Shady.Stimulus class as well as in the aforementioned interactive demo:
python -m Shady demo animated-textures
To prepare the next trial, you might use .LoadPages() (new in Shady version 1.8.7). This loops through the existing "pages" loading new textures into the previously-used graphics-card texture buffers, and adds further pages as necessary:
s.LoadPages('trial02/*.png')
Now, you mention that your established workflow is to concatenate the frames as a single 5000x5000-pixel image. My solutions above assume that you have done the work of cutting it up again into 1000x1000-pixel frames, presumably using numpy calls (sounds like you might be doing the equivalent in Matlab at the moment). If you're going to keep saving as 5000x5000, the best way of staying in control of things might indeed be to maintain your own code for cutting it up. But it's worth mentioning that you could take the entirely different strategy of transferring it all in one go:
s = w.Stimulus('trial01_5000x5000.png', size=1000)
This loads the entire pre-prepared 5000x5000 image from disk (or again from memory, if you want to pass a 5000x5000 numpy array instead of a filename) into a single texture in the graphics card's memory. However, because of the size specification, the Stimulus will only show the lower-left 1000x1000-pixel portion of the array. You can then switch "frames" by shifting the carrier relative to the envelope. For example, if you were to say:
s.carrierTranslation = [-1000, -2000]
then you would be looking at the frame located one "column" across and two "rows" up in your 5x5 array.
As a final note, remember that you could take advantage of Shady's on-the-fly gamma-correction and dithering: they're happening anyway unless you explicitly disable them, though of course they have no physical effect if you leave the stimulus .gamma at 1.0 and use integer pixel values. So you could generate your stimuli as separate 1000x1000 arrays, each containing unlinearized floating-point values in the range [0.0, 1.0], and let Shady worry about everything beyond that.

Given a position and rotation, how do I translate coordinates from one frame to another? (using a Baxter robot, ROS and Python)

So what I want to do, essentially, is transform a set of coordinates from one frame to another. I have my camera mounted on my robot's hand (whose position and orientation I know), and I'm viewing a certain object and reading coordinates in the camera's frame.
How do I convert those coordinates to my base frame? I know that I can first reverse the orientation using the inverse orientation matrix, and then use some kind of translation matrix, but how do I obtain that matrix? And once the orientation is corrected, how do I do the translation?
Note: I think this is better suited as a comment but I lack the reputation points.
Assuming both your frames are available in ROS, i.e. if you run (as specified here):
rosrun tf tf_echo /source_frame /target_frame
you should see the translation and rotation between both frames. Then you could use lookupTransform to obtain this information inside your code (see: TF tutorial).
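A minimal Python sketch of that, with made-up frame names (use whatever tf_echo reports on your Baxter):
import rospy
import tf
from geometry_msgs.msg import PointStamped

rospy.init_node('camera_to_base', anonymous=True)
listener = tf.TransformListener()

target_frame = 'base'               # placeholder frame names
source_frame = 'left_hand_camera'

# a point observed in the camera frame
p = PointStamped()
p.header.frame_id = source_frame
p.header.stamp = rospy.Time(0)      # "latest available" transform
p.point.x, p.point.y, p.point.z = 0.1, 0.0, 0.5

listener.waitForTransform(target_frame, source_frame, rospy.Time(0), rospy.Duration(4.0))
p_base = listener.transformPoint(target_frame, p)   # the same point expressed in the base frame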
Hope this helps.

Rendering an image independent of its size?

I'm currently working on a small game using pygame.
Right now I render images the standard way, by loading them and then blitting them to my main surface. This is great if I want to work with an individual image size. However, I'd like to take in any NxN image and use it at an MxM resolution. Is there a technique for this that doesn't use surfarray and Numeric? Something that already exists in pygame? If not, do you think it would be expensive to compute?
To be clearer: I'd like to stretch the image, i.e. upscale or downscale it. Sorry I wasn't clearer.
There is no single command to do this. You will first have to change the size using pygame.transform.scale, then make a rect of the same size and set its position, and finally blit. It would probably be wisest to wrap this in a function, as sketched below.
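A minimal sketch of that approach (names here are placeholders):
import pygame

def blit_scaled(target_surface, image, size, topleft):
    # scale the NxN image to the requested MxM size, then blit it at the given position
    scaled = pygame.transform.scale(image, size)   # pygame.transform.smoothscale gives smoother results
    rect = scaled.get_rect(topleft=topleft)
    target_surface.blit(scaled, rect)

# e.g. blit_scaled(screen, player_image, (64, 64), (100, 100))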
