I am using python-clutter 1.0
My question in the form of a challenge
Write code to allow zooming in on a CairoTexture actor, by pressing a key, in steps such that at each step the actor can be re-drawn (by cairo) so that the image remains high-res but still scales as expected, without re-sizing the actor.
Think of something like Inkscape and how you can zoom into the vectors; how the vectors remain clean at any magnification. Put a path (a bunch of cairo line_to commands, say) onto a CairoTexture actor and then allow the same trick to happen.
More detail
I am aiming at a small SVG editor which uses groups of actors. Each actor is devoted to one path. I 'zoom' by using SomeGroup.set_depth(z) and then make z bigger/smaller. All fine so far. However, the closer the actor(s) get to the camera, the more the texture is stretched to fit their new apparent size.
I can't seem to find a way to get Clutter to do both:
Leave the actor's actual size static (i.e. what it started as.)
Swap-out its underlying surface for larger ones (on zooming in) that I can then re-draw the path onto (and use a cairo matrix to perform the scaling of the context.)
If I use set_size or set_surface_size, the actor gets larger, which is not intended. I only want its surface (the underlying data) to get larger.
(I'm not sure of the terminology for this, mipmapping perhaps? )
Put another way: a polygon is getting larger, increase the size of its texture array so that it can map onto the larger polygon.
I have even tried an end-run around clutter by keeping a second surface (using pycairo) that I re-create to the apparent size of the actor (get_transformed_size) and then I use clutter's set_from_rgb_data and point it at my second surface, forcing a re-size of the surface but not of the actor's dimensions.
The problem with this is that a) clutter ignores the new size and only draws into the old width/height, and b) the RGBA vs ARGB32 thing kind of causes a colour meltdown.
I'm open to any alternative ideas, I hope I'm standing in the woods missing all the trees!
Well, despite all my tests and hacks, it was right under my nose all along.
Thanks to Neil on the clutter-project list, here's the scoop:
CT = SomeCairoTextureActor()
# record the old size, once:
old_width, old_height = CT.get_size()
# start a loop; each pass is one zoom step:
# do stuff to the depth of CT (or its parent)
...
# get the apparent width and height (absolute size in pixels)
appr_w, appr_h = CT.get_transformed_size()
# make the new surface at the new size
CT.set_surface_size(appr_w, appr_h)
# crunch the actor back down to the old size,
# but leave the texture surface larger!
CT.set_size(old_width, old_height)
# loop back again
The surface size and the size of the actor don't have to be the same. The surface size is just, by default, the preferred size of the actor. You can override the preferred size by just setting the size on the actor. If the size of the actor is different from the surface size then the texture will be squished to fit in the actor size (which I think is what you want).
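Putting that together, one zoom step might look like the sketch below. This is untested; cairo_create() and clear() are guessed from the C API (clutter_cairo_texture_create() / clutter_cairo_texture_clear()), and draw_path() stands in for whatever cairo line_to commands draw your path:

def zoom_step(CT, old_width, old_height):
    # the depth of CT (or its parent) has just been changed by the key-press handler
    appr_w, appr_h = CT.get_transformed_size()

    # grow the surface to the apparent size, then pin the actor back down
    CT.set_surface_size(int(appr_w), int(appr_h))
    CT.set_size(old_width, old_height)

    # re-draw the path at the new resolution; a cairo matrix does the scaling
    CT.clear()
    cr = CT.cairo_create()
    cr.scale(appr_w / old_width, appr_h / old_height)
    draw_path(cr)  # hypothetical: your line_to commands, in original coordinates
    del cr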
Nice to put this little mystery to bed. Thanks clutter list!
I'd like to ask if there's a better or faster alternative way to get the largest rectangle inside an almost rectangular contour.
The rectangle should be aligned to both x and y axis and should be completely inside the rectangular contour. That means it would not contain any external white pixels, yet occupy the largest area in the contour.
Test image is here:
I've tried these two, but I'm looking for a faster and neater way to go about this.
I also tried going through the points of the contour and getting the minimum and maximum points as in here, but of course it just shows results similar to what cv2.boundingRect already does.
Maybe this is a bit of lateral thinking, but looking at your examples and spec, why not fill in the white pixels contiguous with the outside bounding box instead? (Like a 'paint pot' brush in a Paint-type application.)
E.g. (red pixels being the ones you would turn black from white):
You could probably even limit the process to the outer N pixels.
============================
So how might one implement this? It is essentially a version of the "flood fill" algorithm used in pixel graphics programmes, except that you start not from a single seed pixel but from every point on the edge of the outside bounding rectangle. You start filling in and build a stack of points you need to come back to, because you can't necessarily follow every area at once and may need to go back on yourself.
You can look that algorithm up, but a 'pure' version will be very stack-heavy if you push every point you can't follow right now, particularly when starting from the whole boundary of the shape.
I haven't implemented it this way, but my first thought would be a scan from a boundary inwards, taking a whole line of pixels at a time and marking all the 'white' pixels with a new 3rd colour, then on the next row you fill all the white pixels touching the previously marked pixels, and so on. (It doesn't matter whether you mark the changed pixels with a 3rd colour, a mask, or an alpha-channel or whatever - but you must be able to tell newly filled-in pixels from the old black ones.)
As you go, you need to check for any 'stranded' areas where you need to work backwards to fill in white areas that are not directly connected to the outside:
Start filling from the edge...
Watch out for stranded areas - if you find one, scan backwards to fill before going back to where you were, to carry on (you may need to recurse if your stranded area turns back on itself again, though in your particular application this shouldn't be a huge issue, unlike in some graphics applications)
And continue, not forgetting to fill in from the other edges if required (see note below) until you come to a row with no further pixels to fill and no more back-filling to do. Then restart at the far side of the image as you need to start a backward pass from the far side to catch anything else on that side.
For a practical implementation there is some thinking to do. Your examples will have a lot of filling at the edge but not much by way of complex internal shapes to follow, which keeps things simple. But you need to work from all 4 sides to do it efficiently - perhaps working in as a series of concentric rectangles rather than one side at a time. More complexity working through the design but massively more efficient in this example.
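As a point of reference, here is a minimal sketch of the boundary-seeded flood fill in Python, assuming a 2D numpy array where white is 255 and black is 0; a deque plays the role of the stack of points to come back to:

from collections import deque

import numpy as np

def fill_outside(img, fill=128):
    """Mark every white pixel reachable from the image border with `fill`."""
    h, w = img.shape
    queue = deque()
    # seed with every point on the edge of the outside bounding rectangle
    for x in range(w):
        queue.extend([(0, x), (h - 1, x)])
    for y in range(h):
        queue.extend([(y, 0), (y, w - 1)])
    while queue:
        y, x = queue.popleft()
        if 0 <= y < h and 0 <= x < w and img[y, x] == 255:
            img[y, x] = fill  # the 3rd colour: filled, but distinct from black
            queue.extend([(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)])
    return img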
Food for thought anyhow.
I have images I want to display on a map. At the moment, I am using Marker with CustomIcon like
marker = folium.Marker(location=(lat, long), icon=folium.features.CustomIcon(sprite_url, icon_size=(64, 64)))
map.add_child(marker)
However, the icon is always 64x64. I would love it to be 64x64 at, say, zoom level 16, and smaller or larger when zoomed out or in, respectively.
Is this possible? And if so, how would I do it? I've read about imageOverlay but from the documentation I read it sounds like I could only use that to overlay one image across the entire map, and I have thousands of data points to plot.
It is not inherently supported by folium and chances are it will not be for the foreseeable future or ever, though there appears to be some wiggle room.
See here the answer from when the question was first raised in 2015:
A wider answer to this question before closing:
When zooming two things can happen: either one reprojects the element geometries into the new zoom level (element features like size remain constant) or one scales the whole overlay (but then elements shrink/expand and one needs to adjust for that if it's desirable). Since reprojection of the geometries is computationally more demanding, scaling and adjusting the size of elements is an easier and quicker way:

...

updSel
    .attr("r", 5 / this._scale)

The code above will set a radius (for an SVG circle element) to always be 5 pixels.
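If you want to experiment anyway, one rough, untested workaround along those lines is to inject a Leaflet zoomend handler that rescales the marker images with CSS, doubling/halving the 64px base size per zoom level away from 16. Everything here beyond folium's documented API (the sprite file name, the base zoom, the CSS selector) is an assumption:

import folium
from branca.element import Element

m = folium.Map(location=(51.5, -0.1), zoom_start=16)
icon = folium.features.CustomIcon('sprite.png', icon_size=(64, 64))
folium.Marker(location=(51.5, -0.1), icon=icon).add_to(m)

# rescale every marker icon on the page whenever the zoom level changes
js = """
%s.on('zoomend', function () {
    var size = 64 * Math.pow(2, %s.getZoom() - 16);
    document.querySelectorAll('img.leaflet-marker-icon').forEach(function (el) {
        el.style.width = size + 'px';
        el.style.height = size + 'px';
    });
});
""" % (m.get_name(), m.get_name())
m.get_root().script.add_child(Element(js))
m.save('map.html')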
I am an experimental physicist (grad student) that is trying to take an AutoCAD model of the experiment I've built and find the gravitational potential from the whole instrument over a specified volume. Before I find the potential, I'm trying to make a map of the mass density at each point in the model.
What's important is that I already have a model, and in the end I'll have something that says "At (x,y,z) the value is d". If that's a crazy csv file, a numpy array, an excel sheet, or... whatever, I'll be happy.
Here's what I've come up with so far:
Step 1: I color code the AutoCAD file so that color associates with material.
Step 2: I send the new drawing/model to a slicer (made for 3D printing). This takes my 3D object and turns it into equally spaced (in z-direction) 2d objects... but then that's all output as g-code. But hey! G-code is a way of telling a motor how to move.
Step 3: This is the 'hard part' and the meat of this question. I'm thinking that I take that g-code, which is in essence just a set of instructions on how to move a nozzle, and use it to populate a numpy array. Basically I have a 3D array; each level corresponds to one position in z, and the grid on each level is my x-y plane. It reads what color is being put where, follows the nozzle, and puts that mass into those spots. It knows the mass because of the color. It follows the path by parsing the g-code.
When it is done with that level, it moves to the next grid and repeats.
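To make step 3 concrete, here is a toy sketch of the idea. It is not a real interpreter: it only handles straight G1 X/Y moves, and the grid size, spacing, density value, and file name are all made-up assumptions:

import re

import numpy as np

GRID = (500, 500)      # x-y resolution of one z-slice (assumption)
MM_PER_CELL = 0.2      # grid spacing in mm (assumption)
density = 2.70         # density for the current color's material (assumption)

slice_grid = np.zeros(GRID)
x = y = 0.0
move = re.compile(r'G1\s+X([-\d.]+)\s+Y([-\d.]+)')

for line in open('layer_042.gcode'):   # hypothetical per-layer file
    m = move.match(line)
    if not m:
        continue
    nx, ny = float(m.group(1)), float(m.group(2))
    # stamp density into every cell along the segment from (x, y) to (nx, ny)
    steps = max(int(max(abs(nx - x), abs(ny - y)) / MM_PER_CELL), 1)
    for t in np.linspace(0.0, 1.0, steps + 1):
        i = int((x + t * (nx - x)) / MM_PER_CELL)
        j = int((y + t * (ny - y)) / MM_PER_CELL)
        if 0 <= i < GRID[1] and 0 <= j < GRID[0]:
            slice_grid[j, i] = density
    x, y = nx, ny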
Does this sound insane? Better yet, does it sound plausible? Or maybe someone has a smarter way of thinking about this.
Even if you just read all that, thank you. Seriously.
Does this sound insane? Better yet, does it sound plausible?
It's very reasonable and plausible. Using the g-code could do that, but it would require a g-code interpreter that could map the instructions to a 2D path. (Not 3D, since you mentioned that you're taking fixed z-slices.) That could be problematic, but if you found one it could work, though it may require some parser manipulation. There are several of these in a variety of languages that could be useful.
SUGGESTION
From what you describe, it's akin to doing an MRI scan of the object and trying to determine its constituent mass profile along a given axis. In this case, and unlike MRI, you have multiple colors, so that can be used to your advantage in region selection / identification.
Even if you used a g-code interpreter, it would reproduce an image whose area you'd still have to calculate. Noting that, and given that you seek to determine and classify material composition by path (in that the path defines the boundary of a particular material, which has a unique color), there may be a couple of ways to approach this without resorting to g-code:
1) If the colors of your material are easily (or reasonably) distinguishable, you can create a color mask which will quantify the occupied area, from which you can then determine the mass.
That is, if you take a photograph of the slice, load the image into a numpy array, and then search for a specific value (say red), you can identify the area of the region. Then, you apply a mask on your array. Once done, you count the occupied elements within your array, and then you divide it by the array size (i.e. rows by columns), which would give you the relative area occupied. Since you know the mass of the material, and there is a constant z-thickness, this will give you the relative mass. An example of color masking using numpy alone is shown here: http://scikit-image.org/docs/dev/user_guide/numpy_images.html
As such, let's define an example that's analogous to your problem - let's say we have a picture of a red cabbage, and we want to know how much of the picture contains red / purple-like pixels.
To simplify our life, we'll set any pixel above a certain threshold to white (RGB: 255,255,255), and then count how many non-white pixels there are:
from copy import deepcopy

import numpy as np
import matplotlib.pyplot as plt


def plot_image(fname, color=128, replacement=(255, 255, 255), plot=False):
    # 128 is a reasonable guess, since most of the pixels in the image that
    # have the purplish hue have RGB values above it.
    data = plt.imread(fname)
    image_data = deepcopy(data)  # copy the original data (for later use if need be)
    mask = image_data[:, :, 0] < color  # build the color mask over the image data
    image_data[mask] = np.array(replacement)  # replace the matching pixels
    if plot:
        plt.imshow(image_data)
        plt.show()
    return data, image_data


data, image_data = plot_image('cabbage.jpg')  # load the image, and apply the mask

# Find the locations of all the pixels that are non-white (i.e. 255).
# This returns 3 arrays of the same size.
indices = np.where(image_data != 255)

# Now, calculate the area: in this case, ~ 62.04 %
effective_area = indices[0].size / float(data.size)
The selected region in question is shown here below:
Note that image_data contains the pixel information that has been masked, and would provide the coordinates (albeit in pixel space) of where each occupied (i.e. non-white) pixel occurs. The issue with this, of course, is that these are pixel coordinates and not physical ones. But, since you know the physical dimensions, extrapolating those quantities is easily done.
Furthermore, with the effective area known, and knowledge of the physical dimension, you have a good estimate of the real area occupied. To obtain better results, tweak the value of the color threshold (i.e. color). In your real-life example, since you know the color, search within a pixel range around that value (to offset noise and lighting issues).
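As a worked example of that last step (all physical numbers below are made-up assumptions), converting the relative area from the snippet above into a slice mass:

slice_width_m, slice_height_m = 0.30, 0.20  # physical extent covered by the image (assumed)
slice_thickness_m = 0.001                   # the constant z-thickness of each slice (assumed)
density_kg_m3 = 2700.0                      # e.g. aluminium

occupied_area_m2 = slice_width_m * slice_height_m * effective_area
slice_mass_kg = density_kg_m3 * occupied_area_m2 * slice_thickness_m  # ~0.10 kg here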
The above method is a bit crude - but effective - and it may be worth exploring using it in tandem with edge-detection, as that could help improve the region identification and area selection (though note that this isn't always strictly true!). Also, color deconvolution may be useful: http://scikit-image.org/docs/dev/auto_examples/color_exposure/plot_ihc_color_separation.html#sphx-glr-auto-examples-color-exposure-plot-ihc-color-separation-py
The downside to this is that the analysis requires a high quality image and good lighting; and, most importantly, it's likely that you'll lose some of the finer details of the edges, which would impact your masses.
2) Instead of resorting to camera work, and given that you have the AutoCAD model, you can use that and the software itself in addition to the above prescribed method.
Since you've colored each material in the model differently, you can use AutoCAD's slicing tool, and can do something similar to what the first method suggests doing physically: slicing the model, and taking pictures of the slice to expose the surface. Then, using a similar method described above of color masking / edge detection / region determination through color selection, you should obtain a much better and (arguably) very accurate result.
The downside to this, is that you're also limited by the image quality used. But, as it's software, that shouldn't be much of an issue, and you can get extremely high accuracy - close to its actual result.
The last suggestion to improve these results would be to script numerous random thin slicings of the AutoCAD model along a particular directional vector shared by every subsequent slice, export each exposed surface, analyze each image in the manner described above, and then collect those results to give you a Monte Carlo-like, statistically quantifiable determination of the mass (to correct for geometry effects due to slicing along one given axis).
I need to display a chart that can be very large, for example the image resolution could be 100 000 x 1000. However, it seems like I am limited to 32768 x 32768 by the QImage.
I can't reasonably redraw the chart directly at every paintEvent, so I need to store it into a QImage (it could be a QPixmap; that wouldn't change anything). But then, it doesn't fit.
My first idea was:
Create a list of QImage
Plot on the various QImage
Redraw using the appropriate QImages.
The first and last points have been done quite easily. But the second point is more complex. I'm quite confident that my approach would work, but it requires overloading the basic paint methods (draw rectangle, circles, etc.) in order to be able to paint on multiple images.
So, before going any further, I would like to know what could be the other options.
You probably do not want to display more than one QImage of data at a time. Few screens are more than 32k pixels wide or tall.
So you want an abstract type that produces QImages on request for reading, at offsets and possibly at different zoom factors.
The next problem is modifying this abstract type. An easy-to-use, not maximally performant version consists of letting users blit QImages into your internal storage (whatever that is).
The user still has to "tile" their efforts, but can tile them in ways that are convenient for them.
A higher performance version exposes some of the underlying implementation, which we have not yet mentioned.
A traditional implementation for large images is a tiled image. You have a grid of image tiles that abut each other. When someone asks for a blit from your image, you produce a temporary QImage, and blit the appropriate tiles onto it. And when someone blits to you, you figure out what the appropriate tiles are, and write parts of that source QImage over parts of them.
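A minimal sketch of such a tile grid (PyQt5 assumed; the tile size and class name are illustrative, not from the answer):

from PyQt5.QtCore import QRect
from PyQt5.QtGui import QImage, QPainter

TILE = 1024  # tile edge in pixels (an assumption)

class TiledImage:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.tiles = {}  # (tx, ty) -> QImage, created lazily

    def _tile(self, tx, ty):
        if (tx, ty) not in self.tiles:
            img = QImage(TILE, TILE, QImage.Format_ARGB32)
            img.fill(0)
            self.tiles[(tx, ty)] = img
        return self.tiles[(tx, ty)]

    def read(self, rect):
        # blit the appropriate tiles onto a temporary QImage and return it
        out = QImage(rect.width(), rect.height(), QImage.Format_ARGB32)
        out.fill(0)
        painter = QPainter(out)
        for tx in range(rect.left() // TILE, rect.right() // TILE + 1):
            for ty in range(rect.top() // TILE, rect.bottom() // TILE + 1):
                painter.drawImage(tx * TILE - rect.left(), ty * TILE - rect.top(),
                                  self._tile(tx, ty))
        painter.end()
        return out

# usage: tiled = TiledImage(100000, 1000); view = tiled.read(QRect(50000, 0, 1000, 1000))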
The higher performance interface exposes these tiles.
A low level interface lets the outside know where your tiles are, and lets them ask for them. This is a poor interface.
A better interface exposes a sub tile iterator. They ask for a region, and you return a pair of iterators that describe the region. The data in the iterators consists of either a tile and a region in that tile as well as the location this region is in the "full image", or a sub-tile object (with linestride, line length, etc) and the location of the sub-tile object.
Another good interface is a foreach-style interface. Again, the user of the big image class passes in a region they want to work with, but a callback as well. That callback receives something similar to one of the above results of the iterator dereference.
This approach has two large advantages over the iterator approach. First, you can implement parallel image processing algorithms within your large image class. Second, it is much easier to write than rolling your own iterator.
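Continuing the hypothetical TiledImage sketch from above, a foreach-style interface is only a few lines (added as a method of the same class):

    def for_each_tile(self, rect, callback):
        # invoke callback(tile, tile_rect_in_image) for every tile touching rect;
        # tiles are independent, so the loop body could be dispatched to a thread pool
        for tx in range(rect.left() // TILE, rect.right() // TILE + 1):
            for ty in range(rect.top() // TILE, rect.bottom() // TILE + 1):
                callback(self._tile(tx, ty), QRect(tx * TILE, ty * TILE, TILE, TILE))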
Once you have either of these, drawing is relatively easy. Determine the region you are drawing on (be generous). Iterate over the resulting tiles. On each tile, draw after applying the offset of the tile to the drawing.
You can use the Qt Graphics View Framework. Create a QGraphicsView and a QGraphicsScene for it. Add items using QGraphicsScene::addPixmap (that returns QGraphicsPixmapItem which is derived from QGraphicsItem) and adjust their positions using QGraphicsItem::setPos. QGraphicsView will effectively draw your scene and handle scrolling and zooming if necessary.
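A minimal sketch of that approach (PyQt5 assumed; the strip width and count are placeholders):

import sys

from PyQt5.QtGui import QPixmap
from PyQt5.QtWidgets import QApplication, QGraphicsScene, QGraphicsView

app = QApplication(sys.argv)
scene = QGraphicsScene()

STRIP_W = 10000  # render the 100,000 px wide chart as 10 strips (an assumption)
for i in range(10):
    strip = QPixmap(STRIP_W, 1000)
    strip.fill()  # placeholder: paint strip i of the chart here with QPainter
    scene.addPixmap(strip).setPos(i * STRIP_W, 0)

view = QGraphicsView(scene)  # handles scrolling, and zooming via view.scale()
view.show()
sys.exit(app.exec_())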
You do realize that a 100,000 x 1000 RGBA QImage is 400MBytes? There's no point in wasting all that memory. Really, none.
Just paint it every time, on request, in the paintEvent. Be clever about it so that you only paint what needs to be shown. I'd think one should focus on optimizing the painting process and your data structures so that it can be painted effectively.
At small scales (zoomed out), a lot can be gained by approximating/decimating/interpolating the data so that it looks the same, but you don't waste time painting the same pixel too many times.
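For instance (a sketch of the idea, not code from the answer), a min/max decimation pass reduces a long trace to one vertical line per screen column before painting:

import numpy as np

def decimate_minmax(y, columns):
    # trim so the samples reshape evenly into one chunk per screen column
    n = (len(y) // columns) * columns
    chunks = y[:n].reshape(columns, -1)
    # paint one vertical line per column, from its min to its max
    return chunks.min(axis=1), chunks.max(axis=1)

lo, hi = decimate_minmax(np.random.randn(100000), 1000)  # 100,000 samples -> 1000 columns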
I'm coding a game where the viewport follows the player's ship in a finite game world, and I am trying to make it so that the background "wraps" around in all directions (you could think of it as a 2D surface wrapped around a sphere - no matter what direction you travel in, you will end up back where you started).
I have no trouble getting the ship and objects to wrap, but the background doesn't show up until the viewport itself passes an edge of the gameworld. Is it possible to make the background surface "wrap" around?
I'm sorry if I'm not being very articulate. It seems like a simple problem and tons of games do it, but I haven't had any luck finding an answer. I have some idea about how to do it by tiling the background, but it would be nice if I could just tell the surface to wrap.
I don't think so, but I have an idea. I'm guessing your background wraps horizontally and always to the right; in that case you could attach part of the beginning to the end of the background.
For example, if you have a 10,000px background and your viewport is 1,000px, attach the first 1,000px to the end of the background, so you'll have an 11,000px background. Then when the viewport reaches the end of the background, you just move it to the 0px position and continue moving right.
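A sketch of that idea in pygame (the file name and sizes are placeholders):

import pygame

VIEW_W = 1000
background = pygame.image.load('bg.png')  # e.g. 10,000 px wide
bg_w, bg_h = background.get_size()

# build an 11,000 px strip: the background plus its first VIEW_W px appended
extended = pygame.Surface((bg_w + VIEW_W, bg_h))
extended.blit(background, (0, 0))
extended.blit(background, (bg_w, 0), pygame.Rect(0, 0, VIEW_W, bg_h))

def draw_background(screen, camera_x):
    # wrapping the camera into [0, bg_w) keeps the view inside the strip
    x = camera_x % bg_w
    screen.blit(extended, (0, 0), pygame.Rect(x, 0, VIEW_W, bg_h))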
I was monkeying around with something similar to what you described that may be of use. I decided to try using a single map class which contained all of my Tiles, and I wanted only part of it loaded into memory at once so I broke it up into Sectors (32x32 tiles). I limited it to only having 3x3 Sectors loaded at once. As my map scrolled to an edge, it would unload the Sectors on the other side and load in new ones.
My Map class would have a Rect of all loaded Sectors, and my camera would have a Rect of where it was located. Each tick I would use those two Rects to find what part of the Map I should blit, and whether I should load in new Sectors. Once you start to change which Sectors are loaded, you have to shift the Relative Sector Coordinates of the remaining Sectors to match.
Each sector had the following attributes:
1. Its Coordinate, with (0, 0) being the top-left-most possible Sector in the world.
2. Its Relative Sector Coordinate, with (0, 0) being the top-left-most loaded Sector, and (2, 2) the bottom-right-most if 3x3 were loaded.
3. A Rect that held the area of the Sector.
4. A bool to indicate if the Sector was fully loaded.
Each game tick would check the bool to see if the Sector was fully loaded, and if not, call next on a generator that would blit X tiles at a time onto the Map surface.
Each update would unload, load, or update an existing Sector. When an existing Sector was updated, it would shift its Relative Sector Coordinate. It would unload Sectors on update, and then create the new ones required. After being created, each Sector would start a generator that would blit X amount of tiles per update.
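The attribute list above might map onto something like this sketch (the tile and sector pixel sizes are assumptions):

import pygame

SECTOR_TILES = 32  # 32x32 tiles per Sector, as described
TILE_PX = 32       # pixel size of one tile (an assumption)
SECTOR_PX = SECTOR_TILES * TILE_PX

class Sector:
    def __init__(self, coord, rel_coord):
        self.coord = coord    # (0, 0) = top-left-most possible Sector in the world
        self.rel = rel_coord  # (0, 0) = top-left-most loaded Sector, up to (2, 2)
        self.rect = pygame.Rect(rel_coord[0] * SECTOR_PX, rel_coord[1] * SECTOR_PX,
                                SECTOR_PX, SECTOR_PX)
        self.loaded = False   # set True once the tile-blitting generator is exhausted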
Thanks everyone for the suggestions. I ended up doing something a little different from the answers provided. Essentially, I made subsurfaces of the main surface and used them as buffers, displaying them as appropriate whenever the viewport included coordinates outside the world. Because the scrolling is omnidirectional, I needed to use 8 buffers, one for each side and all four corners. My solution may not be the most elegant, but it seems to work well, with no noticeable performance drop.