I need to display a chart that can be very large; for example, the image resolution could be 100,000 x 1,000. However, it seems I am limited to 32768 x 32768 by QImage.
I can't reasonably redraw the chart directly at every paintEvent, so I need to store it in a QImage (it could be a QPixmap; that wouldn't change anything). But then it doesn't fit.
My first idea was:
Create a list of QImage
Plot on the various QImage
Redraw using the right QImages.
The first and last points were done quite easily, but the second point is more complex. I'm quite confident that my approach would work, but it requires overloading the basic paint methods (drawing rectangles, circles, etc.) so that they can paint across multiple images.
So, before going any further, I would like to know what could be the other options.
You probably do not want to display more than one QImage of data at a time. Few screens are more than 32k pixels wide or tall.
So you want an abstract type that produces QImages on request for reading, at offsets and possibly at different zoom factors.
The next problem is modifying this abstract type. An easy-to-use, though not maximally performant, version lets users blit QImages into your internal storage (whatever that is).
The user still has to "tile" their efforts, but can do so in ways that are convenient for them.
A higher performance version exposes some of the underlying implementation, which we have not yet mentioned.
A traditional implementation for large images is a tiled image. You have a grid of image tiles that abut each other. When someone asks for a blit from your image, you produce a temporary QImage, and blit the appropriate tiles onto it. And when someone blits to you, you figure out what the appropriate tiles are, and write parts of that source QImage over parts of them.
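A minimal sketch of such a tiled store, assuming PyQt5 (the class name, the 1024-pixel tile size and the lazy allocation are my own choices, not anything Qt prescribes):

from PyQt5.QtCore import QPoint, QRect
from PyQt5.QtGui import QImage, QPainter

TILE = 1024  # tile edge in pixels; an arbitrary choice, tune to taste

class TiledImage:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.tiles = {}  # (col, row) -> QImage, allocated lazily

    def _tile(self, col, row):
        if (col, row) not in self.tiles:
            img = QImage(TILE, TILE, QImage.Format_ARGB32)
            img.fill(0)
            self.tiles[(col, row)] = img
        return self.tiles[(col, row)]

    def read(self, region: QRect) -> QImage:
        """Blit the tiles overlapping `region` onto one temporary QImage."""
        out = QImage(region.width(), region.height(), QImage.Format_ARGB32)
        out.fill(0)
        painter = QPainter(out)
        for col in range(region.left() // TILE, region.right() // TILE + 1):
            for row in range(region.top() // TILE, region.bottom() // TILE + 1):
                # Each tile lands at its offset relative to the requested region.
                painter.drawImage(QPoint(col * TILE - region.left(),
                                         row * TILE - region.top()),
                                  self._tile(col, row))
        painter.end()
        return out

    def write(self, src: QImage, pos: QPoint):
        """Blit `src` over every tile it overlaps."""
        region = QRect(pos, src.size())
        for col in range(region.left() // TILE, region.right() // TILE + 1):
            for row in range(region.top() // TILE, region.bottom() // TILE + 1):
                painter = QPainter(self._tile(col, row))
                # Shift into this tile's coordinate system before drawing.
                painter.drawImage(QPoint(pos.x() - col * TILE,
                                         pos.y() - row * TILE), src)
                painter.end()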
The higher performance interface exposes these tiles.
A low-level interface lets the outside world know where your tiles are and lets it ask for them. This is a poor interface.
A better interface exposes a sub-tile iterator. The caller asks for a region, and you return a pair of iterators describing it. Dereferencing one yields either a tile, the region within that tile, and where that region sits in the "full image", or a sub-tile object (with line stride, line length, etc.) together with that sub-tile's location.
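In Python the iterator pair collapses naturally into a generator; continuing the hypothetical TiledImage sketch from above:

    def sub_tiles(self, region: QRect):
        """Yield (tile, rect_in_tile, rect_in_image) for each tile
        overlapping `region`."""
        for col in range(region.left() // TILE, region.right() // TILE + 1):
            for row in range(region.top() // TILE, region.bottom() // TILE + 1):
                origin = QPoint(col * TILE, row * TILE)
                in_image = region.intersected(
                    QRect(origin.x(), origin.y(), TILE, TILE))
                in_tile = in_image.translated(-origin)
                yield self._tile(col, row), in_tile, in_image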
Another good interface is a foreach-style interface. Again, the user of the big image class passes in the region they want to work with, but also a callback. That callback receives something similar to one of the iterator-dereference results described above.
This approach has two large advantages over the iterator approach. First, you can implement parallel image processing algorithms within your large image class. Second, it is much easier to write than rolling your own iterator.
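A sketch of such a foreach, built on the hypothetical sub_tiles generator above; the parallel branch leans on the fact that QImage (unlike QPixmap) may be painted outside the GUI thread:

from concurrent.futures import ThreadPoolExecutor

def for_each_sub_tile(image, region, callback, parallel=False):
    # `callback(tile, rect_in_tile, rect_in_image)` is applied to every
    # sub-tile of `region`; tiles are independent QImages, so they can
    # be farmed out to a thread pool for parallel image processing.
    pieces = list(image.sub_tiles(region))
    if parallel:
        with ThreadPoolExecutor() as pool:
            futures = [pool.submit(callback, *piece) for piece in pieces]
            for f in futures:
                f.result()  # propagate exceptions from the workers
    else:
        for piece in pieces:
            callback(*piece)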
Once you have either of these, drawing is relatively easy. Determine the region you are drawing on (be generous). Iterate over the resulting tiles. On each tile, draw after applying the offset of the tile to the drawing.
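For example, drawing one circle through the sub_tiles sketch above might look like this:

def draw_circle(image, center: QPoint, radius: int):
    pad = radius + 2  # be generous: leave room for the pen width
    region = QRect(center.x() - pad, center.y() - pad, 2 * pad, 2 * pad)
    for tile, in_tile, in_image in image.sub_tiles(region):
        painter = QPainter(tile)
        # Apply the tile's offset so world coordinates land correctly.
        painter.translate(in_tile.topLeft() - in_image.topLeft())
        painter.drawEllipse(center, radius, radius)
        painter.end()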
You can use the Qt Graphics View Framework. Create a QGraphicsView and a QGraphicsScene for it. Add items using QGraphicsScene::addPixmap (that returns QGraphicsPixmapItem which is derived from QGraphicsItem) and adjust their positions using QGraphicsItem::setPos. QGraphicsView will effectively draw your scene and handle scrolling and zooming if necessary.
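A minimal sketch, assuming PyQt5 and that the chart has already been rendered into a list of QPixmap strips (chart_strips is an invented name):

from PyQt5.QtWidgets import QApplication, QGraphicsScene, QGraphicsView

app = QApplication([])
scene = QGraphicsScene()
x = 0
for pixmap in chart_strips:         # chart_strips: your pre-rendered strips
    item = scene.addPixmap(pixmap)  # returns a QGraphicsPixmapItem
    item.setPos(x, 0)               # lay the strips side by side
    x += pixmap.width()
view = QGraphicsView(scene)         # handles scrolling (and zooming if needed)
view.show()
app.exec_()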
You do realize that a 100,000 x 1000 RGBA QImage is 400 MB? There's no point in wasting all that memory. Really, none.
Just paint it every time, on request, in the paintEvent. Be clever about it so that you only paint what needs to be shown. I'd focus on optimizing the painting process and your data structures so that they can be painted efficiently.
At small scales (zoomed out), a lot can be gained by approximating/decimating/interpolating the data so that it looks the same, but you don't waste time painting the same pixel too many times.
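To illustrate both points, here is a sketch of a paintEvent that touches only the exposed columns and collapses each column to its min/max; samples_for_column is a hypothetical stand-in for however your data maps onto pixel columns:

from PyQt5.QtCore import QPointF
from PyQt5.QtGui import QPainter, QPolygonF
from PyQt5.QtWidgets import QWidget

class ChartWidget(QWidget):
    def paintEvent(self, event):
        painter = QPainter(self)
        points = []
        # Only the columns Qt reports as dirty, not the whole widget.
        for x in range(event.rect().left(), event.rect().right() + 1):
            chunk = self.samples_for_column(x)  # hypothetical data lookup
            # Every sample in this column lands on the same pixel column,
            # so its min and max are all that is visible anyway.
            points.append(QPointF(x, min(chunk)))
            points.append(QPointF(x, max(chunk)))
        painter.drawPolyline(QPolygonF(points))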
This is my first question ever so bear with me!
Currently in my program, I have a parent widget which acts as a canvas. The user can add or remove widgets to the parent at run-time. Those widgets are then given an absolute position, that is, they are not positioned by a layout. Once added, a widget can be moved around arbitrarily by the user.
I want the user to be able to select a group of widgets by dragging a box around them. I have already coded the part that displays the rectangle while the user is dragging. Now, I want to be able to retrieve all the widgets within that rectangle (region).
I am aware of the findChild() and findChildren() functions, and they do indeed return the children as they are supposed to. But what I'd really need is a way to limit the search to the boundaries of the region, since there will most likely be quite a lot of widgets within the 'canvas'. (There could be thousands of widgets spread over a very large area due to the nature of what I'm doing!)
Here is my question: what would be my best option? Should I just go ahead and use findChildren() and loop through the list to find the children within the region manually? Or should I loop through all the pixels within the region using findChild(x, y)? Or perhaps there is an even simpler solution that would speed up the process, something along the lines of findChildren(x, y, width, height)?
Hopefully my question made sense. I tried to explain things as best as I could. Thanks!
If you had used QGraphicsScene instead of rolling your own, you could have used the items(..) methods to very efficiently find your children in a particular area.
It's only possible in QGraphicsScene because it uses a BSP spatial acceleration structure, so if you cannot migrate to QGraphicsScene in a reasonable amount of time, you are going to have to write your own. It's not as hard as it sounds; I've written numerous bounding volume hierarchy structures and they're quite straightforward.
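For reference, the scene route is roughly the first function below; the second is a plain linear scan you could use as a stop-gap if migrating has to wait (a sketch, assuming the selectable widgets are direct children of the canvas):

from PyQt5.QtCore import QRectF
from PyQt5.QtWidgets import QGraphicsScene, QWidget

def items_in_region(scene: QGraphicsScene, rect: QRectF):
    # The scene's BSP index makes this fast even with thousands of items.
    return scene.items(rect)

def widgets_in_region(canvas: QWidget, region):
    # O(n) fallback for plain widgets; still far cheaper than probing
    # every pixel of the rubber band. geometry() is relative to the
    # parent, so this assumes direct children of the canvas.
    return [w for w in canvas.findChildren(QWidget)
            if w.geometry().intersects(region)]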
I have an image-finding and "blur-comparing" task, and I could not figure out which methods I should use.
The setup is this: a box of, say, 100x100 pixels either is mostly filled by an object or not. To the human eye this object is always almost the same, but it might change through blur, slight rescaling, three-dimensional tilting, moving to the side or up/down by a pixel or two, or other very small graphical changes.
What is a simple, quick, robust and reliable way to check whether the transformed object is there or not? Pointers to Python packages as well as code would be nice.
Not sure I entirely understand your question, but I'll give it a shot..
Assuming:
we just want to know if there is some object in a box.
the empty box is always the same
perfect box alignment etc.
You can do this:
subtract the query image from your empty box image.
sum all pixels
if the value is zero the images are identical, therefore no change, so no object.
Obviously there will actually be some difference between the box parts of the two images. The key thing is that the non-object parts of the images be as similar as possible across pictures; if that holds, we can use the above method with a threshold test as the third step. Provided the threshold is set reasonably, it should give a decent prediction of whether the box is empty or not.
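A minimal version of that recipe, assuming numpy and PIL; the threshold is a made-up starting point you would calibrate on your own images:

import numpy as np
from PIL import Image

def box_is_occupied(query_path, empty_path, threshold=1000.0):
    # Sum of absolute pixel differences against the empty-box reference.
    query = np.asarray(Image.open(query_path).convert("L"), dtype=float)
    empty = np.asarray(Image.open(empty_path).convert("L"), dtype=float)
    return np.abs(query - empty).sum() > threshold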
This question is related to this other one.
In my program (which uses pygame to draw objects on the screen) I have two representations of my world:
A physical one, which I use for all the calculations involved in the simulation, and in which objects are located on a 1000x1000 metre surface.
A visual one which I use to draw on the screen, in which my objects are located in a window measuring 100x100 pixels.
What I want to achieve is to be able to pass my physical/real-world coordinates to my pygame drawing functions (which normally accept inputs in pixels). In other words, I would like to be able to say:
Draw a 20m radius circle at coordinates (200m, 500m)
using the precise pygame syntax:
pygame.draw.circle(surface, (255,255,255), (200,500), 20)
and get my circle of 2 px radius centred on pixel (20, 50).
Please note that this question is about a native pygame way to do this, not some sort of workaround to achieve that result (if you want to answer that, you should take a look at the question I already mentioned instead).
Thanks in advance for your time and support.
There is no native pygame way to do this.
You may be misunderstanding the function of pygame. It is not for drawing vector objects. It is for writing pixels into video surfaces.
Since you have vector objects, you must define how they will be converted into pixels. Doing this is not a workaround - it's how you are intended to use pygame.
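The idiomatic route is therefore a thin wrapper that owns the conversion. A sketch for the 1000 m world / 100 px window from the question (the function name and scale constant are invented):

import pygame

METRES_TO_PX = 100 / 1000.0  # 1000 m world mapped onto a 100 px window

def draw_circle_m(surface, colour, centre_m, radius_m):
    # Convert metres to pixels before handing off to pygame;
    # this is exactly the step pygame leaves to you.
    cx, cy = centre_m
    centre_px = (int(cx * METRES_TO_PX), int(cy * METRES_TO_PX))
    radius_px = max(1, int(radius_m * METRES_TO_PX))  # keep thin circles visible
    pygame.draw.circle(surface, colour, centre_px, radius_px)

# draw_circle_m(surface, (255, 255, 255), (200, 500), 20)
# -> a 2 px circle centred on pixel (20, 50)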
Since it seems that pygame developers do not hang around here too much, I brought the question to the Pygame mailing list, where it spawned a monster thread and the issue was debated at large.
The summary would be:
At present there is no such feature.
There is interest in implementing it, or at least in trying to implement it...
...although it is not a priority for the core devs in any way.
There is more than one way to skin a cat:
should the scaling happen both ways (when inputting coordinates and when reading them)?
how to deal with lines that have no thickness but should still be visible?
how to deal with the visibility of objects at the edge of the image? Which of their points should be taken as the reference for deciding whether a pixel gets lit?
and more (see linked thread).
I am using python-clutter 1.0
My question, in the form of a challenge:
Write code to allow zooming in on a CairoTexture actor, by pressing a key, in steps such that at each step the actor can be redrawn (by cairo) so that the image remains high-res but still scales as expected, without resizing the actor.
Think of something like Inkscape and how you can zoom into the vectors; the vectors remain clean at any magnification. Put a path (a bunch of cairo line_to commands, say) onto a CairoTexture actor and then allow the same trick to happen.
More detail
I am aiming at a small SVG editor which uses groups of actors. Each actor is devoted to one path. I 'zoom' by using SomeGroup.set_depth(z) and then make z bigger/smaller. All fine so far. However, the closer the actor(s) get to the camera, the more the texture is stretched to fit their new apparent size.
I can't seem to find a way to get Clutter to do both:
Leave the actor's actual size static (i.e. what it started as).
Swap out its underlying surface for larger ones (on zooming in) that I can then redraw the path onto (using a cairo matrix to perform the scaling of the context).
If I use set_size or set_surface_size, the actor gets larger, which is not intended. I only want its surface (the underlying data) to get larger.
(I'm not sure of the terminology for this; mipmapping, perhaps?)
Put another way: a polygon is getting larger, increase the size of its texture array so that it can map onto the larger polygon.
I have even tried an end-run around clutter by keeping a second surface (using pycairo) that I re-create at the apparent size of the actor (get_transformed_size), and then I use clutter's set_from_rgb_data and point it at my second surface, forcing a resize of the surface but not of the actor's dimensions.
The problem with this is that a) clutter ignores the new size and only draws into the old width/height, and b) the RGBA vs ARGB32 thing causes a bit of a colour meltdown.
I'm open to any alternative ideas, I hope I'm standing in the woods missing all the trees!
Well, despite all my tests and hacks, it was right under my nose all along.
Thanks to Neil on the clutter-project list, here's the scoop:
CT = SomeCairoTextureActor()
# Record the old width and height, once:
old_width, old_height = CT.get_size()

# Then, every time the zoom changes:
# 1. do stuff to the depth of CT (or its parent)
# 2. get the apparent width and height (absolute size in pixels)
appr_w, appr_h = CT.get_transformed_size()
# 3. grow the underlying surface to the new apparent size
CT.set_surface_size(appr_w, appr_h)
# 4. crunch the actor back down to its old size,
#    leaving the texture surface larger than the actor
CT.set_size(old_width, old_height)
The surface size and the size of the actor don't have to be the same. The surface size is just by default the preferred size of the actor. You can override the preferred size by just setting the size on the actor. If the size of the actor is different from the surface size then the texture will be squished to fit in the actor size (which I think is what you want).
Nice to put this little mystery to bed. Thanks clutter list!
It's a long one so you might want to get that cup of tea/coffee you've been holding off on ;)
I run a game called World of Arl, a turn-based strategy game akin to Risk or Diplomacy. Each player has a set of cities, armies and whatnot. The question revolves around the display of these things. Currently the map is created using a background image with CSS positioning of team icons on top of it to represent cities. You can see how it looks here: WoA Map
The background image for the map is located here: Map background. It was created in OmniGraffle, which is not designed for drawing maps, but I'm hopelessly incompetent with Photoshop and this works just fine for my purposes.
The problem is that I want to perform such fun things as pathfinding, and for that I need to have the map somehow stored in code. I have tried using PIL, I have looked at incorporating it with Blender, I tried going "old school" and creating tiles as in many older games, and finally I tried to use SVG. I say this so you can see clearly that it's not through lack of trying that I have this problem ;)
I want to be able to store the map layout in code, and both create an image from it and use it for things such as pathfinding. I'm using Python, but I suspect most answers will be generic. The cities and other such things are stored already and are easily drawn on; what I want to store is the layout of the landmass and the features on it.
As for pathfinding, each type of terrain has a movement cost, and when the map is stored as just an image I can't access the terrain of a given area. Beyond pathfinding, I wish to know the terrain for various other game-related things; cities in mountains produce stone, for example.
Is there a good way to do this, and what terms should I have used in Google? The ones I tried all came up with unrelated results (mapping being something completely different most of the time).
Edit 2:
Armies can be placed anywhere on the map, as can cities; well, anywhere but in the water, where they'd sink, drown and probably complain (in that order).
After chatting to somebody on MSN who made me go over the really minute details and who has a better understanding of the game (owing to the fact that he's played it), it occurred to me that tiles are the way to go, but not the way I had initially thought. I keep the bitmap as it is now, but also have a data layer of tiles, each with a given terrain type, so pathfinding and suchlike can be done on it while I still render using OmniGraffle, which works pretty well.
I will be making an editor for this, as suggested by Adam Smith. I don't know that graphs will be relevant, Xynth, but I've not had a chance to look into them fully yet.
I really appreciate all those that answered my question, thanks.
I'd store a game map in code as a graph.
Each node would represent a country/city and each edge would represent adjacency. Once you have a map like that, I'm sure you can find many resources on AI (pathfinding, strategy, etc.) online.
If you want to be able to build an image of the map programmatically, consider adding an (x, y) coordinate and an image for each node. That way you can display all of the images at their given coordinates to build up a map view.
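A sketch of what such a graph could look like in Python; every name, coordinate, icon and cost below is invented:

# Adjacency-list map: each node is a region with a terrain type (for
# movement costs), a coordinate and an icon, so the same structure can
# drive both pathfinding and rendering.
world = {
    "highmarch": {
        "terrain": "mountain",       # cities here produce stone
        "pos": (120, 80),            # (x, y) on the background image
        "icon": "city_mountain.png",
        "neighbours": ["greenvale", "stormcoast"],
    },
    "greenvale": {
        "terrain": "plains",
        "pos": (200, 150),
        "icon": "city_plains.png",
        "neighbours": ["highmarch"],
    },
}

MOVE_COST = {"plains": 1, "forest": 2, "mountain": 3}  # made-up costs

def cost(a, b):
    # Cost of stepping from node a to an adjacent node b.
    return MOVE_COST[world[b]["terrain"]]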
The key thing to realize here is that you don't have to use just one map. You can use two maps:
The one you already have which is drawn on screen
A hidden map which isn't drawn but which is used for path finding, collision detection etc.
The natural next question then is: where does this second map come from? Easy: you create your own tool which can load and display your first map. Your tool then lets you draw boundaries around your islands and place markers at your cities. These markers and boundaries (e.g. simple polygons) are stored as your second map and used in your code to do pathfinding etc.
In fact you can have your tool emit python code which creates the graphs and polygons so that you don't have to load any data yourself.
I am basically just telling you to make a level editor, and it isn't very hard to do. You just need some buttons to define what you are adding. For example, if you have toggled your "add polygon" button, you can append each mouse coordinate to an array on every click. You can have another button for adding cities, so that each time you click on the map you record that coordinate for the city, along with a name you can provide in a text box.
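A bare-bones pygame sketch of that editor loop, with keys standing in for the buttons; the background filename and bindings are invented ('p' closes the current island boundary, 'c' switches to placing cities):

import pygame

pygame.init()
background = pygame.image.load("map_background.png")  # hypothetical export
screen = pygame.display.set_mode(background.get_size())

mode = "polygon"
polygons, current, cities = [], [], []

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN:
            if event.key == pygame.K_p:       # close polygon, start a new one
                if len(current) > 2:
                    polygons.append(current)
                current, mode = [], "polygon"
            elif event.key == pygame.K_c:     # switch to city-placing mode
                mode = "city"
        elif event.type == pygame.MOUSEBUTTONDOWN:
            if mode == "polygon":
                current.append(event.pos)     # one boundary vertex per click
            else:
                cities.append(event.pos)      # one city marker per click

    screen.blit(background, (0, 0))
    for poly in polygons + ([current] if len(current) > 2 else []):
        pygame.draw.polygon(screen, (255, 0, 0), poly, 1)
    for city in cities:
        pygame.draw.circle(screen, (0, 0, 255), city, 4)
    pygame.display.flip()

On exit you can dump polygons and cities with repr() to get the emitted Python data mentioned above.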
You're going to have to translate your map into an abstract representation of some kind. Either a grid (hex or square) or a graph as xynth suggests. That's the only way you're going to be able to apply things like pathfinding algorithms to it.
IMO, the map should be rendered in the first place instead of being a bitmap. What you should do is have separate objects, each knowing its own dimensions, such as a generic Area class and classes like City and Town derived from it. Your objects should have all the information about their location, their terrain, etc., and should render/paint themselves. This way you will know exactly where everything lies.
Another option is to keep the bitmap as it is and keep this information in your objects as data. That way the objects won't have a draw function but will still have precise information about their placement etc. This duplicates the data somewhat, but if you want to go with the bitmap option I can't think of any other way.
If you just want to do e.g. 2D hit-testing on the map, then storing it yourself is fine. There are a few possibilities for how you can store the information:
A polygon per island
Representing each island as a union of a list of rectangles (commonly used by windowing systems)
Creating a special (maybe greyscale) bitmap of the map which uses a unique solid colour for each island (see the sketch after this list)
Something more complex (perhaps whatever OmniGraffle's internal representation is)
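As an example, the bitmap option boils down to very little code; a sketch where the mask filename and the colour-to-terrain table are invented:

from PIL import Image

# A hidden image the same size as the map, where every island (or
# terrain type) is painted in its own solid grey value.
TERRAIN = {0: "water", 50: "plains", 120: "forest", 200: "mountain"}

hit_map = Image.open("terrain_mask.png").convert("L")  # hypothetical file

def terrain_at(x, y):
    return TERRAIN.get(hit_map.getpixel((x, y)), "unknown")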
Assuming the map is fixed (not created on the fly), it's correct to use a bitmap as the graphical representation - you want to make it as pretty as possible.
For any game related features such as pathfinding or whatever fancy stuff you want to add you should add adequate data structures, even if that means some data is redundant.
E.g. describe the boundaries of the isles as polygons or splines (either manually or automatically created from the bitmap; that's up to you, depending on how much effort you want to spend and what's needed to get the functionality you want).
To sum it up: create data structures matching the problems you have to solve; the bitmap is fine for looks, but avoid doing pathfinding or other such work on it.