QTableView - ResizeToContents queries every row? - python

Good afternoon,
I'm using a QTableView + QAbstractTableModel to display a potentially large amount of data (20k+ rows), where each row consists of cells holding text of various lengths, including newlines, displayed using a custom delegate. The data resides in memory (no database, stream or similar) and might be changed from outside the table. To adapt the row heights to changes of the text, I set the resize mode of the table view's vertical header to ResizeToContents, which correctly uses the sizeHint of my delegate to set the height.
This works well, but depending on the size of the table the performance is abysmal (several minutes to load a large table). Once I turn off the resize mode, loading is lightning fast, but of course the row heights are incorrect (causing text with newlines to overlap rows, etc.). It seems that when using the auto-resize mode, every cell in every row is queried for its size hint, which takes a lot of time (confirmed by printing a debug message in the sizeHint function of my delegate).
I wonder if this is the intended behaviour of the ResizeToContents mode, as I would assume it would only be necessary to query the actually visible rows of the table view - not all rows (as given by the rowCount() call). As only a fraction of the rows is displayed at any one time, this would of course improve the performance noticeably. Is this a bug/shortcoming in the resize code, or do I misunderstand something about this functionality? By the way, I'm using PyQt 4.10, so maybe this behaviour has changed in newer versions?
Thanks in advance for all hints.

If you set the vertical header's resize mode to ResizeToContents, every row update causes the whole table to be processed to obtain the new sizes. This behaviour is a life saver for most of us, as long as the table is not large and frequently updated.
First: don't use the ResizeToContents resize mode!
Basic solution: use a fixed size for the sections, with the stretch option. (I think this is not an option for you.)
The solution I use: a timer that calls the resizeColumnsToContents() slot every 2 seconds (a sketch of this approach, adapted to your row-height case, follows below).
Optimized solution: you can adapt my solution to your case. For example, wait until all the row data has been updated, then call the resize slot once.
As for your suggestion (resizing only the visible items): it is not useful.
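A minimal sketch of the timer idea in PyQt4, adapted to the row-height case; the view and model names are placeholders for your own objects:

from PyQt4 import QtCore, QtGui

# Sketch only: `view` is your QTableView, `model` its QAbstractTableModel.
# Turn off the automatic per-row size queries...
view.verticalHeader().setResizeMode(QtGui.QHeaderView.Fixed)

# ...and instead recompute the row heights every 2 seconds.
resize_timer = QtCore.QTimer(view)
resize_timer.setInterval(2000)
resize_timer.timeout.connect(view.resizeRowsToContents)
resize_timer.start()

# Optimized variant: resize once, after the data has finished updating,
# for example when the model is reset.
model.modelReset.connect(view.resizeRowsToContents)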

Presenting parts of a pre-prepared image array in Shady

I'm interested in migrating from Psychtoolbox to Shady for my stimulus presentation. I looked through the online docs, but it is not very clear to me how to replicate in Shady what I'm currently doing in Matlab.
What I do is actually very simple. For each trial:
1. I load from disk a single image (I do luminance linearization off-line), which contains all the frames I plan to display in that trial (the stimulus is 1000x1000 px, and I present 25 frames, hence the image is 5000x5000 px; I only use BW images, so I have a single int8 value per pixel).
2. I transfer the entire image from the CPU to the GPU.
3. At some point (externally controlled) I copy the first frame to the video buffer and present it.
4. At some other point (externally controlled) I trigger the presentation of the remaining 24 frames (copying the relevant part of the image to the video buffer for each video frame, and then calling flip()).
The external control happens by having another machine communicate with the stimulus presentation code over TCP/IP. After the control PC sends a command to the presentation PC and this is executed, the presentation PC needs to send back an acknowledgement message to the control PC. I need to send three ACK messages, one when the first frame appears on screen, one when the 2nd frame appears on screen, and one when the 25th frame appears on screen (this way the control PC can easily verify if a frame has been dropped).
In Matlab I do this by calling the blocking method flip() to present a frame, and when it returns I send the ACK to the control PC.
That's it. How would I do that in Shady? Is there an example that I should look at?
The places to look for this information are the docstrings of Shady.Stimulus and Shady.Stimulus.LoadTexture, as well as the included example script animated-textures.py.
Like most things Python, there are multiple ways to do what you want. Here's how I would do it:
w = Shady.World()
s = w.Stimulus( [frame00, frame01, frame02, ...], multipage=True )
where each frameNN is a 1000x1000-pixel numpy array (either floating-point or uint8).
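If your trial data is still one 5000x5000 array in memory, those frames can come straight from numpy slicing. A sketch, assuming the composite (here called big_image) is tiled as a 5x5 grid of 1000x1000 frames, row by row from the top left (adjust the ordering to however you actually lay it out):

# big_image: your 5000x5000 uint8 array (the tiling order below is an assumption)
frames = [big_image[r*1000:(r+1)*1000, c*1000:(c+1)*1000]
          for r in range(5) for c in range(5)]
s = w.Stimulus(frames, multipage=True)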
Alternatively you can ask Shady to load directly from disk:
s = w.Stimulus('trial01/*.png', multipage=True)
where directory trial01 contains twenty-five 1000x1000-pixel image files, named (say) 00.png through 24.png so that they get sorted correctly. Or you could supply an explicit list of filenames.
Either way, whether you loaded from memory or from disk, the frames are all transferred to the graphics card in that call. You can then (time-critically) switch between them with:
s.page = 0 # or any number up to 24 in your case
Note that, due to our use of the multipage option, we're using the "page" animation mechanism (create one OpenGL texture per frame) instead of the default "frame" mechanism (create one 1000x25000 OpenGL texture) because the latter would exceed the maximum allowable dimensions for a single texture on many graphics cards. The distinction between these mechanisms is discussed in the docstring for the Shady.Stimulus class as well as in the aforementioned interactive demo:
python -m Shady demo animated-textures
To prepare the next trial, you might use .LoadPages() (new in Shady version 1.8.7). This loops through the existing "pages" loading new textures into the previously-used graphics-card texture buffers, and adds further pages as necessary:
s.LoadPages('trial02/*.png')
Now, you mention that your established workflow is to concatenate the frames as a single 5000x5000-pixel image. My solutions above assume that you have done the work of cutting it up again into 1000x1000-pixel frames, presumably using numpy calls (sounds like you might be doing the equivalent in Matlab at the moment). If you're going to keep saving as 5000x5000, the best way of staying in control of things might indeed be to maintain your own code for cutting it up. But it's worth mentioning that you could take the entirely different strategy of transferring it all in one go:
s = w.Stimulus('trial01_5000x5000.png', size=1000)
This loads the entire pre-prepared 5000x5000 image from disk (or again from memory, if you want to pass a 5000x5000 numpy array instead of a filename) into a single texture in the graphics card's memory. However, because of the size specification, the Stimulus will only show the lower-left 1000x1000-pixel portion of the array. You can then switch "frames" by shifting the carrier relative to the envelope. For example, if you were to say:
s.carrierTranslation = [-1000, -2000]
then you would be looking at the frame located one "column" across and two "rows" up in your 5x5 array.
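If you index the frames by column and row within that 5x5 grid, the translation is simple arithmetic. A hypothetical helper (the frame size and grid layout are assumptions about how you tiled the composite):

def carrier_translation(col, row, frame_size=1000):
    # `col` columns across and `row` rows up from the lower-left frame
    return [-col * frame_size, -row * frame_size]

s.carrierTranslation = carrier_translation(1, 2)   # the [-1000, -2000] example above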
As a final note, remember that you could take advantage of Shady's on-the-fly gamma-correction and dithering; they're happening anyway unless you explicitly disable them, though of course they have no physical effect if you leave the stimulus .gamma at 1.0 and use integer pixel values. So you could generate your stimuli as separate 1000x1000 arrays, each containing unlinearized floating-point values in the range [0.0, 1.0], and let Shady worry about everything beyond that.

Displaying very large data sets more efficiently

I have a logic analyser project that records several hundred million 16-bit values (~100-500 million) and I need to display anything from a few hundred samples to the entire capture as the user zooms.
When you zoom out the whole system gets a huge performance hit as it's loading a massive chunk from the file.
It occurred to me this morning that it would be more efficient to "stride" through the file at the user's screen resolution. You can't physically display anything between pixels anyway. This doesn't solve the huge in-memory footprint, though.
Is there a way I can take a huge data set and stream it down into chunks efficiently?
I was thinking of streaming from start to start + view size, strided by the horizontal resolution. That makes for a very choppy zoom, though.
The program uses Python, but I am open to calling something in C if it already exists.
Well, I don't know whether this is actually a question about programming or about overall design.
For the "zooming" problem with visualizations I suggest:
Have a pre-computed/cached version for some zoom levels (a rough sketch follows after this list). Ideally, the gradations should be chosen based on user behaviour.
When the user zooms in, you simultaneously
calculate the "proper" data, or load the pre-computed aggregated data of a deeper zoom level and crop it to your view frame
cheat by rendering the low-res data from the previous level, or smooth it with some approximation (but make sure to somehow tell the user that the data is not final)
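Here is a rough sketch of the pre-computed-levels idea in numpy, assuming the capture is a flat file of 16-bit samples; the file name, dtype and bucket sizes are placeholders:

import numpy as np

samples = np.memmap("capture.bin", dtype=np.int16, mode="r")

def build_level(data, bucket):
    # keep the min and max of every `bucket` samples so spikes survive downsampling
    n = data.size // bucket * bucket
    chunk = data[:n].reshape(-1, bucket)
    return np.stack([chunk.min(axis=1), chunk.max(axis=1)], axis=1)

# pre-compute a few gradations once and cache them (on disk if you like)
levels = {bucket: build_level(samples, bucket) for bucket in (64, 4096, 262144)}

def view(start, stop, pixels):
    # return data covering samples [start, stop) at roughly screen resolution
    per_pixel = max(1, (stop - start) // pixels)
    usable = [b for b in levels if b <= per_pixel]
    if not usable:                      # zoomed in far enough: read the raw samples
        return samples[start:stop]
    bucket = max(usable)
    return levels[bucket][start // bucket : stop // bucket]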
Aside from that, think about whether you can optimize the way you store the data. Trees may make your life much easier, both for partial disk reads/searches and for storing aggregated data.
In my opinion, there is no point in displaying even a few hundred samples unless they form some kind of image/shape. I guess one can look at a hundred numbers if they are properly structured (colored). Several hundred? I doubt it; at that point you replace the actual data with some visualization (plots, charts, maps, ...).
To approach the problem you may define some rule for when to stop displaying actual data at all. For instance, if the digit height becomes less than, say, 10 pixels, you display some kind of message like "selected numbers are from rows 200...300, columns 400...500", or some graphical alternative with corner coordinates and the amount of numbers.

How can I make the icons in an IconView spread out evenly?

I have a gtk.IconView with several icons in it. Sometimes I will resize the window to see more icons. When I do this, the extra space generated isn't distributed evenly between all the columns. Instead, it all gets put on the right until there's enough space for a new column.
I'm not seeing anything in the documentation that would let me do this automatically. Do I need to check for resize signals and then manually set the column and row spacings? If so, which resize signal do I use?
Here's a picture of what I mean. I've marked the extra space in red.
This is what I'd like to see (of course, with the gaps actually evenly spaced, unlike my poor MS Paint job).
We have encountered that problem in Ubuntu Accomplishments Viewer, and as we managed to solve it, I'll present our solution.
The trick is to place the GtkIconView in a GtkScrolledWindow and set its hscrollbar_policy to "always". Then a check-resize signal has to be used to react when the user resizes the window (note that you must check whether the size has actually changed, because the signal is also emitted when, for example, the window is dragged around). When the size changes, the model used by the GtkIconView has to be cleared and recreated, as this makes the GtkIconView properly reallocate the newly gained space (or shrink). Also, as a result the horizontal scrollbar will never be seen, because the GtkIconView uses exactly as much space as the GtkScrolledWindow provides.
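A minimal sketch of that approach in PyGTK; the model layout (a pixbuf plus a label) and the window class are assumptions, so adapt them to your own store:

import gtk

class IconWindow(gtk.Window):
    def __init__(self, items):
        super(IconWindow, self).__init__()
        self.items = items                      # list of (pixbuf, text) tuples
        self.last_size = None

        self.model = gtk.ListStore(gtk.gdk.Pixbuf, str)
        self.iconview = gtk.IconView(self.model)
        self.iconview.set_pixbuf_column(0)
        self.iconview.set_text_column(1)
        self.fill_model()

        scrolled = gtk.ScrolledWindow()
        scrolled.set_policy(gtk.POLICY_ALWAYS, gtk.POLICY_AUTOMATIC)
        scrolled.add(self.iconview)
        self.add(scrolled)

        self.connect("check-resize", self.on_check_resize)

    def fill_model(self):
        self.model.clear()
        for pixbuf, text in self.items:
            self.model.append([pixbuf, text])

    def on_check_resize(self, container):
        # the signal also fires when the window is merely moved,
        # so only rebuild the model when the size actually changed
        size = self.get_size()
        if size != self.last_size:
            self.last_size = size
            self.fill_model()                   # makes the IconView redistribute the space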
Yeah, after a quick look it seems that you will need to do that on your own. And regarding the signal, I'd check GtkContainer::check-resize.
Use the size_allocate event.
I defined my class:
from gi.repository import Gtk

class toto(Gtk.IconView):
    def __init__(self):
        super(toto, self).__init__()
        self.connect("size_allocate", self.resize)
        self.set_columns(4)
Then I modify the number of columns in the resize handler:
    def resize(self, _iv, _rect):
        print("X", _rect.x)
        print("Y", _rect.y)
        print("W", _rect.width)
        print("H", _rect.height)
        # calculate the number of columns, let's say 3
        _cols = 3
        self.set_columns(_cols)
Seems to work for me.
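To derive the column count from the allocated width instead of hard-coding it, something along these lines should work (the per-item width is a hypothetical value you would choose or measure for your icons):

# inside resize(), instead of the hard-coded value:
item_width = 120                        # hypothetical width of one icon cell, in pixels
_cols = max(1, _rect.width // item_width)
self.set_columns(_cols)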

Speeding up GTK tree view

I'm writing an application for the Maemo platform using pygtk and the rendering speed of the tree view seems to be a problem. Since the application is a media controller I'm using transition animations in the UI. These animations slide the controls into view when moving around the UI. The issue with the tree control is that it is slow.
Just moving the widget around in the middle of the screen is not that slow but if the cells are being exposed the framerate really drops. What makes this more annoying is that if the only area that is being exposed is the title row with the row labels, the framerate remains under control.
Judging by this, I suspect the GTK tree view redraws the full cells each time even a single row of pixels is exposed. Is there a way to somehow force GTK to draw the whole widget into some buffer, even if parts of it are off screen, and then use the buffer to draw the widget when animating?
Also, is there a difference between using a Viewport and scrolling it up, versus using a Layout panel and moving the widgets down? I'd have imagined the Viewport would be faster, but I saw no real difference when I tried both versions.
I understand this isn't necessarily what GTK has been created for. The other alternative I've tried is pygame, but I'd prefer some higher-level implementation that has widget-based event handling built in. Also, pygtk has the benefit of running on Windows and in a window, so development is easier.
I never did this myself but you could try to implement the caching yourself. Instead of using the predefined cell renderers, implement your own cell renderer (possibly as a wrapper for the actual one), but cache the pixmaps.
In PyGTK, you can use gtk.GenericCellRenderer. In your decorator cell renderer, do the following when asked to render:
keep a cache of off-screen pixmaps (or better, just one large one) and a cache of sizes
if asked to predict the size or render, create a key from the relevant properties
if the key exists in the cache, blit the cached pixmap onto the drawable you are given
otherwise, first have the actual cell renderer do the work and then copy it
The last step also implies that caching incurs an overhead the first time a cell is rendered. This problem can be mitigated a bit with a suitable caching strategy. You might want to try out different things, based on the distribution of rendered values:
if all cells are unique, there is not much to do other than caching everything up to a certain limit, or using some MRU strategy
if you have some kind of Zipf distribution, i.e. some cells are very common while others are very rare, you should only cache the cells with high frequency and get rid of the caching overhead for rare cell values.
That being said, I can't say if it's going to make any difference. My experience from a somewhat similar problem is that anything involving text is usually slow enough that caching makes sense. Sorry that I can't give simpler advice.
Before you try that, you could also simply write a decorating cell renderer which just counts how often your cells are actually rendered and gathers some timing information, so that you get an idea where the hot spots are and whether caching the values would make any sense at all.
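A bare-bones sketch of such a counting/timing decorator with gtk.GenericCellRenderer; the wrapped renderer is an assumption, and in real use its properties would still have to be set, for example from a cell data function:

import time
import gtk
import gobject

class CountingCellRenderer(gtk.GenericCellRenderer):
    def __init__(self, wrapped):
        gtk.GenericCellRenderer.__init__(self)
        self.wrapped = wrapped              # e.g. a gtk.CellRendererText
        self.render_calls = 0
        self.render_time = 0.0

    def on_get_size(self, widget, cell_area):
        return self.wrapped.get_size(widget, cell_area)

    def on_render(self, window, widget, background_area,
                  cell_area, expose_area, flags):
        start = time.time()
        self.wrapped.render(window, widget, background_area,
                            cell_area, expose_area, flags)
        self.render_calls += 1
        self.render_time += time.time() - start

gobject.type_register(CountingCellRenderer)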

How to set and preserve minimal width?

I use some wx.ListCtrl classes in wx.LC_REPORT mode, augmented with ListCtrlAutoWidthMixin.
The problem is: when the user double clicks a column divider (to auto-resize the column), the column width is set to match the width of the contents. This is done by the wx library, and it resizes the column to just a few pixels when the control is empty.
I tried calling
self.SetColumnWidth(colNumber, wx.LIST_AUTOSIZE_USEHEADER)
while creating the list, but it just sets the initial column width, not the minimum allowed width.
Has anyone succeeded in setting a minimum column width?
EDIT: Tried catching
wx.EVT_LEFT_DCLICK
with no success. This event isn't generated when the user double clicks a column divider. Also tried with
wx.EVT_LIST_COL_END_DRAG
This event is generated, usually twice, for a double click, but I don't see how I can retrieve information about the new size, or how to differentiate a double click from drag & drop. Does anyone have any other ideas?
Honestly, I've stopped using the native wx.ListCtrl in favor of using ObjectListView. There is a little bit of a learning curve, but there are lots of examples. This would be of interest to your question.
OK, after some struggle I got a working workaround for this. It is ugly from a design point of view, but it works well enough for me.
Here's how it works:
Store the initial width of the column:
self.SetColumnWidth(colNum, wx.LIST_AUTOSIZE_USEHEADER)
self.__columnWidth[colNum] = self.GetColumnWidth(colNum)
Register a handler for the update UI event:
wx.EVT_UPDATE_UI(self, self.GetId(), self.onUpdateUI)
And write the handler function:
def onUpdateUI(self, evt):
    for colNum in xrange(0, self.GetColumnCount()-1):
        if self.GetColumnWidth(colNum) < self.__columnWidth[colNum]:
            self.SetColumnWidth(colNum, self.__columnWidth[colNum])
    evt.Skip()
The self.GetColumnCount() - 1 is intentional, so the last column is not resized. I know this is not an elegant solution, but it works well enough for me - you cannot make columns too small by double clicking on the dividers (in fact, you can't do it at all), and double-clicking on the divider after the last column resizes the last column to fit the list control width.
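For completeness: the same minimum could probably also be enforced from the EVT_LIST_COL_END_DRAG event mentioned in the question, since the column index is available from the event (untested sketch):

# in __init__:
self.Bind(wx.EVT_LIST_COL_END_DRAG, self.onColEndDrag)

# handler:
def onColEndDrag(self, evt):
    col = evt.GetColumn()
    if self.GetColumnWidth(col) < self.__columnWidth[col]:
        self.SetColumnWidth(col, self.__columnWidth[col])
    evt.Skip()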
Still, if anyone knows a better solution, please post it.
