Issue with Paint Event on Mac OSX 10.8.5 in wxPython - python

In my program I have an image (bitmap) loaded into a wxScrolledWindow. I'm trying to draw a grid over the image but I just cannot get it to work. My job is to port this program from Windows, where it was originally developed, over to Mac, but it is a bigger pain in the butt than I expected.
def OnPaint(self, event):
    dc = wx.BufferedPaintDC(self.staticBitmap, self.staticBitmap.GetBitmap())
    dc.Clear()
    dc.DrawBitmap(self.wxBitmap, 0, 0)
    self.drawGrid(dc)
    event.Skip()

def drawGrid(self, dc):
    gridWid, gridHgt = self.staticBitmap.GetBitmap().GetSize()
    numRows, numCols = self.gridSize, self.gridSize
    if self.controlPanel.showGridBox.IsChecked():
        dc.SetPen(wx.Pen(self.gridColor, self.gridThickness))
        dc.SetTextForeground(self.gridColor)
        cellWid = float(gridWid - 1) / numRows
        cellHgt = float(gridHgt - 1) / numCols
        for rowNum in xrange(numRows + 1):
            dc.DrawLine(0, rowNum*cellHgt, gridWid, rowNum*cellHgt)
        for colNum in xrange(numCols + 1):
            dc.DrawLine(colNum*cellWid, 0, colNum*cellWid, gridHgt)
This code works just fine on Windows 7, but I keep getting this error when running it on Mac:
Traceback (most recent call last):
File "/Users/kyra/Documents/workspace/ADAPT/src/GUI.py", line 1617, in OnPaint
dc = wx.BufferedPaintDC(self.staticBitmap, self.staticBitmap.GetBitmap())
File "/usr/local/lib/wxPython-3.0.2.0/lib/python2.7/site-packages/wx-3.0-osx_cocoa/wx/_gdi.py", line 5290, in __init__
_gdi_.BufferedPaintDC_swiginit(self,_gdi_.new_BufferedPaintDC(*args, **kwargs))
wx._core.PyAssertionError: C++ assertion "window->MacGetCGContextRef() != NULL" failed at /BUILD/wxPython-src-3.0.2.0/src/osx/carbon/dcclient.cpp(195) in wxPaintDCImpl(): using wxPaintDC without being in a native paint event
self.staticBitmap is a wxStaticBitmap, and self.wxBitmap is the exact same image. My guess is that it has something to do with a GraphicsContext. A similar question was asked here: How to send PaintEvent in wxpython, but it did not help me. I did what they suggested with self.Refresh(), but the same error comes up. Why would this work on Windows but not on Mac? No drawing appears on the image.

First, you shouldn't be handling the paint event for a native widget. Sometimes it will work, as it does in this case on Win7, but other times it won't, and it is not officially supported by wxWidgets (the behavior is undefined).
Second, why bother painting the wx.StaticBitmap at all? If you need to change the bitmap that the widget is displaying, you can just give it a new one with its SetBitmap method. If the grid you are drawing is dynamic (needs to change over time), then you could use a wx.MemoryDC to compose a new bitmap with the grid (in other words, draw the bitmap and call drawGrid on the memory DC) and then pass that new bitmap to SetBitmap.
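A minimal sketch of that approach, reusing the names from the question (updateGrid itself is a made-up method name) and assuming classic wxPython, matching the traceback above:

def updateGrid(self):
    # Compose the image and the grid off-screen, then swap the finished
    # bitmap into the wx.StaticBitmap instead of painting over it.
    w, h = self.wxBitmap.GetWidth(), self.wxBitmap.GetHeight()
    buffer = wx.EmptyBitmap(w, h)    # wx.Bitmap(w, h) in Phoenix
    mdc = wx.MemoryDC()
    mdc.SelectObject(buffer)
    mdc.DrawBitmap(self.wxBitmap, 0, 0)
    self.drawGrid(mdc)
    mdc.SelectObject(wx.NullBitmap)  # detach before handing the bitmap over
    self.staticBitmap.SetBitmap(buffer)
    self.staticBitmap.Refresh()

Calling updateGrid whenever the grid settings change keeps all drawing out of the native widget's paint event.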
Third, you don't usually see calls to event.Skip in paint event handlers. That can cause problems in some cases too, unless the base classes are expecting it.
Fourth, it's not really a problem, but using wx.BufferedPaintDC on Mac is superfluous because the platform already double-buffers everything; GTK does so in most cases as well. There is a wx.AutoBufferedPaintDC that will be either a PaintDC or a BufferedPaintDC depending on whether buffering is needed on the given platform. Or you can decide which to use in your own code by looking at the return value of window.IsDoubleBuffered().
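For a custom (non-native) window, that choice might look like this sketch:

def OnPaint(self, event):
    # AutoBufferedPaintDC is a plain PaintDC where the platform already
    # double-buffers (Mac, recent GTK) and a BufferedPaintDC elsewhere
    dc = wx.AutoBufferedPaintDC(self)
    # equivalent manual choice:
    # dc = wx.PaintDC(self) if self.IsDoubleBuffered() else wx.BufferedPaintDC(self)
    dc.Clear()
    # ... drawing code goes here ...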
Finally, if you would rather handle this with paint events instead of generating and swapping images in the wx.StaticBitmap, another approach is to make a custom class, similar to wx.StaticBitmap, that simply paints a bitmap on itself but also knows how to draw the grid when needed; then use that class in place of the wx.StaticBitmap. You could use the wx.lib.statbmp module as a starting point.
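A rough sketch of such a class, borrowing the grid logic from the question (the GridBitmap name and its constructor arguments are made up for illustration):

import wx
from wx.lib.statbmp import GenStaticBitmap

class GridBitmap(GenStaticBitmap):
    """A StaticBitmap look-alike that owns its own paint event."""
    def __init__(self, parent, bitmap, gridSize=10,
                 gridColor='red', gridThickness=1):
        GenStaticBitmap.__init__(self, parent, wx.ID_ANY, bitmap)
        self.gridSize = gridSize
        self.gridColor = gridColor
        self.gridThickness = gridThickness
        # bind our own paint handler; since we don't Skip, it replaces
        # the default bitmap-only painting
        self.Bind(wx.EVT_PAINT, self.OnPaint)

    def OnPaint(self, event):
        dc = wx.PaintDC(self)
        bmp = self.GetBitmap()
        dc.DrawBitmap(bmp, 0, 0)
        dc.SetPen(wx.Pen(self.gridColor, self.gridThickness))
        w, h = bmp.GetWidth(), bmp.GetHeight()
        cellW = float(w - 1) / self.gridSize
        cellH = float(h - 1) / self.gridSize
        for i in xrange(self.gridSize + 1):
            dc.DrawLine(0, int(i * cellH), w, int(i * cellH))
            dc.DrawLine(int(i * cellW), 0, int(i * cellW), h)

The parent code would then create a GridBitmap where it currently creates the wx.StaticBitmap and simply call Refresh() when the grid settings change.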

Related

Python GTK3 How to bring widgets to the front?

I have an application (actually a plugin for another application) that presents a GTK notebook. Each tab contains a technical drawing of an operation, with a set of SpinButtons that allow you to alter the dimensions of the operation.
If you need more context, it's here: https://forum.linuxcnc.org/41-guis/26550-lathe-macros?start=150#82743
As can be seen above, this all worked fine in GTK2. The widgets (first iteration in a GTK_Fixed, then moved to a GTK_Table) were pre-positioned and the image (a particular layer of a single SVG) was plonked in behind.
Then we updated to GTK3 (and Python 3) and it stopped working. The SVG image now appears on top of the input widgets, and they can no longer be seen or operated.
I am perfectly happy to change the top level container[1], if that will help. But the code that used to work (and now doesn't) is:
def on_expose(self, nb, data=None):
    tab_num = nb.get_current_page()
    tab = nb.get_nth_page(tab_num)
    cr = tab.get_property('window').cairo_create()
    cr.set_operator(cairo.OPERATOR_OVER)
    alloc = tab.get_allocation()
    x, y, w, h = (alloc.x, alloc.y, alloc.width, alloc.height)
    sw = self.svg.get_dimensions().width
    sh = self.svg.get_dimensions().height
    cr.translate(0, y)
    cr.scale(1.0 * w / sw, 1.0 * h / sh)
    # TODO: gtk3 drawing works, but svg is drawn over the UI elements
    self.svg.render_cairo_sub(cr=cr, id='#layer%i' % tab_num)
[1] In fact I will probably go back to GTK_Fixed and move the elements about in the handler when the window resizes, scaled according to the original position. The GTK_Table (deprecated) version takes over 2 minutes to open in the Glade editor.
Unless there is a more elegant way to do this too?

Pyqt5 image coordinates

I display images with a QLabel. I need image/pixel coordinates, but when I use a mouse click event it only gives me QLabel coordinates.
For example, my image is 800x753 and my QLabel geometry is (701, 451). I read coordinates within (701, 451), but I need image coordinates within (800x753).
def resimac(self):
    filename = QtWidgets.QFileDialog.getOpenFileName(None, 'Resim Yükle', '.', 'Image Files (*.png *.jpg *.jpeg *.bmp *.tif)')
    self.image = QtGui.QImage(filename[0])
    self.pixmap = QtGui.QPixmap.fromImage(self.image)
    self.resim1.setPixmap(self.pixmap)
    self.resim1.mousePressEvent = self.getPixel

def getPixel(self, event):
    x = event.pos().x()
    y = event.pos().y()
    print("X=", x, " y= ", y)
Since you didn't provide a minimal, reproducible example, I'm going to assume that you're probably setting the scaledContents property, but that might not be true (for instance, if you set a maximum or fixed size for the label).
There are some other serious issues with your code; I'll address them at the end of this answer.
The point has to be mapped to the pixmap coordinates
When setting a pixmap to a QLabel, Qt automatically resizes the label to its contents.
Well, it does that unless the label has some size constraints: a maximum/fixed size that is smaller than the pixmap, and/or the QLabel has the scaledContents property set to True, as noted above. Note that this also happens if any of its ancestors has some size constraint (for example, the main window has a maximum size, or it's maximized on a screen smaller than the space the window needs).
In any of those cases, the mousePressEvent will obviously give you the coordinates based on the widget, not on the pixmap.
First of all, even if it doesn't seem that important, you have to consider that every widget can have contents margins: the widget will still receive events that happen inside the area of those margins, even if they are outside its actual contents. So you have to account for that and ensure that the event happens within the real geometry of the widget contents (in this case, the pixmap). If it does, you then have to translate the event position into that rectangle to get its position relative to the pixmap.
Then, if the scaledContents property is true, the image will be scaled to the current available size of the label (which also means that its aspect ratio will not be maintained), so you'll need to scale the position.
This is just a matter of math: compute the proportion between the image size and the (contents of the) label, then multiply the value using that proportion.
# click on the horizontal center of the widget
mouseX = 100
pixmapWidth = 400
widgetWidth = 200
xRatio = pixmapWidth / widgetWidth
# xRatio = 2.0
pixmapX = mouseX * xRatio
# the resulting "x" is the horizontal center of the pixmap
# pixmapX = 200
On the other hand, if the contents are not scaled you'll have to consider the QLabel alignment property; it is usually aligned on the left and vertically centered, but that depends on the OS, the style currently in use and the localization (consider right-to-left writing languages). This means that if the image is smaller than the available size, there will be some empty space within its margins, and you'll have to be aware of that.
In the following example I try to take care of all of that (to be honest, I'm not 100% sure, as there might be some 1-pixel tolerance for various reasons, mostly related to integer-based coordinates and DPI awareness).
Note that instead of overwriting mousePressEvent as you did, I'm using an event filter; I'll explain the reason for that afterwards.
from PyQt5 import QtCore, QtGui, QtWidgets

class Window(QtWidgets.QWidget):
    def __init__(self):
        QtWidgets.QWidget.__init__(self)
        layout = QtWidgets.QGridLayout(self)
        self.getImageButton = QtWidgets.QPushButton('Select')
        layout.addWidget(self.getImageButton)
        self.getImageButton.clicked.connect(self.resimac)
        self.resim1 = QtWidgets.QLabel()
        layout.addWidget(self.resim1)
        self.resim1.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignVCenter)
        # I'm assuming the following...
        self.resim1.setScaledContents(True)
        self.resim1.setFixedSize(701, 451)
        # install an event filter to "capture" mouse events (amongst others)
        self.resim1.installEventFilter(self)

    def resimac(self):
        filename, filter = QtWidgets.QFileDialog.getOpenFileName(None, 'Resim Yükle', '.', 'Image Files (*.png *.jpg *.jpeg *.bmp *.tif)')
        if not filename:
            return
        self.resim1.setPixmap(QtGui.QPixmap(filename))

    def eventFilter(self, source, event):
        # if the source is our QLabel, it has a valid pixmap, and the event is
        # a left click, proceed in trying to get the event position
        if (source == self.resim1 and source.pixmap() and not source.pixmap().isNull() and
                event.type() == QtCore.QEvent.MouseButtonPress and
                event.button() == QtCore.Qt.LeftButton):
            self.getClickedPosition(event.pos())
        return super().eventFilter(source, event)

    def getClickedPosition(self, pos):
        # consider the widget contents margins
        contentsRect = QtCore.QRectF(self.resim1.contentsRect())
        if pos not in contentsRect:
            # outside widget margins, ignore!
            return
        # adjust the position to the contents margins
        pos -= contentsRect.topLeft()
        pixmapRect = self.resim1.pixmap().rect()
        if self.resim1.hasScaledContents():
            x = pos.x() * pixmapRect.width() / contentsRect.width()
            y = pos.y() * pixmapRect.height() / contentsRect.height()
            pos = QtCore.QPoint(x, y)
        else:
            align = self.resim1.alignment()
            # for historical reasons, QRect (which is based on integer values)
            # returns right() as (left+width-1) and bottom() as (top+height-1),
            # and their opposite functions set/moveRight and set/moveBottom
            # take that into consideration; using a QRectF prevents that; see:
            # https://doc.qt.io/qt-5/qrect.html#right
            # https://doc.qt.io/qt-5/qrect.html#bottom
            pixmapRect = QtCore.QRectF(pixmapRect)
            # the pixmap is not left aligned, align it correctly
            if align & QtCore.Qt.AlignRight:
                pixmapRect.moveRight(contentsRect.x() + contentsRect.width())
            elif align & QtCore.Qt.AlignHCenter:
                pixmapRect.moveLeft(contentsRect.center().x() - pixmapRect.width() / 2)
            # the pixmap is not top aligned (note that the default for QLabel is
            # Qt.AlignVCenter, the vertical center)
            if align & QtCore.Qt.AlignBottom:
                pixmapRect.moveBottom(contentsRect.y() + contentsRect.height())
            elif align & QtCore.Qt.AlignVCenter:
                pixmapRect.moveTop(contentsRect.center().y() - pixmapRect.height() / 2)
            if pos not in pixmapRect:
                # outside image margins, ignore!
                return
            # translate coordinates to the image position and convert it back to
            # a QPoint, which is integer based
            pos = (pos - pixmapRect.topLeft()).toPoint()
        print('X={}, Y={}'.format(pos.x(), pos.y()))

if __name__ == '__main__':
    import sys
    app = QtWidgets.QApplication(sys.argv)
    w = Window()
    w.show()
    sys.exit(app.exec_())
Now. A couple of suggestions.
Don't overwrite an existing child object's methods with another object's instance attributes
There are various reasons for which this is not a good idea, and, while dealing with Qt, the most important of them is that Qt uses function caching for virtual functions; this means that as soon as a virtual is called the first time, that function will always be called in the future. While your approach could work in simple cases (especially if the overwriting happens within the parent's __init__), it's usually prone to unexpected behavior that's difficult to debug if you're not very careful.
And that's exactly your case: I suppose that resimac is not called when the parent is instantiated, but only after some other event happens (possibly a button click). But if the user, for some reason, clicks on the label before a new pixmap is loaded, your supposedly overwritten method will never get called: at that point you haven't overwritten it yet, so when the user clicks the label Qt calls QLabel's base class mousePressEvent implementation, and that method will keep being called from then on, no matter whether you try to overwrite it later.
To work around that, you have at least 3 options:
use an event filter (as in the example above); an event filter is something that "captures" the events of a widget and lets you observe (and interact with) them; you can also decide whether to propagate an event to the widget's parent or not (that's mostly the case for key/mouse events: if a widget isn't "interested" in one of those events, it "tells" its parent to take care of it); this is the simplest method, but it can become hard to implement and debug in complex cases;
subclass the widget and manually add it to your GUI within your code (see the sketch after this list);
subclass it and "promote" the widget if you're using Qt's Designer;
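A minimal sketch of the subclassing option, assuming the questioner's resim1 label is replaced by this class (the ClickableLabel name and its clicked signal are illustrative, not part of Qt):

from PyQt5 import QtCore, QtWidgets

class ClickableLabel(QtWidgets.QLabel):
    # emitted with the click position in widget coordinates
    clicked = QtCore.pyqtSignal(QtCore.QPoint)

    def mousePressEvent(self, event):
        if event.button() == QtCore.Qt.LeftButton:
            self.clicked.emit(event.pos())
        super().mousePressEvent(event)

Using self.resim1 = ClickableLabel() and self.resim1.clicked.connect(self.getClickedPosition) would then replace the event filter in the example above.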
You don't need to use a QImage for a QLabel.
This is not really an issue, just a suggestion: QPixmap already does (sort of) what fromImage does in its C++ code when it is constructed with a file path as an argument, so there's no need for the intermediate QImage.
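For instance, the three QImage-related lines in the question's resimac could be collapsed into one (keeping the question's own names):

# instead of QImage(filename[0]) -> QPixmap.fromImage(...) -> setPixmap(...):
self.resim1.setPixmap(QtGui.QPixmap(filename[0]))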
Always, always provide usable, Minimal Reproducible Example code.
See:
https://stackoverflow.com/help/how-to-ask
https://stackoverflow.com/help/minimal-reproducible-example
It can take time, even hours, to put together an MRE, but it's worth it: there will always be somebody who could answer you but doesn't want to, or can't, dig into your code for various reasons (mostly because it's incomplete, vague, unusable, lacking context, or even too sprawling). If, for whatever reason, there's just that one user, you'd be losing your chance to solve your problem. Be patient, prepare your questions carefully, and you'll probably get plenty of interaction and useful insight from them.

Drawing on a bitmap with wxpython (cross-platform)

I'm aware that there are literally hundreds of examples out there for this task, but I haven't managed to apply those examples to my specific problem. As you can see in the code below, I am trying to draw a polygon on a bitmap, "self.image". This code works absolutely fine on MS Windows; on Linux it will not draw my polygon.
I tried playing around with different DCs, like MemoryDC, following this: How to draw text in a bitmap using wxpython? But the result was the same.
My questions are:
Why does my code fail on Linux? Why does it work on MS Windows? And (a bit off-topic) why do people often draw exclusively on the PaintDC in the OnPaint method bound to EVT_PAINT?
class attributes:
self.dc = wx.ClientDC(self.image)
self.dc.SetPen(wx.Pen(colour='red', width=4, style=wx.SOLID))
self.polygon = list()
this method is being called when I want to start drawing:
def start_drawing(self):
    self.image.Bind(event=wx.EVT_LEFT_DOWN, handler=self.draw_polygon)
    self.dc.BeginDrawing()
this method handles the binding from above:
def draw_polygon(self, event):
    self.polygon.append(event.GetPositionTuple())
    if len(self.polygon) > 1:
        self.dc.DrawLines(points=self.polygon)
I fixed the bug myself.
The problem was actually not visible in the code I provided.
I have a method that changes self.image before any drawing is made. Thus the image that I assign to the ClientDC in the class body is not the same image as self.image when I begin to draw. Adding self.dc = wx.ClientDC(self.image) to the image setter solved my problem.
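A minimal sketch of what that setter might look like, assuming self.image is exposed as a property (the property itself is an assumption; the question only shows the attributes):

@property
def image(self):
    return self._image

@image.setter
def image(self, new_image):
    self._image = new_image
    # re-create the ClientDC so it targets the widget that is actually shown
    self.dc = wx.ClientDC(self._image)
    self.dc.SetPen(wx.Pen(colour='red', width=4, style=wx.SOLID))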
I don't understand why this code worked when executed on MS Windows; it should never have worked in the first place.

How to show a png image in Gtk3 with Python?

First of all, it is important to mention that I'm learning Python and Gtk+ 3, so I'm not an advanced programmer in these languages.
I'm trying to make a graphical interface in Gtk3 for a Python script that creates a png image, and I'd like to display it, but the PyGobject documentation is so scarce that I haven't found a way to do that. So far, my interface looks like this:
The buttons and text entries are arranged in a grid, and I'd like to keep empty the big space (represented by the big button) to the right until the script finishes building the image, and then show it in that area. The code is here.
Is there a way to do that using Python in Gtk3?
Thanks in advance,
Germán.
EDIT
Taking a look at the demos pointed out by @gpoo, I discovered the Frame widget and implemented it in my GUI. This is what it looks like:
Inside the window class, I add the Frame to the grid:
self.frame_rgb = Gtk.Frame(label='RGB image')
self.frame_rgb.set_label_align(0.5, 0.5)
self.frame_rgb.set_shadow_type(Gtk.ShadowType.IN)
self.grid.attach_next_to(self.frame_rgb, self.label_img_name,
                         Gtk.PositionType.RIGHT, 3, 8)
I also connect the Run button to a callback function, so that when I click on it, my script creates and then displays the png image:
self.button_run = Gtk.Button(stock=Gtk.STOCK_EXECUTE)
self.button_run.connect('clicked', self.on_button_run_clicked)
self.grid.attach_next_to(self.button_run, self.entry_b_img,
                         Gtk.PositionType.BOTTOM, 1, 1)
Finally, my callback function is as follows (no calculations yet, it only renders the image into the Frame for testing purposes):
def on_button_run_clicked(self, widget):
    self.img = Gtk.Image.new_from_file('astro-tux.png')
    self.frame_rgb.add(self.img)
but I got the following error when I click the Run button:
(makeRGB.py:2613): Gtk-WARNING **: Attempting to add a widget with
type GtkImage to a GtkFrame, but as a GtkBin subclass a GtkFrame can
only contain one widget at a time; it already contains a widget of
type GtkImage
Any help is appreciated!
You can use Gtk.Image. If you generate a file, you could use:
img = Gtk.Image.new_from_file('/path/to/my_file.png')
and add img to the container (GtkGrid in your case). Or, if you already have the Gtk.Image there, you can use:
img.set_from_file('/path/to/my_file.png')
Instead of ...from_file you can use ...from_pixbuf, and you can create a GdkPixbuf.Pixbuf from a stream.
In general, you can use the documentation for C and change the idiom to Python. Also, you can check the demos available in PyGObject, in particular, the demo for handling images.
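A minimal sketch of how the edit's callback could avoid the "GtkFrame can only contain one widget" warning, using the names from the edit: create the Gtk.Image once, add it to the frame, and then only update its file on each click.

# in __init__, right after creating self.frame_rgb:
#     self.img = Gtk.Image()        # empty image, added to the frame only once
#     self.frame_rgb.add(self.img)

def on_button_run_clicked(self, widget):
    # ... run the script that generates the png, then:
    self.img.set_from_file('astro-tux.png')  # update the existing Gtk.Image
    self.img.show()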

Drawing window border in Python xlib

I'm working on a window manager written using python's xlib bindings and I'm (initially) attempting to mimic dwm's behavior in a more pythonic way. I've gotten much of what I need, but I'm having trouble using X's built in window border functionality to indicate window focus.
Assuming I've got an instance of Xlib's window class and that I'm reading the documentation correctly, this should do what I want (at least for now): set the window border of a preexisting window to a garish color and set the border width to 2px.
def set_active_border(self, window):
    border_color = self.colormap.alloc_named_color(
        "#ff00ff").pixel
    window.change_attributes(None, border_pixel=border_color,
                             border_width=2)
    self.dpy.sync()
However, I get nothing from this. I can add print statements to prove that my program is indeed running the callback function associated with the event, but I get absolutely no color change on the border. Can anyone identify what exactly I'm missing here? I can pastebin a more complete example if it will help, though I'm not sure it will, as this is the only bit that handles the border.
Looks like this was complete PEBKAC. I've found an answer. Basically, I was doing this:
def set_active_border(self, window):
    border_color = self.colormap.alloc_named_color(
        "#ff00ff"
    ).pixel
    window.configure(border_width=2)
    window.change_attributes(
        None,
        border_pixel=border_color,
        border_width=2)
    self.dpy.sync()
Apparently this was confusing X enough that it did nothing. The solution I stumbled upon was to remove the border_width argument from the window.change_attributes() call, like so:
def set_active_border(self, window):
    border_color = self.colormap.alloc_named_color(
        "#ff00ff"
    ).pixel
    window.configure(border_width=2)
    window.change_attributes(
        None,
        border_pixel=border_color
    )
    self.dpy.sync()
I hope this helps someone later on down the road!
