I have an application (actually a plugin for another application) that presents a GTK notebook. Each tab contains a technical drawing of an operation, with a set of SpinButtons that allow you to alter the dimensions of the operation.
If you need more context, it's here: https://forum.linuxcnc.org/41-guis/26550-lathe-macros?start=150#82743
As can be seen above, this all worked fine in GTK2. The widgets (first iteration in a GTK_Fixed, then moved to a GTK_Table) were pre-positioned and the image (a particular layer of a single SVG) was plonked in behind.
Then we updated to GTK3 (and Python 3) and it stopped working. The SVG image now appears on top of the input widgets, and they can no longer be seen or operated.
I am perfectly happy to change the top level container[1], if that will help. But the code that used to work (and now doesn't) is:
def on_expose(self, nb, data=None):
    tab_num = nb.get_current_page()
    tab = nb.get_nth_page(tab_num)
    cr = tab.get_property('window').cairo_create()
    cr.set_operator(cairo.OPERATOR_OVER)
    alloc = tab.get_allocation()
    x, y, w, h = (alloc.x, alloc.y, alloc.width, alloc.height)
    sw = self.svg.get_dimensions().width
    sh = self.svg.get_dimensions().height
    cr.translate(0, y)
    cr.scale(1.0 * w / sw, 1.0 * h / sh)
    # TODO: gtk3 drawing works, but svg is drawn over the UI elements
    self.svg.render_cairo_sub(cr=cr, id='#layer%i' % tab_num)
[1] In fact I will probably go back to GTK_Fixed and move the elements about in the handler when the window resizes, scaled according to the original position. The GTK_Table (deprecated) version takes over 2 minutes to open in the Glade editor.
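For reference, a minimal sketch of that repositioning idea, assuming each child's original coordinates were recorded against a known design size (design_w, design_h and original_positions are hypothetical names, not part of the code above):

def on_size_allocate(self, fixed, allocation):
    # scale each child's design-time position to the new allocation
    sx = allocation.width / self.design_w
    sy = allocation.height / self.design_h
    for child, (ox, oy) in self.original_positions.items():
        fixed.move(child, int(ox * sx), int(oy * sy))

# connected with: fixed.connect('size-allocate', self.on_size_allocate)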
Unless there is a more elegant way to do this too?
I display images with a QLabel. I need the image/pixel coordinates, but when I use a mouse click event it only gives me the QLabel coordinates.
For example, my image is 800x753 and my QLabel geometry is (701, 451). The coordinates I read are within (701, 451), but I need the image coordinates within 800x753.
def resimac(self):
    filename = QtWidgets.QFileDialog.getOpenFileName(None, 'Resim Yükle', '.', 'Image Files (*.png *.jpg *.jpeg *.bmp *.tif)')
    self.image = QtGui.QImage(filename[0])
    self.pixmap = QtGui.QPixmap.fromImage(self.image)
    self.resim1.setPixmap(self.pixmap)
    self.resim1.mousePressEvent = self.getPixel

def getPixel(self, event):
    x = event.pos().x()
    y = event.pos().y()
    print("X=", x, " y= ", y)
Since you didn't provide a minimal, reproducible example, I'm going to assume that you're probably setting the scaledContents property, but that might not be the case (for instance, if you set a maximum or fixed size for the label).
There are some other serious issues with your question; I'll address them at the end of this answer.
The point has to be mapped to the pixmap coordinates
When setting a pixmap to a QLabel, Qt automatically resizes the label to its contents.
Well, it does so unless the label has some size constraints: a maximum/fixed size that is smaller than the pixmap, and/or the scaledContents property set to True, as written above. Note that this also happens if any of its ancestors has size constraints (for example, the main window has a maximum size, or it's maximized on a screen smaller than the space the window needs).
In any of those cases, the mousePressEvent will obviously give you the coordinates based on the widget, not on the pixmap.
First of all, even if it doesn't seem that important, you have to consider that every widget can have contents margins: the widget still receives events that happen inside the margin area, even though that area is outside its actual contents, so you need to ensure that the event happens within the real geometry of the widget's contents (in this case, the pixmap). If it does, you then have to translate the event position into that rectangle to get its position relative to the pixmap.
Then, if the scaledContents property is true, the image will be scaled to the current available size of the label (which also means that its aspect ratio will not be maintained), so you'll need to scale the position.
This is just a matter of math: compute the proportion between the image size and the (contents of the) label, then multiply the value using that proportion.
# click on the horizontal center of the widget
mouseX = 100
pixmapWidth = 400
widgetWidth = 200
xRatio = pixmapWidth / widgetWidth
# xRatio = 2.0
pixmapX = mouseX * xRatio
# the resulting "x" is the horizontal center of the pixmap
# pixmapX = 200
On the other hand, if the contents are not scaled, you'll have to consider the QLabel's alignment property; the pixmap is usually aligned to the left and vertically centered, but that depends on the OS, the style currently in use and the localization (consider right-to-left languages). This means that if the image is smaller than the available space there will be some empty space around it, and you'll have to account for that.
In the following example I try to take care of all of that (to be honest, I'm not 100% sure, as there might be some 1-pixel tolerance due to various reasons, mostly related to integer-based coordinates and DPI awareness).
Note that instead of overwriting mousePressEvent as you did, I'm using an event filter; I'll explain the reason for that afterwards.
from PyQt5 import QtCore, QtGui, QtWidgets


class Window(QtWidgets.QWidget):
    def __init__(self):
        QtWidgets.QWidget.__init__(self)
        layout = QtWidgets.QGridLayout(self)
        self.getImageButton = QtWidgets.QPushButton('Select')
        layout.addWidget(self.getImageButton)
        self.getImageButton.clicked.connect(self.resimac)
        self.resim1 = QtWidgets.QLabel()
        layout.addWidget(self.resim1)
        self.resim1.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignVCenter)
        # I'm assuming the following...
        self.resim1.setScaledContents(True)
        self.resim1.setFixedSize(701, 451)
        # install an event filter to "capture" mouse events (amongst others)
        self.resim1.installEventFilter(self)

    def resimac(self):
        filename, filter = QtWidgets.QFileDialog.getOpenFileName(None, 'Resim Yükle', '.', 'Image Files (*.png *.jpg *.jpeg *.bmp *.tif)')
        if not filename:
            return
        self.resim1.setPixmap(QtGui.QPixmap(filename))

    def eventFilter(self, source, event):
        # if the source is our QLabel, it has a valid pixmap, and the event is
        # a left click, proceed in trying to get the event position
        if (source == self.resim1 and source.pixmap() and not source.pixmap().isNull() and
                event.type() == QtCore.QEvent.MouseButtonPress and
                event.button() == QtCore.Qt.LeftButton):
            self.getClickedPosition(event.pos())
        return super().eventFilter(source, event)

    def getClickedPosition(self, pos):
        # consider the widget contents margins
        contentsRect = QtCore.QRectF(self.resim1.contentsRect())
        if pos not in contentsRect:
            # outside widget margins, ignore!
            return
        # adjust the position to the contents margins
        pos -= contentsRect.topLeft()
        pixmapRect = self.resim1.pixmap().rect()
        if self.resim1.hasScaledContents():
            x = pos.x() * pixmapRect.width() / contentsRect.width()
            y = pos.y() * pixmapRect.height() / contentsRect.height()
            # QPoint is integer based, so truncate the scaled coordinates
            pos = QtCore.QPoint(int(x), int(y))
        else:
            align = self.resim1.alignment()
            # for historical reasons, QRect (which is based on integer values)
            # returns right() as (left+width-1) and bottom() as (top+height-1),
            # and so their opposite functions set/moveRight and set/moveBottom
            # take that into consideration; using a QRectF can prevent that; see:
            # https://doc.qt.io/qt-5/qrect.html#right
            # https://doc.qt.io/qt-5/qrect.html#bottom
            pixmapRect = QtCore.QRectF(pixmapRect)
            # the pixmap is not left aligned, align it correctly
            if align & QtCore.Qt.AlignRight:
                pixmapRect.moveRight(contentsRect.x() + contentsRect.width())
            elif align & QtCore.Qt.AlignHCenter:
                pixmapRect.moveLeft(contentsRect.center().x() - pixmapRect.width() / 2)
            # the pixmap is not top aligned (note that the default for QLabel is
            # Qt.AlignVCenter, the vertical center)
            if align & QtCore.Qt.AlignBottom:
                pixmapRect.moveBottom(contentsRect.y() + contentsRect.height())
            elif align & QtCore.Qt.AlignVCenter:
                pixmapRect.moveTop(contentsRect.center().y() - pixmapRect.height() / 2)
            if pos not in pixmapRect:
                # outside image margins, ignore!
                return
            # translate coordinates to the image position and convert it back to
            # a QPoint, which is integer based
            pos = (pos - pixmapRect.topLeft()).toPoint()
        print('X={}, Y={}'.format(pos.x(), pos.y()))


if __name__ == '__main__':
    import sys
    app = QtWidgets.QApplication(sys.argv)
    w = Window()
    w.show()
    sys.exit(app.exec_())
Now. A couple of suggestions.
Don't overwrite an existing child object's methods with another object's instance attributes
There are various reasons why this is not a good idea and, when dealing with Qt, the most important of them is that Qt caches virtual function resolution: as soon as a virtual method is called the first time, the same implementation will keep being called from then on. While your approach could work in simple cases (especially if the overwriting happens within the parent's __init__), it's usually prone to unexpected behavior that's difficult to debug if you're not very careful.
And that's exactly your case: I suppose that resimac is not called when the parent is instantiated, but only after some other event (possibly a button click) happens. But if the user, for some reason, clicks on the label before a new pixmap is loaded, your supposedly overwritten method will never get called: at that point you haven't overwritten it yet, so when the user clicks the label Qt calls QLabel's base mousePressEvent implementation, and that implementation will keep being called from then on, no matter whether you try to overwrite it later.
To work around that, you have at least three options:
use an event filter (as in the example above): an event filter "captures" the events of a widget and allows you to observe (and interact with) them; you can also decide whether or not to propagate an event to the widget's parent (that's mostly the case with key/mouse events: if a widget isn't "interested" in one of those events, it "tells" its parent to take care of it); this is the simplest method, but it can become hard to implement and debug in complex cases;
subclass the widget and manually add it to your GUI in your code (see the sketch after this list);
subclass it and "promote" the widget if you're using Qt Designer.
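A minimal sketch of the second option, assuming PyQt5 (ClickableLabel and its clicked signal are just illustrative names):

class ClickableLabel(QtWidgets.QLabel):
    # emitted with the click position, in label coordinates
    clicked = QtCore.pyqtSignal(QtCore.QPoint)

    def mousePressEvent(self, event):
        if event.button() == QtCore.Qt.LeftButton:
            self.clicked.emit(event.pos())
        super().mousePressEvent(event)

# usage, inside Window.__init__:
#     self.resim1 = ClickableLabel()
#     self.resim1.clicked.connect(self.getClickedPosition)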
You don't need to use a QImage for a QLabel.
This is not really an issue, just a suggestion: QPixmap already does (more or less) a fromImage internally in its C++ code when it is constructed with a file path, so there's no need for the intermediate QImage.
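In other words, the three lines in the question's resimac can become a single call:

self.resim1.setPixmap(QtGui.QPixmap(filename[0]))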
Always, always provide usable, Minimal Reproducible Example code.
See:
https://stackoverflow.com/help/how-to-ask
https://stackoverflow.com/help/minimal-reproducible-example
It can take time, even hours, to prepare an MRE, but it's worth it: there will always be somebody who could answer you but doesn't want to, or can't, dig into your code for various reasons (mostly because it's incomplete, vague, unusable, lacking context, or even too broad). If, for whatever reason, that one user is the only one around, you'd be losing your chance to solve your problem. Be patient, prepare your questions carefully, and you'll probably get plenty of interaction and useful insight from them.
I'm aware that there are literally hundreds of examples out there for this task, but I haven't managed to apply them to my specific problem. As you can see in the code below, I am trying to draw a polygon on a bitmap, "self.image". This code works absolutely fine on MS Windows; on Linux it will not draw my polygon.
I tried playing around with different "DCs", like MemoryDC, following this: How to draw text in a bitmap using wxpython?
But the result was the same.
My questions are:
Why does my code fail on Linux?
Why does it work on MS Windows?
(A bit off-topic) Why do people often draw exclusively in the PaintDC, in the OnPaint method bound to EVT_PAINT?
class attributes:
self.dc = wx.ClientDC(self.image)
self.dc.SetPen(wx.Pen(colour='red', width=4, style=wx.SOLID))
self.polygon = list()
this method is being called when I want to start drawing:
def start_drawing(self):
    self.image.Bind(event=wx.EVT_LEFT_DOWN, handler=self.draw_polygon)
    self.dc.BeginDrawing()
this method handles the binding from above:
def draw_polygon(self, event):
    self.polygon.append(event.GetPositionTuple())
    if len(self.polygon) > 1:
        self.dc.DrawLines(points=self.polygon)
I fixed the bug myself.
The problem was actually not visible in the code I provided.
I have a method that changes self.image before any drawing is done. Thus the image that I assigned to the ClientDC in the class body is not the same image as self.image when I begin to draw. Adding self.dc = wx.ClientDC(self.image) in the image setter solved my problem.
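For reference, a rough sketch of what that setter change might look like (assuming self.image is exposed as a property; the pen setup mirrors the original class attributes):

@property
def image(self):
    return self._image

@image.setter
def image(self, widget):
    self._image = widget
    # recreate the ClientDC so it always targets the widget that is
    # actually being displayed, not a stale one
    self.dc = wx.ClientDC(self._image)
    self.dc.SetPen(wx.Pen(colour='red', width=4, style=wx.SOLID))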
I don't understand why this code worked when executed on MS Windows; it should never have worked in the first place.
First of all, it is important to mention that I'm learning Python and Gtk+ 3, so I'm not an advanced programmer in these languages.
I'm trying to make a graphical interface in Gtk3 for a Python script that creates a png image, and I'd like to display it, but the PyGobject documentation is so scarce that I haven't found a way to do that. So far, my interface looks like this:
The buttons and text entries are arranged in a grid, and I'd like to keep empty the big space (represented by the big button) to the right until the script finishes building the image, and then show it in that area. The code is here.
Is there a way to do that using Python in Gtk3?
Thanks in advance,
Germán.
EDIT
Taking a look at the demos pointed out by @gpoo, I discovered the Frame widget, and I implemented it in my GUI. This is what it looks like:
Inside the window class, I add the Frame to the grid:
self.frame_rgb = Gtk.Frame(label='RGB image')
self.frame_rgb.set_label_align(0.5, 0.5)
self.frame_rgb.set_shadow_type(Gtk.ShadowType.IN)
self.grid.attach_next_to(self.frame_rgb, self.label_img_name,
                         Gtk.PositionType.RIGHT, 3, 8)
I also connect the Run button to a callback function, so that when I click on it, my script creates and then displays the png image:
self.button_run = Gtk.Button(stock=Gtk.STOCK_EXECUTE)
self.button_run.connect('clicked', self.on_button_run_clicked)
self.grid.attach_next_to(self.button_run, self.entry_b_img,
                         Gtk.PositionType.BOTTOM, 1, 1)
Finally, my callback function is (no calculations yet, only render the image to the Frame for testing purposes):
def on_button_run_clicked(self, widget):
    self.img = Gtk.Image.new_from_file('astro-tux.png')
    self.frame_rgb.add(self.img)
but I get the following warning when I click the Run button:
(makeRGB.py:2613): Gtk-WARNING **: Attempting to add a widget with
type GtkImage to a GtkFrame, but as a GtkBin subclass a GtkFrame can
only contain one widget at a time; it already contains a widget of
type GtkImage
Any help is appreciated!
You can use Gtk.Image. If you generate a file, you could use:
img = Gtk.Image.new_from_file('/path/to/my_file.png')
and add img to the container (GtkGrid in your case). Or, if you already have the Gtk.Image there, you can use:
img.set_from_file('/path/to/my_file.png')
Instead of ...from_file you can use ...from_pixbuf, and you can create a GdkPixbuf.Pixbuf from a stream.
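For instance, if the script produced the PNG data in memory rather than in a file, something like this could work (a sketch only; png_bytes is a hypothetical bytes object holding the PNG data):

from gi.repository import Gio, GdkPixbuf

stream = Gio.MemoryInputStream.new_from_data(png_bytes, None)
pixbuf = GdkPixbuf.Pixbuf.new_from_stream(stream, None)
img.set_from_pixbuf(pixbuf)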
In general, you can use the documentation for C and change the idiom to Python. Also, you can check the demos available in PyGObject, in particular, the demo for handling images.
I'm working on a window manager written using Python's Xlib bindings, and I'm (initially) attempting to mimic dwm's behavior in a more Pythonic way. I've gotten much of what I need, but I'm having trouble using X's built-in window border functionality to indicate window focus.
Assuming I've got an instance of Xlib's window class and that I'm reading the documentation correctly, this should do what I want to do (at least for now) - set the window border of a preexisting window to a garish color and set the border width to 2px.
def set_active_border(self, window):
    border_color = self.colormap.alloc_named_color(
        "#ff00ff").pixel
    window.change_attributes(None, border_pixel=border_color,
                             border_width=2)
    self.dpy.sync()
However, I get nothing from this - I can add print statements to prove that my program is indeed running the callback function that I associated with the event, but I get absolutely no color change on the border. Can anyone identify what exactly I'm missing here? I can pastebin a more complete example, if it will help. I'm not exactly sure it will though as this is the only bit that handles the border.
Looks like this was complete PEBKAC. I've found an answer. Basically, I was doing this:
def set_active_border(self, window):
    border_color = self.colormap.alloc_named_color(
        "#ff00ff"
    ).pixel
    window.configure(border_width=2)
    window.change_attributes(
        None,
        border_pixel=border_color,
        border_width=2)
    self.dpy.sync()
Apparently this was confusing X enough that it was doing nothing. The solution that I've stumbled upon was to remove the border_width portion from the window.change_attributes() call, like so:
def set_active_border(self, window):
    border_color = self.colormap.alloc_named_color(
        "#ff00ff"
    ).pixel
    window.configure(border_width=2)
    window.change_attributes(
        None,
        border_pixel=border_color
    )
    self.dpy.sync()
I hope this helps someone later on down the road!
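For anyone wiring this up later, here is a rough sketch of how such a handler might be driven from the event loop, dwm-style (assuming self.dpy is the Xlib Display and the managed windows have EnterWindowMask selected so EnterNotify events are reported; this is not part of the answer above):

from Xlib import X

def event_loop(self):
    # recolour the border of whichever window the pointer enters
    # (focus-follows-mouse, as in dwm)
    while True:
        event = self.dpy.next_event()
        if event.type == X.EnterNotify:
            self.set_active_border(event.window)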