I am writing a class inheriting from QGraphicsItemGroup whose main child is a QGraphicsPathItem. The whole thing is used to draw polylines. I wanted to have a proper hover system for the lines, so I reimplemented the shape method of the object as follows:
def shape(self):
    # note: QPainterPathStroker lives in QtGui, not QtWidgets
    stroker = QtGui.QPainterPathStroker()
    stroker.setWidth(10 * self.resolution)  # resolution handles zoom and stuff
    path = stroker.createStroke(self.__path.path()).simplified()
    return path
In the above snippet, self.__path is the QGraphicsPathItem I mentioned earlier.
To make things simple, here are a few pictures. The line I drew, that I see on the screen:
The hover area I want:
The hover area I currently have with the reimplemented shape method shown above:
As you guessed, such a selection area is hardly useful for any purpose. Worst of all, I tried to use the exact same method to generate the outlines of the line, then used toFillPolygon to generate a polygon that I rendered in the same object by adding a QGraphicsPolygonItem child to my object: the shape that appears on my screen is exactly what I want, but when I use the same path to create the hover area via shape, it gives me the useless hover area (image 3) instead.
So, do you know why the path obtained with the QPainterPathStroker lets me display a polygon that seems to correspond exactly to the hover area I want, but when I use that same path in shape, the resulting hover area is wonky? If so, do you know how to fix this problem?
I want to be able to click anywhere on the graph and a vertical line would appear along the x-value that was clicked. I wish to be able to display the values of each of the graph's curves at that x-value. I've made a diagram that shows what I mean.
I've been Googling how to do this, but haven't really found anything relevant to what I'm looking for. The only thing I can think to do is to store the coordinates where the user clicked their mouse and pull the y-values at that x-value. I would then manually draw the line over the graph. I think this would work, but it's a lot messier than using a built-in function, if such a function exists. Does Matplotlib have this capability?
I could not find any built-in method for achieving this functionality. Fortunately, it was not that difficult to do manually.
On your FigureCanvasTkAgg object you can call:
figurecanvas.callbacks.connect('button_press_event', onclick)
You will also want to keep a reference to the line you will move around (storing the line object itself is more robust than storing an index into the axes' line list):
self.clicked_line = None
Doing this defines a behavior to occur when the canvas is clicked. The onclick method must take exactly one parameter, the click event.
def onclick(event):
    # event.xdata/ydata are None when the click is outside the axes
    if event.xdata is not None and event.ydata is not None:
        # clicked_line is initially None, so we need to check if there is a
        # previous line to delete
        if self.clicked_line is not None:
            self.clicked_line.remove()  # this removes the previous line
        self.clicked_line = self.ax0.axvline(x=event.xdata)  # draw a new line where clicked
        self.ax0.figure.canvas.draw()  # update the canvas
In the onclick method, you can also include functionality that outputs the value of a line at the point clicked. You can access the x-value of the location clicked with event.xdata. This can be used in conjunction with the data you are graphing to output the y-value at the point clicked.
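For instance, the y-value of a plotted curve at the clicked x-position can be obtained by interpolating the curve's data. A minimal sketch using numpy (the arrays and the y_at helper are illustrative stand-ins, not part of the original code):

```python
import numpy as np

# sample curve data (stand-ins for the data you are graphing)
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([0.0, 10.0, 20.0, 30.0])

def y_at(x_clicked):
    """Linearly interpolate the curve's y-value at the clicked x."""
    return float(np.interp(x_clicked, xs, ys))

print(y_at(1.5))  # 15.0
```

Inside onclick you would call this with event.xdata and, for example, annotate the result on the axes.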
I display images with a QLabel. I need image/pixel coordinates, but when I use a mouse click event it only shows me QLabel coordinates.
For example, my image is 800x753 and my QLabel geometry is (701, 451). I read coordinates within (701, 451), but I need image coordinates within (800, 753).
def resimac(self):
    filename = QtWidgets.QFileDialog.getOpenFileName(None, 'Resim Yükle', '.', 'Image Files (*.png *.jpg *.jpeg *.bmp *.tif)')
    self.image = QtGui.QImage(filename[0])
    self.pixmap = QtGui.QPixmap.fromImage(self.image)
    self.resim1.setPixmap(self.pixmap)
    self.resim1.mousePressEvent = self.getPixel

def getPixel(self, event):
    x = event.pos().x()
    y = event.pos().y()
    print("X=", x, " y= ", y)
Since you didn't provide a minimal, reproducible example, I'm going to assume that you're probably setting the scaledContents property, but that might not be the case (for instance, if you set a maximum or fixed size for the label).
There are some other serious issues with your code; I'll address them at the end of this answer.
The point has to be mapped to the pixmap coordinates
When setting a pixmap to a QLabel, Qt automatically resizes the label to its contents.
Well, it does unless the label has some size constraints: a maximum/fixed size that is smaller than the pixmap, and/or the scaledContents property set to True, as written above. Note that this also happens if any of its ancestors has size constraints (for example, the main window has a maximum size, or it's maximized on a screen smaller than the space the window needs).
In any of those cases, the mousePressEvent will obviously give you the coordinates based on the widget, not on the pixmap.
First of all, even if it doesn't seem that important, you'll have to consider that every widget can have contents margins: the widget still receives events that happen inside the margin area, even if they are outside its actual contents. You'll have to account for that and ensure the event happens within the real geometry of the widget's contents (in this case, the pixmap). If it does, you'll have to translate the event position to that rectangle to get its position relative to the pixmap.
Then, if the scaledContents property is true, the image will be scaled to the current available size of the label (which also means that its aspect ratio will not be maintained), so you'll need to scale the position.
This is just a matter of math: compute the proportion between the image size and the (contents of the) label, then multiply the value using that proportion.
# click on the horizontal center of the widget
mouseX = 100
pixmapWidth = 400
widgetWidth = 200
xRatio = pixmapWidth / widgetWidth
# xRatio = 2.0
pixmapX = mouseX * xRatio
# the resulting "x" is the horizontal center of the pixmap
# pixmapX = 200
On the other hand, if the contents are not scaled you'll have to consider the QLabel alignment property; it is usually aligned on the left and vertically centered, but that depends on the OS, the style currently in use and the localization (consider right-to-left writing languages). This means that if the image is smaller than the available size, there will be some empty space within its margins, and you'll have to be aware of that.
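For the unscaled case, the empty space around the pixmap follows from the two sizes. A minimal sketch for a vertically centered pixmap (the sizes are made up for illustration):

```python
def centered_offset(container, content):
    """Top offset of `content` when vertically centered inside `container`.

    The same formula gives the left offset for horizontal centering.
    """
    return (container - content) / 2

# label contents area 451 px tall, pixmap 300 px tall:
# the pixmap starts 75.5 px below the top of the contents area
print(centered_offset(451, 300))  # 75.5
```

Any click with a y-coordinate above that offset (or below offset + pixmap height) lands in the empty space, not on the image.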
In the following example I try to take care of all of that (to be honest, I'm not 100% sure, as there might be some 1-pixel tolerance due to various reasons, mostly regarding integer-based coordinates and DPI awareness).
Note that instead of overwriting mousePressEvent as you did, I'm using an event filter, I'll explain the reason for it afterwards.
from PyQt5 import QtCore, QtGui, QtWidgets

class Window(QtWidgets.QWidget):
    def __init__(self):
        QtWidgets.QWidget.__init__(self)
        layout = QtWidgets.QGridLayout(self)
        self.getImageButton = QtWidgets.QPushButton('Select')
        layout.addWidget(self.getImageButton)
        self.getImageButton.clicked.connect(self.resimac)
        self.resim1 = QtWidgets.QLabel()
        layout.addWidget(self.resim1)
        self.resim1.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignVCenter)
        # I'm assuming the following...
        self.resim1.setScaledContents(True)
        self.resim1.setFixedSize(701, 451)
        # install an event filter to "capture" mouse events (amongst others)
        self.resim1.installEventFilter(self)

    def resimac(self):
        filename, filter = QtWidgets.QFileDialog.getOpenFileName(None, 'Resim Yükle', '.', 'Image Files (*.png *.jpg *.jpeg *.bmp *.tif)')
        if not filename:
            return
        self.resim1.setPixmap(QtGui.QPixmap(filename))

    def eventFilter(self, source, event):
        # if the source is our QLabel, it has a valid pixmap, and the event is
        # a left click, proceed in trying to get the event position
        if (source == self.resim1 and source.pixmap() and not source.pixmap().isNull() and
            event.type() == QtCore.QEvent.MouseButtonPress and
            event.button() == QtCore.Qt.LeftButton):
                self.getClickedPosition(event.pos())
        return super().eventFilter(source, event)

    def getClickedPosition(self, pos):
        # consider the widget contents margins
        contentsRect = QtCore.QRectF(self.resim1.contentsRect())
        if pos not in contentsRect:
            # outside widget margins, ignore!
            return
        # adjust the position to the contents margins
        pos -= contentsRect.topLeft()
        pixmapRect = self.resim1.pixmap().rect()
        if self.resim1.hasScaledContents():
            x = pos.x() * pixmapRect.width() / contentsRect.width()
            y = pos.y() * pixmapRect.height() / contentsRect.height()
            pos = QtCore.QPoint(x, y)
        else:
            align = self.resim1.alignment()
            # for historical reasons, QRect (which is based on integer values)
            # returns right() as (left+width-1) and bottom() as (top+height-1),
            # and their opposite functions set/moveRight and set/moveBottom
            # take that into consideration; using a QRectF prevents that; see:
            # https://doc.qt.io/qt-5/qrect.html#right
            # https://doc.qt.io/qt-5/qrect.html#bottom
            pixmapRect = QtCore.QRectF(pixmapRect)
            # the pixmap is not left aligned, align it correctly
            if align & QtCore.Qt.AlignRight:
                pixmapRect.moveRight(contentsRect.x() + contentsRect.width())
            elif align & QtCore.Qt.AlignHCenter:
                pixmapRect.moveLeft(contentsRect.center().x() - pixmapRect.width() / 2)
            # the pixmap is not top aligned (note that the default for QLabel is
            # Qt.AlignVCenter, the vertical center)
            if align & QtCore.Qt.AlignBottom:
                pixmapRect.moveBottom(contentsRect.y() + contentsRect.height())
            elif align & QtCore.Qt.AlignVCenter:
                pixmapRect.moveTop(contentsRect.center().y() - pixmapRect.height() / 2)
            if pos not in pixmapRect:
                # outside image margins, ignore!
                return
            # translate coordinates to the image position and convert it back to
            # a QPoint, which is integer based
            pos = (pos - pixmapRect.topLeft()).toPoint()
        print('X={}, Y={}'.format(pos.x(), pos.y()))

if __name__ == '__main__':
    import sys
    app = QtWidgets.QApplication(sys.argv)
    w = Window()
    w.show()
    sys.exit(app.exec_())
Now. A couple of suggestions.
Don't overwrite existing child object methods with [other] object's instance attributes
There are various reasons why this is not a good idea and, while dealing with Qt, the most important of them is that Qt caches virtual functions: as soon as a virtual function is called the first time, that implementation will always be called in the future. While your approach could work in simple cases (especially if the overwriting happens within the parent's __init__), it's usually prone to unexpected behavior that is difficult to debug if you're not very careful.
And that's exactly your case: I suppose that resimac is not called upon parent instantiation, but only after some other event (possibly a button click). If the user, for some reason, clicks on the label before a new pixmap is loaded, your supposedly overwritten method will never get called: at that time you've not overwritten it yet, so when the user clicks the label, Qt calls QLabel's base mousePressEvent implementation, and that method will always be called from that point on, no matter if you try to overwrite it.
To work around that, you have at least 3 options:
use an event filter (as in the example above); an event filter is something that "captures" the events of a widget and allows you to observe (and interact with) them; you can also decide whether to propagate the event to the widget's parent or not (that's mostly the case for key/mouse events: if a widget isn't "interested" in one of those events, it "tells" its parent to take care of it); this is the simplest method, but it can become hard to implement and debug in complex cases;
subclass the widget and manually add it to your GUI within your code;
subclass it and "promote" the widget if you're using Qt's Designer;
You don't need to use a QImage for a QLabel.
This is not really an issue, just a suggestion: QPixmap already uses (sort of) fromImage in its C++ code when constructed with a path argument, so there's no need for it.
Always, always provide usable, Minimal Reproducible Example code.
See:
https://stackoverflow.com/help/how-to-ask
https://stackoverflow.com/help/minimal-reproducible-example
It could take time, even hours, to prepare an MRE, but it's worth it: there will always be somebody who could answer you but doesn't want to, or can't, dig into your code for various reasons (mostly because it's incomplete, vague, unusable, lacking context, or even too expansive). If, for any reason, there is just that one user, you'll lose your chance to solve your problem. Be patient, carefully prepare your questions, and you'll probably get plenty of interaction and useful insight from them.
I have one generic icon image, which has an alpha. Let's say a black sphere placed on a square button, with transparency.
Now I would like to change the color of the icon on the fly, without having several images like sphere_black.png, sphere_red.png, etc.
Is there a way to colorize the pixmap, respecting the alpha, changing the HSV of every pixel in the map?
I have something like this, but stuck:
img = QtGui.QImage(kwargs['icon_path'])
pxmap = QtGui.QPixmap(img)
for x in range(img.width()):
    for y in range(img.height()):
        print(img.pixel(x, y), '###')
        # ???? #
Any help is appreciated!
QGraphicsColorizeEffect might be what you are looking for. Sadly the QGraphicsEffect class is made to be used with the graphics view framework, it can't easily be applied to a QImage. However there are workarounds for that, as this discussion shows.
The implementation of the effect in QPixmapColorizeFilter::draw() shows how the colourization is done: A coloured rect (with the color having the alpha set to something else than fully opaque) is drawn over the image with QPainter::fillRect(), with an appropriate composition mode set.
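The compositing that fillRect() performs with the SourceIn mode can be sketched without Qt: the fill colour replaces every pixel's colour while the original alpha channel is kept, so the tint only shows where the icon was opaque. A minimal pure-Python sketch of that per-pixel operation (the function and data are illustrative, not Qt API):

```python
def tint_pixels(pixels, tint):
    """Replace each pixel's colour with `tint`, preserving the alpha channel.

    Mimics filling an image with a colour using the SourceIn composition
    mode: the tint is only visible where the source had alpha coverage.
    pixels: list of (r, g, b, a) tuples; tint: (r, g, b) tuple.
    """
    tr, tg, tb = tint
    return [(tr, tg, tb, a) for (_r, _g, _b, a) in pixels]

# one opaque "sphere" pixel and one fully transparent background pixel
icon = [(0, 0, 0, 255), (0, 0, 0, 0)]
print(tint_pixels(icon, (255, 0, 0)))  # [(255, 0, 0, 255), (255, 0, 0, 0)]
```

In Qt you would do the equivalent by painting a solid rect over a copy of the pixmap with QPainter.CompositionMode_SourceIn set, rather than looping over pixels.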
I'm trying for a while to get this sorted out in Maya:
I want a script which can hide my unselected lights, for example. The only way that comes to mind (and doesn't work) is this one:
lt = cmds.ls(lt=True, sl=False)
cmds.hide(lt)
I see that passing False to the selection argument doesn't work, so I want to find out about some other ways... thanks.
@goncalops' answer will work if you select the light shapes, but not their transforms.
Try:
lights = cmds.ls(type='light') or []
lights = set(cmds.listRelatives(*lights, p=True) or [])
for item in lights.difference(set(cmds.ls(sl=True))):
    cmds.hide(item)
I think most of the answers go into over-engineering land. The question is how to have the non-selected lights hidden at the end of the operation; nothing says you cannot hide them all and then bring the selected lights back. So it is conceptually easier (and slightly faster, but that's beside the point) to do:
cmds.hide(cmds.ls(lights=True, dag=True))
cmds.showHidden()
One comment: there's no need to fetch the shapes separately in this case, as ls has the dag flag for this. Conceptually, Maya items are a transform and a shape packaged separately; however, converting between transforms and shapes is such a common need that ls offers the dag and shapes flags to do it.
Second comment: if you do not pass a list to a Maya command, it operates on the current selection; that's why showHidden works without any data.
PS: conceptually, neither my answer nor @theodox's answer will work in all cases, as you MAY indeed have selected the shape. However, most users will not, so it will commonly work this way.
Reading the documentation for the ls command in Maya 2011, it doesn't seem to have either lt or sl parameters, although it has lights and selection.
Further, it seems the selection argument only serves the purpose of returning the selected arguments, not of filtering unselected ones.
OTOH, the hide method accepts a single argument.
Try this:
lights = set(cmds.ls(lights=True)) - set(cmds.ls(selection=True))
for light in lights:
    cmds.hide(light)
This will work for your case:
hide_light = set(cmds.ls(lights=True, l=True)) - set(cmds.ls(sl=True, dag=True, l=True, leaf=True))
for each_lit in hide_light:
    cmds.setAttr("%s.visibility" % each_lit, 0)
Let's discuss the problem a bit:
There are a few things to consider. When users select a light, (from the Viewport or the Outliner), most of the time they would really be selecting the transform node of a light.
When we perform cmds.ls(type='light'), we are actually listing their shape nodes. This is in line with what @theodox is saying.
I don't know about you, but when I hide lights manually, I select lights in Outliner/Viewport. When I hide them (ctrl-h), they grey out in the outliner. What I've done is hidden their transform nodes (not their shape nodes).
To make things more complicated, Maya actually lets us hide shape nodes too. But the transform node will not grey out when the shape node is hidden.
Imagine if my script hid the light shape nodes: in the Outliner there would be no indication that those lights are hidden, as long as the Outliner is not set to display shape nodes (the default setting). Without the greying-out to indicate that the lights are hidden, many artists, especially less experienced ones, would assume lights are turned on when they have already been disabled and hidden. That is going to cause a lot of confusion, wasted time and frustration; basically, not what we want.
Thus when I write a script like this I'll expect the user to be selecting transform nodes. Also when I hide lights, I will hide the transform nodes of the lights instead of hiding the light shapes directly. That would be my game plan.
import maya.cmds as mc

def hideDeselected(targetNodeType):
    # selectedNodeTransforms will contain the transform nodes
    # of all shapes of the target node type that are selected
    selectedNodeTransforms = []
    for selNode in mc.ls(sl=True):
        if targetNodeType in mc.nodeType(selNode):
            # the selected node is the correct type:
            # add its transform node to selectedNodeTransforms
            selectedNodeTransforms.append(mc.listRelatives(selNode, parent=True)[0])
        elif mc.listRelatives(selNode, children=True, type=targetNodeType):
            # the selected node is a transform node
            # with a child node of the correct type:
            # add the transform node to selectedNodeTransforms
            selectedNodeTransforms.append(selNode)
    if selectedNodeTransforms:
        # only do the hiding if something is selected.
        # If we did not do this check and nothing was selected,
        # all transform nodes of targetNodeType would be hidden
        print('selected objects:', selectedNodeTransforms)
        for thisNode in mc.ls(type=targetNodeType):
            # loop through all target shapes in the scene
            # and get the transform node
            thisNodeTransform = mc.listRelatives(thisNode, parent=True)[0]
            if thisNodeTransform not in selectedNodeTransforms:
                print('hiding', thisNodeTransform)
                mc.hide(thisNodeTransform)
    else:
        print('nothing is selected')

hideDeselected('light')
In the code above, I've made a function out of it so we can pass in any dag node type that is able to have a parent in the scene, and the code will work.
Thus, to hide all the other cameras in the scene that are not currently selected, we just have to call the function with the camera node type:
hideDeselected('camera')
I have the following script:
import bpy
import os

print("Starter")
selection = bpy.context.selected_objects
for obj in selection:
    print("Obj selected")
    me = obj.data
    for edge in me.edges:
        vert1 = me.vertices[edge.vertices[0]]
        vert2 = me.vertices[edge.vertices[1]]
        print("<boundingLine p1=\"{0}f,0.0f,{1}f,1.0f\" p2=\"{2}f,0.0f,{3}f,1.0f\" />".format(vert1.co.x, vert1.co.y, vert2.co.x, vert2.co.y))
Pretty basic, right? It just prints out all the edges into the console, for me to copy paste into an xml document.
When I scale an object and perform this script on it, I get the OLD, unscaled values output to the console, from before it was scaled. I have tried moving every vertex of the object along all axes, which results in the output values being the unscaled ones with my movement applied on top.
If I press N to check the vertices' global values, they are properly scaled.
Why am I not getting the correct values?!?
This script was supposed to save time, but getting anything to work in Blender is a CHORE! It does not help that they have just updated their API, so all example code out there is outdated!
Alright, this is the deal: when you scale, translate or rotate an object in Blender, or otherwise perform a transformation, that transformation is "stored" somehow. What you need to do is select the object to which you applied the transformation and use the shortcut CTRL + A to apply the transformation.
...
So there was no actual inconsistency between the internal data accessible through the Blender API and the values displayed.
I am sure this design makes sense, but right now I want to punch the guy that came up with it, in the throat. If I scale something, I intend the thing that got scaled to be scaled!
But anyway, the reason I got weird values was that the scaling was not applied, which you do with CTRL + A once, in object mode, you have selected the object that you scaled.
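What Blender stores can be sketched numerically: the mesh keeps local vertex coordinates, and the object carries a separate transform (here reduced to a per-axis scale); the world-space position is that transform applied on top of the local coordinate. A minimal pure-Python sketch with made-up values:

```python
def apply_scale(vertex, scale):
    """Apply a per-axis object scale to a local vertex coordinate.

    Mimics the idea behind Blender's object transforms: mesh data stays
    in local coordinates, and the object's scale is applied on top.
    """
    return tuple(c * s for c, s in zip(vertex, scale))

# local mesh coordinate: what the script reads before CTRL + A
local = (1.0, 2.0, 3.0)
# object scaled by 2 on every axis in the viewport
scale = (2.0, 2.0, 2.0)

# world-space coordinate: what the N panel shows
print(apply_scale(local, scale))  # (2.0, 4.0, 6.0)
```

Applying the transform (CTRL + A) effectively bakes the scaled values into the mesh data and resets the object's scale to 1, which is why the script then reads the expected numbers.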
I'm not really a Blender user (I'm a Maya one), but I think you could try something different (I would say slower, too): iterate over the selected vertices, create a locator or a null object, constrain it to the vertex position, and read its x, y, z coordinates. I've done it in Maya and it works.
Let's say something like this (pseudocode):
data_list = []
selection = #selection code here#
for v in selection:
    loc = locator()
    pointconstraint(v, loc)
    data_list.append(loc.translation_attributes)
Mesh objects have an internal coordinate system for their vertices, as well as global translation, scaling, and rotation transforms that apply to the entire object. You can apply the global scaling matrix to the mesh data, and convert the vertex coordinates to the global coordinate system as follows:
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.transform_apply(scale=True)
bpy.ops.object.select_all(action='DESELECT')
Other options to transform_apply() allow rotation and translation matrices to be applied as well.