I was implementing a ttk progress bar yesterday and saw some code that I didn't quite understand.
A maximum value can be set for a progress bar by using something like the following:
progress_bar["maximum"] = max
I was expecting the ttk Progressbar object would use an instance variable to track the maximum value for a created object, but that syntax would look more like:
progress_bar.maximum = max
So my question is, what exactly is happening with the bracketed syntax here, what's the terminology, and where can I read more on this? When I looked at the Progressbar class, all I saw was
class Progressbar(Widget):
    """Ttk Progressbar widget shows the status of a long-running
    operation. They can operate in two modes: determinate mode shows the
    amount completed relative to the total amount of work to be done, and
    indeterminate mode provides an animated display to let the user know
    that something is happening."""

    def __init__(self, master=None, **kw):
        """Construct a Ttk Progressbar with parent master.

        STANDARD OPTIONS

            class, cursor, style, takefocus

        WIDGET-SPECIFIC OPTIONS

            orient, length, mode, maximum, value, variable, phase
        """
        Widget.__init__(self, master, "ttk::progressbar", kw)
I see there's a "widget-specific option", but I don't understand how progress_bar["maximum"] = max sets that value, or where it's stored.
What is happening is that the ttk module is a thin wrapper around a tcl interpreter with the tk package installed. Tcl/tk has no concept of python classes.
In tcl/tk, the way to set an attribute is with a function call. For example, to set the maximum attribute, you would do something like this:
.progress_bar configure -maximum 100
The ttk wrapper is very similar:
progress_bar.configure(maximum=100)
For a reason only known to the original tkinter developers, they decided to implement a dictionary interface that allows you to use bracket notation. Maybe they thought it was more pythonic? For example:
progress_bar["maximum"] = 100
Almost certainly, the reason they didn't make these options attributes of the object (e.g. progress_bar.maximum = 100) is that some tcl/tk widget options would clash with Python reserved words or standard names (for example, the standard option class is a Python keyword). By using a dictionary interface they avoid such clashes.
ttk.Widget extends Tkinter.Widget, which extends BaseWidget, which extends Misc, which contains:
    __getitem__ = cget

    def __setitem__(self, key, value):
        self.configure({key: value})
You can find this in your Python's library folder; search for Tkinter.py (or tkinter/__init__.py in Python 3).
This is the code which implements the dict interface. There is no implementation of attribute access, which would use __getattr__() and __setattr__().
As to why the tkinter developers went this way, it's hard to say; Tk integration in Python is quite old. Perhaps they felt that __getattr__() with its quirks would cause more trouble than it was worth.
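To make the mechanism concrete, here is a minimal sketch of the same idea. OptionWidget and its _options dict are invented stand-ins (this is not tkinter's real source), but the aliasing of __getitem__ to cget and the routing of __setitem__ through configure mirror what Misc does:

```python
class OptionWidget:
    """Toy widget whose options live in a dict standing in for tcl's option table."""

    def __init__(self, **options):
        self._options = dict(options)

    def configure(self, **options):
        self._options.update(options)

    def cget(self, key):
        return self._options[key]

    # the dict interface, mirroring what tkinter's Misc does
    __getitem__ = cget

    def __setitem__(self, key, value):
        self.configure(**{key: value})

bar = OptionWidget(maximum=100)
bar["maximum"] = 200        # routed through __setitem__ -> configure
print(bar["maximum"])       # routed through __getitem__ (an alias of cget) -> 200
```

So the bracketed syntax is just the ordinary Python dict protocol; the widget class implements it and forwards everything to the tcl option machinery.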
Related
I am trying to implement a Python class to facilitate easy exploration of a relatively large dataset in a Jupyter notebook by exposing various (somewhat compute-intensive) filter methods as class attributes using the descriptor protocol. The idea was to take advantage of the laziness of descriptors and only compute on access to a particular attribute.
Consider the following snippet:
import time

accessed_attr = []  # I find this easier than using basic logging for jupyter/ipython

class MyProperty:
    def __init__(self, name):
        self.name = name

    def __get__(self, instance, owner):
        if instance is None:
            return self
        accessed_attr.append(f'accessed {self.name} from {instance} at {time.asctime()}')
        setattr(instance, self.name, self.name)
        return self.name  # just return string

class Dummy:
    abc = MyProperty('abc')
    bce = MyProperty('bce')
    cde = MyProperty('cde')

dummy_inst = Dummy()  # instantiate the Dummy class
On dummy_inst.<tab>, I assumed Jupyter would show the completions abc, bce, cde among other hidden methods without evaluating them. Printing the logging list accessed_attr shows that the __get__ methods of all three descriptors were called, which is not what I expect or want.
A hacky workaround I figured out was to defer the first access to each descriptor using a counter, but that has its own issues.
I tried other ways using __slots__ and modifying __dir__ to trick the kernel, but couldn't find a way around the issue.
I understand there is another way using __getattribute__, but it still doesn't seem elegant. I am puzzled that something that seemed so trivial turned out to be a mystery to me. Any hints, pointers, and solutions are appreciated.
Here is my Python 3.7 based environment:
{'IPython': '7.18.1',
'jedi': '0.17.2',
'jupyter': '1.0.0',
'jupyter_core': '4.6.3',
'jupyter_client': '6.1.7'}
It's unfortunately a cat and mouse battle. IPython used to aggressively explore attributes, which ended up being deactivated because of side effects (see for example why the IPCompleter.limit_to__all__ option was added), though other users then complained that dynamic attributes don't show up. So it's likely jedi that is looking at those attributes; you can try setting c.Completer.use_jedi = False to check. If it's jedi, you will have to ask the jedi authors; if not, I'm unsure, but it's a delicate balance.
Lazy vs exploratory is a really complicated subject in IPython. You might be able to register a custom completer (even for dict keys) that makes it easier to explore without computing, or use async/await so that only an explicit await obj.attr triggers the computation.
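For the lazy-once behavior itself, a non-data descriptor that caches into the instance dict (the same idea as the setattr call in the question, and what functools.cached_property implements since Python 3.8) guarantees each attribute is computed at most once even if a completer happens to touch it. A sketch, with LazyAttr and the accessed log as invented names:

```python
import time

accessed = []  # log of descriptor invocations

class LazyAttr:
    # Non-data descriptor: it defines only __get__, so once a value is
    # cached in the instance __dict__, later lookups bypass the descriptor.
    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, instance, owner):
        if instance is None:
            return self
        accessed.append(f'accessed {self.name} at {time.asctime()}')
        value = f'computed-{self.name}'       # stands in for the expensive filter
        instance.__dict__[self.name] = value  # cache shadows the descriptor
        return value

class Dummy:
    abc = LazyAttr()

d = Dummy()
print(len(accessed))  # 0: nothing computed yet
d.abc                 # first access runs __get__
d.abc                 # second access hits the instance dict directly
print(len(accessed))  # 1: __get__ ran exactly once
```

This doesn't stop a completer from probing the attribute, but it bounds the cost: even if jedi or IPython evaluates it during completion, the expensive computation runs once per instance.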
This may be a stupid question; I am just curious and could not find the answer on my own.
E.g. I define in PyQt5 some widgets:
self.lbl = QLabel("label")
self.btn = QPushButton("click")
self.txt = QLineEdit("text")
Is there any method to detect what kind of widget the self.lbl, self.btn, or self.txt are?
What I imagine: by detecting the widget type, with self.lbl as input, the output would be QLabel... or something like that.
I have only found the isWidgetType() method. But it is not what I want to have.
There are several ways to get the name of the widget:
using __class__:
print(self.lbl.__class__.__name__)
using QMetaObject:
print(self.lbl.metaObject().className())
Both methods above return a string with the name of the class, but if you want to verify whether an object belongs to a class you can use isinstance():
is_label = isinstance(self.lbl, QLabel)
Another option is to use type() but it is not recommended, if you want to get more information about isinstance() and type() read the following: What are the differences between type() and isinstance()?
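To see the difference without needing Qt installed, here is a plain-Python illustration; Base and Derived are made-up stand-ins for a Qt pair like QWidget and QLabel:

```python
class Base:            # stand-in for e.g. QWidget
    pass

class Derived(Base):   # stand-in for e.g. QLabel
    pass

d = Derived()

# isinstance() respects inheritance; type() compares the exact class only
print(isinstance(d, Derived))   # True
print(isinstance(d, Base))      # True
print(type(d) is Derived)       # True
print(type(d) is Base)          # False
```

That last False is why isinstance() is usually preferred: a QLabel is-a QWidget, and most type checks want to honor that relationship.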
You can just use the standard Python means of checking an object type:
print(type(self.lbl))
print(isinstance(self.lbl, QLabel))
I am using Wing to write and debug a Tkinter GUI. I am finding that the Stack Data View doesn't seem to match the actual attributes of my widgets. Take this code for example:
import Tkinter
import ttk
root = Tkinter.Tk()
checkbutton = ttk.Checkbutton(root, text="Test Check Button")
print checkbutton.text
This gives me an attribute error at the last line. However, when I look at the stack, there is clearly an attribute called 'text' with the value that I'm looking for.
Anyone know what's going on?
I'm using:
Wing version: Wing IDE Pro 5.1.3-1 (rev 33002)
Tkinter version: '$Revision: 81008 $'
Python version: 2.7.10
I posted this to the Wing email list, and I got the following response from the developers:
It looks like a ttk.Checkbutton defines keys() and __getitem__()
methods to expose tk attributes via checkbutton[<name>]. Because
of the keys() and __getitem__(), Wing is displaying the instance
as if it were a dictionary, with the keys and values interspersed with
the attributes. Wing does this because often you want to view an
object that defines keys() and __getitem__() as if it were a
dictionary, but I agree that it's confusing in this instance.
We'll try to improve this in a future release.
What you are calling attributes are not object attributes. Tkinter widgets use an internal system for managing widget options, so .text is not an attribute, which is why you get the error. Use .cget(...) to read an option's value and .configure(...) to change it.
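A short demonstration of the option API, written for Python 3's tkinter rather than the Python 2 Tkinter in the question, and guarded because creating a Tk root requires a display:

```python
import tkinter as tk
from tkinter import ttk

# Widget options live in the embedded tcl interpreter, so they are read
# with cget() (or the dict interface) and changed with configure(),
# never as plain Python attributes.
try:
    root = tk.Tk()
except tk.TclError:
    print("no display available")
else:
    check = ttk.Checkbutton(root, text="Test Check Button")
    print(check.cget("text"))       # read an option
    check.configure(text="Updated")
    print(check["text"])            # dict-style read works too
    root.destroy()
```

Wing's debugger shows the keys()/__getitem__() view of the widget interleaved with real attributes, which is why the stack display looks like 'text' exists as an attribute when it doesn't.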
I am doing dynamic class generation that could be statically determined at "compile" time. The simple case that I have right now looks more or less like this:
class Base(object):
    def __init__(self, **kwargs):
        self.do_something()

def ClassFactory(*args):
    some_pre_processing()

    class GenericChild(Base):
        def __init__(self, **kwargs):
            self.some_processing()
            super(GenericChild, self).__init__(*args, **kwargs)

    return GenericChild

Child1 = ClassFactory(1, 'Child_setting_value1')
Child2 = ClassFactory(2, 'Child_setting_value2')
Child3 = ClassFactory(3, 'Child_setting_value3')
On import, the Python interpreter seems to compile to bytecode, then execute the file (thus generating Child1, Child2, and Child3) once per Python instance.
Is there a way to tell Python to compile the file, execute it once to unpack the Child classes, then compile that into the pyc file, so that the unpacking only happens once (even across successive executions of the Python script)?
I have other use cases that are more complicated and expansive, so simply getting rid of the factory by hand-writing the Child classes is not really an option. Also, I would like to avoid an extra preprocessor step if possible (like using the C-style macros with the C preprocessor).
No. You'd have to generate Python source code in which those classes are 'baked' in.
Use some form of string templating to generate the Python source, save it to .py files, then byte-compile those.
However, the class generation happens only once on startup. Is it really that great a cost to generate these?
If there's no real need to have the child classes separate and you just want to have a 'standard configuration' for those particular sets of objects, you could just make your ObjectFactory a class with the configuration stored in there. Each instance will be able to spit out GenericChildren with the appropriate configuration, completely bypassing the runtime generation of Classes (and the debugging headache associated with it).
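If you do go the code-generation route, the templating step might look like the sketch below. Base, some_processing, and the child settings come from the question; the bake() helper and TEMPLATE are hypothetical names:

```python
import textwrap

TEMPLATE = textwrap.dedent("""\
    class Child{n}(Base):
        SETTINGS = ({n}, {setting!r})

        def __init__(self, **kwargs):
            self.some_processing()
            super().__init__(**kwargs)

""")

def bake(settings):
    """Render module source with one 'baked' child class per setting."""
    parts = ["from base import Base\n\n"]
    for n, setting in enumerate(settings, 1):
        parts.append(TEMPLATE.format(n=n, setting=setting))
    return "".join(parts)

# Write the result to a .py file once; importing it later byte-compiles
# it to a .pyc just like any hand-written module.
print(bake(['Child_setting_value1', 'Child_setting_value2']))
```

The point of baking is exactly the trade the answer describes: you pay the generation cost once at build time, and every later import only loads the cached .pyc.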
I am writing a Python GTK application for studying some sort of math data. The main script has a single class with only three methods: __init__, main(self) for starting the loop, and delete_event for killing it.
__init__ creates the GUI, which includes TextBuffer and TextView widgets so that the analysis functions (defined in a separate functions.py module) can output their results to a common log/message area. A relevant extract follows:
import functions  # the separate functions.py module
(...)
class TURING:
    def __init__(self):
        self.window = gtk.Window(gtk.WINDOW_TOPLEVEL)
        (...)
        self.logscroll = gtk.ScrolledWindow()
        self.logscroll.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
        self.logbuffer = gtk.TextBuffer()
        self.logpage = gtk.TextView(self.logbuffer)
        self.logpage.set_editable(gtk.FALSE)
        self.logpage.set_cursor_visible(gtk.FALSE)
        self.logpage.set_wrap_mode(gtk.WRAP_CHAR)
        self.logscroll.add(self.logpage)
        self.logscroll.show()
        self.logpage.show()
        (...)
        enditer = self.logbuffer.get_end_iter()
        self.logbuffer.insert(enditer, 'Welcome!')
        (...)

    def main(self):
        gtk.main()

if __name__ == "__main__":
    turing = TURING()
    turing.main()
The two logbuffer lines successfully print a welcome message in the message area defined by self.logpage.
Now, one of the functions in the functions module checks whether the database is up to date and, if not, asks the user to load a new batch of raw data.
One way of doing this is to include a menu item that triggers that function, like this:
item_dataCheck.connect("activate", functions.data_check, '')
functions.data_check runs fine; however, when it tries to write its output to self.logbuffer, an error is thrown complaining that the menu item item_dataCheck has no property logbuffer. The offending code is
enditer = self.logbuffer.get_end_iter()
self.logbuffer.insert(enditer, 'Please update the database.')
Obviously the name self here refers to the widget that invoked the function, viz. item_dataCheck. My question is: how can I, from functions.data_check, refer directly to logbuffer as a member of the turing instance of the TURING class? I tried to write
enditer = turing.logbuffer.get_end_iter()
turing.logbuffer.insert(enditer, 'Please update the database.')
but that is not working either. I have tried hard to find a solution, with no success.
I believe the matter is quite trivial, and I know I still have some serious conceptual problems with Python and OOP, but any help will be heartily appreciated. (I started out card-punching Fortran programs on a Burroughs mainframe, so I could count on some community mercy...)
You can provide additional arguments when connecting signals to functions or methods, so your handler functions.data_check can accept extra arguments besides the widget that emitted the signal:
def data_check(self, logbuffer):
# ...
Then, you can connect with arguments:
item_dataCheck.connect("activate", functions.data_check, self.logbuffer)
Also, self is conventionally the first parameter of method definitions. Because this convention is so strong, a plain handler function should use widget or something similar instead, especially since your signal handlers may be methods too, in which case you could confuse it with the method's own arguments.
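Since PyGTK may not be importable everywhere, here is a framework-agnostic sketch of the user-data mechanism: whatever extra arguments you pass to connect() are forwarded to the handler after the widget. FakeWidget and its emit() method are invented for illustration; the log list plays the role of self.logbuffer:

```python
class FakeWidget:
    """Tiny stand-in for a gtk widget's signal machinery."""

    def __init__(self):
        self._handlers = []

    def connect(self, signal, handler, *user_data):
        # extra positional arguments are stored as user data
        self._handlers.append((signal, handler, user_data))

    def emit(self, signal):
        for sig, handler, user_data in self._handlers:
            if sig == signal:
                handler(self, *user_data)   # widget first, then user data

log = []   # stands in for self.logbuffer

def data_check(widget, logbuffer):
    # 'widget' is whatever emitted "activate"; 'logbuffer' came from connect()
    logbuffer.append('Please update the database.')

item_dataCheck = FakeWidget()
item_dataCheck.connect("activate", data_check, log)
item_dataCheck.emit("activate")
print(log)   # -> ['Please update the database.']
```

This is why passing self.logbuffer as the user-data argument works: the handler no longer needs to reach back into the TURING instance at all.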