Just want to know what's the common way to react to events in Python. Other languages offer several mechanisms, such as callback functions, delegates, and listener structures.
Is there a common approach in Python? Which built-in language concepts or additional modules exist, and which can you recommend?
Personally I don't see a difference between callbacks, listeners, and delegates.
The observer pattern (a.k.a. listeners, a.k.a. "multiple callbacks") is easy to implement - just hold a list of observers, and add or remove callables from it. These callables can be functions, bound methods, or instances of classes with the __call__ magic method. All you have to do is define the interface you expect from them - e.g. whether they receive any parameters.
class Foo(object):
    def __init__(self):
        self._bar_observers = []

    def add_bar_observer(self, observer):
        self._bar_observers.append(observer)

    def notify_bar(self, param):
        for observer in self._bar_observers:
            observer(param)

def observer(param):
    print("observer(%s)" % param)

class Baz(object):
    def observer(self, param):
        print("Baz.observer(%s)" % param)

class CallableClass(object):
    def __call__(self, param):
        print("CallableClass.__call__(%s)" % param)

baz = Baz()
foo = Foo()

foo.add_bar_observer(observer)          # function
foo.add_bar_observer(baz.observer)      # bound method
foo.add_bar_observer(CallableClass())   # callable instance

foo.notify_bar(3)
I can't speak for common approaches, but this page (the original copy is no longer available) has an implementation of the observer pattern that I like.
Here's the Internet Archive link:
http://web.archive.org/web/20060612061259/http://www.suttoncourtenay.org.uk/duncan/accu/pythonpatterns.html
It all depends on the level of complexity your application requires. For simple events, callbacks will probably do. For more complex patterns and decoupled levels you should use some kind of a publish-subscribe implementation, such as PyDispatcher or wxPython's pubsub.
See also this discussion.
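For a sense of what publish-subscribe decoupling buys you over plain callbacks, here is a minimal, hypothetical sketch of such a dispatcher. It is not PyDispatcher's or pubsub's actual API (those offer far more, e.g. weak references and sender filtering); the EventBus name and methods are illustrative only.

```python
from collections import defaultdict

# Minimal publish-subscribe sketch: publishers and subscribers only
# share topic names, never direct references to each other.
class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, *args, **kwargs):
        for callback in self._subscribers[topic]:
            callback(*args, **kwargs)

bus = EventBus()
received = []
bus.subscribe("user.login", lambda name: received.append(name))
bus.publish("user.login", "alice")
print(received)  # ['alice']
```

The key difference from the observer pattern above is that the bus is a shared intermediary, so the publishing code needs no knowledge of who is listening.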
Most of the Python libraries I have used implement a callback model for their event notifications, which I think suits the language fairly well. Pygtk does this by deriving all objects from GObject, which implements callback-based signal handling. (Although this is a feature of the underlying C GTK implementation, not something inspired by the language.) However, Pygtkmvc does an interesting job of implementing an observer pattern (and MVC) over the top of Pygtk. It uses a very ornate metaclass based implementation, but I have found that it works fairly well for most cases. The code is reasonably straightforward to follow, as well, if you are interested in seeing one way in which this has been done.
Personally, I've only seen callbacks used. However, I haven't seen that much event driven python code so YMMV.
I have seen listeners and callbacks used. But AFAIK there is no Python way. They should be equally feasible if the application in question is suitable.
The matplotlib.cbook module contains a class CallbackRegistry that you might want to have a look at. From the documentation:
Handle registering and disconnecting for a set of signals and
callbacks:
signals = 'eat', 'drink', 'be merry'

def oneat(x):
    print('eat', x)

def ondrink(x):
    print('drink', x)

callbacks = CallbackRegistry(signals)

ideat = callbacks.connect('eat', oneat)
iddrink = callbacks.connect('drink', ondrink)

#tmp = callbacks.connect('drunk', ondrink) # this will raise a ValueError

callbacks.process('drink', 123)    # will call ondrink
callbacks.process('eat', 456)      # will call oneat
callbacks.process('be merry', 456) # nothing will be called
callbacks.disconnect(ideat)        # disconnect oneat
callbacks.process('eat', 456)      # nothing will be called
You probably do not want a dependency on the matplotlib package. I suggest you simply copy-paste the class into your own module from the source code.
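If you'd rather write it yourself than copy matplotlib's class, the core idea is small enough to sketch. This is a simplified stand-in, not matplotlib's actual implementation (which additionally uses weak references and more bookkeeping):

```python
# Simplified sketch of a CallbackRegistry-style class mirroring the
# documented behavior above: connect returns an id, unknown signals
# raise ValueError, and disconnected callbacks are no longer called.
class CallbackRegistry:
    def __init__(self, signals):
        self._signals = set(signals)
        self._callbacks = {}   # cid -> (signal, func)
        self._next_cid = 0

    def connect(self, signal, func):
        if signal not in self._signals:
            raise ValueError("unknown signal %r" % signal)
        self._next_cid += 1
        self._callbacks[self._next_cid] = (signal, func)
        return self._next_cid

    def disconnect(self, cid):
        self._callbacks.pop(cid, None)

    def process(self, signal, *args):
        for s, func in list(self._callbacks.values()):
            if s == signal:
                func(*args)
```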
I'm searching for an implementation to register and handle events in Python. My only experience is with GObject, but I have only used it with PyGtk. It is flexible, but might be overly complicated for some users. I have come across a few other interesting candidates as well, but it's not clear how exactly they compare to one another.
Pyevent, a wrapper around libevent.
Zope Event
As I cannot find any guidelines about syntax highlighting, I decided to prepare a simple write-as-plain-text-and-then-highlight-everything-in-html-preview approach, which is enough for my scope at the moment.
By overriding many custom meta-model classes I have a to_source method, which actually reimplements the whole syntax in reverse, since reverse parsing is not yet available. It's fine, but it ignores user formatting.
To retain user formatting we can use the only available thing: _tx_position and _tx_position_end. Descending from the main textX rule to its children via the stored custom meta-model class attributes works for most cases, but it fails with primitives.
// textX meta-model file
NonsenseProgram:
    "begin" foo=Foo "," count=INT "end"
;

Foo:
    "fancy" a=ID "separator" b=ID "finished"
;
# textX custom meta-model classes
class NonsenseProgram:
    def __init__(self, foo, count):
        self.foo = foo
        self.count = count

    def to_source(self):
        pass  # some recursive magic that uses _tx_position and _tx_position_end

class Foo:
    def __init__(self, parent, a, b):
        self.parent = parent
        self.a = a
        self.b = b

    def to_source(self):
        pass  # some recursive magic that uses _tx_position and _tx_position_end
Let's consider the given example. As we have the NonsenseProgram and Foo classes that we can override, we are in control of their returned source as a whole. We can modify the code generated for NonsenseProgram, or for the NonsenseProgram.foo fragment (by overriding Foo), by accessing their _tx_* attributes. We can't do the same with NonsenseProgram.count, Foo.a and Foo.b, as we only have a primitive string or int value.
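To make the position-based mechanism concrete, here is a hedged sketch of recovering the user-formatted source fragment for a model object via _tx_position / _tx_position_end. DummyNode and the hand-chosen offsets are stand-ins for a real textX model object; only the attribute names match what textX stores.

```python
# Sketch: slice the original model source using the position
# attributes textX stores on (non-primitive) model objects.
class DummyNode:
    def __init__(self, start, end):
        self._tx_position = start       # character offset of node start
        self._tx_position_end = end     # character offset just past node end

def to_source(source_text, node):
    # Returns the exact user-formatted fragment for this node.
    return source_text[node._tx_position:node._tx_position_end]

model_text = 'begin fancy a separator b finished , 3 end'
foo_node = DummyNode(6, 34)  # offsets chosen by hand for this example
print(to_source(model_text, foo_node))  # fancy a separator b finished
```

This works for rule-backed objects; the problem described below is precisely that primitive attributes like count, a and b carry no such offsets.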
Depending on how primitives are used in our grammar, we have the following options:
Wrap every primitive with rule that contains only that primitive and nothing else.
Pros: It just works right now!
Cons: It produces massive overhead of nested values that our grammar toolchain needs to handle. It messes with the grammar only for the sake of being pretty...
Ignore syntax from user and use only our reverse parsing rules.
Pros: It just works too!
Cons: You need to reimplement your syntax for nearly every grammar element. It forces a code reformat on every highlight attempt.
Use some external rules of highlighting.
Pros: It would work...
Cons: Again, grammar reimplementation.
Use language server.
Pros: Would be the best option on long run.
Cons: It's only mentioned once without any in-depth docs.
Any suggestions about any other options?
You are right. There is no information on position for primitive types. It seems that you have covered available options at the moment.
An easy-to-implement option would be to add bookkeeping of the positions of all attributes directly to textX, as a special structure on each created object (e.g. a dict keyed by attribute name). It should be straightforward to implement, so you can register a feature request in the issue tracker if you wish.
There was some work in the past to support full language services to the textX based languages. The idea is to get all the features you would expect from a decent code editor/IDE for any language specified using textX.
The work stalled for a while but resumed recently as a full rewrite. It should be officially supported by the textX team. You can follow the progress here. Although the project doesn't mention syntax highlighting at the moment, it is on our agenda.
I've been learning and experimenting with decorators. I understand what they do: they let you write modular code by adding functionality to existing functions without changing them.
I found a great thread which really helped me learn how to do it by explaining all the ins and outs here: How to make a chain of function decorators?
But what is missing from this thread (and from other resources I've looked at) is WHY we need the decorator syntax. Why do I need nested functions to create a decorator? Why can't I just take an existing function, write another one with some extra functionality, and feed the first into the second to make it do something else?
Is it just because this is the convention? What am I missing? I assume my inexperience is showing here.
I will try to keep it simple and explain it with examples.
One of the things that I often do is measure the time taken by APIs that I build, and then publish the metrics on AWS.
Since it is a very common use case I have created a decorator for it.
from functools import wraps
from time import time

def log_latency():
    def actual_decorator(f):
        @wraps(f)
        def wrapped_f(*args, **kwargs):
            t0 = time()
            r = f(*args, **kwargs)
            t1 = time()
            # async_daemon_execute, public_metric and log_key are
            # part of my own metrics setup
            async_daemon_execute(public_metric, t1 - t0, log_key)
            return r
        return wrapped_f
    return actual_decorator
Now for any method whose latency I want to measure, I just annotate it with the required decorator.
@log_latency()
def batch_job_api(param):
    pass
Suppose you want to write a secure API which only works if you send a header with a particular value; then you can use a decorator for it.
def secure(f):
    @wraps(f)
    def wrapper(*args, **kw):
        token = request.headers.get("My_Secret_Token")
        if not token or token != "My_Secret_Text":
            raise AccessDenied("Required headers missing")
        return f(*args, **kw)
    return wrapper
Now just write
@secure
def my_secure_api():
    pass
I have also been using the above syntax for API-specific exceptions. And if a method needs database interaction, instead of acquiring a connection myself I use a @session decorator, which indicates that the method will use a database connection and you don't need to handle one yourself.
I could have obviously avoided it by writing a function that checks for the header or publishes the time taken by the API on AWS, but that just looks a bit ugly and unintuitive.
There is no convention for this, at least none I am aware of, but it definitely makes the code more readable and easier to manage.
Most IDEs also color decorators differently, which makes it easier to understand and organize code.
So I would say it may just be because of inexperience; once you start using them you will automatically start knowing where to use them.
From PEP 318 -- Decorators for Functions and Methods (with my own added emphasis):
Motivation
The current method of applying a transformation to a
function or method places the actual transformation after the function
body. For large functions this separates a key component of the
function's behavior from the definition of the rest of the function's
external interface. For example:
def foo(self):
    perform method operation
foo = classmethod(foo)
This becomes less readable with longer methods. It also seems less than pythonic to name
the function three times for what is conceptually a single
declaration. A solution to this problem is to move the transformation
of the method closer to the method's own declaration. The intent of
the new syntax is to replace
def foo(cls):
    pass
foo = synchronized(lock)(foo)
foo = classmethod(foo)
with an alternative that places the decoration in the function's declaration:
@classmethod
@synchronized(lock)
def foo(cls):
    pass
I came across a neatly-written article which explains decorators in Python, why we should leverage them, how to use them and more.
Some of the benefits of using decorators in Python include:
Allows us to extend functions without actually modifying them.
Helps us stick with the Open-Closed Principle.
It helps reduce code duplication -- we can factorize out repeated code into a decorator.
Decorators help keep a project organized and easy to maintain.
Reference: https://tomisin.dev/blog/understanding-decorators-in-python
A great example of why decorators are useful is the numba package.
It provides a decorator to speed up Python functions:
@jit
def my_slow_function():
    # ...
The @jit decorator does some very amazing and complex operations - it compiles the whole function (well, parts of it) to machine code. If Python did not have decorators you would have to write the function in machine code yourself... or, more seriously, the syntax would look like this:
def my_slow_function():
    # ...

my_slow_function = jit(my_slow_function)
The purpose of the decorator syntax is to make the second example nicer. It's just syntactic sugar so you don't have to type the function name three times.
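You can verify that the two spellings are equivalent with a trivial decorator of your own (the shout/greet names are made up for the example):

```python
# A trivial decorator showing that @decorator is just
# name = decorator(name) applied at definition time.
def shout(f):
    def wrapper(*args, **kwargs):
        return f(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return "hello, %s" % name

# The same thing without the syntax:
def greet2(name):
    return "hello, %s" % name
greet2 = shout(greet2)

print(greet("world"))   # HELLO, WORLD
print(greet2("world"))  # HELLO, WORLD
```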
In programming for fun, I've noticed that managing dependencies feels like a boring chore that I want to minimize. After reading this, I've come up with a super trivial dependency injector, whereby the dependency instances are looked up by a string key:
def run_job(job, args, instance_keys, injected):
    args.extend([injected[key] for key in instance_keys])
    return job(*args)
This cheap trick works since calls in my program are always lazily defined (where the function handle is stored separately from its arguments) in an iterator, e.g.:
jobs_to_run = [[some_func, ("arg1", "arg2"), ("obj_key",)], [other_func, (), ()]]
The reason is a central main loop that must schedule all events. It has a reference to all dependencies, so the injection for "obj_key" can be passed in a dict object, e.g.:
# inside main loop
injection = {"obj_key": injected_instance}
for (callable, with_args, and_dependencies) in jobs_to_run:
    run_job(callable, with_args, and_dependencies, injection)
So when an event happens (user input, etc.), the main loop may call an update() on a particular object who reacts to that input, who in turn builds a list of jobs for the main loop to schedule when there's resources. To me it is cleaner to key-reference any dependencies for someone else to inject rather than having all objects form direct relationships.
Because I am lazily defining all callables (functions) for a game clock engine to run on its own accord, the above naive approach worked with very little added complexity. Still, there is a code smell in having to reference objects by strings. At the same time, it was smelly to be passing dependencies around, and constructor or setter injection would be overkill, as would perhaps most large dependency injection libraries.
For the special case of injecting dependencies in callables that are lazily defined, are there more expressive design patterns in existence?
I've noticed that managing dependencies feels like a boring chore that I want to minimize.
First of all, you shouldn't assume that dependency injection is a means to minimize the chore of dependency management. It doesn't go away, it is just deferred to another place and time and possibly delegated to someone else.
That said, if what you are building is going to be used by others, it would be wise to include some form of version checking into your 'injectables' so that your users have an easy way to check whether their version matches the one that is expected.
are there more expressive design patterns in existence?
Your method as I understand it is essentially a Strategy-Pattern, that is the job's code (callable) relies on calling methods on one of several concrete objects. The way you do it is perfectly reasonable - it works and is efficient.
You may want to formalize it a bit more to make it easier to read and maintain, e.g.
from collections import namedtuple

Job = namedtuple('Job', ['callable', 'args', 'strategies'])

def run_job(job, using=None):
    strategies = {k: using[k] for k in job.strategies}
    return job.callable(*job.args, **strategies)

jobs_to_run = [
    Job(callable=some_func, args=(1, 2), strategies=('A', 'B')),
    Job(callable=other_func, ...),
]

strategies = {"A": injected_strategy, ...}
for job in jobs_to_run:
    run_job(job, using=strategies)

# actual job
def some_func(arg1, arg2, A=None, B=None):
    ...
As you can see the code still does the same thing, but it is instantly more readable, and it concentrates knowledge about the structure of the Job() objects in run_job. Also, the call to a job function like some_func will fail if the wrong number of arguments is passed, and the job functions are easier to code and debug thanks to their explicitly listed and named arguments.
As for the strings, you could just make them constants in a dependencies.py file and use those constants.
A more robust option with still little overhead would be to use a dependency injection framework such as Injectable:
@autowired
def job42(some_instance: Autowired("SomeInstance", lazy=True)):
    ...

# some_instance is autowired to job42 calls and
# it will be automatically injected for you
job42()
Disclosure: I am the project maintainer.
Is there a library or something similar to lodash, for Python? We use the library extensively on our API and while we move on to creating a series of Python workers, it would make sense to create a similar structure to our API syntactics.
pydash is exactly like lodash, only for Python.
pydash is something which you can use as a replacement of lodash in python. Almost everything is covered under it, the names of functions are also similar just keeping python's conventions as is.
Important: I've been using pydash for around 7 months and it has helped me a lot. One thing you should not forget: avoid using pydash functions directly on self; the behaviour is very erratic and gives random results. I'm showing one example below.
Sample Class
class MyClass:
    def get(self):
        self.obj = {}
        self.obj['a'] = True
Usage
# assuming: import pydash as _
class TestClass:
    def __init__(self):
        self.inst = MyClass()

    def test_function(self):
        # Not recommended: shows random behavior
        # _.get(self, "inst.a")

        # Highly recommended
        _.get(self.inst, "a")  # will always return True
Well, I'm not sure if this is exactly what you're looking for, but when I think of JavaScript libraries like Underscore and Lodash, I think of libraries that add more functional programming functions to a language (though I do believe both Underscore and Lodash have a little more utility than that).
There are a bunch of Python libraries that try to add some of the same functionality. A quick search got me pytoolz (https://github.com/pytoolz/toolz), which I don't have much experience with but which looks interesting. If that isn't what you're looking for, try searching for other functional programming Python libraries until you find one you like.
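It is also worth noting that many lodash-style helpers are one-liners over the standard library, so you may not need a dependency at all. A sketch of a few common ones (the names chunk/flatten/pluck are illustrative, hand-rolled here, not from any particular library):

```python
# Hand-rolled lodash-style helpers using only the stdlib.
def chunk(seq, size):
    # Split a list into consecutive sublists of the given size.
    return [seq[i:i + size] for i in range(0, len(seq), size)]

def flatten(seq):
    # Flatten one level of nesting.
    return [item for sub in seq for item in sub]

def pluck(dicts, key):
    # Extract a key from each dict in a list.
    return [d.get(key) for d in dicts]

print(chunk([1, 2, 3, 4, 5], 2))          # [[1, 2], [3, 4], [5]]
print(flatten([[1, 2], [3]]))             # [1, 2, 3]
print(pluck([{"a": 1}, {"a": 2}], "a"))   # [1, 2]
```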
Hope that helped
I'm coding a poker hand evaluator as my first programming project. I've made it through three classes, each of which accomplishes its narrowly-defined task very well:
HandRange = a string-like object (e.g. "AA"). getHands() returns a list of tuples for each specific hand within the string:
[(Ad,Ac),(Ad,Ah),(Ad,As),(Ac,Ah),(Ac,As),(Ah,As)]
Translation = a dictionary that maps the return list from getHands to values that are useful for a given evaluator (yes, this can probably be refactored into another class).
{'As':52, 'Ad':51, ...}
Evaluator = takes a list from HandRange (as translated by Translator), enumerates all possible hand matchups and provides win % for each.
My question: what should my "domain" class for using all these classes look like, given that I may want to connect to it via either a shell UI or a GUI? Right now, it looks like an assembly line process:
user_input = HandRange()
x = Translation.translateList(user_input)
y = Evaluator.getEquities(x)
This smells funny in that it feels like it's procedural when I ought to be using OO.
In a more general way: if I've spent so much time ensuring that my classes are well defined, narrowly focused, orthogonal, whatever ... how do I actually manage work flow in my program when I need to use all of them in a row?
Thanks,
Mike
Don't make a fetish of object orientation -- Python supports multiple paradigms, after all! Think of your user-defined types, AKA classes, as building blocks that gradually give you a "language" that's closer to your domain rather than to general purpose language / library primitives.
At some point you'll want to code "verbs" (actions) that use your building blocks to perform something (under command from whatever interface you'll supply -- command line, RPC, web, GUI, ...) -- and those may be module-level functions as well as methods within some encompassing class. You'll surely want a class if you need multiple instances, and most likely also if the actions involve updating "state" (instance variables of a class being much nicer than globals) or if inheritance and/or polymorphism come into play; but, there is no a priori reason to prefer classes to functions otherwise.
If you find yourself writing static methods, yearning for a singleton (or Borg) design pattern, or writing a class with no state (just methods) -- these are all "code smells" that should prompt you to check whether you really need a class for that subset of your code, or whether you may be overcomplicating things and should use a module with functions for that part of your code. (Sometimes after due consideration you'll unearth some different reason for preferring a class, and that's all right too, but the point is, don't just pick a class over a module with functions "by reflex", without critically thinking about it!)
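To illustrate the "stateless class" smell concretely, compare a class used only as a namespace of static methods with plain module-level functions (hypothetical names, just for the example):

```python
# A stateless class used purely as a namespace...
class Greeter:
    @staticmethod
    def greet(name):
        return "hello, %s" % name

# ...is usually better written as a plain module-level function:
def greet(name):
    return "hello, %s" % name

print(Greeter.greet("Mike"))  # hello, Mike
print(greet("Mike"))          # hello, Mike
```

The class version adds ceremony without gaining instances, state, or polymorphism, which is exactly the situation where a module with functions is the simpler choice.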
You could create a Poker class that ties these all together and initialize all of that stuff in the __init__() method:
class Poker(object):
    def __init__(self, user_input=HandRange()):
        self.user_input = user_input
        self.translation = Translation.translateList(user_input)
        self.evaluator = Evaluator.getEquities(self.translation)
        # and so on...

p = Poker()
# etc, etc...