I have some tasks stored in a database for later execution. For example, I can register a task to send an email, and a cron job later executes it. I'm looking for the best way to store the code in the database for later execution. One option is to store a raw string of Python code and then eval it, but then I also have to store the relevant imports in the string.
For example, to send an email I would have to build a string like this:
s = "from django.core.mail import send_mail\n" \
    "send_mail('subj', 'body', 'email@box.ru', ['email1@box.ru'], fail_silently=False)"
and later eval it. Any ideas on the best way to do this, or maybe a better pattern for this kind of task?
What you're doing is a bad idea mainly because you allow for way too much variability in what code will be executed. A code string can do anything, and I'm guessing there are only a few kinds of tasks you want to store for later execution.
So, figure out what the variables in those tasks are (variables in a non-programming sense: things that vary), and only store those variables, perhaps as a tuple of function arguments and a dictionary of keyword arguments to be applied to a known function.
To be even more fancy, you can have some kind of container object with a bunch of functions on it, and store the name of the function to call along with its arguments. That container could be something as simple as a module into which you import functions like Django's send_mail as in your example.
Then you can store your example call like this:
import cPickle  # pickle on Python 3

func = 'send_mail'
args = ('subj', 'body', 'email@box.ru', ['email1@box.ru'])
kwargs = {'fail_silently': False}
my_call = cPickle.dumps((func, args, kwargs))
And use it like this:
func, args, kwargs = cPickle.loads(my_call)
getattr(my_module, func)(*args, **kwargs)
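Here is a runnable sketch of that idea; the tasks container and its send_mail stub are mine, standing in for a real module that imports send_mail and friends, and on Python 3 the module is pickle rather than cPickle:

import pickle

# hypothetical stand-in for a module into which real task functions are imported
class tasks(object):
    @staticmethod
    def send_mail(subject, body, fail_silently=False):
        print("sending %r / %r (fail_silently=%s)" % (subject, body, fail_silently))

# store the call...
my_call = pickle.dumps(('send_mail', ('subj', 'body'), {'fail_silently': False}))

# ...and execute it later
func, args, kwargs = pickle.loads(my_call)
getattr(tasks, func)(*args, **kwargs)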
Use celery for this. That's the best approach.
http://celeryproject.org/
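A minimal sketch of what that looks like; the broker URL and the task body are my assumptions, not part of the answer:

from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0')  # assumed broker

@app.task
def send_email(subject, body, sender, recipients):
    from django.core.mail import send_mail
    send_mail(subject, body, sender, recipients, fail_silently=False)

# enqueue for later execution by a worker:
# send_email.delay('subj', 'body', 'email@box.ru', ['email1@box.ru'])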
I wouldn't use this solution at all. I would create a different handler for each task (sending a mail, deleting a file, etc). Storing code in this manner is hackish.
EDIT
An example would be creating your own format for handlers, with one handler call per line in this format:
handlername;arg1;arg2;arg3;arg4
Next you use python to read out the lines and parse them. For example this would be a stored line:
sendmail;nightcracker@nclabs.org;subject;body
Which would be parsed like this:
for line in database:
    handler, *args = line.split(";")
    if handler == "sendmail":
        recipient, subject, body = args[:3]
        # do stuff
    elif handler == "delfile":
        pass  # etc
I'd store logical commands, and execute them with a small handler registry, something like:
handlers = {}

def handler(name):
    def register(func):
        handlers[name] = func
        return func
    return register

def run_command(cmd):
    fields = [unescape(f) for f in cmd.split(";")]
    handlers[fields[0]](*fields[1:])

@handler("mail")
def mail_handler(address, template):
    import whatever
    send_mail(address, get_template(template) % user_info, ...)
This way you get the flexibility to add handlers without touching any code in the dispatcher, and you're not writing code details into the database, which would make it harder to inspect jobs, gather stats, or hot-fix jobs that haven't started yet.
To directly answer your question, eval is really only for evaluating code that will produce a result. For example:
>>> eval('1 + 1')
2
However if you simply want to execute code, possibly several lines of code, you want exec(), which by default executes inside the caller's namespace:
>>> exec("x = 5 + 5")
>>> print x
10
Note that only trusted code should be passed to either exec or eval. See also execfile to execute a file.
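Both also accept explicit namespaces, which at least keeps the executed code from rebinding your own names (a convenience, not a security sandbox); a small sketch:

namespace = {}
exec("x = 5 + 5", namespace)
print(namespace['x'])            # 10

print(eval('1 + y', {'y': 1}))   # 2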
Having said all that, I agree with the other posters that you should find a way to do what you want programmatically instead of storing arbitrary code. You could, for example, do something like this:
def myMailCommand(*args):
    ...

def myOtherCommand(*args):
    ...

available_commands = {'mail': myMailCommand,
                      'other': myOtherCommand}

to_execute = [('mail', (arg1, arg2, arg3)),
              ('other', (arg1, arg2))]

for cmd, args in to_execute:
    available_commands[cmd](*args)
In the above pseudo-code, I defined two methods. Then I have a dictionary mapping actions to commands. Then I go through a data structure of actions and arguments, and call the appropriate command with its arguments. You get the idea.
I want a function to use standard output only when it is called with use_standard_output=True.
Like this:
def function(use_standard_output=True):
    ~ SOME PROCESS ~
    if use_standard_output:
        print("print something to monitor the process")
Is there a smarter way to implement this?
Thanks.
Look into the logging module. It comes equipped with different levels.
For example, you could replace your call to print with logging.info("Print something to monitor the process")
If you configure it with logging.basicConfig(level=logging.INFO), you will see the output. If you raise the level of the root logger (e.g. logging.getLogger().setLevel(logging.WARNING)), the message is suppressed. Note that logging.basicConfig only has an effect the first time it is called, so it can't be used to change the level later.
For a complete example:
import logging

logging.basicConfig(level=logging.INFO)

def function():
    logging.info("print something to monitor the process")

function()   # shown

logging.getLogger().setLevel(logging.WARNING)
function()   # suppressed
Whether it's "smart" or not, you can redefine the print function. This was the rationale for making it a function in Python 3. Since you'll be "shadowing" the built-in function (i.e. re-using its name locally, effectively redefining it) you do, of course, have to retain a reference to the built-in function so you can use it inside your redefinition.
Then a global (here, OUTPUT_REQUIRED) can determine whether or not it produces any output:
OUTPUT_REQUIRED = True   # the global switch

system_print = print
def print(*args, **kwargs):
    if OUTPUT_REQUIRED:
        system_print(*args, **kwargs)
The *args, **kwargs notation may not be familiar to you. Using it as the code does, in both the definition and the call, is a simple way to call system_print with the same positional and keyword arguments that your print function was called with.
You could continue to use the additional argument by explicitly naming it in the definition, and not passing it through to print:
system_print = print
def print(OUTPUT_REQUIRED, *args, **kwargs):
    if OUTPUT_REQUIRED:
        system_print(*args, **kwargs)
This represents a change to the API which would make switching back to the standard function more difficult. I'd recommend simply using a different name in this case.
The logging module, while extremely comprehensive, takes a little more effort to understand.
If you want to write the message either to the terminal or to a log file, you can work with streams: point a stream object at either sys.stdout or a file.
import sys

if use_standard_output:
    stream = sys.stdout
else:
    stream = open('logfile', 'w')

print("print something to monitor the process", file=stream)
I've got some code that needs a certain variable to be shared between functions, like this:
def example(arg):
    req = urllib2.Request(r'{}'.format(arg))
    ...

def example2(arg):
    # I need this function to access req.
    # I don't think req can be a global variable, since the program might be
    # imported and run from main() (which is again a function).
Would really like your help!
As said in the comments, you can use a pass-by-parameter approach, applied like this:
def example2(arg, req):
    ....

def example(arg):
    req = urllib2.Request(r'{}'.format(arg))
    ...
    return example2(..., req)
Or you could just as easily merge the two functions, since you could combine the two arg parameters of example and example2.
This example might help, I guess:
def example1(arg):
    example1.request = "from example1"
    ....

def example2(arg):
    print(example1.request)

example1("arg1")
example2("arg2")
> from example1
Otherwise you can make request a global variable and use it inside your example2 function. Either way, you need to execute example1 before example2. Or you can return request from example1 and pass its return value into example2.
Just pass it as return values and parameters? This is a feature, since it allows you to keep things as local as possible. If your function needs lots of arguments or gives lots of output, it's often a sign that it can be broken up into multiple functions (a function should ideally do one clearly separated thing and be named as such).
In some cases of course, you want to pass around some data such as configuration options: you might create some new object for this but why not simply a dictionary?
def make_request(arg, config):
    req = urllib2.Request(r'{}'.format(arg))
    config['req'] = req
    return config
Note that I returned the dict config, even though it is not necessary since dicts are mutable in Python. This just makes it clear in the code that I am modifying it. Now we can use the config:
def example2(arg, config):
    req = config['req']
    # ...do stuff...
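Putting the two together (the URL is just a placeholder):

config = {}
config = make_request('http://example.com', config)
example2('something', config)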
Windows 7, Python 2.7, MPD2.
I am writing a program to control MPD.
MPD has several (over 50) different functions.
Normally one would make a call in the form:
mpd_client.pause()
#or
mpd_client.playlistmove(playlist_name, old_pos, new_pos)
I want to encapsulate all the separate calls in one function so I can use a single try / except.
I am thinking I want to use some sort of lambda, and *args but I have little experience with either of those.
In the body of my program, I want to call something like this:
MPD('pause')
#or
MPD('playlistmove', playlist_name, old_pos, new_pos)
I envision my function looking something like...
def MPD(required_param, *args):
try:
mpd_client.required_param(args)
except:
...
Of course, this isn't working.
Short of writing a huge switch statement and 50 different try structures, is there a way I can use lambda?
Maybe something like:
lambda m=mpd_client.required_param: m(args)
but this isn't working either.
I don't know.
Thanks, Mark.
You need to use getattr() to retrieve the actual method to call by name:
getattr(mpd_client, required_param)(*args)
(Note that you also need the * in front of the args for the function call as well, to re-expand the argument list back into separate arguments.)
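That makes the asker's single try/except wrapper straightforward. A sketch, assuming the python-mpd2 client, where command failures raise mpd.CommandError:

import mpd

mpd_client = mpd.MPDClient()
mpd_client.connect('localhost', 6600)

def MPD(command, *args):
    try:
        return getattr(mpd_client, command)(*args)
    except mpd.CommandError as e:
        print('MPD command %s failed: %s' % (command, e))

# MPD('pause')
# MPD('playlistmove', playlist_name, old_pos, new_pos)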
Direct dictionary access is also possible, but note that methods live in the class's __dict__, not the instance's (an instance __dict__ holds only data attributes), so it looks like this:
func = type(mpd_client).__dict__['pause']
func(mpd_client)

func = type(mpd_client).__dict__['playlistmove']
func(mpd_client, playlist_name, old_pos, new_pos)
In practice, getattr(mpd_client, name) as shown above is simpler and also finds inherited methods.
I'm writing a python command line program which has some interdependent options, I would like for the user to be able to enter the options in whichever order they please.
Currently I am using the getopt library to parse the command line options; unfortunately, that parses them in order. I've thrown together a system of boolean flags to postpone the processing of certain command line arguments until the ones they depend on have been processed, but then I had the idea of using a priority queue of function calls which would execute after all the command line options are parsed.
I know that Python can store functions under variable names, but that seems to call the function at the same time.
For example:
help = obj.PrintHelp()
heapq.heappush(commandQ, (0, help))
Will print the help dialog immediately. How would I go about implementing my code such that it won't call PrintHelp() immediately upon assigning it a name?
EDIT:
Oh, I just realized I was pushing into a queue called help; that's my mistake.
Thanks for the tip on removing the () after PrintHelp.
What if I want to now call a function that requires more than the self argument?
myFun = obj.parseFile(path)
heapq.heappush(commandQ, (1, myFun))
Would I just make the tuple bigger and take the command line argument?
If you heappush like this:
myFun = obj.parseFile
heapq.heappush(commandQ, (1, myFun, path))
then to later call the function, you could do this:
while commandQ:
    x = heapq.heappop(commandQ)
    func = x[1]
    args = x[2:]
    func(*args)
Use
help = obj.PrintHelp
without the parentheses. This makes help reference the function.
Later, you can call the function with help().
Note also (if I understand your situation correctly), you could just use the optparse or (if you have Python2.7 or better) argparse modules in the standard library to handle the command-line options in any order.
PS. help is a built-in function in Python. Naming a variable help overrides the built-in, making it difficult (though not impossible) to access the built-in. Generally, it's a good idea not to overwrite the names of built-ins.
Instead of using getopts, I would suggest using optparse (argparse, if you are using a newer python version): most probably, you will get everything you need, already implemented.
That said, in your example code you are actually calling the function, while you should simply store a reference to it:
help = obj.PrintHelp
heapq.heappush(commandQ, (0, help))
If you want to store a complete function call in Python, you can do it one of two ways:
# option 1: hold the parameters separately
# I've also skipped saving the function in a 'help' variable
heapq.heappush(commandQ, (0, obj.PrintHelp, param1, param2))
# later:
command = heapq.heappop(commandQ)
command[1](*command[2:]) # call the function (second item) with args (remainder of items)
Alternatively, you can use a helper to package the arguments up via lambda:
# option 2: build a no-argument anonymous function that knows what arguments
# to give the real one
# module scope
def makeCall(func, *args):
    return lambda: func(*args)
# now you can:
help = makeCall(obj.PrintHelp, param1, param2)
heapq.heappush(commandQ, (0, help))
If you need keyword arguments, let me know and I'll edit to take care of those too.
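For what it's worth, functools.partial from the standard library does the same packaging as makeCall, keyword arguments included; a small sketch with made-up queue entries:

import functools
import heapq

commandQ = []
heapq.heappush(commandQ, (0, functools.partial(print, "help text")))
heapq.heappush(commandQ, (1, functools.partial(sorted, [3, 1, 2], reverse=True)))

while commandQ:
    priority, func = heapq.heappop(commandQ)
    func()   # calls the function with its packaged args and kwargs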
I am developing a medium-size program in Python spread across 5 modules. The program accepts command line arguments using OptionParser in the main module, e.g. main.py. These options are later used to determine how methods in other modules behave (e.g. a.py, b.py). As I extend the user's ability to customise the behaviour of the program, I find that I end up requiring a user-defined parameter in a method in a.py that is not directly called by main.py, but is instead called by another method in a.py:
main.py:
import a
p = some_command_line_argument_value
a.meth1(p)
a.py:
def meth1(p):
    # some code
    res = meth2(p)
    # some more code w/ res

def meth2(p):
    # do something with p
This excessive parameter passing seems wasteful and wrong, but as hard as I try, I cannot think of a design pattern that solves this problem. While I had some formal CS education (a minor in CS during my B.Sc.), I've only really come to appreciate good coding practices since I started using Python. Please help me become a better programmer!
Create objects of types relevant to your program, and store the command line options relevant to each in them. Example:
import WidgetFrobnosticator
f = WidgetFrobnosticator()
f.allow_concave_widgets = option_allow_concave_widgets
f.respect_weasel_pins = option_respect_weasel_pins
# Now the methods of WidgetFrobnosticator have access to your command-line parameters,
# in a way that's not dependent on the input format.
import PlatypusFactory
p = PlatypusFactory()
p.allow_parthenogenesis = option_allow_parthenogenesis
p.max_population = option_max_population
# The platypus factory knows about its own options, but not those of the WidgetFrobnosticator
# or vice versa. This makes each class easier to read and implement.
Maybe you should organize your code more into classes and objects? As I was writing this, Jimmy showed a class-instance based answer, so here is a pure class-based answer. This would be most useful if you only ever wanted a single behavior; if there is any chance at all you might want different defaults some of the time, you should use ordinary object-oriented programming in Python, i.e. pass around class instances with the property p set in the instance, not the class.
class Aclass(object):
    p = None

    @classmethod
    def init_p(cls, value):
        cls.p = value

    @classmethod
    def meth1(cls):
        # some code
        res = cls.meth2()
        # some more code w/ res

    @classmethod
    def meth2(cls):
        # do something with cls.p
        pass
from a import Aclass as ac
ac.init_p(some_command_line_argument_value)
ac.meth1()
ac.meth2()
If "a" is a real object and not just a set of independent helper methods, you can create an "p" member variable in "a" and set it when you instantiate an "a" object. Then your main class will not need to pass "p" into meth1 and meth2 once "a" has been instantiated.
[Caution: my answer isn't specific to python.]
I remember that Code Complete called this kind of parameter a "tramp parameter". Googling for "tramp parameter" doesn't return many results, however.
Some alternatives to tramp parameters might include:
Put the data in a global variable
Put the data in a static variable of a class (similar to global data)
Put the data in an instance variable of a class
Pseudo-global variable: hidden behind a singleton, or some dependency injection mechanism
Personally, I don't mind a tramp parameter as long as there's no more than one; i.e. your example is OK for me, but I wouldn't like ...
import a
p1 = some_command_line_argument_value
p2 = another_command_line_argument_value
p3 = a_further_command_line_argument_value
a.meth1(p1, p2, p3)
... instead I'd prefer ...
import a
p = several_command_line_argument_values
a.meth1(p)
... because if meth2 decides that it wants more data than before, I'd prefer if it could extract this extra data from the original parameter which it's already being passed, so that I don't need to edit meth1.
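argparse's Namespace object is a ready-made version of that single bundled parameter; a sketch with made-up options:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--retries', type=int, default=3)
parser.add_argument('--verbose', action='store_true')
opts = parser.parse_args(['--retries', '5'])   # or parse_args() for the real argv

def meth1(opts):
    # meth2 pulls what it needs from the same object; meth1 needn't know what that is
    return meth2(opts)

def meth2(opts):
    return 0 if opts.verbose else opts.retries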
With objects, parameter lists should normally be very small, since most appropriate information is a property of the object itself. The standard way to handle this is to configure the object properties and then call the appropriate methods of that object. In this case set p as an attribute of a. Your meth2 should also complain if p is not set.
Your example is reminiscent of the code smell Message Chains. You may find the corresponding refactoring, Hide Delegate, informative.
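For readers who don't have the reference handy, Hide Delegate replaces a chain like client.department.manager with a method on the intermediate object; a minimal sketch with hypothetical classes:

class Department(object):
    def __init__(self, manager):
        self.manager = manager

class Employee(object):
    def __init__(self, department):
        self.department = department

    @property
    def manager(self):
        # hide the delegate: callers no longer reach through department
        return self.department.manager

boss = Employee(Department("Alice")).manager   # instead of .department.manager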