I want a function to print to standard output when it is called with the argument use_standard_output=True.
Like this:
def function(use_standard_output=True):
    # ~ SOME PROCESS ~
    if use_standard_output:
        print("print something to monitor the process")
Is there a smarter way to implement this?
Thanks.
Look into the logging module. It comes equipped with different levels.
For example, you could replace your call to print with logging.info("Print something to monitor the process")
If you configure it with logging.basicConfig(level=logging.INFO), you will see the output. If you raise the logging level (e.g. to logging.WARNING), the message will be ignored.
For a complete example:
import logging

def function():
    logging.info("print something to monitor the process")

logging.basicConfig(level=logging.INFO)
function()   # message is shown

# note: basicConfig only takes effect the first time it is called, so to
# raise the level afterwards, set it on the root logger directly
logging.getLogger().setLevel(logging.WARNING)
function()   # message is suppressed
Whether it's "smart" or not, you can redefine the print function. This was part of the rationale for making it a function in Python 3. Since you'll be "shadowing" the built-in function (i.e. re-using its name, effectively redefining it) you do, of course, have to retain a reference to the built-in function so you can use it inside your redefinition.
Then a global (here, OUTPUT_REQUIRED) can determine whether or not it produces any output:
system_print = print

def print(*args, **kwargs):
    if OUTPUT_REQUIRED:
        system_print(*args, **kwargs)
The *args, **kwargs notation may not be familiar to you. Using it as the code does, in both the definition and the call, is a simple way to call system_print with the same positional and keyword arguments that your print function was called with.
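For example, a minimal usage sketch, assuming the redefinition above is in scope and OUTPUT_REQUIRED is the module-level flag:

OUTPUT_REQUIRED = True
print("monitoring is on")     # forwarded to the built-in print

OUTPUT_REQUIRED = False
print("this call is silent")  # suppressed by the flag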
You could continue to use the additional argument by explicitly naming it in the definition, and not passing it through to print:
system_print = print

def print(OUTPUT_REQUIRED, *args, **kwargs):
    if OUTPUT_REQUIRED:
        system_print(*args, **kwargs)
This represents a change to the API which would make switching back to the standard function more difficult. I'd recommend simply using a different name in this case.
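For instance, a sketch of that suggestion, with vprint as a hypothetical name for a "verbose print":

def vprint(*args, **kwargs):
    # OUTPUT_REQUIRED is the global flag mentioned above;
    # print here is still the built-in, so no shadowing is needed
    if OUTPUT_REQUIRED:
        print(*args, **kwargs)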
The logging module, while extremely comprehensive, takes a little more effort to understand.
If you want to write the message either to the terminal or to a log file, you can work with streams. You simply work with a stream object pointing to a file or to STDOUT.
import sys

if use_standard_output:
    stream = sys.stdout
else:
    stream = open('logfile', 'w')

print("print something to monitor the process", file=stream)
Related
Given the following code,
def myfunc(a=None, b=None, c=None, **kw):
    func(arga=a, argb=b, **kw)
    # do something with c

def func(arga=None, argb=None, argc=None):
    ...
Can I replicate part of the signature of func, namely the missing args, without imitating every missing arg of func manually?
To put it more simply, I want argc to appear among the keywords of myfunc, so that introspecting it (e.g. myfunc? in IPython) would show it: myfunc(a=None, b=None, c=None, argc=None).
functools.wraps allows wrapping a complete function, and functools.partial can subtract args, but I don't know how to add them.
Yes, it is possible, though not trivial.
Python's introspection capabilities allow you to check all parameters the target function declares, and it is possible to build a new function programmatically that will include those attributes automatically.
I have written this for a project of mine, and had exposed the relevant code as my answer here: Signature-changing decorator: properly documenting additional argument
I will not mark this as duplicate, since the other question is more worried about documenting the new function.
If you want to give it a try with your code, or maybe with something simpler, you can check the inspect.signature call from the standard library, which allows one to discover everything about the parameters and default arguments of the target function.
Building a new function from this information is a bit more tricky, but possible - and one can always resort to an exec call, which can create a new function from a string template. The answer linked above follows this line.
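As a minimal sketch of the discovery side only (not the full function-rebuilding machinery), inspect.signature can be used like this:

import inspect

def func(arga=None, argb=None, argc=None):
    pass

# list every parameter of the target function along with its default
sig = inspect.signature(func)
for name, param in sig.parameters.items():
    print(name, param.default)
# arga None
# argb None
# argc None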
I'm not sure what is being asked here either, but here is alternative code to functools.partial that might be adapted.
Edit: The difference here from partial is that the mkcall argument is a string rather than a series of arguments. This string can then be formatted and analysed according to whatever requirements apply before the target function is called.
def mkcall(fs, globals=None, locals=None):
    class func:
        # note: this class uses `f` where `self` is conventional
        def __init__(f, fcnm=None, params=None, globals=None, locals=None):
            f.nm = fcnm          # function name
            f.pm = params        # parameter string, including parentheses
            f.globals = globals
            f.locals = locals
        def __call__(f):
            s = f.nm + f.pm      # rebuild the call expression
            eval(s, f.globals, f.locals)
    if '(' in fs:
        funcn, lbr, r = fs.partition('(')
        tp = lbr + r
        newf = func(funcn, tp, globals, locals)
        callf = newf.__call__
    else:
        callf = eval(fs, globals, locals)
    return callf
#call examples
# mkcall("func(arg)")
# mkcall("func")
Given a simple Python function with an optional argument, like:
import time

def wait(seconds=3):
    time.sleep(seconds)
How do I create a function that calls this and passes on an optional argument? For example, this does NOT work:
def do_and_wait(message, seconds=None):
    print(message)
    wait(seconds)
Note: I want to be able to call wait from other functions with the optional seconds argument without having to know and copy the current default seconds value in the underlying wait function to every other function which calls it.
As above, if I call it with the optional argument, like do_and_wait('hello', 2), then it works, but trying to rely on wait's default, e.g. calling it like do_and_wait('hello'), causes a TypeError because inside wait, seconds == None.
Is there a simple and clean way to make this work? I know I can abuse kwargs like this:
def do_and_wait(message, **kwargs):
    print(message)
    wait(**kwargs)
But that seems unclear to the reader and user of this function since there is no useful name on the argument.
Note: This is a stupidly simplified example.
I understand you've simplified your question, but I think you mean: how can one call a function with optional arguments that could be None? Does the following work for you?
import time

def func1(mess, sec):
    if sec is not None:
        time.sleep(sec)
    print(mess)

func1('success', sec=None)
I don't think you've quite explained your problem completely, because I wouldn't expect the answer to be this simple, but I would just use the same default value (and data type) in do_and_wait() as wait() uses, like so:
def do_and_wait(message, seconds=3):
    print(message)
    wait(seconds)
After thinking a bit more, I came up with something like this; Han's answer suggested it and reminded me that I think a PEP even suggests it somewhere. This especially avoids having to know and copy the default value into every function that calls wait and wants to support a variable value for seconds.
def wait(seconds=None):
    time.sleep(seconds if seconds is not None else 3)

def do_and_wait(message, seconds=None):
    print(message)
    wait(seconds)

def something_else(callable, seconds=None):
    callable()
    wait(seconds)
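If all the cooperating functions live near each other, another sketch of the same idea keeps the default in a single module-level constant (DEFAULT_WAIT is a hypothetical name), so no caller copies the literal value:

import time

DEFAULT_WAIT = 3   # the one place the default value lives

def wait(seconds=DEFAULT_WAIT):
    time.sleep(seconds)

def do_and_wait(message, seconds=DEFAULT_WAIT):
    print(message)
    wait(seconds)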
I would like to open a file using the a_reader function. I would then like to use a second function to print the file. I want to do this because I would like to be able to call the open-file function later without printing it. Any ideas on the best way to do this? Here is some sample code which I know does not work, but it may help explain what I want to do:
def main():
    a_reader = open('C:\Users\filexxx.csv', 'r')
    fileName = a_reader.read()
    a_reader.close()
    def print():
        print fileName

main()
print()
Please see this thread from a day ago: What is the Pythonic way to avoid reference before assignment errors in enclosing scopes?
The user in that post had the exact same issue: they wanted to define a function within another function (in your case main). As advised both by me and others there: don't nest functions!
There's no need to use nested functions in Python; it just adds useless complexity that doesn't give you any real practical advantages.
I would do:
def main():
    a_reader = open('C:\\Users\\filexxx.csv', 'r')
    fileName = a_reader.read()
    a_reader.close()
    return fileName

print(main())
or
class main():
    def __init__(self):
        a_reader = open('C:\\Users\\filexxx.csv', 'r')
        self.fileName = a_reader.read()
        a_reader.close()
    def _print(self):
        print(self.fileName)

a = main()
a._print()
It's never a good idea to give your functions or classes the same names as Python's built-in functions and classes, print being one of them.
But here's a solution if you really wanna go with your original setup:
def main():
    a_reader = open('C:\\Users\\filexxx.csv', 'r')
    fileName = a_reader.read()
    a_reader.close()
    def _print():
        print(fileName)
    _print()

main()
Oh, and by the way: strings with backslashes should be escaped, or you need to use raw strings (r'..') :)
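For example, both of these lines name the same (hypothetical) path:

path = 'C:\\Users\\filexxx.csv'   # backslashes escaped
path = r'C:\Users\filexxx.csv'    # or a raw string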
First - you can't name your function print, as this is a function that already exists in Python, and doing so will cause an error.
And it seems like a class is what you are looking for to be your main(), not a function.
I don't want to rock the boat here; I actually agree with the above two answers. Perhaps a class is the right way to go, and almost certainly it would be unwise to override the native print() function.
One of Python's strengths is that it covers a whole range of programming paradigms. You want a direct, procedural implementation - Python! You want to re-use your code and make some generic, reusable classes - OOP and Python! Python also allows for functional programming - Python's built-in functions map and zip are classic examples of functional programming.
Not saying that this is what was being asked in this question, but you could do this functionally:
def my_name_function(n):
    return ' '.join(['This is my Name:', n])

def say_hello(x):
    print('Hello, world!', x)

say_hello(my_name_function('Nick'))
--> Hello, world! This is my Name: Nick
Again, I don't think this is what the question is really asking. I do agree that, in this case, the best implementation would be a class, in the OOP sense. (Probably the more Pythonic way to go even :p)
But to say there is no need for nested functions in Python, when Python leaves this option open to us? When Python has recently (over the last few years) opened the door to functional programming concepts? Nesting does have its advantages (and disadvantages) - if it didn't, there is no way that Guido, as the Benevolent Dictator for Life, would have opened this box.
If you want a_reader to be a 'function', you should call it as a function and not use it as a variable. In Python that would be by using a_reader().
The following implements a class Reader, of which an instance a_reader can be called as a function. Only at the point of calling is the file opened. As others have already indicated, you should escape backslashes in string literals ("..\\.."), or use raw strings (r"..\.."). It is also good practice to put the top-level code in a Python file under an if __name__ == '__main__': statement; that way you can import functions/classes from the file without invoking the (test) code.
class Reader(object):
    def __init__(self, file_name):
        self._file_name = file_name
        self._fp = None

    def __call__(self):
        if self._fp is None:
            # open (and cache) the file only on the first call
            self._fp = open(self._file_name, 'r')
        return self._fp

def main():
    a_reader = Reader(r"C:\Users\filexxx.csv")
    # no file opened yet
    file_content = a_reader().read()
    a_reader().close()
    print(file_content)

if __name__ == '__main__':  # only call main() if not imported
    main()
I have some tasks stored in a DB for later execution. For example, I can register a task to send an email, and a cron job executes the task (sends it) later. I'm searching for the best way to store code in the DB for later execution - for example, storing it as a raw string of Python code and then calling eval, but then I must also store the relevant imports.
For example, to send an email I must store a string like this:
s = "from django.core.mail import send_mail\nsend_mail('subj', 'body', 'email@box.ru', ['email1@box.ru'], fail_silently=False)"
and eval it later. Any ideas on the best way to do this, or maybe a better pattern for this kind of task?
What you're doing is a bad idea mainly because you allow for way too much variability in what code will be executed. A code string can do anything, and I'm guessing there are only a few kinds of tasks you want to store for later execution.
So, figure out what the variables in those tasks are (variables in a non-programming sense: things that vary), and only store those variables, perhaps as a tuple of function arguments and a dictionary of keyword arguments to be applied to a known function.
To be even more fancy, you can have some kind of container object with a bunch of functions on it, and store the name of the function to call along with its arguments. That container could be something as simple as a module into which you import functions like Django's send_mail as in your example.
Then you can store your example call like this:
import cPickle

func = 'send_mail'
args = ('subj', 'body', 'email@box.ru', ['email1@box.ru'])
kwargs = {'fail_silently': False}
my_call = cPickle.dumps((func, args, kwargs))
And use it like this:
func, args, kwargs = cPickle.loads(my_call)
getattr(my_module, func)(*args, **kwargs)
Use celery for this. That's the best approach.
http://celeryproject.org/
I wouldn't use this solution at all. I would create a different handler for each task (sending a mail, deleting a file, etc). Storing code in this manner is hackish.
EDIT
An example would be creating your own format for handlers, for example one handler per line in this format:
handlername;arg1;arg2;arg3;arg4
Next you use Python to read the lines and parse them. For example, this would be a stored line:
sendmail;nightcracker@nclabs.org;subject;body
Which would be parsed like this:
for line in database:
    handler, *args = line.split(";")
    if handler == "sendmail":
        recipient, subject, body = args[:3]
        # do stuff
    elif handler == "delfile":
        # etc.
        pass
I'd store logical commands, and execute them with something like:
def run_command(cmd):
    fields = list(map(unescape, cmd.split(";")))
    handlers[fields[0]](*fields[1:])

...

@handler("mail")
def mail_handler(address, template):
    import whatever
    ...
    send_mail(address, get_template(template) % user_info, ...)
This way you have the flexibility to add handlers without touching any code in the dispatcher, and you're not writing the code details into the database, which would make it harder to do inspections/stats or to hot-fix jobs that haven't started yet.
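A self-contained sketch of the registry this assumes (the handler decorator and handlers mapping are named in the snippet above; the details here are illustrative, and unescaping is omitted):

handlers = {}

def handler(name):
    # register a function under `name` in the dispatch table
    def register(fn):
        handlers[name] = fn
        return fn
    return register

@handler("mail")
def mail_handler(address, subject):
    print("would send mail to %s with subject %r" % (address, subject))

def run_command(cmd):
    fields = cmd.split(";")
    handlers[fields[0]](*fields[1:])

run_command("mail;alice@example.com;hello")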
To directly answer your question, eval is really only for evaluating code that will produce a result. For example:
>>> eval('1 + 1')
2
However if you simply want to execute code, possibly several lines of code, you want exec(), which by default executes inside the caller's namespace:
>>> exec("x = 5 + 5")
>>> print x
10
Note that only trusted code should be passed to either exec or eval. See also execfile to execute a file.
Having said all that, I agree with the other posters that you should find a way to programmatically do what you want instead of storing arbitrary code. You could, for example, do something like this:
def myMailCommand(...):
    ...

def myOtherCommand(...):
    ...

available_commands = {'mail': myMailCommand,
                      'other': myOtherCommand}

to_execute = [('mail', (arg1, arg2, arg3)),
              ('other', (arg1, arg2))]

for cmd, args in to_execute:
    available_commands[cmd](*args)
In the above pseudo-code, I defined two functions. Then I have a dictionary mapping actions to commands. Then I go through a data structure of actions and arguments, and call the appropriate command accordingly. You get the idea.
I have to open a file-like object in Python (it's a serial connection through /dev/) and then close it. This is done several times in several methods of my class. How I WAS doing it was opening the file in the constructor and then closing it in the destructor. I'm getting weird errors, though, and I think they have to do with the garbage collector and such; I'm still not used to not knowing exactly when my objects are being deleted =\
The reason I was doing this is that I have to use tcsetattr with a bunch of parameters each time I open it, and it gets annoying doing all that all over the place. So I want to implement an inner class to handle all that, so I can use it like:
with Meter('/dev/ttyS2') as m:
I was looking online and I couldn't find a really good answer on how the with syntax is implemented. I saw that it uses the __enter__(self) and __exit__(self) methods. But is implementing those methods all I have to do to use the with syntax? Or is there more to it?
Is there either an example on how to do this or some documentation on how it's implemented on file objects already that I can look at?
Those methods are pretty much all you need for making the object work with the with statement.
In __enter__ you have to return the file object after opening it and setting it up.
In __exit__ you have to close the file object. The code for writing to it will be in the with statement body.
class Meter():
    def __init__(self, dev):
        self.dev = dev

    def __enter__(self):
        # tcsetattr etc. goes here, before opening and returning the file object
        self.fd = open(self.dev, MODE)   # MODE is a placeholder for the open mode
        return self

    def __exit__(self, type, value, traceback):
        # exception handling goes here
        self.fd.close()

meter = Meter('/dev/tty0')
with meter as m:
    # here you work with the file object
    m.fd.read()
Easiest may be to use standard Python library module contextlib:
import contextlib

@contextlib.contextmanager
def themeter(name):
    theobj = Meter(name)
    try:
        yield theobj
    finally:
        theobj.close()   # or whatever you need to do at exit

# usage
with themeter('/dev/ttyS2') as m:
    # do what you need with m
    m.read()
This doesn't make Meter itself a context manager (and is therefore non-invasive to that class), but rather "decorates" it (not in the sense of Python's "decorator syntax", but rather almost, though not quite, in the sense of the decorator design pattern ;-) with a factory function, themeter, which is a context manager (built by the contextlib.contextmanager decorator from the "single-yield" generator function you write). This makes it much easier to separate the entering and exiting conditions, avoids nesting, etc.
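Relatedly, if Meter already exposes a close() method, contextlib.closing gives an even shorter non-invasive wrapper (a sketch under that assumption):

from contextlib import closing

with closing(Meter('/dev/ttyS2')) as m:
    m.read()   # m.close() is called automatically on exit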
The first Google hit (for me) explains it simply enough:
http://effbot.org/zone/python-with-statement.htm
and the PEP explains it more precisely (but also more verbosely):
http://www.python.org/dev/peps/pep-0343/