I have some simple code that tests browser latency by opening multiple Selenium instances:
with Pool(processes=args.number_of_browsers) as pool:
    for i in range(args.number_of_browsers):
        logging.info("Starting job on browser #" + str(i))
        pool.apply_async(run, args=(args.refresh_rate, args.jitter, args.duration, args.url, str(i)))
For the purposes of the question, the run function could be as simple as:
def run(*args):
    logging.debug("ANYTHING")
I haven't been able to figure out how to get console output from the worker processes in the pool.
Here is a basic working logging example in Python:
import logging
logging.basicConfig(format='%(asctime)s %(levelname)s:%(message)s',
                    datefmt='%Y%m%d%H%M%S%p', level=logging.DEBUG)
NODE_NAME = 'Test'
logger = logging.getLogger(NODE_NAME)
logger.info('hello')
Note that fully correct logging, especially across multiple processes, needs more configuration than this.
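To address the multiprocessing part of the question directly, here is a minimal sketch (the init_worker function and the simplified run signature are illustrative, not from the original post): configure logging inside each worker via the Pool's initializer argument, and wait on the AsyncResult objects so the pool is not torn down before the jobs run.

import logging
from multiprocessing import Pool

def init_worker():
    # Runs once in each worker process, so every worker gets a console handler.
    logging.basicConfig(
        format='%(asctime)s %(processName)s %(levelname)s: %(message)s',
        level=logging.DEBUG)

def run(browser_id):
    # Stand-in for the real Selenium job.
    logging.debug("ANYTHING from browser #%s", browser_id)

if __name__ == '__main__':
    with Pool(processes=4, initializer=init_worker) as pool:
        results = [pool.apply_async(run, args=(i,)) for i in range(4)]
        for r in results:
            r.get()  # block until each job finishes; re-raises worker exceptions

Waiting on the results matters here: leaving the with block calls pool.terminate(), which can kill the workers before they log anything, and apply_async swallows worker exceptions (such as a run signature mismatch) until .get() is called.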
So, I created a small program that uses Flask to receive requests and do a few things with Selenium. All the bits that deal with Selenium are in another file, which I first tried to run in a thread and, when that did not work, in a process. I believe the problem is that I use a while True loop to keep Selenium working. The Selenium part knows what to do because it keeps checking a variable that I update from the Flask part...
This is pretty much my main module: it runs Selenium and then starts Flask, but Flask never starts; it gets stuck on the .start() call.
import logging
import sys
from multiprocessing import Process

import selenium_file
# app is the Flask application object, defined elsewhere

if __name__ == "__main__":
    # Logging
    log_format = '%(asctime)s [%(filename)s:%(lineno)d] %(message)s'
    logging.basicConfig(format=log_format,
                        level=logging.INFO,
                        stream=sys.stdout)

    # Start Selenium
    browser = Process(target=selenium_file.run_stuff())
    browser.start()
    print('TEST')

    # Flask
    app.run(debug=True)
Not really sure how I could solve this problem (if it's a problem)...
Replace browser = Process(target=selenium_file.run_stuff()) with browser = Process(target=selenium_file.run_stuff).
You are not passing the function run_stuff; you are calling it right there, so it blocks your program until run_stuff returns, which it never does because of the while True loop.
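For clarity, a minimal sketch of the corrected main block (selenium_file and app are the modules from the question):

if __name__ == "__main__":
    # Pass the function object itself; Process calls it in the child process.
    browser = Process(target=selenium_file.run_stuff)
    browser.start()

    # The parent process is no longer blocked, so Flask starts normally.
    app.run(debug=True)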
I have a python-daemon process that logs to a file via a ThreadedTCPServer (inspired by the cookbook example: https://docs.python.org/2/howto/logging-cookbook.html#sending-and-receiving-logging-events-across-a-network, as I will have many such processes writing to the same file). I control the spawning of the daemon process using subprocess.Popen from an IPython console, and this is how the application will be run. I can successfully write to the log file from both the main IPython process and the daemon process, but I am unable to change the level of both by simply setting the level of the root logger in IPython. Is this something that should be possible, or will it require custom functionality to set the logging level of the daemon separately?
Edit: As requested, here is an attempt to provide a pseudo-code example of what I am trying to achieve. I hope that this is a sufficient description.
daemon_script.py
import logging
import daemon
from other_module import function_to_run_as_daemon
class Daemon(object):  # renamed so it does not shadow the imported daemon module
    def __init__(self):
        self.daemon_name = __name__
        logging.basicConfig(level=logging.DEBUG)  # <--- required, or I don't get any log messages
        self.logger = logging.getLogger(self.daemon_name)
        self.logger.debug("Created logger successfully")

    def run(self):
        # basicConfig puts the console handler on the root logger, so that is
        # where the stream to preserve lives
        with daemon.DaemonContext(files_preserve=[logging.getLogger().handlers[0].stream]):
            self.logger.debug("Daemonised successfully - about to enter function")
            function_to_run_as_daemon()

if __name__ == "__main__":
    d = Daemon()
    d.run()
Then in IPython I would run something like:
>>> import logging
>>> rootlogger = logging.getLogger()
>>> rootlogger.info( "test" )
INFO:root:"test"
>>> subprocess.Popen( ["python" , "daemon_script.py"] )
DEBUG:__main__:"Created logger successfully"
DEBUG:__main__:"Daemonised successfully - about to enter function"
# Now I'm finished debugging and testing; I want to reduce the level for all the loggers by changing the level of the handler
# Note that I also tried changing the level of the root logger itself, but saw no change
>>> rootlogger.handlers[0].setLevel(logging.INFO)
>>> rootlogger.info( "test" )
INFO:root:"test"
>>> print( rootlogger.debug("test") )
None
>>> subprocess.Popen( ["python" , "daemon_script.py"] )
DEBUG:__main__:"Created logger successfully"
DEBUG:__main__:"Daemonised successfully - about to enter function"
I think that I may not be approaching this correctly, but it's not clear to me what would work better. Any advice would be appreciated.
The logger you create in your daemon won't be the same as the logger you made in IPython: the daemon runs in a separate process, with its own logging state. You could test this to be sure by printing out both logger objects (or their id()s), which will show you that they are distinct objects.
I think a better pattern would be to pass whether or not you want "debug" mode when you run the daemon. In other words, call Popen like this:
subprocess.Popen( ["python" , "daemon_script.py", "debug"] )
It's up to you: you could pass a string meaning "debug mode is on" as above, or you could pass the numeric log level that means "debug", e.g.:
subprocess.Popen( ["python" , "daemon_script.py", "10"] )
(https://docs.python.org/2/library/logging.html#levels)
Then in the daemon's __init__ function, use sys.argv, for example, to get that argument and use it:
...
import sys

def __init__(self):
    self.daemon_name = __name__
    logging.basicConfig()  # <--- required, or I don't get any log messages
    log_level = int(sys.argv[1])  # Probably don't actually just blindly convert it without error handling
    self.logger = logging.getLogger(self.daemon_name)
    self.logger.setLevel(log_level)
...
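Picking up the warning in that comment, here is a hedged sketch of slightly safer argument handling (parse_log_level is my name for it, and falling back to WARNING is an illustrative choice, not part of the original answer):

import logging
import sys

def parse_log_level(argv):
    # Fall back to WARNING if the argument is missing or not an integer.
    try:
        return int(argv[1])
    except (IndexError, ValueError):
        return logging.WARNING

The daemon would then be launched with, say, subprocess.Popen(["python", "daemon_script.py", "10"]) for debug output, and "20" (INFO) once testing is done.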
The following code used to emit logs at some point, but no longer seems to do so. Shouldn't configuration of the logging mechanism in each worker permit logs to appear on stdout? If not, what am I overlooking?
import logging

import numpy as np
from distributed import Client, LocalCluster

def func(args):
    i, x = args
    logging.basicConfig(level=logging.INFO,
                        format='%(asctime)s %(name)s %(levelname)s %(message)s')
    logger = logging.getLogger('func %i' % i)
    logger.info('computing svd')
    return np.linalg.svd(x)

if __name__ == '__main__':
    lc = LocalCluster(10)
    c = Client(lc)
    data = [np.random.rand(50, 50) for i in range(50)]
    fut = c.map(func, zip(range(len(data)), data))
    results = c.gather(fut)
    lc.close()
As per this question, I tried putting the logger configuration code into a separate function invoked via c.run(init_logging) right after instantiation of the client, but that didn't make any difference either.
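For concreteness, that attempt looked roughly like this (init_logging is the function name used above; the body mirrors the per-task configuration):

def init_logging():
    # Runs once on every worker via Client.run.
    logging.basicConfig(level=logging.INFO,
                        format='%(asctime)s %(name)s %(levelname)s %(message)s')

c.run(init_logging)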
I'm using distributed 1.19.3 with Python 3.6.3 on Linux. I have
logging:
  distributed: info
  distributed.client: info
in ~/.dask/config.yaml.
Evidently the submitted functions do not actually execute until one tries to retrieve the results from the generated futures, i.e., using the line
print(list(results))
before shutting down the local cluster. I'm not sure how to reconcile this with the section in the online docs that seems to state that direct submissions to a cluster are executed immediately.
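In other words, the end of the main block that actually produced log output looks like this (only the print line is new relative to the code above):

    fut = c.map(func, zip(range(len(data)), data))
    results = c.gather(fut)
    print(list(results))  # retrieving the results here is what made the workers' logs appear
    lc.close()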
I am using autobahn[twisted] for some WAMP communication. After subscribing to a topic and receiving its feed, I print it. When I do, I get something like this:
2016-09-25T21:13:29+0200 (u'USDT_ETH', u'12.94669009', u'12.99998074', u'12.90000334', u'0.00035594', u'18396.86929477', u'1422.19525455', 0, u'13.14200000', u'12.80000000')
I have spent far too many hours trying to get rid of it. And yes, I tested printing other things; they print without this timestamp. This is my code:
from twisted.internet.defer import inlineCallbacks
from autobahn.twisted.wamp import ApplicationSession, ApplicationRunner

class PushReactor(ApplicationSession):
    @inlineCallbacks
    def onJoin(self, details):
        print "subscribed"
        yield self.subscribe(self.onTick, u'ticker')

    def onTick(self, *args):
        print args

if __name__ == '__main__':
    runner = ApplicationRunner(u'wss://api.poloniex.com', u'realm1')
    runner.run(PushReactor)
How can I remove this timestamp?
Well, sys.stderr and sys.stdout are redirected to a Twisted logger, so you need to change the logging configuration before running your app.
See: https://twistedmatrix.com/documents/15.2.1/core/howto/logger.html
How to reproduce
You can reproduce your problem with this simple application:
from autobahn.twisted.wamp import ApplicationRunner
if __name__ == '__main__':
print("hello1")
runner = ApplicationRunner(u'wss://api.poloniex.com', u'realm1')
print("hello2")
runner.run(None)
print("hello3")
When the process is killed, you'll see:
hello1
hello2
2016-09-26T14:08:13+0200 Received SIGINT, shutting down.
2016-09-26T14:08:13+0200 Main loop terminated.
2016-09-26T14:08:13+0200 hello3
During application launch, stdout (and stderr) are redirected to a file-like object (of class twisted.logger._io.LoggingFile). Every call to print or write is turned into Twisted log messages (one per line). The redirection is done in the class twisted.logger._global.LogBeginner; look at the beginLoggingTo method.
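A hedged sketch of one way around it: begin Twisted's logging yourself, before runner.run, with redirectStandardIO=False so that print output is left untouched (the keyword exists on beginLoggingTo in twisted.logger, but how this interacts with ApplicationRunner's own logging setup is an assumption worth verifying; PushReactor is the session class from the question):

import sys

from twisted.logger import globalLogBeginner, textFileLogObserver
from autobahn.twisted.wamp import ApplicationRunner

if __name__ == '__main__':
    # Start logging explicitly, keeping stdout/stderr unredirected,
    # so plain print calls keep their normal, timestamp-free format.
    globalLogBeginner.beginLoggingTo(
        [textFileLogObserver(sys.stdout)],
        redirectStandardIO=False)

    runner = ApplicationRunner(u'wss://api.poloniex.com', u'realm1')
    runner.run(PushReactor)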
I would like to capture log output from a Python-based subprocess. Here is part of my code. How do I redirect my log to this text ctrl as well?
Here is mytest.py:
import logging

log = logging.getLogger('test')

class MyTestClass():
    def TestFunction(self):
        log.info("start function")
        # runs for 5 - 10 mins and has lots of log statements
        print "some stuff"
        log.info("after Test Function")
        # for now; a and b are defined in the real code
        return a, b
        #sys.exit(2)

if __name__ == "__main__":
    myApp = MyTestClass()
    myApp.TestFunction()
I am doing something of this sort in my maingui:
class WxLog(logging.Handler):
    def __init__(self, ctrl):
        logging.Handler.__init__(self)
        self.ctrl = ctrl

    def emit(self, record):
        if self.ctrl:
            self.ctrl.AppendText(self.format(record) + "\n")
And in my GUI:
self.log = wx.TextCtrl(self, -1, "", style=wx.TE_MULTILINE | wx.TE_RICH2)
#logging.basicConfig(level=logging.INFO)
self.logr = logging.getLogger('')
self.logr.setLevel(logging.INFO)
hdlr = WxLog(self.log)
hdlr.setFormatter(logging.Formatter('%(message)s '))
self.logr.addHandler(hdlr)
#snip
prog = os.path.join(mydir, "mytest.py")
params = [sys.executable, prog]

# Start the subprocess
outmode = subprocess.PIPE
errmode = subprocess.STDOUT
self._proc = subprocess.Popen(params,
                              stdout=outmode,
                              stderr=errmode,
                              shell=True)

# Read from stdout while there is output from process
while self._proc.poll() is None:
    txt = self._proc.stdout.readline()
    print txt
    # also direct log to txt ctrl

txt = 'Return code was ' + str(self._proc.returncode) + '\n'
# direct
self.logr.info("On end ")
You can try following the suggestion in this post.
Update: You can set the logger in the subprocess to use a SocketHandler and set up a socket server in the GUI to listen for messages from the subprocess, using the technique in the linked-to post to actually make things appear in the GUI. A working socket server is included in the logging documentation.
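A hedged sketch of that SocketHandler idea on the subprocess side (the port is the logging module's default; the receiving end would be the cookbook's ThreadedTCPServer-based receiver, adapted to call self.logr.handle(record) so the WxLog handler appends the text to the TextCtrl):

# In mytest.py (the subprocess): ship log records to the GUI over TCP.
import logging
import logging.handlers

log = logging.getLogger('test')
log.setLevel(logging.INFO)
log.addHandler(logging.handlers.SocketHandler(
    'localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT))

log.info("start function")  # now travels to whatever listens on that port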
I wrote an article about how I redirect a few things like ping and traceroute using subprocess to my TextCtrl widget here: http://www.blog.pythonlibrary.org/2010/06/05/python-running-ping-traceroute-and-more/
That might help you figure it out. Here's a more generic article that doesn't use subprocess: http://www.blog.pythonlibrary.org/2009/01/01/wxpython-redirecting-stdout-stderr/
I haven't tried redirecting with the logging module yet, but that may be something I'll do in the future.