pexpect.spawn seems to fail when I put it in a function. In the example below, I expect it to touch both '/tmp/no.txt' and '/tmp/yes.txt'. It only touches /tmp/yes.txt. I've tried all of the obvious locations for the 'import pexpect'. It doesn't throw errors, just doesn't run. Thanks for any suggestions!
import pexpect

def fun():
    import pexpect
    fail = pexpect.spawn('touch /tmp/no.txt')

fun()
succeed = pexpect.spawn('touch /tmp/yes.txt')
$ ls /tmp/*.txt
/tmp/yes.txt
$
You need to wait for the command to finish:

import pexpect

def fun():
    # no need to import pexpect again; it is already imported at the top
    fail = pexpect.spawn('touch /tmp/no.txt')
    fail.wait()

fun()
succeed = pexpect.spawn('touch /tmp/yes.txt')
succeed.wait()
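The same fire-and-forget race exists with the standard library's subprocess module; as a point of comparison, this stdlib-only sketch makes the wait explicit (the file path here is illustrative):

```python
import os
import subprocess
import sys
import tempfile

# Spawn a child process without waiting: whether the file exists when we
# look is a race, which is exactly why the un-waited pexpect.spawn "fails".
path = os.path.join(tempfile.mkdtemp(), "yes.txt")
proc = subprocess.Popen(
    [sys.executable, "-c", "open(r'%s', 'w').close()" % path])

# Block until the child finishes; only after wait() is the file guaranteed.
proc.wait()
print(os.path.exists(path))  # True
```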
I have a Python script running in the console, and I want to create a second console for printing important messages, without running another Python script to do it.
I first tried to use win32console.AllocConsole() directly, but it failed with "Access is denied"
(seemingly because one process can attach to at most one console, according to the docs).
So I tried creating a new process using multiprocessing:
import sys, os
import win32api, win32con, win32console
import multiprocessing

def ShowConsole():
    win32console.FreeConsole()
    win32console.AllocConsole()
    sys.stdout = open("CONOUT$", "w")
    sys.stderr = open("CONOUT$", "w")
    print("Test")
    os.system("pause")

if __name__ == '__main__':
    p = multiprocessing.Process(target=ShowConsole)
    p.start()
But when I ran the code in PowerShell, it exited immediately with no message, and no new console was created.
None of the possible solutions I found on Stack Overflow works for me. What should I do?
Update: It turns out that multiprocessing.Process fails to call the ShowConsole function. I used multiprocessing.dummy.Process as an alternative and it works as expected.
The reason why multiprocessing.Process fails to call the target is still unclear.
There's nothing wrong with your example; when I run it, it pops up the console as expected. I added a "hello" in the main section to differentiate.
But since you want to pass values from the first console to the second,
here's a better example. Use put/get on a Queue to pass information from the first console to the second console.
import win32console
import multiprocessing
import time

def secondconsole(output):
    win32console.FreeConsole()
    win32console.AllocConsole()
    while True:
        print(output.get())

if __name__ == "__main__":
    output = multiprocessing.Queue()
    multiprocessing.Process(target=secondconsole, args=[output]).start()
    while True:
        print("Hello World")
        output.put("Hello to second console")  # here you provide the data to the second console
        time.sleep(3)  # sleep for 3 seconds, just for testing
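The put/get handoff itself is platform-neutral; stripped of the win32console calls, the same Queue pattern looks like this (a minimal sketch, with the second console replaced by a plain worker process that echoes back what it received):

```python
import multiprocessing

def consumer(inbox, outbox):
    # Stand-in for the second console: receive one message and echo it back.
    msg = inbox.get()
    outbox.put("second console got: " + msg)

if __name__ == "__main__":
    inbox = multiprocessing.Queue()
    outbox = multiprocessing.Queue()
    p = multiprocessing.Process(target=consumer, args=(inbox, outbox))
    p.start()
    inbox.put("Hello to second console")  # first process sends
    print(outbox.get())                   # second console got: Hello to second console
    p.join()
```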
It looks like the issue might be with the way you are trying to open the console using sys.stdout and sys.stderr. Try using the following code instead:
import sys, os
import win32api, win32con, win32console
import multiprocessing

def ShowConsole():
    win32console.FreeConsole()
    win32console.AllocConsole()
    os.dup2(win32console.GetStdHandle(win32console.STD_OUTPUT_HANDLE), sys.stdout.fileno())
    os.dup2(win32console.GetStdHandle(win32console.STD_ERROR_HANDLE), sys.stderr.fileno())
    print("Test")
    os.system("pause")

if __name__ == '__main__':
    p = multiprocessing.Process(target=ShowConsole)
    p.start()
When I set the name for a Python thread, it doesn't show up in htop or ps. The ps output only shows python as the thread name. Is there any way to set a thread name so that it shows up in system reports like these?
from threading import Thread
import time

def sleeper():
    while True:
        time.sleep(10)
        print "sleeping"

t = Thread(target=sleeper, name="Sleeper01")
t.start()
t.join()
ps -T -p {PID} output
PID SPID TTY TIME CMD
31420 31420 pts/30 00:00:00 python
31420 31421 pts/30 00:00:00 python
First install the prctl module. (On debian/ubuntu just type sudo apt-get install python-prctl)
from threading import Thread
import time
import prctl

def sleeper():
    prctl.set_name("sleeping tiger")
    while True:
        time.sleep(10)
        print "sleeping"

t = Thread(target=sleeper, name="Sleeper01")
t.start()
t.join()
This prints
$ ps -T
PID SPID TTY TIME CMD
22684 22684 pts/29 00:00:00 bash
23302 23302 pts/29 00:00:00 python
23302 23303 pts/29 00:00:00 sleeping tiger
23304 23304 pts/29 00:00:00 ps
Note: python3 users may wish to use pyprctl.
The prctl module is nice and provides many features, but it depends on the libcap-dev package. Libcap2 is most likely already installed, because it is a dependency of many packages (systemd, for example). So if you only need to set the thread name, use libcap2 via ctypes.
See Grief's improved answer below.
import ctypes
import threading

LIB = 'libcap.so.2'
try:
    libcap = ctypes.CDLL(LIB)
except OSError:
    print(
        'Library {} not found. Unable to set thread name.'.format(LIB)
    )
else:
    def _name_hack(self):
        # PR_SET_NAME = 15
        libcap.prctl(15, self.name.encode())
        threading.Thread._bootstrap_original(self)

    threading.Thread._bootstrap_original = threading.Thread._bootstrap
    threading.Thread._bootstrap = _name_hack
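To see what that prctl call actually does, here is a self-contained sketch (Linux-only; it goes through libc, which exports the same prctl symbol as libcap) that names the current thread and reads the name back from /proc:

```python
import ctypes

libc = ctypes.CDLL("libc.so.6")
PR_SET_NAME = 15  # from <linux/prctl.h>

# The kernel truncates thread names to 15 bytes plus a NUL terminator.
libc.prctl(PR_SET_NAME, b"demo-thread")

# /proc/self/comm reflects the calling thread's name.
with open("/proc/self/comm") as f:
    print(f.read().strip())  # demo-thread
```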
On Python 2, I use the following monkey patch to propagate the Thread's name to the system if prctl is installed in the system:
import threading

try:
    import prctl
    def set_thread_name(name): prctl.set_name(name)

    def _thread_name_hack(self):
        set_thread_name(self.name)
        threading.Thread.__bootstrap_original__(self)

    threading.Thread.__bootstrap_original__ = threading.Thread._Thread__bootstrap
    threading.Thread._Thread__bootstrap = _thread_name_hack
except ImportError:
    log('WARN: prctl module is not installed. You will not be able to see thread names')
    def set_thread_name(name): pass
After executing this code, you can set a thread's name as usual:
threading.Thread(target=some_target, name='Change monitor', ...)
That means that if you already set names for your threads, you don't need to change anything. I cannot guarantee this is 100% safe, but it works for me.
I was confused until I found py-spy, a tool that shows Python threads while the process is running.
Install: pip3 install -i https://pypi.doubanio.com/simple/ py-spy
Usage: py-spy dump --pid process-number
For example, py-spy dump --pid 1234 shows all the thread stacks, names, and ids of Python process 1234.
An alternative solution (actually a dirty one, since it sets the process name, not the thread name) is to use the setproctitle module from PyPI.
You can install it with pip install setproctitle and use it as follows:
import setproctitle
import threading
import time

def a_loop():
    setproctitle.setproctitle(threading.currentThread().name)
    # you can otherwise explicitly declare the name:
    # setproctitle.setproctitle("A loop")
    while True:
        print("Looping")
        time.sleep(99)

t = threading.Thread(target=a_loop, name="ExampleLoopThread")
t.start()
https://pypi.org/project/namedthreads/ provides a way to patch threading.Thread.start to call pthread_setname_np with the Python Thread.name.
It is compatible with Python 2.7 & 3.4+ (I've tested it with 3.10)
To activate it,
import namedthreads
namedthreads.patch()
Note that thread names in Python are unlimited, but pthreads has a limit of 15 characters, so the Python name will be trimmed.
I attempted to follow the answers here to install python-prctl or pyprctl. However, none of them could be installed because they need a gcc toolchain that we don't have.
After some digging on the net, Python issue 15500 gave a nice solution [https://bugs.python.org/issue15500]. Here is what I've got based on it:
import ctypes, os, threading

def set_thread_name_np(the_name):
    the_lib_path = "/lib/libpthread-2.42.so"
    if not os.path.isfile(the_lib_path):
        return None
    try:
        libpthread = ctypes.CDLL(the_lib_path)
    except OSError:
        return None
    if hasattr(libpthread, "pthread_setname_np"):
        pthread_setname_np = libpthread.pthread_setname_np
        pthread_setname_np.argtypes = [ctypes.c_void_p,
                                       ctypes.c_char_p]
        pthread_setname_np.restype = ctypes.c_int
        if isinstance(the_name, str):
            the_name = the_name.encode('ascii', 'replace')
        if type(the_name) is not bytes:
            return None
        the_thread = threading.current_thread()
        ident = getattr(the_thread, "ident", None)
        if ident is not None:
            # names longer than 15 chars are rejected by the kernel
            pthread_setname_np(ident, the_name[:15])
            return True
    return None
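A quick way to check that the call took effect is to read the name back from /proc. This variant (my own sketch, Linux-only) resolves pthread_setname_np from the running process instead of a hard-coded library path, since CPython is already linked against pthreads:

```python
import ctypes
import threading

# dlopen(NULL): look the symbol up in the running process itself.
lib = ctypes.CDLL(None)
lib.pthread_setname_np.argtypes = [ctypes.c_void_p, ctypes.c_char_p]

# On Linux, threading.Thread.ident is the thread's pthread_t.
lib.pthread_setname_np(threading.current_thread().ident, b"tiger")
print(open("/proc/self/comm").read().strip())  # tiger
```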
I'd like to know immediately when I've broken a test in Django. Instead of always running manage.py test separately, is there a way to run tests in the background when running manage.py runserver and report them to that same terminal? The tests would ideally be re-run when a file is saved, just like how the server is reloaded normally.
This would be great to detect mistakes sooner. And even better that it'd be right in your face instead of hiding behind a manual test step.
Is this possible?
I ended up overriding the management command.
app-name\management\commands\runserver.py:
from __future__ import print_function

import subprocess
from threading import Thread

from django.core.management.commands.runserver import Command as BaseCommand
# or: from devserver.management.commands.runserver import Command as BaseCommand
from django.conf import settings
from termcolor import colored

BEEP_CHARACTER = '\a'

def call_then_log():
    try:
        output = subprocess.check_output('manage.py test --failfast',
                                         stderr=subprocess.STDOUT, shell=True)
    except subprocess.CalledProcessError as ex:
        print(colored(ex.output, 'red', attrs=['bold']))
        print(BEEP_CHARACTER, end='')
        return
    print(output)

def run_background_tests():
    print('Running tests...')
    thread = Thread(target=call_then_log, name='runserver-background-tests')
    thread.daemon = True
    thread.start()

class Command(BaseCommand):
    def inner_run(self, *args, **options):
        if settings.DEBUG and not settings.TESTING:
            run_background_tests()
        super(Command, self).inner_run(*args, **options)
requirements.txt:
termcolor
This runs your tests in a background thread, which runs each time Django auto-reloads. The old thread will be stopped. If any test fails, it will beep and the first failure result will be printed in red to the terminal.
This answer is also worth reading for speeding up your tests for an even faster feedback loop.
I'm using Python 2.7.6 and I have two scripts:
outer.py:

import sys
import os

print "Outer file launching..."
os.system('inner.py')

which calls inner.py:

import sys
import os

print "[CALLER GOES HERE]"
I want the second script (inner.py) to print the name of the caller script (outer.py).
I can't pass to inner.py a parameter with the name of the first script because I have tons of called/caller scripts and I can't refactor all the code.
Any idea?
One idea is to use psutil:

#!env/bin/python
import psutil

me = psutil.Process()
parent = psutil.Process(me.ppid())
grandparent = psutil.Process(parent.ppid())
print grandparent.cmdline()

This of course depends on how you start outer.py. This solution is OS-independent.
On Linux you can get the parent process id and then its command line like so.
p1.py:

import os
os.system('python p2.py')

p2.py:

import os

pid = os.getppid()
cmd = open('/proc/%d/cmdline' % (pid,)).read()
# arguments in /proc/<pid>/cmdline are separated by NUL bytes, not spaces
caller = ' '.join(cmd.split('\x00')[1:])
print caller

Running python p1.py will yield p1.py.
I imagine you can do similar things in other OS as well.
Another, slightly shorter version for Unix only (note that os.system only returns the exit status, so use os.popen to capture the output):

import os
parent = os.popen('readlink -f /proc/%d/exe' % os.getppid()).read().strip()
If applicable to your situation, you could also simply pass an argument that lets inner.py differentiate:

import sys
import os

print "Outer file launching..."
os.system('inner.py launcher')

inner.py:

import sys
import os

# the extra argument arrives as sys.argv[1]; sys.argv[0] is the script name
if len(sys.argv) > 1 and sys.argv[1] == 'launcher':
    print 'outer.py called us'
I have the following problem. Whenever a child thread wants to perform some I/O operation (writing to a file, downloading a file), the program hangs. In the following example the program hangs on opener.retrieve. If I execute python main.py, the program blocks on the retrieve function. If I execute python ./src/tmp.py, everything is fine. I don't understand why. Can anybody explain what is happening?
I am using python2.7 on Linux system (kernel 3.5.0-27).
File ordering:
main.py
./src
__init__.py
tmp.py
main.py
import src.tmp
tmp.py
import threading
import urllib

class DownloaderThread(threading.Thread):
    def __init__(self, pool_sema, i):
        threading.Thread.__init__(self)
        self.pool_sema = pool_sema
        self.daemon = True
        self.i = i

    def run(self):
        try:
            opener = urllib.FancyURLopener({})
            opener.retrieve("http://www.greenteapress.com/thinkpython/thinkCSpy.pdf", "/tmp/" + str(self.i) + ".pdf")
        finally:
            self.pool_sema.release()

class Downloader(object):
    def __init__(self):
        maxthreads = 1
        self.pool_sema = threading.BoundedSemaphore(value=maxthreads)

    def download_folder(self):
        for i in xrange(20):
            self.pool_sema.acquire()
            print "Downloading", i
            t = DownloaderThread(self.pool_sema, i)
            t.start()

d = Downloader()
d.download_folder()
I managed to get it to work by hacking urllib.py - if you inspect it you will see many import statements dispersed within the code - i.e. it imports things 'on the fly', not just when the module loads.
So the real reason is still unknown - probably some deadlock in Python's import machinery, and not worth investigating further. You just shouldn't run nontrivial code during an import - that's asking for trouble.
If you insist, you can get it to work if you move all these weird import statements to the beginning of urllib.py.
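Following that advice, the fix on the caller's side is to keep module import side-effect free and start the threads from an explicit entry point. A minimal self-contained sketch of the restructuring (with a trivial worker standing in for the real downloads):

```python
import threading

class Downloader(object):
    """Stand-in for the question's Downloader; no real network I/O."""

    def __init__(self):
        self.results = []

    def download_folder(self):
        threads = []
        for i in range(3):
            t = threading.Thread(target=self.results.append, args=(i,))
            t.start()
            threads.append(t)
        for t in threads:
            t.join()

def main():
    d = Downloader()
    d.download_folder()
    return sorted(d.results)

# The crucial change: nothing runs at import time, so `import src.tmp`
# can never spawn threads while Python still holds the import lock.
if __name__ == "__main__":
    print(main())  # [0, 1, 2]
```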