I am using the multiprocessing module, and using pools to start multiple workers. But the file descriptors which are opened in the parent process are closed in the worker processes. I want them to stay open! Is there any way to pass file descriptors so they are shared between the parent and the children?
On Python 2 and Python 3, functions for sending and receiving file descriptors exist in the multiprocessing.reduction module.
Example code (Python 2 and Python 3):
import multiprocessing
import os
# Before fork
child_pipe, parent_pipe = multiprocessing.Pipe(duplex=True)
child_pid = os.fork()
if child_pid:
    # Inside parent process
    import multiprocessing.reduction
    import socket
    # socket_to_pass is the socket object we want to pass to the child
    socket_to_pass = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    socket_to_pass.connect("/dev/log")
    # the child_pid argument to send_handle() can be arbitrary on Unix,
    # on Windows it has to be the child's PID
    multiprocessing.reduction.send_handle(parent_pipe, socket_to_pass.fileno(), child_pid)
    socket_to_pass.send("hello from the parent process\n".encode())
else:
    # Inside child process
    import multiprocessing.reduction
    import socket
    import os
    fd = multiprocessing.reduction.recv_handle(child_pipe)
    # rebuild the socket object from fd, using the same family and type
    # as the socket created in the parent
    received_socket = socket.fromfd(fd, socket.AF_UNIX, socket.SOCK_DGRAM)
    # socket.fromfd() duplicates fd, so we can close the received one
    os.close(fd)
    # and now you can communicate using the received socket
    received_socket.send("hello from the child process\n".encode())
There is also a fork of multiprocessing called multiprocess, which replaces pickle with dill. dill can pickle file descriptors, and thus multiprocess can easily pass them between processes.
>>> f = open('test.txt', 'w')
>>> _ = map(f.write, 'hello world')
>>> f.close()
>>> import multiprocess as mp
>>> p = mp.Pool()
>>> f = open('test.txt', 'r')
>>> p.apply(lambda x:x, f)
'hello world'
>>> f.read()
'hello world'
>>> f.close()
multiprocessing itself has helpers for transferring file descriptors between processes, in multiprocessing.reduction: send_handle and recv_handle. They work on Windows and on Unix platforms that support passing file descriptors over Unix domain sockets. They are not documented, but they are listed in the module's __all__, so it may be safe to assume they are part of the public API. From the source, they appear to have been available since at least Python 2.6 and 3.3.
All platforms have the same interface:
send_handle(conn, handle, destination_pid)
recv_handle(conn)
Where:
conn (multiprocessing.Connection): connection over which to send the file descriptor
handle (int): integer referring to file descriptor/handle
destination_pid (int): integer pid of the process that is receiving the file descriptor - this is currently only used on Windows
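For illustration, here is a minimal sketch of passing an already-open file's descriptor to a child process this way (assuming Unix and Python 3, with /etc/hostname standing in for any readable file):
import multiprocessing
import os
from multiprocessing.reduction import send_handle, recv_handle

def child(conn):
    fd = recv_handle(conn)         # receive the raw descriptor from the parent
    with os.fdopen(fd, 'r') as f:  # wrap it in a regular file object
        print('child read:', f.read())

if __name__ == '__main__':
    parent_conn, child_conn = multiprocessing.Pipe()  # default duplex Pipe uses a Unix socketpair
    p = multiprocessing.Process(target=child, args=(child_conn,))
    p.start()
    with open('/etc/hostname') as f:
        send_handle(parent_conn, f.fileno(), p.pid)   # the pid argument only matters on Windows
    p.join()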
There isn't a way that I know of to share file descriptors between processes.
If a way exists, it is most likely OS specific.
My guess is that you need to share data on another level.
The destination path /tmp/abc is used by both Process 1 and Process 2.
Say there are N such processes running; we need to retain the file generated by the latest one.
Process 1
import shutil
shutil.move(src_path, destination_path)
Process 2
import os
os.remove(destination_path)
Solution
1. Handle the case where the copy fails with [Errno 2] No such file or directory.
Is this the correct solution? Is there a better way to handle this?
Useful link: A safe, atomic file-copy operation
You can catch the FileNotFoundError:
import shutil

try:
    shutil.move(src_path, destination_path)
except FileNotFoundError:
    print('File Not Found')
    # Add whatever logic you want to execute
except Exception:
    print('Some other error')
The primary solutions that come to mind are either
Keep partial information in per-process staging file
Rely on the os for atomic moves
or
Keep partial information in per-process memory
Rely on interprocess communication / locks for atomic writes
For the first, e.g.:
import tempfile
import os
FINAL = '/tmp/something'
def do_stuff():
    fd, name = tempfile.mkstemp(suffix="-%s" % os.getpid())
    while keep_doing_stuff():
        os.write(fd, get_output())
    os.close(fd)
    os.rename(name, FINAL)

if __name__ == '__main__':
    do_stuff()
You can choose to invoke individually from a shell (as shown above) or with some process wrappers (subprocess or multiprocessing would be fine), and either way will work.
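For example, a minimal sketch of driving do_stuff() from a parent process with multiprocessing instead of launching each worker from a shell (the worker count of 2 is arbitrary):
from multiprocessing import Process

if __name__ == '__main__':
    # each worker writes to its own temp file and atomically renames it to FINAL
    workers = [Process(target=do_stuff) for _ in range(2)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()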
For interprocess you would probably want to spawn everything from a parent process
from multiprocessing import Process, Lock
from cStringIO import StringIO
def do_stuff(lock):
    output = StringIO()
    while keep_doing_stuff():
        output.write(get_output())
    with lock:
        with open(FINAL, 'w') as f:
            f.write(output.getvalue())
    output.close()

if __name__ == '__main__':
    lock = Lock()
    for num in range(2):
        Process(target=do_stuff, args=(lock,)).start()
I'm trying to start a new process from within an already created namespace (named 'test').
I've looked into a few methods including nsenter:
import subprocess
from nsenter import Namespace
with Namespace(mypid, 'net'):
    # output network interfaces as seen from within mypid's net NS:
    subprocess.check_output(['ip', 'a'])
But I can't seem to find a reference for where to get the variable mypid!
Ideally I'd like to keep dependencies like nsenter to a minimum (for portability), so I'd probably go down the ctypes route, something like this (although there is no dedicated syscall wrapper for netns...):
nsname = 'test'
netnspath = '%s%s' % ('/run/netns/', nsname)
netnspath = netnspath.encode('ascii')
libc = ctypes.CDLL('libc.so.6')
print dir(libc)
fd = open(netnspath)
print libc.syscall(???, fd.fileno())
OR (taken from http://tech.zalando.com/posts/entering-kernel-namespaces-with-python.html)
import ctypes
libc = ctypes.CDLL('libc.so.6')
# replace MYPID with the container's PID
fd = open('/proc/<MYPID>/ns/net')
libc.setns(fd.fileno(), 0)
# we are now inside MYPID's network namespace
However, I still have to know the PID, plus my libc does not have setns!
Any thoughts on how I could obtain the PID would be great!
TIA!
The problem with the nsenter module is that you need to provide it with the PID of a process that is already running inside your target namespace. This means that you can't actually use this module to make use of a network namespace that you have created using something like ip netns add.
The kernel's setns() system call takes a file descriptor rather than a PID. If you're willing to solve it with ctypes, you can do something like this:
from ctypes import cdll
libc = cdll.LoadLibrary('libc.so.6')
_setns = libc.setns
CLONE_NEWIPC = 0x08000000
CLONE_NEWNET = 0x40000000
CLONE_NEWUTS = 0x04000000
def setns(fd, nstype):
    if hasattr(fd, 'fileno'):
        fd = fd.fileno()
    _setns(fd, nstype)

def get_netns_path(nspath=None, nsname=None, nspid=None):
    '''Generate a filesystem path from a namespace name or pid,
    and return a filesystem path to the appropriate file.  Returns
    the nspath argument if both nsname and nspid are None.'''
    if nsname:
        nspath = '/var/run/netns/%s' % nsname
    elif nspid:
        nspath = '/proc/%d/ns/net' % nspid
    return nspath
If your libc doesn't have the setns() call, you may be out of luck (although where are you running that you have a kernel recent enough to support network namespaces but a libc that doesn't?).
Assuming you have a namespace named "blue" available (ip netns add blue), you can run:
with open(get_netns_path(nsname="blue")) as fd:
    setns(fd, CLONE_NEWNET)
    subprocess.check_call(['ip', 'a'])
Note that you must run this code as root.
This works; however, I'm unsure what the 0 does as part of the syscall. So if someone could enlighten me, that would be great!
import ctypes
nsname = 'test'
netnspath = '%s%s' % ('/run/netns/', nsname)
netnspath = netnspath.encode('ascii')
libc = ctypes.CDLL('libc.so.6')
fd = open(netnspath)
print libc.syscall(308, fd.fileno(), 0)
After finding this question we've updated python-nsenter so it is now able to enter namespaces via an arbitrary path in addition to providing the pid.
For example if you wanted to enter a namespace created by ip netns add you can now do something like:
with Namespace('/var/run/netns/foo', 'net'):
    # do something in the namespace
    pass
Version 0.2 is now available via PyPi with this update.
How can I apply multiprocessing to my function? I tried it like this, but it did not work.
def steric_clashes_parallel(system):
    rna_st = system[MolWithResID("G")].molecule()
    for i in system.molNums():
        peg_st = system[i].molecule()
        if rna_st != peg_st:
            print(peg_st)
            for i in rna_st.atoms(AtomIdx()):
                for j in peg_st.atoms(AtomIdx()):
                    # print(Vector.distance(i.evaluate().center(), j.evaluate().center()))
                    dist = Vector.distance(i.evaluate().center(), j.evaluate().center())
                    if dist < 2:
                        return print("there is a steric clash")
    return print("there is no steric clashes")

mix = PDB().read("clash_1.pdb")
system = System()
system.add(mix)

from multiprocessing import Pool
p = Pool(4)
p.map(steric_clashes_parallel, system)
I have thousands of pdb or system files to test with this function. It took 2 hours for one file on a single core without the multiprocessing module. Any suggestion would be a great help.
My traceback looks something like this:
self.run()
  File "/home/sajid/sire.app/bundled/lib/python3.3/threading.py", line 858, in run
    self._target(*self._args, **self._kwargs)
  File "/home/sajid/sire.app/bundled/lib/python3.3/multiprocessing/pool.py", line 351, in _handle_tasks
    put(task)
  File "/home/sajid/sire.app/bundled/lib/python3.3/multiprocessing/connection.py", line 206, in send
    ForkingPickler(buf, pickle.HIGHEST_PROTOCOL).dump(obj)
RuntimeError: Pickling of "Sire.System._System.System" instances is not enabled
(boost.org/libs/python/doc/v2/pickle.html)
The problem is that Sire.System._System.System can't be serialized so it can't be sent to the child process. Multiprocessing uses the pickle module for serialization and you can frequently do a sanity check in the main program with pickle.dumps(my_mp_object) to verify.
You have another problem, though (or I think you do, based on the variable names). The map method takes an iterable and fans its iterated objects out to pool members, but it appears that you want to process system itself, not something it iterates over.
One trick to multiprocessing is to keep the payload that you send from the parent to the child simple and let the child do the heavy lifting of creating its objects. Here, you might be better off just sending down filenames and letting the children do most of the work.
def steric_clashes_from_file(filename):
    mix = PDB().read(filename)
    system = System()
    system.add(mix)
    steric_clashes_parallel(system)

def steric_clashes_parallel(system):
    rna_st = system[MolWithResID("G")].molecule()
    for i in system.molNums():
        peg_st = system[i].molecule()
        if rna_st != peg_st:
            print(peg_st)
            for i in rna_st.atoms(AtomIdx()):
                for j in peg_st.atoms(AtomIdx()):
                    # print(Vector.distance(i.evaluate().center(), j.evaluate().center()))
                    dist = Vector.distance(i.evaluate().center(), j.evaluate().center())
                    if dist < 2:
                        return print("there is a steric clash")
    return print("there is no steric clashes")

filenames = ["clash_1.pdb",]

from multiprocessing import Pool
p = Pool(4)
# chunksize is an argument of map(), not of the Pool constructor
p.map(steric_clashes_from_file, filenames, chunksize=1)
@martineau:
I tested the pickle command and it gave me:
----> 1 pickle.dumps(clash_1.pdb)
RuntimeError: Pickling of "Sire.Mol._Mol.MoleculeGroup" instances is not enabled (http://www.boost.org/libs/python/doc/v2/pickle.html)
----> 1 pickle.dumps(system)
RuntimeError: Pickling of "Sire.System._System.System" instances is not enabled (http://www.boost.org/libs/python/doc/v2/pickle.html)
With your script it took the same time, using a single core only. The dist loop is iterable, though; can I run this single loop over multiple cores? I modified the lines as:
for i in rna_st.atoms(AtomIdx()):
    icent = i.evaluate().center()
    for j in peg_st.atoms(AtomIdx()):
        dist = Vector.distance(icent, j.evaluate().center())
There is one trick you can do to get a faster computation for each file -- processing each file sequentially, but processing the contents of the file in parallel. This relies on a number of caveats:
You are running on a system that can fork processes (such as Linux).
The computations you are doing do not have side effects that affect the result of future computations.
It seems like this is the case in your situation, but I can't be 100% sure.
When a process is forked, all the memory in the child process is duplicated from the parent process (what's more, it is duplicated in an efficient manner -- bits of memory that are only read from aren't duplicated). This makes it easy to share big, complex initial states between processes. However, once the child processes have started, they will not see any subsequent changes to objects made in the parent process (and vice versa).
Sample code:
import multiprocessing

system = None
rna_st = None

class StericClash(Exception):
    """Exception used to halt processing of a file. Could be modified to
    include information about what caused the clash if this is useful."""
    pass

def steric_clashes_parallel(system_index):
    peg_st = system[system_index].molecule()
    if rna_st != peg_st:
        for i in rna_st.atoms(AtomIdx()):
            for j in peg_st.atoms(AtomIdx()):
                dist = Vector.distance(i.evaluate().center(),
                                       j.evaluate().center())
                if dist < 2:
                    raise StericClash()

def process_file(filename):
    global system, rna_st

    # initialise global values before creating pool
    mix = PDB().read(filename)
    system = System()
    system.add(mix)
    rna_st = system[MolWithResID("G")].molecule()

    with multiprocessing.Pool() as pool:
        # contents of file processed in parallel
        try:
            pool.map(steric_clashes_parallel, range(system.molNums()))
        except StericClash:
            # terminate called to halt current jobs and further processing
            # of file
            pool.terminate()
            # wait for pool processes to terminate before returning
            pool.join()
            return False
        else:
            pool.close()
            pool.join()
            return True
        finally:
            # reset globals
            system = rna_st = None

if __name__ == "__main__":
    for filename in get_files_to_be_processed():
        # files are being processed in serial
        result = process_file(filename)
        save_result_to_disk(filename, result)
I'm trying to understand how to use the new AsyncIO functionality in Python 3.4, and I'm struggling with how to use event_loop.add_reader(). From the limited discussions that I've found, it looks like it's for reading the standard out of a separate process as opposed to the contents of an open file. Is that true? If so, it appears that there's no AsyncIO-specific way to integrate standard file IO; is this also true?
I've been playing with the following code. Running it raises PermissionError: [Errno 1] Operation not permitted from line 399 of /python3.4/selectors.py (self._epoll.register(key.fd, epoll_events)), which is triggered by the add_reader() line below.
import asyncio
import urllib.parse
import sys
import pdb
import os
def fileCallback(*args):
    pdb.set_trace()
path = sys.argv[1]
loop = asyncio.get_event_loop()
#fd = os.open(path, os.O_RDONLY)
fd = open(path, 'r')
#data = fd.read()
#print(data)
#fd.close()
pdb.set_trace()
task = loop.add_reader(fd, fileCallback, fd)
loop.run_until_complete(task)
loop.close()
EDIT
For those looking for an example of how to use AsyncIO to read more than one file at a time like I was curious about, here's an example of how it can be accomplished. The secret is in the line yield from asyncio.sleep(0). This essentially pauses the current function, putting it back in the event loop queue, to be called after all other ready functions are executed. Functions are determined to be ready based on how they were scheduled.
import asyncio
@asyncio.coroutine
def read_section(file, length):
    yield from asyncio.sleep(0)
    return file.read(length)

@asyncio.coroutine
def read_file(path):
    fd = open(path, 'r')
    retVal = []
    cnt = 0
    while True:
        cnt = cnt + 1
        data = yield from read_section(fd, 102400)
        print(path + ': ' + str(cnt) + ' - ' + str(len(data)))
        if len(data) == 0:
            break
    fd.close()

paths = ["loadme.txt", "loadme also.txt"]

loop = asyncio.get_event_loop()

tasks = []
for path in paths:
    tasks.append(asyncio.async(read_file(path)))

loop.run_until_complete(asyncio.wait(tasks))
loop.close()
These functions expect a file descriptor, that is, the underlying integers the operating system uses, not Python's file objects. File objects that are based on file descriptors return that descriptor on the fileno() method, so for example:
>>> sys.stderr.fileno()
2
In Unix, file descriptors can be attached to files or a lot of other things, including other processes.
Edit for the OP's edit:
As Max says in the comments, you cannot use epoll on local files (and asyncio uses epoll). Yes, that's kind of weird. You can use it on pipes, though; for example:
import asyncio
import sys

def fileCallback(*args):
    print("Received: " + sys.stdin.readline())

loop = asyncio.get_event_loop()
loop.add_reader(sys.stdin.fileno(), fileCallback)
loop.run_forever()
This will echo stuff you write on stdin.
you cannot use add_reader on local files, because:
It cannot be done using select/poll/epoll
It depends on the operating system
It cannot be fully asynchronous because of os limitations (linux does not support async fs metadata read/write)
But, technically, yes, you should be able to do asynchronous filesystem reads/writes: (almost) all systems have DMA mechanisms for doing I/O "in the background". And no, local I/O is not so fast that nobody would want this; the CPU is on the order of millions of times faster than disk I/O.
Look at aiofile or aiofiles if you want to try async file I/O.
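For example, a minimal sketch using the third-party aiofiles package (assuming it is installed, e.g. pip install aiofiles); it offloads the blocking file I/O to a thread pool so the event loop stays responsive, here reusing the file names from the question:
import asyncio
import aiofiles

async def read_file(path):
    # runs the blocking read in a worker thread behind an async interface
    async with aiofiles.open(path, 'r') as f:
        return await f.read()

async def main():
    paths = ["loadme.txt", "loadme also.txt"]
    results = await asyncio.gather(*(read_file(p) for p in paths))
    for path, data in zip(paths, results):
        print(path + ': ' + str(len(data)))

asyncio.run(main())  # Python 3.7+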
I'm using python's ftplib to write a small FTP client, but some of the functions in the package don't return string output, but print to stdout. I want to redirect stdout to an object which I'll be able to read the output from.
I know stdout can be redirected into any regular file with:
sys.stdout = open("file", "a")
But I prefer a method that doesn't uses the local drive.
I'm looking for something like the BufferedReader in Java that can be used to wrap a buffer into a stream.
from cStringIO import StringIO # Python3 use: from io import StringIO
import sys
old_stdout = sys.stdout
sys.stdout = mystdout = StringIO()
# blah blah lots of code ...
sys.stdout = old_stdout
# examine mystdout.getvalue()
There is a contextlib.redirect_stdout() function in Python 3.4+:
import io
from contextlib import redirect_stdout
with io.StringIO() as buf, redirect_stdout(buf):
    print('redirected')
    output = buf.getvalue()
Here's a code example that shows how to implement it on older Python versions.
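A minimal sketch of such an implementation, using contextlib.contextmanager (the name redirect_stdout_compat is only illustrative):
import sys
from contextlib import contextmanager

@contextmanager
def redirect_stdout_compat(new_target):
    old_target, sys.stdout = sys.stdout, new_target  # swap in the replacement stream
    try:
        yield new_target
    finally:
        sys.stdout = old_target                      # always restore the original
Usage mirrors the 3.4+ version: wrap the code in with io.StringIO() as buf, redirect_stdout_compat(buf): and read buf.getvalue() inside the block.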
Just to add to Ned's answer above: you can use this to redirect output to any object that implements a write(str) method.
This can be used to good effect to "catch" stdout output in a GUI application.
Here's a silly example in PyQt:
import sys
from PyQt4 import QtGui
class OutputWindow(QtGui.QPlainTextEdit):
    def write(self, txt):
        self.appendPlainText(str(txt))

app = QtGui.QApplication(sys.argv)
out = OutputWindow()
sys.stdout = out
out.show()
print "hello world !"
A context manager for python3:
import sys
from io import StringIO
class RedirectedStdout:
    def __init__(self):
        self._stdout = None
        self._string_io = None

    def __enter__(self):
        self._stdout = sys.stdout
        sys.stdout = self._string_io = StringIO()
        return self

    def __exit__(self, type, value, traceback):
        sys.stdout = self._stdout

    def __str__(self):
        return self._string_io.getvalue()
use like this:
>>> with RedirectedStdout() as out:
...     print('asdf')
...     s = str(out)
...     print('bsdf')
>>> print(s, out)
'asdf\n' 'asdf\nbsdf\n'
Starting with Python 2.6 you can use anything implementing the TextIOBase API from the io module as a replacement.
This solution also enables you to use sys.stdout.buffer.write() in Python 3 to write (already) encoded byte strings to stdout (see stdout in Python 3).
Using StringIO wouldn't work then, because neither sys.stdout.encoding nor sys.stdout.buffer would be available.
A solution using TextIOWrapper:
import sys
from io import TextIOWrapper, BytesIO
# setup the environment
old_stdout = sys.stdout
sys.stdout = TextIOWrapper(BytesIO(), sys.stdout.encoding)
# do something that writes to stdout or stdout.buffer
# get output
sys.stdout.seek(0) # jump to the start
out = sys.stdout.read() # read output
# restore stdout
sys.stdout.close()
sys.stdout = old_stdout
This solution works for Python 2 >= 2.6 and Python 3.
Please note that our new sys.stdout.write() only accepts unicode strings and sys.stdout.buffer.write() only accepts byte strings.
This might not be the case for old code, but is often the case for code that is built to run on Python 2 and 3 without changes, which again often makes use of sys.stdout.buffer.
You can build a slight variation that accepts unicode and byte strings for write():
class StdoutBuffer(TextIOWrapper):
    def write(self, string):
        try:
            return super(StdoutBuffer, self).write(string)
        except TypeError:
            # redirect encoded byte strings directly to buffer
            return super(StdoutBuffer, self).buffer.write(string)
You don't have to set the encoding of the buffer to sys.stdout.encoding, but this helps when using this method for testing/comparing script output.
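A brief usage sketch of this class (Python 3; write_through=True is passed here so that text written through the wrapper is not buffered separately from bytes written directly to the underlying buffer):
import sys
from io import BytesIO

buf_stdout = StdoutBuffer(BytesIO(), sys.stdout.encoding, write_through=True)
old_stdout, sys.stdout = sys.stdout, buf_stdout

print('unicode text')                            # str goes through TextIOWrapper.write()
sys.stdout.write('already encoded\n'.encode())   # bytes fall back to buffer.write()

sys.stdout = old_stdout
buf_stdout.seek(0)        # jump back to the start
print(buf_stdout.read())  # both lines were captured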
This method restores sys.stdout even if there's an exception. It also gets any output before the exception.
import io
import sys

real_stdout = sys.stdout
fake_stdout = io.BytesIO()  # or perhaps io.StringIO()
try:
    sys.stdout = fake_stdout
    # do what you have to do to create some output
finally:
    sys.stdout = real_stdout
    output_string = fake_stdout.getvalue()
    fake_stdout.close()
    # do what you want with the output_string
Tested in Python 2.7.10 using io.BytesIO()
Tested in Python 3.6.4 using io.StringIO()
Bob, I've added this in case you feel anything from the modified / extended code experimentation might be interesting in any sense; otherwise feel free to delete it.
Ad informandum ... a few remarks from extended experimentation while finding some viable mechanics to "grab" output that numexpr.print_versions() directs straight to <stdout> (upon a need to keep the GUI clean and to collect details into a debugging report).
# THIS WORKS LIKE HELL, as Bob Stein proposed years ago (py2, surprised a bit):
import io
import os
import sys

real_stdout = sys.stdout                        # PUSH  <stdout> ( store the REAL_ one )
fake_stdout = io.BytesIO()                      # .DEF  FAKE_ one
try:                                            # FUSED .TRY:
    sys.stdout.flush()                          # .flush() before
    sys.stdout = fake_stdout                    # .SET  <stdout> to use the FAKE_ one
    # ------------------------------------------- do what you gotta do to create some output
    print 123456789
    import numexpr
    QuantFX.numexpr.__version__                 # [3] captured via the fake_stdout re-assignment, buffered,
    QuantFX.numexpr.print_versions()            # [4] read "late" via .getvalue() and finally printed to real_stdout
    _ = os.system( 'echo os.system() redir-ed' )    # [1] goes straight to real_stdout; the ( _ = ) keeps the
                                                    #     returned value out of fake_stdout
    _ = os.write( sys.stderr.fileno(),              # [2] goes to stderr instead; if routed via fake_stdout it
                  b'os.write() redir-ed' )          #     raises io.UnsupportedOperation: fileno, even though
                                                    #     the _io.BytesIO object advertises a .fileno attribute
    # -------------------------------------------
finally:                                        # == FINALLY:
    sys.stdout.flush()                          # .flush() before returning the REAL_ one
    sys.stdout = real_stdout                    # .SET  <stdout> back to the POP'd REAL_ one
    sys.stdout.flush()                          # .flush() after  returning the REAL_ one
    out_string = fake_stdout.getvalue()         # .GET  the string from the FAKE_ one
    fake_stdout.close()                         # <FD>.close()

# do what you want with the out_string:
print "\n{0:}\n{1:}{0:}".format( 60 * "/\\",    # "LATE" deferred print of the out_string,
                                 out_string     #  at the very end, finally reaching real_stdout
                                 )
'''
PASS'd:::::
...
os.system() redir-ed
os.write() redir-ed
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
123456789
'2.5'
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Numexpr version: 2.5
NumPy version: 1.10.4
Python version: 2.7.13 |Anaconda 4.0.0 (32-bit)| (default, May 11 2017, 14:07:41) [MSC v.1500 32 bit (Intel)]
AMD/Intel CPU? True
VML available? True
VML/MKL version: Intel(R) Math Kernel Library Version 11.3.1 Product Build 20151021 for 32-bit applications
Number of threads used by default: 4 (out of 4 detected cores)
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
>>>
EXC'd :::::
...
os.system() redir-ed
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
123456789
'2.5'
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Numexpr version: 2.5
NumPy version: 1.10.4
Python version: 2.7.13 |Anaconda 4.0.0 (32-bit)| (default, May 11 2017, 14:07:41) [MSC v.1500 32 bit (Intel)]
AMD/Intel CPU? True
VML available? True
VML/MKL version: Intel(R) Math Kernel Library Version 11.3.1 Product Build 20151021 for 32-bit applications
Number of threads used by default: 4 (out of 4 detected cores)
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
Traceback (most recent call last):
File "<stdin>", line 9, in <module>
io.UnsupportedOperation: fileno
'''
In Python 3, the StringIO and cStringIO modules are gone; you should use io.StringIO instead. So you should do it like the first answer:
import sys
from io import StringIO

old_stdout = sys.stdout
old_stderr = sys.stderr
my_stdout = sys.stdout = StringIO()
my_stderr = sys.stderr = StringIO()

# blah blah lots of code ...

sys.stdout = old_stdout
sys.stderr = old_stderr

# if you want to see the redirected output, be sure stdout has been restored first
print(my_stdout.getvalue())
print(my_stderr.getvalue())

my_stdout.close()
my_stderr.close()
Use pipe() and write to the appropriate file descriptor.
https://docs.python.org/library/os.html#file-descriptor-operations
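For instance, a minimal sketch of that file-descriptor-level approach using os.pipe() and os.dup2(); unlike the sys.stdout tricks above, redirecting fd 1 itself also captures output written by C extensions or child processes:
import os
import sys

read_fd, write_fd = os.pipe()
saved_stdout_fd = os.dup(1)            # keep a copy of the real stdout fd

sys.stdout.flush()
os.dup2(write_fd, 1)                   # fd 1 now points at the pipe's write end
os.write(1, b'captured at the fd level\n')
sys.stdout.flush()
os.dup2(saved_stdout_fd, 1)            # restore the real stdout
os.close(write_fd)
os.close(saved_stdout_fd)

captured = os.read(read_fd, 4096)      # small output, so a single read is enough
os.close(read_fd)
print(captured.decode())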
Here's another take on this. contextlib.redirect_stdout with io.StringIO() as documented is great, but it's still a bit verbose for everyday use. Here's how to make it a one-liner by subclassing contextlib.redirect_stdout:
import sys
import io
from contextlib import redirect_stdout
class capture(redirect_stdout):
    def __init__(self):
        self.f = io.StringIO()
        self._new_target = self.f
        self._old_targets = []  # verbatim from parent class

    def __enter__(self):
        self._old_targets.append(getattr(sys, self._stream))  # verbatim from parent class
        setattr(sys, self._stream, self._new_target)          # verbatim from parent class
        return self  # instead of self._new_target in the parent class

    def __repr__(self):
        return self.f.getvalue()
Since __enter__ returns self, you have the context manager object available after the with block exits. Moreover, thanks to the __repr__ method, the string representation of the context manager object is, in fact, the captured stdout. So now you have:
with capture() as message:
    print('Hello World!')

print(str(message) == 'Hello World!\n')  # returns True