I am running the following line of Python code within a small Windows service application that uses multiprocessing:
multiprocessing.Manager()
The issue is that there appears to be no argv attribute in the sys module when running as a Windows service. As a result, I get the following error inside the Python multiprocessing forking library. I was hoping someone might be able to shed some light on this issue.
Stacktrace of the issue (when running multiprocessing.Manager within a Windows service):
File "C:\python27\lib\multiprocessing\__init__.py", line 99, in Manager
m.start()
File "C:\python27\lib\multiprocessing\managers.py", line 524, in start
self._process.start()
File "C:\python27\lib\multiprocessing\process.py", line 130, in start
self._popen = Popen(self)
File "C:\python27\lib\multiprocessing\forking.py", line 252, in __init__
cmd = get_command_line() + [rhandle]
File "C:\python27\lib\multiprocessing\forking.py", line 339, in get_command_line
if process.current_process()._identity==() and is_forking(sys.argv):
AttributeError: 'module' object has no attribute 'argv'
Update
One possible solution is to set sys.argv manually if it is not set at runtime, but this seems very unpythonic. It might be the only option, though. What do Stack Overflow-ers think?
if not hasattr(sys, 'argv'):
    sys.argv = []
But this then leads me to a new issue with multiprocessing.Manager, whereby an unexpected EOFError occurs within the code.
File "C:\python27\lib\multiprocessing\__init__.py", line 99, in Manager
m.start()
File "C:\python27\lib\multiprocessing\managers.py", line 528, in start
self._address = reader.recv()
EOFError
Since setting sys.argv = [] didn't work: if there is a script name, you might be able to use sys.argv = ['scriptname'], or just sys.argv = ['']. The latter is what you get if you run python, import sys, and then look at sys.argv, e.g.,
~$ python
>>> import sys
>>> sys.argv
['']
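Putting the guard from the update together with that suggestion, a minimal sketch (assuming nothing else in the service inspects argv):

import sys

# Sketch: emulate the argv an interactive interpreter would provide,
# but only when the embedding host (the Windows service) left it unset.
# Whether [''] also avoids the later EOFError depends on the environment.
if not hasattr(sys, 'argv'):
    sys.argv = ['']

import multiprocessing

manager = multiprocessing.Manager()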
Call this function (from the C API) after Python initialization:
PySys_SetArgv(argc, argv);
The interpreter can't get access to argc/argv in any way other than having them passed explicitly from main().
Related
First off, I am very new to multiprocessing, and I can't seem to make a very simple and straightforward example work. This is the example I am working with:
import multiprocessing

def worker():
    """worker function"""
    print 'Worker'
    return

if __name__ == '__main__':
    jobs = []
    for i in range(5):
        p = multiprocessing.Process(target=worker)
        jobs.append(p)
        p.start()
Every time I run the code I get this error multiple times:
C:\Anaconda2\lib\site-packages\IPython\utils\traitlets.py:5: UserWarning: IPython.utils.traitlets has moved to a top-level traitlets package.
warn("IPython.utils.traitlets has moved to a top-level traitlets package.")
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Anaconda2\lib\multiprocessing\forking.py", line 381, in main
self = load(from_parent)
File "C:\Anaconda2\lib\pickle.py", line 1384, in load
return Unpickler(file).load()
File "C:\Anaconda2\lib\pickle.py", line 864, in load
dispatch[key](self)
File "C:\Anaconda2\lib\pickle.py", line 1096, in load_global
klass = self.find_class(module, name)
File "C:\Anaconda2\lib\pickle.py", line 1132, in find_class
klass = getattr(mod, name)
AttributeError: 'module' object has no attribute 'worker'
I know that this question is very vague, but if anyone could point me in the right direction I would appreciate it.
I am on Windows and I run it in Anaconda with Python 2.7; the code is exactly the same as above, nothing more, nothing less. I run it directly in the console in the IDE.
EDIT: It looks like when I run the code directly in the command prompt it works just fine, but running it in the console in Anaconda doesn't. Does anybody know why?
Anaconda doesn't like multiprocessing, as explained in this answer.
From the answer:
This is because multiprocessing does not work well in the interactive interpreter. The main reason is that there is no fork() function applicable on Windows. It is explained on their web page itself.
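In practice, that means the target function must live in a module the spawned child can re-import, and the script must be run non-interactively. A minimal sketch, assuming the file is saved under a hypothetical name like worker_demo.py and run from a command prompt:

# worker_demo.py -- run with "python worker_demo.py" from a command
# prompt, not from an interactive console: on Windows each child
# process re-imports this module, so it must exist as an importable file.
import multiprocessing

def worker():
    """worker function"""
    print('Worker')

if __name__ == '__main__':
    jobs = []
    for i in range(5):
        p = multiprocessing.Process(target=worker)
        jobs.append(p)
        p.start()
    for p in jobs:
        p.join()  # wait for all workers to finish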
Thank you!
I am trying to run SUMO through the TraCI interface. I copy-pasted this example from this link. The code is as follows:
import os, sys
import subprocess

if 'SUMO_HOME' in os.environ:
    tools = os.path.join(os.environ['SUMO_HOME'], 'tools')
    sys.path.append(tools)
else:
    sys.exit("please declare environment variable 'SUMO_HOME'")

PORT = 8813
sumoBinary = "C:/Program Files (x86)/DLR/Sumo/bin/sumo-gui"
sumoProcess = subprocess.Popen([sumoBinary, "-c", "example.sumocfg", \
    "--remote-port", str(PORT)], stdout=sys.stdout, stderr=sys.stderr)

import traci
import traci.constants as tc

traci.init(PORT)
traci.vehicle.subscribe(vehID, (tc.VAR_ROAD_ID, tc.VAR_LANEPOSITION))
print(traci.vehicle.getSubscriptionResults(vehID))
for step in range(3):
    print("step", step)
    traci.simulationStep()
    print(traci.vehicle.getSubscriptionResults(vehID))
traci.close()
When I try to run the code, it throws the following error:
File "C:\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 685, in runfile
execfile(filename, namespace)
File "C:\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 85, in execfile
exec(compile(open(filename, 'rb').read(), filename, 'exec'), namespace)
File "C:/Users/Raja/Documents/vehicomPhd/SUMOTraffic/traci.py", line 22, in <module>
"--remote-port", str(PORT)], stdout=sys.stdout, stderr=sys.stderr)
File "C:\Anaconda3\lib\subprocess.py", line 823, in __init__
errread, errwrite) = self._get_handles(stdin, stdout, stderr)
File "C:\Anaconda3\lib\subprocess.py", line 1037, in _get_handles
c2pwrite = msvcrt.get_osfhandle(stdout.fileno())
File "C:\Anaconda3\lib\site-packages\IPython\kernel\zmq\iostream.py", line 205, in fileno
raise UnsupportedOperation("IOStream has no fileno.")
UnsupportedOperation: IOStream has no fileno.
Does anyone know what is wrong?
Looks like you're running in an IPython notebook. It has non-standard "standard" I/O streams that can't be used like "true" file objects (because they're really data queues, not pipes, so they don't have a file descriptor to use for low-level I/O).
You can't use them with libraries (like subprocess) that perform low-level I/O; the error is there to tell you this. You'll need to use a real file-like object, possibly something as simple as sending output to a tempfile.TemporaryFile and then copying the output from the file to stdout if that's what you need.
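A minimal sketch of that workaround, using a placeholder command in place of the real sumo-gui invocation:

import subprocess
import sys
import tempfile

# Sketch: hand Popen a real file object instead of the notebook's
# pseudo-streams, then copy whatever the child wrote back to sys.stdout.
with tempfile.TemporaryFile() as fh:
    proc = subprocess.Popen(["some-command", "--help"], stdout=fh, stderr=fh)
    proc.wait()
    fh.seek(0)
    sys.stdout.write(fh.read().decode("utf-8", errors="replace"))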
It's possible it would work if you just didn't pass Popen the stdout and stderr arguments at all; the default behavior for subprocess is to use the same stdout and stderr as the parent, so if there are valid file handles open (even if the notebook replaced sys.stdout/sys.stderr for Python use), it might "just work" (where "just work" includes the possibility that data sent to the underlying file descriptors 1 and 2 is discarded, so you never see it).
Or just don't run in the ipython notebook.
The problem was that my command line used Python 2 whereas the Spyder IDE used Python 3. Since I won't need to pass any more arguments to stdin, I removed the option and simply tried to open sumo-gui with subprocess as follows. It works now.
PORT = 8813
sumoBinary = 'C:/Program Files (x86)/DLR/Sumo/bin/sumo-gui'
sumoProcess = subprocess.Popen([sumoBinary, "-c", "Kaiserslautern.sumocfg", \
"--remote-port", str(PORT)])
Consider a Linux platform where I need to execute a program called smart.exe which uses the file input.dat. Both files are placed in the same directory, and each has file permissions 777.
Now if I run the following command in the terminal window, smart.exe executes fully without any error.
$ ./smart.exe input.dat
On the other hand, if I use the following Python script, my_script.py, placed in the same directory, then I get an error.
my_script.py has the following code:
#!/usr/bin/python
import os, subprocess
exit_code = subprocess.call("./smart.exe input.dat", shell = False)
The error is as follows:
File "my_script.py", line 4, in <module>
exit_code = subprocess.call("./smart.exe input.dat", shell = False)
File "/usr/lib64/python2.6/subprocess.py", line 478, in call
p = Popen(*popenargs, **kwargs)
File "/usr/lib64/python2.6/subprocess.py", line 642, in __init__
errread, errwrite)
File "/usr/lib64/python2.6/subprocess.py", line 1234, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
Can someone please tell me why this is happening? Please note that smart.exe takes around 10 seconds to fully complete; this may be a clue to the problem.
Please also advise if there is any other way to run smart.exe from my_script.py. Your solution is much appreciated!
You should decide if you want shell support or not.
If you want the shell to be used (which is not necessary here), you should use exit_code = subprocess.call("./smart.exe input.dat", shell=True). Then the shell interprets your command line.
If you don't want it (as you don't need it and want to avoid unnecessary complexity), you should do exit_code = subprocess.call(["./smart.exe", "input.dat"], shell=False).
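Both variants side by side, for clarity:

import subprocess

# Shell form: a single string, interpreted by the shell.
exit_code = subprocess.call("./smart.exe input.dat", shell=True)

# No-shell form: a list of arguments, executed directly (preferred here).
exit_code = subprocess.call(["./smart.exe", "input.dat"], shell=False)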
(And there is no point naming your binaries .exe under Linux.)
My friend and I tried to reproduce this mininet-test experiment: https://github.com/mininet/mininet-tests/tree/master/dctcp
We created a VM and installed Mininet 2.2 on Ubuntu with kernel version 3.18.9, which includes the DCTCP and tcp_probe functionality we needed, since the kernel version the author suggests (3.0.1) didn't support it. We changed some function names in dctcp.py, such as add_host to addHost, add_switch to addSwitch, and add_link to addLink.
We also installed some packages we needed, such as python-matplotlib, python-termcolor, and bwm-ng.
But we still encounter the following problem when plotting the graph (cwnd.png).
Are we missing some important library, or does some code need to change?
Traceback (most recent call last):
File "dctcp.py", line 250, in <module>
main()
File "dctcp.py", line 244, in main
net.stop()
File "build/bdist.linux-x86_64/egg/mininet/net.py", line 514, in stop
File "build/bdist.linux-x86_64/egg/mininet/link.py", line 479, in stop
File "build/bdist.linux-x86_64/egg/mininet/link.py", line 472, in delete
File "build/bdist.linux-x86_64/egg/mininet/link.py", line 199, in delete
File "build/bdist.linux-x86_64/egg/mininet/link.py", line 64, in cmd
File "build/bdist.linux-x86_64/egg/mininet/node.py", line 350, in cmd
File "build/bdist.linux-x86_64/egg/mininet/node.py", line 269, in sendCmd
AssertionError
s1
s1-eth1
s1-eth2
s1-eth3
total
['tcp-n3-bw100/qlen_s1-eth1.txt']
I ran into this as well. If you look at https://github.com/bigswitch/mininet/blob/master/mininet/node.py, in the monitor() function you will see that it sets the waiting flag to False.
So the code can be
h1.sendCmd(startbackground_service)
h2.cmd(something_else)
h3.cmd(use_h1_service)
.....
#at some point end h1's background service, naturally or unnaturally
h1.monitor() # will check the service, and set waiting=False
For me, h1.monitor() didn't work in a similar situation on Mininet 2.3.0, so I replaced sendCmd() with popen() and then used the terminate() function.
Using the example above I changed it to:
p1 = h1.popen(startbackground_service)
h2.cmd(something_else)
h3.cmd(use_h1_service)
.....
p1.terminate()
The code seems to set a waiting flag at the end of a sendCmd() call, and the flag is checked to be False on every run of the method. It seems to be reset when the result of the command is received.
As a result, you can't send a second command until the response to the first is received. This might just be a race condition, but IMHO the library should handle this case in a more clever and transparent way.
I have a Python CGI script that runs an application via subprocess over and over again (several thousand times). I keep getting the same error...
Traceback (most recent call last):
File "/home/linuser/Webpages/cgi/SnpEdit.py", line 413, in <module>
webpage()
File "/home/linuser/Webpages/cgi/SnpEdit.py", line 406, in main
displayOmpResult(form['odfFile'].value)
File "/home/linuser/Webpages/cgi/SnpEdit.py", line 342, in displayContainerDiv
makeSection(position,sAoiInput)
File "/home/linuser/Webpages/cgi/SnpEdit.py", line 360, in displayData
displayTable(i,j,lAmpAndVars,dOligoSet[key],position)
File "/home/linuser/Webpages/cgi/SnpEdit.py", line 247, in displayTable
p = subprocess.Popen(['/usr/bin/pDat',sInputFileLoc,sOutputFileLoc],stdout=fh, stderr=fh)
File "/usr/lib/python2.6/subprocess.py", line 633, in __init__
errread, errwrite)
File "/usr/lib/python2.6/subprocess.py", line 1039, in _execute_child
errpipe_read, errpipe_write = os.pipe()
OSError: [Errno 24] Too many open files
The function causing it is below.
def displayTable(sData):
    # convert the data to the proper format
    sFormattedData = convertToFormat(sData)

    # write the formatted data to file
    sInputFile = tempfile.mkstemp(prefix='In_')[1]
    fOpen = open(sInputFile,'w')
    fOpen.write(sFormattedData)
    fOpen.close()

    sOutputFileLoc = sInputFile.replace('In_','Out_')

    # run the app; it requires two files, an input and an output
    # temp file to hold stdout and stderr of the subprocess
    fh = tempfile.TemporaryFile(mode='w',dir=tempfile.gettempdir())
    p = subprocess.Popen(['/usr/bin/pDat',sInputFileLoc,sOutputFileLoc],stdout=fh, stderr=fh)
    p.communicate()
    fh.close()

    # open the output file and parse the data into a list of dictionaries
    sOutput = open(sOutputFileLoc).read()
    lOutputData = parseOutput(sOutput)

    displayTableHeader(lOutputData)
    displaySimpleTable(lOutputData)
As far as I can tell, I'm closing the files properly. When I run...
import resource
print resource.getrlimit(resource.RLIMIT_NOFILE)
I get...
(1024, 1024)
Do I have to increase this value? I read that subprocess opens several file descriptors. I tried adding close_fds=True, and I tried using the with statement when creating my file, but the result was the same. I suspect the problem may be with the application I'm running via subprocess, pDat, but that program was made by someone else. It requires two inputs: an input file and the location where you want the output file written. I suspect it may not be closing the output file that it creates. Aside from this, I can't see what I might be doing wrong. Any suggestions? Thanks.
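For reference, if the limit did need raising, the resource module can lift the soft limit up to the hard limit from inside the process (a sketch; going beyond the hard limit requires root):

import resource

# Sketch: raise the soft RLIMIT_NOFILE up to the current hard limit.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
print resource.getrlimit(resource.RLIMIT_NOFILE)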
EDIT:
I'm on Ubuntu 10.04 running Python 2.6.5 and Apache 2.2.14.
Instead of this...
sInputFile = tempfile.mkstemp(prefix='In_')[1]
fOpen = open(sInputFile,'w')
fOpen.write(sFormattedData)
fOpen.close()
I should have done this...
iFileHandle,sInputFile = tempfile.mkstemp(prefix='In_')
fOpen = open(sInputFile,'w')
fOpen.write(sFormattedData)
fOpen.close()
os.close(iFileHandle)
The mkstemp function returns an OS-level handle to the file, and I wasn't closing it. The solution is described in more detail here...
http://www.logilab.org/blogentry/17873
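An equivalent sketch using os.fdopen, which reuses the descriptor mkstemp already returned instead of opening the file a second time:

import os
import tempfile

sFormattedData = "example data"  # stands in for the question's formatted data

# Sketch: wrap the mkstemp descriptor directly; a single close() then
# releases both the Python file object and the OS-level handle.
iFileHandle, sInputFile = tempfile.mkstemp(prefix='In_')
fOpen = os.fdopen(iFileHandle, 'w')
fOpen.write(sFormattedData)
fOpen.close()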
You want to add close_fds=True to the popen call (just in case).
Then, here:
# open output file and print parsed data into a list of dictionaries
sOutput = open(sOutputFileLoc).read()
lOutputData = parseOutput(sOutput)
...I might remember wrong, but unless you use the with syntax, I do not think that the output file descriptor has been closed.
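With the with syntax (names as in the question's code), that read would look like:

# The context manager closes the output file as soon as the block exits,
# instead of relying on garbage collection to do it eventually.
with open(sOutputFileLoc) as fOutput:
    sOutput = fOutput.read()
lOutputData = parseOutput(sOutput)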
UPDATE: the main problem is that you need to know which files are open. On Windows this would require something like Process Explorer. On Linux it's a bit simpler: you just have to invoke the CGI from the command line, or make sure that there is only one instance of the CGI running, and fetch its pid with the ps command.
Once you have the pid, run ls -la on the contents of the /proc/<PID>/fd directory. All open file descriptors will be there, with the names of the files they point to. Knowing that file so-and-so is open 377 times goes a long way toward finding out where exactly that file is opened (but not closed).
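The same inspection can be done from inside the process itself; a minimal Python 2 sketch:

import os

# Sketch: list this process's own open descriptors on Linux, the same
# information ls -la /proc/<PID>/fd shows from the outside.
fd_dir = '/proc/%d/fd' % os.getpid()
for fd in os.listdir(fd_dir):
    try:
        print fd, '->', os.readlink(os.path.join(fd_dir, fd))
    except OSError:
        pass  # the descriptor may have closed between listdir and readlink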