I have a Python application bundled with PyInstaller's --onefile method. When running with the multiprocessing start method spawn, it raises an error in the middle of the application.
Traceback (most recent call last):
File "web.py", line 1028, in <module>
File "PyInstaller/hooks/rthooks/pyi_rth_multiprocessing.py", line 49, in _freeze_support
File "multiprocessing/spawn.py", line 105, in spawn_main
File "multiprocessing/spawn.py", line 114, in _main
File "multiprocessing/spawn.py", line 225, in prepare
File "multiprocessing/spawn.py", line 277, in _fixup_main_from_path
File "runpy.py", line 261, in run_path
File "runpy.py", line 231, in _get_code_from_file
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/_MEIRtGMqX/web.py'
[11797] Failed to execute script 'web' due to unhandled exception!
The fork start method seems to be fine, but it sometimes hangs due to a resource lock issue, so I prefer spawn. Does anyone have any idea why this error occurs?
I have been using freeze_support() as mentioned in the multiprocessing documentation, even though it doesn't have any effect on Linux.
from multiprocessing import freeze_support, set_start_method

if __name__ == "__main__":
    freeze_support()
    set_start_method('spawn')
OS - Amazon linux 2
Python - 3.6.8
PyInstaller - 4.10
As I understand spawn, it starts a new Python interpreter whenever a new process is created and tells it to import the main module and then execute. My main module is web.py, so the error happens when a new process is created. But this issue is not consistent.
I think the reason for this problem is that the program cannot find web.py after packaging.
You can try adding the following code wherever you build paths to bundled files.
import os
import sys

if getattr(sys, 'frozen', False):
    bundle_dir = sys._MEIPASS
else:
    bundle_dir = os.path.dirname(os.path.abspath(__file__))
This code can help the program find the file after packaging.
PyInstaller's documentation about this: https://pyinstaller.org/en/stable/runtime-information.html
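For example, a minimal sketch of how you might use bundle_dir at runtime (config.ini is just a placeholder file name I made up):
import os
import sys

# Same resolution as above: sys._MEIPASS when frozen by PyInstaller,
# the script's own directory otherwise.
if getattr(sys, 'frozen', False):
    bundle_dir = sys._MEIPASS
else:
    bundle_dir = os.path.dirname(os.path.abspath(__file__))

# Hypothetical example: read a data file that was bundled with the app.
config_path = os.path.join(bundle_dir, 'config.ini')
with open(config_path) as f:
    config_text = f.read()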
At this point, the only solution I have found is to start multiprocessing with the forkserver method. I am not sure whether it can lead to any hangs in the application.
set_start_method('forkserver')
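For completeness, a minimal sketch of how I am using it (the worker function is just a placeholder); like spawn, forkserver must be set once before any process is started:
from multiprocessing import Process, freeze_support, set_start_method

def worker(n):
    # Placeholder work; replace with the real task.
    print("worker", n, "running")

if __name__ == "__main__":
    freeze_support()
    set_start_method('forkserver')
    procs = [Process(target=worker, args=(i,)) for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()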
I ran into a similar issue when processes were spawned by threads. Using the "spawn" method from pyinstaller binaries on Linux is not thread-safe. If you are spawning processes concurrently from multiple threads, this issue can arise.
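For context, here is a stripped-down sketch of the kind of pattern that intermittently failed for me inside a PyInstaller onefile binary (the worker and thread count are illustrative only):
import threading
from multiprocessing import Process, freeze_support, set_start_method

def work():
    pass  # placeholder task

def spawn_from_thread():
    p = Process(target=work)
    p.start()
    p.join()

if __name__ == "__main__":
    freeze_support()
    set_start_method('spawn')
    # Several threads each creating spawn-based processes at the same time.
    threads = [threading.Thread(target=spawn_from_thread) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()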
Follow https://github.com/pyinstaller/pyinstaller/issues/7410 for updates.
UPDATE
Per the changelog, a fix has been released in PyInstaller 5.8.0.
Related
I am writing a simple network scanner with Python using Scapy. The following is my code:
import scapy.all as scapy

def scan(ip):
    scapy.arping(ip)

scan("192.168.1.1/24")
The error I am getting:
Traceback (most recent call last):
File "ipScanner.py", line 10, in <module>
scan("192.168.1.1/24")
File "ipScanner.py", line 8, in scan
scapy.arping(ip)
File "/Users/omairkhan/opt/anaconda3/lib/python3.7/site-packages/scapy/layers/l2.py", line 648, in arping
filter="arp and arp[7] = 2", timeout=timeout, iface_hint=net, **kargs) # noqa: E501
File "/Users/omairkhan/opt/anaconda3/lib/python3.7/site-packages/scapy/sendrecv.py", line 553, in srp
filter=filter, nofilter=nofilter, type=type)
File "/Users/omairkhan/opt/anaconda3/lib/python3.7/site-packages/scapy/arch/bpf/supersocket.py", line 242, in __init__
super(L2bpfListenSocket, self).__init__(*args, **kwargs)
File "/Users/omairkhan/opt/anaconda3/lib/python3.7/site-packages/scapy/arch/bpf/supersocket.py", line 62, in __init__
(self.ins, self.dev_bpf) = get_dev_bpf()
File "/Users/omairkhan/opt/anaconda3/lib/python3.7/site-packages/scapy/arch/bpf/core.py", line 114, in get_dev_bpf
raise Scapy_Exception("No /dev/bpf handle is available !")
scapy.error.Scapy_Exception: No /dev/bpf handle is available !
Exception ignored in: <function _L2bpfSocket.__del__ at 0x105984c20>
Traceback (most recent call last):
File "/Users/omairkhan/opt/anaconda3/lib/python3.7/site-packages/scapy/arch/bpf/supersocket.py", line 139, in __del__
self.close()
File "/Users/omairkhan/opt/anaconda3/lib/python3.7/site-packages/scapy/arch/bpf/supersocket.py", line 211, in close
if not self.closed and self.ins is not None:
AttributeError: 'L2bpfSocket' object has no attribute 'ins'
Can anyone please help me understand it?
NOTE: I am running it on macOS.
I wrote this exact program, with matching syntax, when I first started programming, and it ran correctly on my systems when run as administrator. I develop on Linux and Windows rather than Mac, but I will offer what I can.
Are you running this script through your IDE or calling it from the shell?
I recommend only running it from the shell. This simply gives you more control, like specifying which version of Python runs the script, and if you need administrative privileges for a script, you can elevate its permissions in the shell.
Also, on my OS, I was taught to always add (and I have experienced the consequences of forgetting this):
#!/usr/bin/env python
as the first line of every script. At least on Linux, it tells the system how to treat the file (it tells it to treat the file as a Python file; yes, I acknowledge that it is already being run as Python). I would check whether that is valid for the macOS file system.
Most of what I have recommended so far comes down to the "No /dev/bpf handle is available" error, which has only ever been an issue for me when I am not running the script as an administrator (although Linux reports it as permission denied). I should also mention that using Anaconda on Windows in the past (before I understood the structure of my file systems) prevented me from using common modules like pygame and scapy. My guess in that case is that Anaconda hid the real location of those modules by making the system think it had its own copy under the Anaconda directory when the module actually lived on a different PATH.
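As a concrete check, here is a minimal sketch of the same script with an explicit privilege check before scanning; this assumes the only problem is permissions on /dev/bpf, and the message text is my own:
import os
import scapy.all as scapy

def scan(ip):
    # arping opens a raw BPF socket, which normally requires root on macOS.
    scapy.arping(ip)

if __name__ == "__main__":
    if os.geteuid() != 0:
        raise SystemExit("Re-run with sudo: opening /dev/bpf requires root privileges.")
    scan("192.168.1.1/24")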
Big picture: I'm trying to automate my deployment process of building with Maven and deploying to a WebLogic server. Little picture: I'm using subprocess to see if I can call Maven from within Python. When I attempt this, subprocess treats mvn as a file it cannot find.
Here is my code so far:
import subprocess

def main():
    print(subprocess.check_output(["mvn", "-v"]))

if __name__ == '__main__':
    main()
And here's my error:
C:\pythondev\python.exe "C:/pythondev/development/deployment scripts/redploy-to-localhost.py"
Traceback (most recent call last):
File "C:/pythondev/development/deployment scripts/redploy-to-localhost.py", line 9, in <module>
main()
File "C:/pythondev/development/deployment scripts/redploy-to-localhost.py", line 5, in main
subprocess.check_output(["a"])
File "C:\pythondev\lib\subprocess.py", line 376, in check_output
**kwargs).stdout
File "C:\pythondev\lib\subprocess.py", line 453, in run
with Popen(*popenargs, **kwargs) as process:
File "C:\pythondev\lib\subprocess.py", line 756, in __init__
restore_signals, start_new_session)
File "C:\pythondev\lib\subprocess.py", line 1155, in _execute_child
startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified
Process finished with exit code 1
Although my issue is with subprocess I'm open to answers that suggest a better alternative.
I ran into the same issue and was hesitant to use shell=True, because the internet tells me this is evil.
When I run where mvn in my cmd.exe, I can see that there are two matches:
mvn, which is a Unix shell-script (it starts with #!/bin/sh),
mvn.cmd, which is a Windows batch file.
I think what happens when you execute mvn something -something in cmd.exe is the following: Windows tries finding an executable called mvn. It finds the mvn file, but realizes that this file is not executable. It then tries finding files like mvn.com, mvn.exe, ... (see the %PATHEXT% system variable). When it finds mvn.cmd, it executes that and everyone is happy.
As far as I understand it, the problem with subprocess.check_output (and subprocess.run, and so on) is that the path-"expansion" via %PATHEXT% is not being performed. So the solution is that you have to give the extension manually and run your command as
print(subprocess.check_output(["mvn.cmd", "-v"]))
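Alternatively, a small sketch of my own that lets Python resolve the full executable name for you; shutil.which honours %PATHEXT% on Windows, so it finds mvn.cmd without you hard-coding the extension:
import shutil
import subprocess

# Resolve "mvn" to its actual executable (e.g. ...\mvn.cmd) using PATH and PATHEXT.
mvn = shutil.which("mvn")
if mvn is None:
    raise FileNotFoundError("mvn was not found on PATH")

print(subprocess.check_output([mvn, "-v"]))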
Try this; it worked for me.
print(subprocess.check_output(["mvn", "-v"], shell=True))
So yesterday I updated to Enthought version 1.1 and now it refuses to open. I've rebooted my computer and reinstalled Enthought Canopy. I keep getting the following error:
Traceback (most recent call last):
File "build/bdist.macosx-10.5-i386/egg/canopy/app/bootstrap.py", line 1989, in main
File "build/bdist.macosx-10.5-i386/egg/canopy/app/bootstrap.py", line 1021, in main
File "build/bdist.macosx-10.5-i386/egg/canopy/app/bootstrap.py", line 1012, in _ kill_leftover_procs
File "build/bdist.macosx-10.5-i386/egg/canopy/app/running_process_manager.py", line 116, in kill_leftover_procs
File "/Applications/Canopy.app/appdata/canopy-1.1.0.1371.macosx-x86_64/Canopy.app/Contents/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "build/bdist.macosx-10.5-i386/egg/canopy/app/running_process_manager.py", line 59, in lock
LockError: Lock could not be acquired
I have no idea what's going on here. I've sent the error report to enthought but does anyone have any ideas?
I think I solved it. Try searching for these files in your Terminal; they're inside your .canopy folder. Make sure that you're working from your root directory. They're not searchable via Finder. My Canopy is finally working now. Hope this helps.
proc_manager.lock
process.lck
running_procs.pkl
Somehow it seems like a lock file didn't get cleaned up in the process. Look in the ~/.canopy folder and remove the process.lck file. You may also start your Activity Monitor and make sure there is no stray canopy or python process, and kill it if there is (or log out of OS X and log back in, which will do the same thing). Canopy will run normally after that.
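If you prefer to script the cleanup, here is a minimal sketch (assuming the lock files sit directly under ~/.canopy, as listed in the other answer):
import os

canopy_dir = os.path.expanduser("~/.canopy")
for name in ("proc_manager.lock", "process.lck", "running_procs.pkl"):
    path = os.path.join(canopy_dir, name)
    if os.path.exists(path):
        os.remove(path)
        print("removed " + path)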
I am trying to understand a project's source code these days. I run the project line by line, and everything works fine until this line:
res = subprocess.call(command, env=os.environ)
I checked the variable "command" and realized that this function just passes a command to another Python script and tries to execute it in a subprocess. So I jumped out of Eclipse and tried to execute the command through the Terminal from the same directory.
Now this is what I got:
Traceback (most recent call last):
File "/home/elderry/Projects/git/tahoe-lafs/support/bin/tahoe", line 6, in <module>
from pkg_resources import load_entry_point
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2850, in <module>
working_set.require(__requires__)
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 696, in require
needed = self.resolve(parse_requirements(requirements))
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 594, in resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: allmydata-tahoe==1.10.0.post27
Then I completely lost my way: where did the subprocess continue to run? Why did the script work well inside the program but not in the Terminal? Since that script is also included in the project, I hopefully set some breakpoints in it in Eclipse, but they didn't catch anything. Is there any way to debug into the subprocess, without diving into the code of the subprocess module itself?
I guess your main project alters PYTHONPATH (sys.path). Look at os.environ in your project and try to run the second script with this environment.
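A minimal sketch of what I mean, assuming you can temporarily add a couple of lines next to the subprocess.call in the project to capture the environment it passes along (the dump file path and the command list are placeholders):
import json
import os
import subprocess

# Step 1 (inside the project, just before subprocess.call): dump its environment.
with open("/tmp/project_env.json", "w") as f:
    json.dump(dict(os.environ), f)

# Step 2 (in a standalone script): replay the command with that environment.
with open("/tmp/project_env.json") as f:
    env = json.load(f)

command = ["support/bin/tahoe"]  # placeholder: use the same list the project passed
res = subprocess.call(command, env=env)
print(res)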
I'm fairly new to programming and decided to set up a simple Python script that would open all the applications I use for web app development. The code I am using is (for GAE):
import subprocess

google_appengine = r'C:\Applications\google_app_engine\launcher\GoogleAppEngineLauncher.exe'
subprocess.Popen(google_appengine)
This works fine for the other programs I am opening, but I am unable to run any applications within App Engine after I have opened it this way. I get the following error in my App Engine log file:
Exception in thread Thread-2:
Traceback (most recent call last):
File "threading.pyc", line 486, in __bootstrap_inner
File "launcher\taskthread.pyc", line 65, in run
File "subprocess.pyc", line 587, in __init__
File "subprocess.pyc", line 700, in _get_handles
File "subprocess.pyc", line 745, in _make_inheritable
WindowsError: [Error 6] The handle is invalid
I'm guessing it is the way subprocess.Popen() works, but I haven't been able to find any alternatives. I'm running Windows 7 if that makes a difference. Thanks for looking.
If you want to manage the local dev_appserver, this is the wrong approach.
The best way to do this is to clone the SDK repository (https://code.google.com/p/googleappengine/) directly to your drive and then add that path to your PYTHONPATH environment variable.
Here's a link to a script template I created and often use to manage startup and killing of the dev_appserver process: https://gist.github.com/4514647
I'm not too familiar with managing a Python environment on Windows, so you'd have to take my notes at a high level and research the specific implementation for that platform.
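For illustration, a very rough sketch of what such a start/stop script might look like on Windows; the SDK path, app directory, and dev_appserver.py location here are assumptions, not tested values:
import subprocess
import sys

# Assumed locations -- adjust to wherever you cloned the SDK and keep your app.
DEV_APPSERVER = r"C:\google_appengine\dev_appserver.py"
APP_DIR = r"C:\projects\my_gae_app"

def start_dev_appserver():
    # Run the dev server as its own process with the current interpreter,
    # leaving stdin/stdout/stderr untouched so it keeps its own console handles.
    return subprocess.Popen([sys.executable, DEV_APPSERVER, APP_DIR])

def stop_dev_appserver(proc):
    proc.terminate()
    proc.wait()

if __name__ == "__main__":
    proc = start_dev_appserver()
    try:
        proc.wait()
    except KeyboardInterrupt:
        stop_dev_appserver(proc)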