I need to pack an application written in Python.
I can't use PyInstaller because I can't have a .exe or anything like that; I need pure Python scripts being executed (I have Popen calls in my code that run some of my scripts and pass parameters);
I need it to work cross-platform (I KNOW that to do so, I need to generate the package on each system, because a package generated on Linux won't work on a Mac and vice versa);
Since I can't really pack it, my idea is to have a folder called modules containing all the dependencies; in my code I would just point the imports to this folder. I would zip the entire project and ship it, and the user would just unpack and run it.
The problem is:
How do I not only download the packages into a specific folder but also install them there? (I can't alter my user's environment; this is a MUST.)
How do I direct my imports to this local folder? I think I can do something like import modules.numpy, for example, but numpy has dependencies of its own... how do I make sure those will also be looked up in my custom folder?
My scenario:
I have a requirements.txt for my project; I have several local files that are used inside the project; I have a Popen call that runs one of my .py files; I am using Python 3;
I can't use virtualenv because I am using wxPython and there's a conflict between it and virtualenv (no idea why - something related to the main thread...)
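For reference, a minimal sketch of this approach, assuming pip's --target option (which installs packages and their dependencies into a given directory); modules and main.py are placeholder names. First, install everything into the local folder:

pip install --target=modules -r requirements.txt

Then have the entry script put that folder at the front of the search path before any third-party import:

# main.py
import os
import sys

# resolve modules/ relative to this file, so it works wherever the user unpacks the zip
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "modules"))

import numpy  # now loaded from ./modules, as are numpy's own dependencies

Note that with the folder on sys.path, packages are imported by their normal names (import numpy, not import modules.numpy), which is also what lets their sub-dependencies resolve from the same folder.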
Related
I created a server file and a client file, and I was about to convert them to .exe when I realized that I also need to convert the module they use. What do I do about the module?
It looks like Auto PY to EXE uses PyInstaller to do the real work here:
A .py to .exe converter using a simple graphical interface and PyInstaller in Python.
PyInstaller will automatically bundle any dependencies it finds:
What other modules and libraries does your script need in order to run? (These are sometimes called its "dependencies".)
To find out, PyInstaller finds all the import statements in your script. It finds the imported modules and looks in them for import statements, and so on recursively, until it has a complete list of modules your script may use.
So as long as you import your dependencies somewhere reachable from your main script and your virtual environment is active, you should get the right behaviour automatically.
Note that you'll probably need to build two executables: one for your server and a separate one for your client.
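For example (assuming the stock PyInstaller command line, run with the virtual environment active):

pyinstaller --onefile server.py
pyinstaller --onefile client.py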
This is my directory structure:
src\
dirCode\
dirSubCode\
test.py
Django\
DjangoApp\
Home\
views.py
From within views.py I'm trying to import a class (let's call it Main) from test.py. The import line in views.py is simply
from dirCode.dirSubCode.test import Main
and I'm getting a ModuleNotFoundError when I try to run the server. I printed out os.sys.path; the second entry points to 'src\' while the first entry is simply ''. Since this is not the only app from which I'm going to need to call "external" code, I'm trying to avoid hard-coding
sys.path.insert(0, r'path\to\src')
if that would even work. I've tried looking through documentation, looking at other StackOverflow questions (Import module from subfolder), etc., but I'm at a loss as to how to do this. I've also tried going into the src\Django\DjangoApp\Home directory, starting a Python console, and simply trying to import the class, but I get the same error.
One odd development is that when I try to run it from the Anaconda Prompt (yes, I'm running Windows) it doesn't work, but when I run the file in PyCharm it does work. If that provides some insight as to what might be going on, I'd appreciate the help.
The proper way to do this is to introduce a shared package that you can install into a virtualenv for each webapp, either from an actual release, or by running
python setup.py develop
from within the shared package. This essentially creates a symlink in the venv that allows you to just import it.
A more modern and simpler version is
pip install -e .
inside the library. See Python setup.py develop vs install
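As a rough sketch, such a shared package needs little more than a setup.py next to the package directory (all names here are placeholders):

shared_lib/
    setup.py
    shared_lib/
        __init__.py
        helpers.py

# setup.py -- minimal setuptools configuration for the shared package
from setuptools import setup, find_packages

setup(
    name="shared-lib",          # placeholder project name
    version="0.1",
    packages=find_packages(),   # picks up shared_lib automatically
)

After pip install -e . in the root, import shared_lib works from any code running in that virtualenv, and edits to the source are picked up without reinstalling.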
I believe you need to set the PYTHONPATH variable, which augments the default search path for modules.
https://docs.python.org/2/using/cmdline.html#envvar-PYTHONPATH
As per the Python docs, the syntax is like so:
https://docs.python.org/3.6/using/windows.html#excursus-setting-environment-variables
To temporarily set environment variables, open Command Prompt and use the set command:
C:\>set PATH=C:\Program Files\Python 3.6;%PATH%
C:\>set PYTHONPATH=%PYTHONPATH%;C:\My_python_lib
C:\>python
PyCharm sets it for you in the Run/Debug configuration when you have the relevant option checked (which is the default), which would explain why it works in PyCharm but not in the Anaconda Prompt.
Here is the problem I am trying to solve. I don't have a specific question in the title because I don't even know what I need.
We have an ancient Hadoop computing cluster with a very old version of Python installed. What we have done is installed a new version (2.7.9) to a local directory (that we have perms on) visible to the entire cluster, and have a virtualenv with the packages we need. Let's call this path /n/2.7.9/venv/
We are using Hadoopy to distribute Python jobs on the cluster. Hadoopy distributes the python code (the mappers and reducers) to the cluster, which are assumed to be executable and come with a shebang, but it doesn't do anything like activate a virtualenv.
If I hardcode the shebang in the .py files to the venv's interpreter (#!/n/2.7.9/venv/bin/python), everything works. But I want to put the .py files in a library; those files should have a generic shebang like #!/usr/bin/env python. I tried this and it does not work, because at runtime the virtualenv is not "activated" by the script, and it therefore bombs with import errors.
So if anyone has any ideas on how to solve this problem, I would be grateful. Essentially I want #!/usr/bin/env python to resolve to the interpreter in /n/2.7.9/venv/ without that virtualenv being active, or some other solution where I don't have to hardcode the shebang.
Currently I am solving this problem by having a run function in the library and putting a wrapper around that function in the main code (which calls the library), with the hardcoded shebang in the wrapper. This is less offensive because the hardcoded shebang makes sense in the main code, but it is still messy, because I have to have an executable wrapper file around every library function I want to run.
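Concretely, each wrapper looks something like this (mylib and run are hypothetical names):

#!/n/2.7.9/venv/bin/python
# wrapper.py -- executable shim; only the shebang knows about the venv
from mylib import run  # hypothetical library entry point

if __name__ == '__main__':
    run()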
I would change the environment variable PYTHONPATH and also the environment variable PATH. Point PYTHONPATH at your virtual environment and PATH at the directory that contains your new Python executable, and make sure that directory comes first.
I accepted John Schmitt's answer because it led me to the solution. However, I am posting what I actually did, because it might be useful for other Hadoopy users.
What I actually did was:
args['cmdenvs'] = ['export VIRTUAL_ENV=/n/2.7.9/ourvenv','export PYTHONPATH=/n/2.7.9/ourvenv', 'export PATH=/n/2.7.9/ourvenv/bin:$PATH']
and passed args into Hadoopy's launch function. In the executable .py files, I put the generic #!/usr/bin/env python shebang.
My problem is the following:
I want to create a script that can create other executables. These new executables have to be standalone, so they don't require any DLLs, etc.
I know this is possible with PyInstaller, but only from the console/command line.
So essentially, what I want to do is write a Python script that imports PyInstaller, creates another .py file, and uses PyInstaller to compile the new script to a .exe, so that people who don't have Python installed can use the program.
EDIT: The script itself should consist of only one file, so that it can also be built as a one-file executable.
Supposing you have already installed PyInstaller in PYINSTALLER_PATH (you should have run the distribution's Configure.py script once first), PyInstaller generates a spec file from your main script by calling Makespec.py. You can add flags to produce a one-dir or one-file binary distribution. Finally, you call Build.py with the spec file.
This is easily scriptable with a couple of system calls, something like:
import os
import sys

PYTHON_EXECUTABLE = sys.executable    # interpreter used to run PyInstaller's scripts
PYINSTALLER_PATH = r"C:\pyinstaller"  # path to the PyInstaller distribution (placeholder)
PROJECT_NAME = "test"
PROJECT_MAIN_SCRIPT = "main_script.py"
# -F builds a one-file executable; -n sets the spec/project name
MAKESPEC_CMD = "%s %s -X -n %s -F %s" % (PYTHON_EXECUTABLE, os.path.join(PYINSTALLER_PATH, "Makespec.py"), PROJECT_NAME, PROJECT_MAIN_SCRIPT)
BUILD_CMD = "%s %s %s.spec" % (PYTHON_EXECUTABLE, os.path.join(PYINSTALLER_PATH, "Build.py"), PROJECT_NAME)
os.system(MAKESPEC_CMD)
os.system(BUILD_CMD)
You can avoid regenerating the spec file every time and instead hack it by hand, adding embedded resources (e.g. XML or configuration files) and specifying other flags; it is basically a Python file with the definitions of some dictionaries.
I don't think there is a PyInstaller module you can use directly, but you can look at Build.py and mimic its behaviour to do the same thing. Build.py is the main script that does the trick.
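(For what it's worth, recent PyInstaller versions do expose a programmatic entry point, so on a modern install a sketch like this works without shelling out:)

import PyInstaller.__main__

# equivalent to running: pyinstaller --onefile --name test main_script.py
PyInstaller.__main__.run([
    '--onefile',
    '--name', 'test',
    'main_script.py',
])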
You may want to check out cx_Freeze, which can be used to do this kind of thing.
There are three different ways to use cx_Freeze. The first is to use the included cxfreeze script which works well for simple scripts. The second is to create a distutils setup script which can be used for more complicated configuration or to retain the configuration for future use. The third method involves working directly with the classes and modules used internally by cx_Freeze and should be reserved for complicated scripts or extending or embedding.
Source
Try downloading PyInstaller's latest development code. There they are trying to implement a GUI toolkit for building executables.
Currently, when trying to reference some library code, I'm doing this at the top of my python file:
import sys
sys.path.append('''C:\code\my-library''')
from my-library import my-library
Then, my-library will be part of sys.path for as long as the session is active. If I start a new file, I have to remember to include sys.path.append again.
I feel like there must be a much better way of doing this. How can I make my-library available to every python script on my windows machine without having to use sys.path.append each time?
Simply add this path to your PYTHONPATH environment variable. To do this, go to Control Panel / System / Advanced / Environment Variables, and in the "User variables" section check whether you already have PYTHONPATH. If yes, select it and click "Edit"; if not, click "New" to add it.
Paths in PYTHONPATH should be separated with ";".
You should use
os.path.join
to make your code more reliable.
You have already used my-library in the path, so don't use it a second time in the import.
If you have a directory structure like this,
C:\code\my-library\lib.py, with a function in there, e.g.:

def main():
    print("Hello, world")

then your resulting code should be

import os
import sys

sys.path.append(os.path.join('C:/', 'code', 'my-library'))
from lib import main
If this is a library that you use throughout your code, you should install it as such. Package it up properly, and either install it in your site-packages directory - or, if it's specific to certain projects, use virtualenv and install it just within the relevant virtualenvs.
To do such a thing, you'll have to use a sitecustomize.py (or usercustomize.py) file in which you make your sys.path modifications (source: Python docs).
Create the sitecustomize.py file in the Lib\site-packages directory of your Python installation, and it will be imported each time a Python interpreter is launched.
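A minimal sketch (the library path is a placeholder):

# Lib\site-packages\sitecustomize.py -- imported automatically at interpreter startup
import sys

sys.path.append(r'C:\code\my-library')  # placeholder path to your library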
If you are doing this interactively, the best thing to do would be to install ipython and configure your startup settings to include that code. If you intend to have it be part of a script you run from the interpreter, the same thing applies, since it will have access to your namespace.
On the other hand, a standalone script should not include that automatically. In the future, you or some other maintainer will come along, and all the code should be obvious and not dependent upon a specific machine setup. The best thing to do would be to set up a skeleton file for new projects that includes all of the basic functionality you need; that, along with oft-used snippets, will handle the problem.
All of the code needed to run the script will be in the script itself, and you won't have to think about adding it every time.
Using Jupyter with multiple environments, adding the path to .bashrc didn't work for me. I had to edit the kernel.json file for that particular kernel and append the path to PYTHONPATH in its env section.
This only worked in that kernel, but maybe it can help someone else.
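For reference, a rough sketch of such a kernel.json (all paths and names are placeholders; the kernelspec format accepts an "env" mapping of extra environment variables):

{
  "argv": ["/path/to/venv/bin/python", "-m", "ipykernel_launcher",
           "-f", "{connection_file}"],
  "display_name": "my-env",
  "language": "python",
  "env": {"PYTHONPATH": "/path/to/my-library"}
}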