Disclaimer: I'm still not sure I fully understand what setup.py does.
From what I understand, a setup.py file is useful for packages that need to be compiled, or to notify Distutils that the package has been installed and can be used by another program. setup.py is thus great for libraries or modules.
But what about super simple packages that only have a single foo.py file to run? Does a setup.py file make packaging for a Linux repository easier?
Using a setup.py script is only useful if:
Your code is a C extension, and thus depends on platform-specific features that you really don't want to define manually.
Your code is pure Python but depends on other modules, in which case the dependencies can be resolved automatically.
For a single file or a small set of files that don't rely on anything else, writing one is not worth the hassle. As a side note, your code is likely to be more attractive to people if trying it out doesn't require a complex setup. "Installing" it is then just a matter of copying a directory or a single file into one's project directory.
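For the dependency case above, a minimal sketch of such a script could look like this (it uses setuptools rather than plain Distutils, since install_requires comes from setuptools; the name, version, and dependency below are hypothetical):

from setuptools import setup

setup(
    name="foo",                      # hypothetical package name
    version="0.1.0",
    py_modules=["foo"],              # a single-file module, foo.py
    install_requires=["requests"],   # hypothetical dependency, resolved at install time
)

Running pip install . on such a project then pulls in the listed dependencies automatically.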
In all the time I've worked with Python and Anaconda, I have never wondered how virtual envs are actually useful, except for version control. When I looked it up, I found a lot of articles on how to create and use custom envs, but not exactly why they are so awesome. Why is it dangerous to install new libraries into the original installation? Are virtual envs useful for anything other than versioning?
PROS:
You can use any version of python you want for a specific environment without having to worry about collisions.
Your main Python package directory does not get flooded with unnecessary packages.
You can organize your packages much better and know exactly which packages your project needs to run.
Anyone can reproduce your environment and run your code on their machine.
Your project is easier to deploy.
Installs and dependency resolution run faster when the environment contains only what you need.
Ease of maintenance.
CONS:
Extra storage space, since each environment keeps its own copies of packages.
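For reference, creating an environment needs nothing beyond the standard library. A minimal sketch (assumes Python 3.3+, which ships the venv module; the directory name .venv is arbitrary):

import venv

# create an isolated environment in ./.venv, with pip available inside it
venv.create(".venv", with_pip=True)

After that, activating the environment (e.g. source .venv/bin/activate on Linux/macOS) makes its interpreter and packages take precedence over the system installation.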
I am seeing some of my colleagues use the following workaround for importing external Python modules (without installing them).
import sys
sys.path.append(<PATH_TO_MODULE>)   # temporarily make the module's directory importable
import <module>
sys.path.remove(<PATH_TO_MODULE>)   # undo the sys.path change afterwards
I don't think this is a good approach, but "it works".
What should I suggest they do instead of the code above, and why?
Thanks!
It sounds as though your colleagues are not creating virtual environments to run Python and are trying to avoid muddying the main Python installation's modules.
So I'd suggest they start separating their concerns and projects into separate virtual environments, where they don't need to worry about which modules are installed globally.
See also conda environments and other alternatives that achieve the same goal.
An alternative approach would be to append the module’s path to PYTHONPATH:
export PYTHONPATH="${PYTHONPATH}:/path/to/your/module/"
In this way, nothing is hardcoded in your source code, and whenever something changes you just need to export the new path to PYTHONPATH.
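If they really do need to import a single module from an arbitrary location without touching sys.path at all, the standard library's importlib supports that directly. A minimal sketch (the module name and path here are hypothetical):

import importlib.util

# load a module from an explicit file path without modifying sys.path
spec = importlib.util.spec_from_file_location("mymodule", "/path/to/mymodule.py")
mymodule = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mymodule)

This keeps the hack local to one call site instead of mutating global interpreter state.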
I'm writing a package that wraps a non-Python program that my team and I often have to automate. I'm packaging this with setuptools and want to make it available to our other developers OR to our operations team.
Here's what I want to do. The program it wraps obviously needs to be there for my module to work. So I'm thinking I need setuptools to check whether it's installed and, if it's not, install it.
Is there a way to do this within setup(), OR is that step going to need to be manual (or handled by something else)? OR... should this just be something that stays in the module? It's about 50 MB, so not horribly huge.
Does your program need an installation, or do you have a portable version?
If it is portable, you can call it with relative paths and then recreate the same directory structure next to your packaged Python script:
folder/
    main.py
    bin/
        file.exe
Let's say that you want to call your binary from main.py:
# main.py
import os

# resolve paths relative to this script rather than the shell's working directory
base_dir = os.path.dirname(os.path.abspath(__file__))

# build the path to the bundled executable
file_path = os.path.join(base_dir, 'bin', 'file.exe')

# run the executable
os.system(file_path)
After building or freezing the script, create the bin folder next to the output and copy your file.exe into it.
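If you instead decide to ship the wrapped program inside the package itself, a minimal setup.py sketch could look like this (the package and file names are hypothetical; the binary is assumed to live at mywrapper/bin/file.exe inside the package):

from setuptools import setup, find_packages

setup(
    name="mywrapper",                               # hypothetical package name
    version="0.1.0",
    packages=find_packages(),
    # bundle the wrapped executable with the package
    package_data={"mywrapper": ["bin/file.exe"]},
    include_package_data=True,
)

At roughly 50 MB this makes for a heavy distribution, but it keeps installation down to a single pip install.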
Why don't people just use the compiled Python file whenever they need optimization? Then the code won't have to be interpreted and then compiled.
Is there something I am missing? It seems to me like a simple problem.
I believe this quote from the Python documentation is enough to correct your misunderstanding:
A program doesn’t run any faster when it is read from a .pyc or .pyo file than when it is read from a .py file; the only thing that’s faster about .pyc or .pyo files is the speed with which they are loaded.
Source: https://docs.python.org/2/tutorial/modules.html#packages
Python is interpreted even if it's read from a .pyc file. As already said in this answer, .pyc files only speed up program startup, not execution. The instructions stored in a .pyc file are not machine code; they are Python bytecode, which will still be interpreted by the Python interpreter. By contrast, the executable of a program written in C consists of machine code that is executed directly by the CPU.
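A quick way to see the compile step for yourself is to produce the bytecode explicitly with the standard library's py_compile module (this sketch assumes a foo.py exists in the current directory):

import py_compile

# compile foo.py to bytecode; only import/startup gets faster, not the code itself
py_compile.compile("foo.py")   # writes the .pyc under __pycache__/ on Python 3

Importing foo afterwards skips the parse/compile step, but the resulting bytecode is still executed by the same interpreter loop.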
Personally, I think it's better to distribute .py files, as these will then be compiled by the end user's own Python, which may be more recent and better patched.
What are the pros and cons of distributing .pyc files versus .py files for a commercial, closed-source python module?
In other words, are there any compelling reasons to distribute .pyc files?
Edit: In particular, if the .py/.pyc is accompanied by a DLL/SO module which is compiled against a certain version of Python.
If your proprietary bits are inside a binary DLL or SO, then there's no real value in making the interface layer a .pyc (as opposed to a .py). You can drop it altogether or distribute it as an uncompiled Python file. I don't know of any reasons to distribute compiled Python files. In many cases, build environments treat them as stale byproducts and clean them out, so your program might simply disappear.
You should know that there are open-source and proprietary decompilers that convert Python bytecode back into Python source. One such example is Mysterie's uncompyle2.
Moreover, .pyc files are not safe across Python versions, so there is more trouble than benefit in distributing .pyc over .py.
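The cross-version problem is easy to demonstrate: every .pyc starts with a magic number tied to the interpreter that produced it, and a mismatch makes the file unusable. A minimal sketch (assumes Python 3.4+ for importlib.util):

import importlib.util

# the magic number embedded at the start of every .pyc this interpreter writes;
# it changes between Python versions, which is why .pyc files are not portable
print(importlib.util.MAGIC_NUMBER)

If a customer runs a different Python version than the one you compiled with, the import of your .pyc simply fails.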