How to get the location of an installed Python package into the shell

I want my users to be able to reference a file in my python package (specifically a docker-compose.yml file) directly from the shell.
I couldn't find a way to get only the location from pip show (and grepping "Location" out of its output feels ugly), so my current (somewhat verbose) solution is:
docker compose -f $(python3 -c "import locust_plugins; print(locust_plugins.__path__[0])")/timescale/docker-compose.yml up
Is there a better way?
Edit: I solved it by installing a wrapper command I call locust-compose as part of the package. Not perfect, but it gets the job done:
#!/bin/bash
module_location=$(python3 -c "import locust_plugins; print(locust_plugins.__path__[0])")
set -x
docker compose -f "$module_location/timescale/docker-compose.yml" "$@"

Most of the support you need for this is in the core setuptools suite.
First of all, you need to make sure the data file is included in your package. In a setup.cfg file you can write:
[options.package_data]
timescale = docker-compose.yml
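This assumes a source layout where the Compose file sits inside the package directory, next to its __init__.py:
setup.cfg
timescale/
    __init__.py
    docker-compose.yml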
Now if you pip install . or pip wheel, that will include the Compose file as part of the Python package.
Next, you can retrieve this in Python code using the ResourceManager API:
#!/usr/bin/env python3
# timescale/compose_path.py
import pkg_resources

def main():
    # Resolve the installed location of the packaged Compose file
    print(pkg_resources.resource_filename('timescale', 'docker-compose.yml'))

if __name__ == '__main__':
    main()
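As a side note, pkg_resources is deprecated in recent setuptools releases; on Python 3.9+ the standard library importlib.resources does the same job. A sketch of the equivalent script:
#!/usr/bin/env python3
# timescale/compose_path.py (importlib.resources variant, a sketch)
from importlib.resources import files

def main():
    # For a normal on-disk install, files() returns a pathlib.Path,
    # so printing it yields the data file's location
    print(files('timescale') / 'docker-compose.yml')

if __name__ == '__main__':
    main()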
And finally, you can take that script and make it a setuptools entry point script (as distinct from the similarly-named Docker concept), so that you can just run it as a single command.
[options.entry_points]
console_scripts =
    timescale_compose_path = timescale.compose_path:main
Again, if you pip install . into a virtual environment, you should be able to run timescale_compose_path and get the path name out.
Having done all of those steps, you can finally run a simpler
docker-compose -f $(timescale_compose_path) up

Related

How to run Python inside an expressjs Docker container

I am trying to build a container for my Express.js application. The Express.js app makes use of Python via the npm package PythonShell.
I have plenty of Python code, which is in a subfolder of my Express app, and with npm start everything works perfectly.
However, I am new to Docker and I need to containerize the app. My Dockerfile looks like this:
FROM node:18
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3001
CMD ["node", "./bin/www"]
I built the image with docker build . -t blahblah-server and ran it with docker run -p 8080:3001 -d blahblah-server.
I make use of imports at the top of the Python script like this:
import datetime
from pathlib import Path # Used for easier handling of auxiliary file's local path
import pyecma376_2 # The base library for Open Packaging Specifications. We will use the OPCCoreProperties class.
from assi import model
When the Python script is executed (only in the container!) I get the following error message:
/usr/src/app/public/javascripts/service/pythonService.js:12
if (err) throw err;
^
PythonShellError: ModuleNotFoundError: No module named 'pyecma376_2'
at PythonShell.parseError (/usr/src/app/node_modules/python-shell/index.js:295:21)
at terminateIfNeeded (/usr/src/app/node_modules/python-shell/index.js:190:32)
at ChildProcess.<anonymous> (/usr/src/app/node_modules/python-shell/index.js:182:13)
at ChildProcess.emit (node:events:537:28)
at ChildProcess._handle.onexit (node:internal/child_process:291:12)
----- Python Traceback -----
File "/usr/src/app/public/pythonscripts/myPython/wtf.py", line 6, in <module>
import pyecma376_2 # The base library for Open Packaging Specifications. We will use the OPCCoreProperties class.
{
traceback: 'Traceback (most recent call last):\n' +
' File "/usr/src/app/public/pythonscripts/myPython/wtf.py", line 6, in <module>\n' +
' import pyecma376_2 # The base library for Open Packaging Specifications. We will use the OPCCoreProperties class.\n' +
"ModuleNotFoundError: No module named 'pyecma376_2'\n",
executable: 'python3',
options: null,
script: 'public/pythonscripts/myPython/wtf.py',
args: null,
exitCode: 1
}
If I comment the first three imports out, I get the same kind of error:
PythonShellError: ModuleNotFoundError: No module named 'assi'
Please note that assi is actually my own Python code, which is included in the Express.js app directory.
Python seems to be installed in the container correctly. I stepped inside the container via docker exec -it <container id> /bin/bash and the Python packages are there in the /usr/lib directory.
I really have absolutely no idea how all this works together and why Python doesn't find these modules...
You are trying to use libraries that are not in the Python standard library. It seems that you are missing the pip install step when you build the Docker image.
Try adding RUN instructions to the Dockerfile that do this for you. Example:
RUN pip3 install pyecma376_2
RUN pip3 install /path/to/assi
Maybe that can solve your problem. Don't forget to check whether Python is already installed in your container; it seems that it is. And if you have both python2 and python3 installed, make sure that you use pip3 instead of plain pip.
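For example, a sketch of the Dockerfile with those steps added (this assumes a Debian-based node image; on newer Debian releases the system Python is "externally managed", so you may need a virtualenv or pip's --break-system-packages flag):
FROM node:18
# Install Python 3 and pip inside the image
RUN apt-get update && apt-get install -y python3 python3-pip && rm -rf /var/lib/apt/lists/*
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
# Install the third-party module the script imports; assi is part of the app
# source and arrives via COPY, so it only needs to be importable at runtime
RUN pip3 install pyecma376_2
EXPOSE 3001
CMD ["node", "./bin/www"]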

Run python module from command line

I don't really know how to ask this question, but I can describe what I want to achieve; I will update the question with any suggested edits.
I have a Python module that makes use of some command line arguments. Using the module requires some initial setup outside of the Python interpreter. The Python file that does the setup runs fine, but the problem is that I have to dig through the Python installation to find where that file is located, i.e. I have to do python full-path-to-setup-script.py -a argA -b argB etc. I would like to call the setup script like this:
some-setup-command -a argA -b argB etc.
I want to achieve something like
workon environment_name as in the virtualenvwrapper module, or
pipenv install as in the pipenv module.
I know both of the above commands call a script of some kind (whether bash or Python). I've tried digging through the source code of virtualenv and pipenv without any success.
I would really appreciate if someone could point me to any necessary resource for coding such programs.
If full-path-to-setup-script.py is executable and has a proper shebang line
#! /usr/bin/env python
then you can
ln -s full-path-to-setup-script.py ~/bin/some-command
assuming ~/bin exists and is in your PATH,
and you'll be able to invoke
some-command -a argA -b argB
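Putting those steps together (a sketch; note that the symlink target should be an absolute path, or the link will dangle):
chmod +x /full/path/to/setup-script.py   # make sure the script is executable
mkdir -p ~/bin                           # create ~/bin if it doesn't exist yet
ln -s /full/path/to/setup-script.py ~/bin/some-command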
It's a bit difficult to understand what you're looking for, but python -m is my best guess.
For example, to make a new Jupyter kernel, we call
python -m ipykernel arg --option --option
Where arg is the CLI argument and option is a CLI option, and ipykernel is the module receiving the args and options.
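If the module is your own, python -m works as long as the package contains a __main__.py that parses the arguments. A minimal sketch, using a hypothetical package name mypackage:
# mypackage/__main__.py (hypothetical package name)
import argparse

def main():
    # Parse the same style of arguments used in the question
    parser = argparse.ArgumentParser()
    parser.add_argument('-a', dest='arg_a')
    parser.add_argument('-b', dest='arg_b')
    args = parser.parse_args()
    print(args.arg_a, args.arg_b)

if __name__ == '__main__':
    main()
Then python -m mypackage -a argA -b argB runs it from anywhere the package is importable.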
Commands that are callable from the command prompt are located in one of the directories in your system's PATH variable. If you are on Windows, you see the locations via:
echo %PATH%
Or if you want a nicer readout:
powershell -c "$env:path -split(';')"
One solution is to create a folder, add it to your system's PATH, and then create a callable file that you can run. In this example we will create a folder in your user profile, add it to the path, then create a callable file in that folder.
mkdir %USERPROFILE%\path
set PATH=%PATH%;%USERPROFILE%\path
setx PATH %PATH%
In the folder %USERPROFILE%\path, we create a batch file with the following content (note that batch files use :: or REM for comments, not #):
:: file name:
:: some-command.bat
python C:\full\path\to\setup-script.py %*
Now you should be able to call
some-command -a argA -b argB
And the batch file will call python with the Python script and pass on the arguments you added.
Looking at the above answers, I see no one has mentioned this:
You can of course make the Python file itself executable. Give it execute permissions with
chmod +x filename.py
and, provided it starts with a shebang line such as #!/usr/bin/env python, run it as
./filename.py -a argA -b argB ...
Moreover, you can also remove the extension .py (since the file is an executable now) and then run it simply as
./filename -a argA -b argB ...

In NixOS, how can I install an environment with the Python packages SpaCy, pandas, and jenks-natural-breaks?

I'm very new to NixOS, so please forgive my ignorance. I'm just trying to set up a Python environment---any kind of environment---for developing with SpaCy, the SpaCy data, pandas, and jenks-natural-breaks. Here's what I've tried so far:
pypi2nix -V "3.6" -E gcc -E libffi -e spacy -e pandas -e numpy --default-overrides, followed by nix-build -r requirements.nix -A packages. I've managed to get the first command to work, but the second fails with Could not find a version that satisfies the requirement python-dateutil>=2.5.0 (from pandas==0.23.4)
Writing a default.nix that looks like this:
with import <nixpkgs> {};
python36.withPackages (ps: with ps; [ spacy pandas scikitlearn ])
This fails with collision between /nix/store/9szpqlby9kvgif3mfm7fsw4y119an2kb-python3.6-msgpack-0.5.6/lib/python3.6/site-packages/msgpack/_packer.cpython-36m-x86_64-linux-gnu.so and /nix/store/d08bgskfbrp6dh70h3agv16s212zdn6w-python3.6-msgpack-python-0.5.6/lib/python3.6/site-packages/msgpack/_packer.cpython-36m-x86_64-linux-gnu.so
Making a new virtualenv, and then running pip install on all these packages. Scikit-learn fails to install, with fish: Unknown command 'ar rc build/temp.linux-x86_64-3.6/liblibsvm-skl.a build/temp.linux-x86_64-3.6/sklearn/svm/src/libsvm/libsvm_template.o'
I guess ideally I'd like to install this environment with nix, so that I could enter it with nix-shell, and so other environments could reuse the same python packages. How would I go about doing that? Especially since some of these packages exist in nixpkgs, and others are only on Pypi.
Caveat
I had trouble with jenks-natural-breaks to the tune of
nix-shell ❯ poetry run python -c 'import jenks_natural_breaks'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/matt/2022/12/28-2/.venv/lib/python3.10/site-packages/jenks_natural_breaks/__init__.py", line 5, in <module>
from ._jenks_matrices import ffi as _ffi
ModuleNotFoundError: No module named 'jenks_natural_breaks._jenks_matrices'
So I'm going to use jenkspy, which appears to be a bit livelier. If that doesn't scratch your itch, I'd contact the maintainer of jenks-natural-breaks for guidance.
Flakes
you said:
so other environments could reuse the same python packages
Which makes me think that a flake.nix is what you need. What's cool about flakes is that you can define an environment that has spacy, pandas, and jenkspy with one flake. And then you (or somebody else) might say:
I want an env like Jonathan's, except I also want sympy
and rather than copying your env and making tweaks, they can declare your env as a build input and write a flake.nix with their modifications--which can be further modified by others.
One could imagine a sort of family-tree of environments, so you just need to pick the one that suits your task. The python community has not yet converged on this vision.
Poetry
Poetry will treat you like you're trying to publish a library when all you asked for is an environment, but a library's dependencies are pretty much an environment so there's nothing wrong with having an empty package and just using poetry as an environment factory.
Bonus: if you decide to publish a library after all, you're ready.
The Setup
Nix flakes think in terms of git repos, so we'll start with one:
$ git init
Then create a file called flake.nix. Usually I end up with poetry handling 90% of the python stuff, but both pandas and spacy are in that 10% that has dependencies which link to system libraries. So we ask nix to install them so that when poetry tries to install them in the nix develop shell, it has what it needs.
{
  description = "Jonathan's awesome env";
  inputs = {
    nixpkgs.url = "github:nixos/nixpkgs";
    flake-utils.url = "github:numtide/flake-utils";
  };
  outputs = { self, nixpkgs, flake-utils }: (flake-utils.lib.eachSystem [
    "x86_64-linux"
    "x86_64-darwin"
    "aarch64-linux"
    "aarch64-darwin"
  ] (system:
    let
      pkgs = nixpkgs.legacyPackages.${system};
    in
    rec {
      packages.jonathansenv = pkgs.poetry2nix.mkPoetryApplication {
        projectDir = ./.;
      };
      defaultPackage = packages.jonathansenv;
      devShell = pkgs.mkShell {
        buildInputs = [
          pkgs.poetry
          pkgs.python310Packages.pandas
          pkgs.python310Packages.spacy
        ];
      };
    }));
}
Now we let git know about the flake and enter the environment:
❯ git add flake.nix
❯ nix develop
$
Then we initialize the poetry project. I've found that poetry, installed by nix, is kind of odd about which Python it uses by default, so we'll set it explicitly:
$ poetry init # follow prompts
$ poetry env use $(which python)
$ poetry run python --version
Python 3.10.9 # declared in the flake.nix
At this point, we should have a pyproject.toml:
[tool.poetry]
name = "jonathansenv"
version = "0.1.0"
description = ""
authors = ["Your Name <you@example.com>"]
readme = "README.md"

[tool.poetry.dependencies]
python = "^3.10"
jenkspy = "^0.3.2"
spacy = "^3.4.4"
pandas = "^1.5.2"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
Usage
Now we create the venv that poetry will use, and run a command that depends on these.
$ poetry install
$ poetry run python -c 'import jenkspy, spacy, pandas'
You can also have poetry put you in a shell:
$ poetry shell
(venv)$ python -c 'import jenkspy, spacy, pandas'
It's kind of awkward to do so, though, because we're two subshells deep and any shell customizations that we have in the grandparent shell are not available. So I recommend using direnv to enter the dev shell whenever you navigate to that directory, and then just using poetry run ... to run commands in the environment.
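A minimal .envrc sketch for that (this assumes direnv together with the nix-direnv extension, which provides the use flake directive):
# .envrc -- load the flake's dev shell automatically on cd (requires nix-direnv)
use flake
After creating the file, run direnv allow once in that directory to approve it.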
Publishing the env
In addition to running nix develop with the flake.nix in your current directory, you can also run nix develop /local/path/to/repo or nix develop github:githubuser/githubproject to achieve the same result.
To demonstrate the github example, I have pushed the files referenced above here. So you ought to be able to run this from any linux shell with nix installed:
❯ nix develop github:/MatrixManAtYrService/nix-flake-pandas-spacy
$ poetry install
$ poetry run python -c 'import jenkspy, spacy, pandas'
I say "ought" because if I run that command on a mac it complains about linux-headers-5.19.16 being unsupported on x86_64-darwin.
Presumably there's a way to write the flake (or fix a package) so that it doesn't insist on building linux stuff on a mac, but until I figure it out I'm afraid that this is only a partial answer.

Running containerized PyTest

I am learning how to run containerized PyTests and I am failing to run a test with arguments.
My Dockerfile looks like this:
FROM python:2
ADD main.py /
RUN pip install docker
RUN pip install fake_useragent
RUN pip install pytest
RUN pip install requests
CMD ["pytest", "main.py --html=report.html"]
But I tried all kinds of CMD/RUN variations I found online.
Does anybody have a clue?
The full project is here if it helps:
https://github.com/pavelzag/DockerSDKLearn
"main.py --html=report.html" will be passed in pytest as a single argument and will appear in sys.argv[1] there. Hence pytest is trying to locate a file with the exact same name with stuff like --html in it. You should fully tokenize the command:
CMD ["pytest", "main.py", "--html=report.html"]

How do I make a python script executable?

How can I run a python script with my own command line name like myscript without having to do python myscript.py in the terminal?
Add a shebang line to the top of the script:
#!/usr/bin/env python
Mark the script as executable:
chmod +x myscript.py
Add the dir containing it to your PATH variable. (If you want it to stick, you'll have to do this in .bashrc or .bash_profile in your home dir.)
export PATH=/path/to/script:$PATH
The best way, which is cross-platform, is to create a setup.py, define an entry point in it, and install with pip.
Say you have the following contents of myscript.py:
def run():
    print('Hello world')
Then you add setup.py with the following:
from setuptools import setup

setup(
    name='myscript',
    version='0.0.1',
    entry_points={
        'console_scripts': [
            'myscript=myscript:run'
        ]
    }
)
The entry point format is terminal_command_name = module_name:function_name.
Finally, install with the following command:
pip install -e /path/to/script/folder
-e stands for editable, meaning you'll be able to work on the script and invoke the latest version without needing to reinstall it.
After that you can run myscript from any directory.
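For newer projects, the same entry point can be declared in pyproject.toml instead of setup.py; a minimal sketch using the standard [project] table:
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

[project]
name = "myscript"
version = "0.0.1"

[project.scripts]
myscript = "myscript:run"
With a reasonably recent pip, pip install -e . works the same way with this layout.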
I usually put this in the script:
#!/usr/bin/python
... code ...
And in terminal:
$: chmod 755 yourfile.py
$: ./yourfile.py
Another related solution which some people may be interested in: you can also directly embed the contents of myscript.py into your .bashrc file on Linux (it should also work for macOS, I think).
For example, I have the following function defined in my .bashrc for dumping Python pickles to the terminal, note that the ${1} is the first argument following the function name:
depickle() {
python << EOPYTHON
import pickle
f = open('${1}', 'rb')
while True:
    try:
        print(pickle.load(f))
    except EOFError:
        break
EOPYTHON
}
With this in place (and after reloading .bashrc), I can now run depickle a.pickle from any terminal or directory on my computer.
The simplest way that comes to my mind is to use pyinstaller:
Create an environment that contains all the libraries you have used in your code.
Activate the environment and in the command window run pip install pyinstaller.
Use the command window to navigate to the directory where your maincode.py is located.
Keep the environment active and run pyinstaller maincode.py.
Check the folder named dist and you will find the executable file.
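In practice that sequence looks something like this (a sketch, run inside the activated environment):
pip install pyinstaller
cd /path/to/project            # the directory containing maincode.py
pyinstaller maincode.py        # add --onefile for a single self-contained binary
# the bundled program ends up under dist/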
I hope that this solution helps you.
GL
I struggled for a few days with the problem that the command py -3 (or any other py-launcher-related command) could not be found when a script was run by a service created using the NSSM tool, yet the same commands worked when run directly from cmd.
What was the solution? Just re-running the Python installer and, at the very end, clicking the option to disable the path length limit.
I'll just leave this here, so that anyone who hits the same problem can find this answer and find it helpful.
