I am using a bash script (run.sh) to run a python file (calculate_ssp.py). The code is working fine. The folder structure is given below
├── dataset
│   └── dataset_1.csv
├── Dockerfile
├── __init__.py
├── output
├── run.sh
└── scripts
    ├── calculate_ssp.py
    └── __init__.py
The bash script is
#!/bin/bash
python3 -m scripts.calculate_ssp 0.5
Now, I am trying to run the bash script (run.sh) from the Dockerfile. The content of the Dockerfile is
#Download base image ubuntu 20.04
FROM ubuntu:20.04
#Download python
FROM python:3.8.5
ADD run.sh .
RUN pip install pandas
RUN pip install rdkit-pypi
CMD ["bash", "run.sh"]
But I am getting an error:
/usr/local/bin/python3: Error while finding module specification for 'scripts.calculate_ssp' (ModuleNotFoundError: No module named 'scripts')
Could you tell me why I am getting the error (as the code is working fine from the bash script)?
You need to package all the files required by your script in the image. You can drop the ubuntu base image, since it isn't used: when a Dockerfile has multiple FROM lines, only the final stage ends up in the image. (The python base image is itself based on an image that already provides bash.)
#Download python
FROM python:3.8.5
RUN pip install pandas
RUN pip install rdkit-pypi
ADD run.sh .
ADD scripts/ ./scripts
CMD ["bash", "run.sh"]
I've coded my python project and have succeeded in publishing it to test pypi. However, now I can't figure out how to correctly configure it as a console script. Upon running my_project on the command line, I get the following stack trace:
Traceback (most recent call last):
File "/home/thatcoolcoder/.local/bin/my_project", line 5, in <module>
from my_project.__main__ import main
ModuleNotFoundError: No module named 'my_project'
Clearly, it's created a script to run but the script is then failing to import my actual code.
Folder structure:
pyproject.toml
setup.cfg
my_project
├── __init__.py (empty)
└── __main__.py
Relevant sections of setup.cfg:
[metadata]
name = my-project
version = 1.0.5
...
[options]
package_dir =
    = my_project
packages = find:
...
[options.packages.find]
where = my_project

[options.entry_points]
console_scripts =
    my_project = my_project.__main__:main
pyproject.toml (probably not relevant)
[build-system]
requires = [
    "setuptools>=42",
    "wheel"
]
__main__.py:
from my_project import foo

def main():
    foo.bar()

if __name__ == '__main__':
    main()
To build and upload, I'm running the following: (python is python 3.10)
python -m build
python -m twine upload --repository testpypi dist/*
Then to install and run:
pip install -i https://test.pypi.org/pypi/ --extra-index-url https://pypi.org/simple my-project --upgrade
my_project
How can I make this console script work?
Also, this current method of setting console_scripts only allows it to be run as my_project; is it possible to also make it work by python -m my_project? Or perhaps this will work once my main issue is fixed.
Funnily enough, I had the same frustration when trying to install scripts on multiple platforms (posix and nt, as Python calls them).
So I wrote setup-py-script in 2020. It's up on github now.
It installs scripts that use their own modules as a self-contained zip-file. (This method was inspired by youtube-dl.) That means no more leftover files when you delete a script but forget to remove the module et cetera.
It does not require root or administrator privileges; installation is done in user-accessible directories.
You might have to structure your project slightly differently; the script itself is not in the module directory. See the project README.
I finally got back to this problem today and it appears that I was using an incorrect source layout, which caused the pip module installation to not work. I switched to a directory structure like this one:
├── src
│ └── mypackage
│ ├── __init__.py
│ └── mod1.py
├── setup.py
└── setup.cfg
and modified the relevant parts of my setup.cfg:
[options]
package_dir=
    =src
packages=find:

[options.packages.find]
where=src
Then I can run it like python -m mypackage. This also made the console scripts work. It works on Linux but I presume it also works on other systems.
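As a sketch, the console-script entry from the first question would map onto this src layout as follows (mypackage and main are placeholder names; running it with `python -m mypackage` additionally requires a `mypackage/__main__.py`):

```ini
[options]
package_dir =
    = src
packages = find:

[options.packages.find]
where = src

[options.entry_points]
console_scripts =
    mypackage = mypackage.__main__:main
```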
I have developed a python application that records the users' actions on the web using the following packages
python==3.7.9
selenium==4.0.0
seleniumbase==2.1.9
Tinkerer==1.7.2
tqdm==4.62.3
validators==0.18.2
I have tried a lot of different approaches to convert it into an exe file that requires nothing to be installed on the client's side.
I have tried using the following
pyvan: it did not work, and I opened an issue with the package's author
pyinstaller: when I run my code, it always tells me that the library SeleniumBase is unknown, and the suggested solution is to install it on the customer side, WHICH IS NOT AN OPTION
pipenv: did not work either
PyOxidizer & py2exe: did not work for me either
I would like to convert that python application into an exe (I don't care if it is a single file or folder) as long as it requires no installation on the user's side
Repo Structure
├── requirements.txt
├── main.py
├── logo.ico
├── web_actions_recorder
│   ├── main.py
│   └── __init__.py
└── README.md
I have found a solution/workaround
My application has a line where I invoke SeleniumBase on the customer's side via the snippet import os; os.system("sbase mkrec recording.py"), which cannot work because the customer does not have seleniumbase on his/her PC.
The solution is as follows:
Copy the Python folder from your environment (C:\Users\<USER_NAME>\AppData\Local\Programs\Python\Python38) and paste it inside your project files. The folder is called Python38 because I work with multiple Python versions on my PC; this one is Python 3.8.10.
Edit the code to be as following
import os
# the script lives in a subfolder, so I had to build the path with
# os.path.dirname(".") first
python_path = os.path.join(os.path.dirname("."), "Python38", "python.exe")
# this way we use the bundled Python folder, with everything already
# installed in it, to run the `sbase` command without installing
# anything on the user's side
os.system(f"{python_path} -m sbase mkrec recording.py")
Finally use PyInstaller v4.7 to package the application, in my case
pyinstaller --clean --onedir --name <your_app_name> --windowed --hidden-import seleniumbase --icon <path_to_icon> main.py
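One caveat with the relative path above: it only resolves the Python38 folder when the app is started from its own directory. A sketch of a more robust lookup, assuming the same Python38 folder name as above: when PyInstaller freezes the app it sets sys.frozen, and sys.executable then points at the bundled exe, so the folder can be located next to it.

```python
import os
import sys

# In a PyInstaller-frozen app, resolve paths relative to the bundled exe;
# otherwise fall back to the current working directory (as the relative
# path in the original snippet effectively does).
if getattr(sys, "frozen", False):
    base_dir = os.path.dirname(sys.executable)
else:
    base_dir = os.getcwd()

python_path = os.path.join(base_dir, "Python38", "python.exe")
print(python_path.endswith(os.path.join("Python38", "python.exe")))  # True
```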
While working on a Python FastAPI project using Pipenv and Pytest, I was asked to write a Pipenv script to run the tests.
Project has the following structure:
.
├── app
│   ├── main.py
│   ├── my_package
│   │   ├── __init__.py
│   │   ├── (project folders)
│   │   └── tests
│   │       └── tests.py
│   └── __pycache__
└── (deployment scripts and folders, Pipfile, Dockerfile, etc.)
I'd like to run Pytest on tests.py from the top structure (./app/.../tests.py). At least for now.
As recommended here, I currently run them from the top with:
( cd app ; python -m pytest my_package/tests/tests.py )
... which works as expected.
However, when I add that my Pipfile-scripts-section:
[scripts]
my_script = "( cd app ; python -m pytest my_package/tests/tests.py )"
... run it with:
pipenv run my_script
I get the error:
Error: the command ( (from my_script) could not be found within PATH.
I've also tried:
[scripts]
my_script = "cd app && python -m pytest my_package/tests/tests.py"
... which returns another similar error:
Error: the command cd (from my_script) could not be found within PATH.
So it's clear that I'm wrong to treat these as bash aliases. I've tried searching for more documentation on how the [scripts] section works, but I've had no luck (yet).
I'm not familiar with the tools you're using, but the error message suggests that it's looking for an executable. ( is part of shell syntax, and cd is a shell builtin, not an executable. Try this:
my_script = "bash -c 'cd app && python -m pytest my_package/tests/tests.py'"
Here bash is the executable, and -c makes it run your snippet.
BTW, keep in mind that cd can fail, so the script should bail out if it does.
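Put into the Pipfile itself, the whole working entry would look like this (note that the `&&` already makes the snippet bail out before running pytest if `cd` fails):

```toml
[scripts]
my_script = "bash -c 'cd app && python -m pytest my_package/tests/tests.py'"
```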
I am new to poetry and want to get it set-up with pytest. I have a package mylib in the following set-up
├── dist
│ ├── mylib-0.0.1-py3-none-any.whl
│ └── mylib-0.0.1.tar.gz
├── poetry.lock
├── mylib
│ ├── functions.py
│ ├── __init__.py
│ └── utils.py
├── pyproject.toml
├── README.md
└── tests
└── test_functions.py
In test_functions.py I have
import mylib
However, when I run
poetry run pytest
it complains about mylib not being included. I can run
pip install dist/mylib-0.0.1-py3-none-any.whl
but that clutters my python environment with mylib. I want to use that environment as well for other packages.
My question is: What is the proper way to work with poetry and pytest?
My underlying python environment is a clean pyenv python 3.8. Using pyproject.toml I create a project based virtual environment for mylib.
You need to run poetry install to set up your dev environment. It will install all package and development requirements, and once that is done it will do a dev-install of your source code.
You only need to run it once, code changes will propagate directly and do not require running the install again.
If you have set up the virtual env that you want already, take care that it is activated when you run the install command. If you don't, poetry will try to create a new virtual env and use that, which is probably not what you want.
FYI you also need pytest specified as a dev dependency in pyproject.toml.
If you don't have that, poetry run will find the pytest instance in your home env, but that instance won't find the venv. I don't think the documentation makes that very clear.
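As a sketch, the dev dependency would be declared in pyproject.toml like this (the group syntax requires Poetry >= 1.2; older versions use a [tool.poetry.dev-dependencies] section instead, and the version constraint here is just an example):

```toml
[tool.poetry.group.dev.dependencies]
pytest = "^7.0"
```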
There is a specific way to run pytest:
poetry run pytest
I couldn't run it by just calling pytest with the virtual environment activated; nothing happens when I do. It only works when I prefix it with the poetry executable.
P.S.: Don't forget to add pytest as a dev dependency in your pyproject.toml file.
I have a simple python app which uses a custom class I've created. The folder structure is the following:
mains
├── run_it.py
├── __init__.py
└── parsers
    ├── parser.py
    └── __init__.py
In the run_it.py, the main program, I'm calling
from mains.parsers.parser import Parser
Locally, I've added the following line to ~/.bashrc and it works well:
export PYTHONPATH="${PYTHONPATH}:/home/.../THE_FOLDER_ABOVE_MAINS"
But when I try to dockerize the app, I get the following error:
File "/app/run_it.py", line 11, in <module>
from mains.parsers.parser import Parser
ModuleNotFoundError: No module named 'mains'
My Dockerfile is:
FROM python:3
RUN mkdir /app
WORKDIR /app
ADD . /app/
RUN apt-get update
RUN pip3 install gunicorn
RUN pip3 install -r requirements.txt
EXPOSE 5000
ENV PYTHONIOENCODING=utf-8
ENV GUNICORN_CMD_ARGS="--bind 0.0.0.0:5000 --workers=2"
CMD ["gunicorn","run_it:app"]
Any idea how can I solve it?
Thanks in advance!
You didn't set any module path in the container, so the folder /app, which contains the top-level script run_it.py, is automatically added to the module path. As a result, you should use this import instead:
from parsers.parser import Parser
Another way is to add the following to your Dockerfile (supposing /app is the folder that contains mains):
ENV PYTHONPATH=/app
Then you can still use from mains.parsers.parser import Parser
Try this, it's working:
from parsers.parser import Parser