I have an Angular-Flask application that I'm trying to Dockerize.
One of the Python files used is called trial.py, and it depends on a compiled module called _trial.pyd. When I run the project locally, everything works fine. However, when I dockerize it with the following Dockerfile and run it, I get the error "Module _trial not found".
FROM node:latest as node
COPY . /APP
COPY package.json /APP/package.json
WORKDIR /APP
RUN npm install
RUN npm install -g @angular/cli@7.3.9
RUN ng build --base-href /static/
FROM python:3.6
WORKDIR /root/
COPY --from=0 /APP/ .
RUN pip install -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python"]
CMD ["app.py"]
Where am I going wrong? Do I need to change some path in my Dockerfile?
EDIT:
Directory structure:
APP [folder with src Angular files]
static [folder with js files]
templates [folder with index.html]
_trial.pyd
app.py [Starting point of execution]
Dockerfile
package.json
requirements.txt
trial.py
app.py calls trial.py (successfully).
trial.py needs _trial.pyd, which I import using import _trial.
It works fine locally, but when I dockerize it, I get the following error message:
Traceback (most recent call last):
File "app.py", line 7, in
from trial import *
File "/root/trial.py", line 15, in
import _trial
ModuleNotFoundError: No module named '_trial'
The commands run are:
docker image build -t prj .
docker container run --publish 5000:5000 --name prj prj
UPDATE
It is able to recognize other .py files but not the .pyd module.
I think this could be due to one of the following reasons:
Path
Make sure the required file is on the PYTHONPATH. It sounds like you have done that, so this is probably not the cause.
Debug Mode
If you are working with a debug build of Python, you will need to rename the module "_trial.pyd" to "_trial_d.pyd".
Missing DLL
The .pyd file may require a DLL or another dependency that is not available, and it can't be imported for that reason. Tools such as depends.exe (Dependency Walker) let you find out what is required.
Namespace Issue
If there is already a file called "_trial.py" on the Python path, that could create unwanted behavior.
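A quick way to test the Path possibility in this particular setup: because the image's ENTRYPOINT is python, you can override the CMD and print sys.path from inside the container (prj is the image name from the build commands above):
docker container run --rm prj -c "import sys; print(sys.path)"
If the directory holding _trial.pyd (here /root, the WORKDIR) is not on that list, that is the problem; if it is listed, the module itself is the problem.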
Solved it!
The issue was that I was using a .pyd file in a Linux container. .pyd files are Windows-specific binary extensions (essentially DLLs), so Linux cannot load them; that is why the _trial.pyd module was not detected. I had to generate a shared object (.so) file instead (i.e. _trial.so), and it worked fine.
EDIT: I generated the .so file on a Linux system. Generating the .so on Windows and then using it in a Linux container gives an "invalid ELF header" error.
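For reference, the usual way to produce such a .so is to rebuild the extension on Linux with setuptools. A minimal sketch, assuming the extension is compiled from a single C source file (the name _trial.c is a placeholder for whatever the real source is):
# setup.py - minimal sketch; _trial.c is a placeholder for the real source file
from setuptools import setup, Extension

setup(
    name="trial",
    ext_modules=[Extension("_trial", sources=["_trial.c"])],
)
Running python setup.py build_ext --inplace on a Linux machine (or inside the container) drops a _trial*.so next to trial.py, which import _trial can then load.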
Related
I am experimenting with Google Cloud Platform buildpacks, specifically for Python. I started with the Sample Functions Framework Python example app, and got that running locally, with commands:
pack build --builder=gcr.io/buildpacks/builder sample-functions-framework-python
docker run -it -ePORT=8080 -p8080:8080 sample-functions-framework-python
Great, let's see if I can apply this concept to a legacy project (Python 3.7, if that matters).
The legacy project has a structure similar to:
.gitignore
source/
    main.py
    lib/
        helper.py
    requirements.txt
tests/
    <test files here>
The Dockerfile that came with this project packaged the source directory contents without the "source" directory, like this:
COPY lib/ /app/lib
COPY main.py /app
WORKDIR /app
... rest of Dockerfile here ...
Is there a way to package just the contents of the source directory using the buildpack?
I tried to add this config to the project.toml file:
[[build.env]]
name = "GOOGLE_FUNCTION_SOURCE"
value = "./source/main.py"
But the Python modules/imports aren't set up correctly for that, as I get this error:
File "/workspace/source/main.py", line 2, in <module>
from source.lib.helper import mymethod
ModuleNotFoundError: No module named 'source'
Putting both main.py and /lib into the project root dir would make this work, but I'm wondering if there is a better way.
Related question: is there a way to see which project files are being copied into the image by the buildpack? I tried verbose logging but didn't see anything useful.
Update:
The Python module error:
File "/workspace/source/main.py", line 2, in <module>
from source.lib.helper import mymethod
ModuleNotFoundError: No module named 'source'
was happening because I moved the lib dir into source in my test project, and when I did this, IntelliJ updated the import statement in main.py without me catching it. I fixed the import, then applied the solution listed below and it worked.
I had been searching the buildpack and Google Cloud Functions documentation, but I discovered the option I needed on the pack build documentation page: the --path option.
This command only captures the source directory contents:
pack build --builder=gcr.io/buildpacks/builder --path source sample-functions-framework-python
If you change the path, the project.toml descriptor needs to be in that directory too (or you can specify it with --descriptor on the command line).
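For example, keeping project.toml in the repository root while building only the source directory would look something like this (the descriptor path here is an assumption about where the file lives; the --descriptor flag is documented in the pack CLI reference):
pack build --builder=gcr.io/buildpacks/builder --path source --descriptor project.toml sample-functions-framework-python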
I am attempting to run a Docker container, but I get "ModuleNotFoundError: No module named 'basic_config'" when importing the basic_config module from the config directory. The code runs without any issues on Windows and Linux servers; the error occurs only when running inside a base Docker container. How can I resolve this and import the module correctly within the Docker container?
This is the file structure:
"""
ml-project/
enviroment/
env.dev
env.prod
src/
config/
base_config.py
model/
model1/
model.py
.env
piplines.yml
requirements.txt
"""
This is model.py:
import sys
sys.path.append("../../config")
from basic_config import AGE
print(AGE)
This is the Dockerfile:
FROM python:3.8
WORKDIR /app
COPY . .
ARG env_type=dev
RUN pip install -r requirements.txt
CMD ["python", "src/module/model1/model.py"]
I'm doing the Building Neo4j Applications with Python course on Neo4j's GraphAcademy and am stuck early in the process at the Setting Environment Variables step. I've installed the dependencies (Flask etc.) but don't seem to have a .env file for the next part...
Setting Environment Variables
This project will read environment variables from the .env file located in the project root.
The project contains an example file at .env.example. You can run the following command in your terminal window to copy the example file to .env.
cp .env.example .env
But when I try to run this in the shell I get the following error:
cp: .env.example: No such file or directory
I don't seem to have a .env file in any of the newly created folders in the sandbox. Can anyone help with this?
For me, this worked:
Clone the git repository that provides the scaffolding
git clone https://github.com/neo4j-graphacademy/app-python.git
Change directory to be in the newly cloned project root folder.
This step is not explicitly mentioned in the GraphAcademy course.
cd app-python
Copy the template env file
cp .env.example .env
Inspect the file to make sure it looks right
cat .env
which printed
FLASK_APP=api
FLASK_ENV=development
FLASK_RUN_PORT=3000
NEO4J_URI=neo4j://localhost:7687
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=neo
JWT_SECRET=secret
SALT_ROUNDS=10
If you still can't get it to work, let me know which step fails and with which error.
When I upload a (zipped) deployment package as a Lambda function on AWS, I get "no module named..." errors for both bs4 and google.
I created a virtual environment using venv and installed the required dependencies.
The app works fine when running from within the virtual environment. But when I zip it up and upload it as a Lambda function on AWS, I get "no module named..." errors for both "bs4" and (if I comment out the import of bs4 for debugging) also for "google". I checked the site-packages folder in the zip file and they seem to be there.
Why is AWS saying there is no module when there is?!
I am using Python 3.6 on Ubuntu.
Lambda needs a ZIP with all the libraries and your main Python code file at the same level, in the root of the archive.
Here is what I do:
Create a new package folder with the following hierarchy:
mkdir -p ./package/tmp/lib
Copy the project folder into the temp folder:
cp -a ./$(PROJECT)/. ./package/tmp/
Copy the Python site-packages from the virtual env into the temp folder:
cp -a $(VIRTUAL_ENV)/lib/python2.7/site-packages/. ./package/tmp/
Remove any unused libraries (that are not required for this particular Lambda to run) from the temp folder:
rm -rf ./package/tmp/wheel*
Zip the temp package directory:
cd ./package/tmp && zip -r ../$(PROJECT).zip .
The resulting zip is ready to upload to Lambda.
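If the import errors persist, it is worth checking that the libraries really sit at the root of the archive rather than under an extra directory level, for example with the standard unzip tool (the archive path matches the steps above):
unzip -l ./package/$(PROJECT).zip | head
bs4/ and google/ should appear at the top level of the listing, next to your main code file.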
My goal is to dockerize a Python 3.6 web application.
During my work I developed a private Python package ('VmwareProviderBL') that my web application uses.
Building my Docker image works perfectly, but when I try to run it, I get an error saying my private Python package is not found.
My Dockerfile
# Use an official Python runtime as a parent image
FROM python:3.6-slim
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
# Install any needed packages specified in requirements.txt
RUN pip install -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 80
# Define environment variable
ENV NAME World
# Run WebController.py when the container launches
CMD ["python", "WebApi/WebController.py"]
My error when trying to run the image:
Traceback (most recent call last):
File "WebApi/WebController.py", line 3, in <module>
from VmwareProviderBL import SnapshotBL, MachineBL, Utils
ModuleNotFoundError: No module named 'VmwareProviderBL'
My package hierarchy:
VmwareVirtualMachineProvider/
    WebApi/
    VmwareProviderBL/
    requirements.txt
    Dockerfile
Any ideas, anyone?
I know I somehow need to add this package in my Dockerfile, but I did not find any example online.
When you import a module, Python searches the entries on sys.path, which include (a) the installed locations such as site-packages, (b) the directories listed in the PYTHONPATH variable, and (c) the directory containing the script you're running.
You're installing the requirements for your package (pip install -r requirements.txt), but you're never installing the VmwareProviderBL package itself. This means it's not going to be available in (a) above. The script you're running (WebApi/WebController.py) isn't located in the same directory as the VmwareProviderBL package, so that rules out (c).
The best way to solve this problem would be to include a setup.py file in your project so that you can simply pip install . to install both the requirements and the package itself.
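A minimal sketch of such a setup.py, assuming VmwareProviderBL is a regular package directory (i.e. it contains an __init__.py) at the project root:
# setup.py - minimal sketch; adjust the name, version, and dependency list
from setuptools import setup, find_packages

setup(
    name="VmwareProviderBL",
    version="0.1.0",
    packages=find_packages(include=["VmwareProviderBL", "VmwareProviderBL.*"]),
)
With that in place, the Dockerfile's RUN pip install -r requirements.txt line can be followed by RUN pip install . so the package lands in site-packages.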
You could also modify the PYTHONPATH environment variable to include the directory that contains the VmwareProviderBL package. For example, add this to your Dockerfile:
ENV PYTHONPATH=/app