Python using requirements.txt

I'm building a Google App Engine application using Flask. When I add libraries to requirements.txt, my application does not deploy.
Contents of requirements.txt:
Flask==0.10.1
gunicorn==19.4.5
google-api-python-client==1.5.0
oauth2client==2.0.1
pandas==0.18.0
and it returns this error:
ERROR: (gcloud.preview.app.deploy) Docker build aborted: The command '/bin/sh -c pip install -r requirements.txt' returned a non-zero code: 143
If I remove "google-api-python-client", the error disappears.
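A note on the exit code, as a hedged aside: 143 is 128 + 15, meaning the pip install process was killed with SIGTERM rather than failing on a broken package, which usually points at the build VM running out of memory or time while compiling a heavy dependency. One way to narrow it down is to reproduce the install locally and watch which package the build dies on; this sketch assumes Docker is available locally and that a stock python:2.7 image is close enough to the App Engine runtime:
docker run --rm -v "$PWD":/app -w /app python:2.7 pip install -r requirements.txt
If the local run is killed on the same package, trimming requirements.txt or swapping the heavy packages for prebuilt wheels is the direction to look in.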

Related

AWS CDK: Installing external dependencies using requirements.txt via PythonFunction

I am trying to synthesize a CDK app (TypeScript) which has some Python Lambda functions.
I am using PythonFunction with a requirements.txt file to install the external dependencies. I am running VS Code on WSL. I am encountering the following error.
Bundling asset Test/test-lambda-stack/test-subscriber-data-validator-poc/Code/Stage...
node:internal/fs/utils:347
throw err;
^
Error: ENOENT: no such file or directory, open '~/.nvm/versions/node/v16.17.0/lib/node_modules/docker/node_modules/highlight.js/styles/cp -rTL /asset-input/ /asset-output && cd /asset-output && python -m pip install -r requirements.txt -t /asset-output.css'
at Object.openSync (node:fs:594:3)
at Object.readFileSync (node:fs:462:35)
at module.exports (~/.nvm/versions/node/v16.17.0/lib/node_modules/docker/src/getColourScheme.js:47:26)
at ~/.nvm/versions/node/v16.17.0/lib/node_modules/docker/src/docker.js:809:47
at FSReqCallback.readFileAfterClose [as oncomplete] (node:internal/fs/read_file_context:68:3)
at FSReqCallback.callbackTrampoline (node:internal/async_hooks:130:17) {
errno: -2,
syscall: 'open',
code: 'ENOENT',
path: '~/.nvm/versions/node/v16.17.0/lib/node_modules/docker/node_modules/highlight.js/styles/cp -rTL /asset-input/ /asset-output && cd /asset-output && python -m pip install -r requirements.txt -t /asset-output.css'
}
Error: Failed to bundle asset Test/test-lambda-stack/test-subscriber-data-validator-poc/Code/Stage, bundle output is located at ~/Code/AWS/CDK/test-dev-poc/cdk.out/asset.6b577fe604573a3b53e635f09f768df3f87ad6651b18e9f628c2a086a525bb49-error: Error: docker exited with status 1
at AssetStaging.bundle (~/Code/AWS/CDK/test-dev-poc/node_modules/aws-cdk-lib/core/lib/asset-staging.js:2:614)
at AssetStaging.stageByBundling (~/Code/AWS/CDK/test-dev-poc/node_modules/aws-cdk-lib/core/lib/asset-staging.js:1:4506)
at stageThisAsset (~/Code/AWS/CDK/test-dev-poc/node_modules/aws-cdk-lib/core/lib/asset-staging.js:1:1867)
at Cache.obtain (~/Code/AWS/CDK/test-dev-poc/node_modules/aws-cdk-lib/core/lib/private/cache.js:1:242)
at new AssetStaging (~/Code/AWS/CDK/test-dev-poc/node_modules/aws-cdk-lib/core/lib/asset-staging.js:1:2262)
at new Asset (~/Code/AWS/CDK/test-dev-poc/node_modules/aws-cdk-lib/aws-s3-assets/lib/asset.js:1:736)
at AssetCode.bind (~/Code/AWS/CDK/test-dev-poc/node_modules/aws-cdk-lib/aws-lambda/lib/code.js:1:4628)
at new Function (~/Code/AWS/CDK/test-dev-poc/node_modules/aws-cdk-lib/aws-lambda/lib/function.js:1:2803)
at new PythonFunction (~/Code/AWS/CDK/test-dev-poc/node_modules/@aws-cdk/aws-lambda-python-alpha/lib/function.ts:73:5)
at new lambdaInfraStack (~/Code/AWS/CDK/test-dev-poc/lib/serviceInfraStacks/lambda-infra-stack.ts:24:40)
My requirements.txt file looks like this
attrs==22.1.0
jsonschema==4.16.0
pyrsistent==0.18.1
My CDK code is this:
new PythonFunction(this, `${appName}-subscriber-data-validator-${stage}`, {
  runtime: Runtime.PYTHON_3_9,
  entry: join('lambdas/subscriber_data_validator'),
  handler: 'lambda_hander',
  index: 'subscriber_data_validator.py'
})
Do I need to install anything additional? I have esbuild installed as a devDependency. I'm having a really hard time getting this to work. Any help is appreciated.
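One plausible reading of the stack trace above, offered as a guess rather than a confirmed diagnosis: the bundling step shells out to a command called docker, but the paths in the error (~/.nvm/.../node_modules/docker/src/docker.js and the highlight.js styles directory) suggest it resolved to the unrelated Node package named "docker" (a documentation generator) installed globally under nvm, not the Docker CLI, so the whole bundling command gets treated as a highlight.js style name. A quick check, assuming a Unix shell:
which -a docker
npm ls -g docker
If the nvm-installed npm package shows up first on PATH, removing it with npm uninstall -g docker should let CDK reach the real Docker CLI (which still has to be installed and running for the bundling to succeed).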

pip3 offline installer complains about "no matching distribution" even when the package is present

I have to prepare the installation of a Python 3 FastAPI-based service on a server without an internet connection.
I installed everything needed in a minimal Debian container, tested the service, and ran
pip freeze > requirements.txt
I got:
asgiref==3.4.1
certifi==2020.6.20
chardet==4.0.0
click==8.0.1
fastapi==0.68.0
fastapi-utils==0.2.1
greenlet==1.1.1
h11==0.12.0
httptools==0.2.0
idna==2.10
iso8601==0.1.16
m3u8==0.9.0
pydantic==1.8.2
python-dotenv==0.19.0
PyYAML==5.4.1
requests==2.25.1
six==1.16.0
SQLAlchemy==1.4.23
starlette==0.14.2
typing-extensions==3.10.0.0
urllib3==1.26.5
uvicorn==0.15.0
uvloop==0.16.0
watchgod==0.7
websockets==9.1
Then I ran this on my host:
mkdir dependencies
pip download -r requirements.txt -d "./dependencies"
cp requirements.txt ./dependencies/
tar cvfz dependencies.tar.gz dependencies
The approach is based on these SO questions and answers:
installing python packages without internet and using source code as .tar.gz and .whl
How to install packages offline?
I created a fresh Debian container with access to the archive made above, installed python3 and python3-pip, disconnected my host from the internet, and tried this:
root@3eed3ed8cafc:~/temp# pip3 install --no-index --find-links="./dependencies/" -r dependencies/requirements.txt
Looking in links: ./dependencies/
Processing ./dependencies/asgiref-3.4.1-py3-none-any.whl
Processing ./dependencies/certifi-2020.6.20-py2.py3-none-any.whl
Processing ./dependencies/chardet-4.0.0-py2.py3-none-any.whl
Processing ./dependencies/click-8.0.1-py3-none-any.whl
Processing ./dependencies/fastapi-0.68.0-py3-none-any.whl
Processing ./dependencies/fastapi_utils-0.2.1-py3-none-any.whl
ERROR: Could not find a version that satisfies the requirement greenlet==1.1.1
ERROR: No matching distribution found for greenlet==1.1.1
But it is there, as the wheel greenlet-1.1.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl:
root@3eed3ed8cafc:~/temp# ls dependencies
PyYAML-5.4.1-cp38-cp38-manylinux1_x86_64.whl m3u8-0.9.0-py3-none-any.whl
SQLAlchemy-1.4.23-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl pydantic-1.8.2-cp38-cp38-manylinux2014_x86_64.whl
asgiref-3.4.1-py3-none-any.whl python_dotenv-0.19.0-py2.py3-none-any.whl
certifi-2020.6.20-py2.py3-none-any.whl requests-2.25.1-py2.py3-none-any.whl
chardet-4.0.0-py2.py3-none-any.whl requirements.txt
click-8.0.1-py3-none-any.whl six-1.16.0-py2.py3-none-any.whl
fastapi-0.68.0-py3-none-any.whl starlette-0.14.2-py3-none-any.whl
fastapi_utils-0.2.1-py3-none-any.whl typing_extensions-3.10.0.0-py3-none-any.whl
greenlet-1.1.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl urllib3-1.26.5-py2.py3-none-any.whl
h11-0.12.0-py3-none-any.whl uvicorn-0.15.0-py3-none-any.whl
httptools-0.2.0-cp38-cp38-manylinux1_x86_64.whl uvloop-0.16.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl
idna-2.10-py2.py3-none-any.whl watchgod-0.7-py3-none-any.whl
iso8601-0.1.16-py2.py3-none-any.whl websockets-9.1-cp38-cp38-manylinux2010_x86_64.whl
Not even moving the greenlet to the first line in requirements.txt helped.
What is wrong?
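Two hedged guesses, based only on the listing above. First, pip only installs wheels whose platform tags it recognizes; greenlet is the first package in the list shipped as a manylinux_2_17/manylinux2014 wheel, and a pip3 older than 19.3 (as bundled with older Debian releases) does not know that tag, so the wheel is silently filtered out and reported as "no matching distribution". Second, pip download fetches wheels for the machine it runs on, so if the host's Python differs from the target container, the cp38 wheels will not match. Checking pip3 --version and python3 --version on the target narrows it down. If it is the old-pip case, one workaround is to ship a newer pip in the bundle: on the connected host run pip download pip -d ./dependencies, then on the target:
pip3 install --no-index --find-links=./dependencies --upgrade pip
pip3 install --no-index --find-links=./dependencies -r dependencies/requirements.txt
If the Python versions differ instead, re-running the download pinned to the target should help, for example pip download -r requirements.txt -d ./dependencies --only-binary=:all: --python-version 3.8 --platform manylinux2014_x86_64.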

scrapyd-deploy error: pkg_resources.DistributionNotFound

I have been trying for a long time to find a solution to the scrapyd error message: pkg_resources.DistributionNotFound: The 'idna<3,>=2.5' distribution was not found and is required by requests
What I have done:
$ docker pull ceroic/scrapyd
$ docker build -t scrapyd .
Dockerfile:
FROM ceroic/scrapyd
RUN pip install "idna==2.5"
$ docker build -t scrapyd .
Sending build context to Docker daemon 119.3kB
Step 1/2 : FROM ceroic/scrapyd
---> 868dca3c4d94
Step 2/2 : RUN pip install "idna==2.5"
---> Running in c0b6f6f73cf1
Downloading/unpacking idna==2.5
Installing collected packages: idna
Successfully installed idna
Cleaning up...
Removing intermediate container c0b6f6f73cf1
---> 849200286b7a
Successfully built 849200286b7a
Successfully tagged scrapyd:latest
I run the container:
$ docker run -d -p 6800:6800 scrapyd
Next:
scrapyd-deploy demo -p tutorial
And get error:
pkg_resources.DistributionNotFound: The 'idna<3,>=2.5' distribution was not found and is required by requests
I'm not a Docker expert, and I don't understand the logic. If idna==2.5 has been successfully installed inside the container, why does the error message require version 'idna<3,>=2.5'?
The answer is very simple, and it ended my three days of torment. When I run
scrapyd-deploy demo -p tutorial
I am running it outside the created container, not inside it.
The problem was solved by:
pip uninstall idna
pip install "idna==2.5"
This had to be done on the virtual server itself, not in the container. I can't believe I didn't understand it right away.
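A quick way to confirm which environment scrapyd-deploy is actually using, offered as a sketch: the traceback comes from wherever scrapyd-deploy itself runs, so inspect that interpreter rather than the container:
which scrapyd-deploy
pip show idna requests
If pip show reports an idna older than 2.5 (or none at all) in that environment, that is the installation the pip uninstall / pip install "idna==2.5" above needs to touch.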

Docker run cannot find executable "uwsgi"

I am trying to deploy a falcon app with Docker. Here is my Dockerfile:
FROM python:2-onbuild
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
RUN pip install -r ./requirements.txt
RUN pip install uwsgi
EXPOSE 8000
CMD ["uwsgi", "--http”, " :8000" , "—wsgi-file”, "falconapp.wsgi"]
However I keep getting error saying:
/bin/sh: 1: [uwsgi,: not found
I've tried running uwsgi in the local directory and it works well with the command:
uwsgi --http :8000 --wsgi-file falconapp.wsgi
Why is Docker not working in this case???
Here is the log, I'm pretty sure uwsgi is installed:
Step 5/7 : RUN pip install uwsgi
---> Running in 2df7c8e018a9
Collecting uwsgi
Downloading uwsgi-2.0.17.tar.gz (798kB)
Building wheels for collected packages: uwsgi
Running setup.py bdist_wheel for uwsgi: started
Running setup.py bdist_wheel for uwsgi: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/94/c9/63/e7aef2e745bb1231490847ee3785e3d0b5f274e1f1653f89c5
Successfully built uwsgi
Installing collected packages: uwsgi
Successfully installed uwsgi-2.0.17
Removing intermediate container 2df7c8e018a9
---> cb71648306bd
Step 6/7 : EXPOSE 8000
---> Running in 40daaa0d5efa
Removing intermediate container 40daaa0d5efa
---> 39c75687a157
Step 7/7 : CMD ["uwsgi", "--http”, " :8000" , "—wsgi-file”, "falconapp.wsgi"]
---> Running in 67e6eb29f3e0
Removing intermediate container 67e6eb29f3e0
---> f33181adbcfa
Successfully built f33181adbcfa
Successfully tagged image_heatmap:latest
dan@D-MacBook-Pro:~/Documents/falconapp_api$ docker run -p 8000:80 small_runner
/bin/sh: 1: [uwsgi,: not found
Very often you have to give the full path to the executable. If you go into your container and run whereis uwsgi, it will tell you it is at /usr/local/bin/uwsgi, so your CMD should use the same form:
CMD ["/usr/local/bin/uwsgi", "--http", " :8000" , "--wsgi-file", "falconapp.wsgi"]
I see some incorrect syntax in the CMD; please use this:
CMD ["uwsgi", "--http", " :8000" , "--wsgi-file", "falconapp.wsgi"]
Some of the double quotes are typographic ("smart") quotes rather than plain ASCII quotes, and the wsgi-file flag uses an em dash instead of --. Because the array is then not valid JSON, Docker falls back to treating the CMD as a shell-form string, and /bin/sh tries to execute the literal text, which is why it reports [uwsgi,: not found.
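If in doubt, one way to see how Docker actually stored the CMD (a sketch, using the image tag from the build log above):
docker inspect -f '{{json .Config.Cmd}}' image_heatmap
A correctly parsed exec-form CMD shows up as the JSON array itself; the broken one shows up wrapped in /bin/sh -c with the whole bracketed string as a single argument, which matches the [uwsgi,: not found error.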

Getting a Python and React component-based container to work

I am attempting to dockerize this workflow for an isomorphic app.
I built the container from the Dockerfile below.
FROM python:3.5-slim
RUN apt-get update && \
apt-get -y install gcc mono-mcs && \
apt-get -y install vim && \
apt-get -y install nano && \
rm -rf /var/lib/apt/lists/*
RUN mkdir -p /statics/js
VOLUME ["/statics/"]
WORKDIR /statics/js
COPY requirements.txt /opt/requirements.txt
RUN pip install -r /opt/requirements.txt
EXPOSE 8080
CMD ["python", "/statics/js/app.py"]
and this was the result:
$ docker build -t ciasto/pythonreact:v2 .
Sending build context to Docker daemon 1.327 MB
Step 1/9 : FROM python:3.5-slim
---> b27a94c44674
Step 2/9 : RUN apt-get update && apt-get -y install gcc mono-mcs && apt-get -y install vim && apt-get -y install nano && rm -rf /var/lib/apt/lists/*
---> Using cache
---> c76cb348707c
Step 3/9 : RUN mkdir -p /statics/js
---> Using cache
---> 2ef5b24f551c
Step 4/9 : VOLUME /statics/
---> Using cache
---> 5e62c6af1867
Step 5/9 : WORKDIR /statics/js
---> Using cache
---> a5a018e8c727
Step 6/9 : COPY requirements.txt /opt/requirements.txt
---> Using cache
---> 1fa4dccc6608
Step 7/9 : RUN pip install -r /opt/requirements.txt
---> Running in 8845a0efcee7
Collecting TurboGears2==2.3.10 (from -r /opt/requirements.txt (line 1))
Downloading TurboGears2-2.3.10.tar.gz (176kB)
Collecting Kajiki==0.6.3 (from -r /opt/requirements.txt (line 2))
Downloading Kajiki-0.6.3.tar.gz (174kB)
Collecting tgext.webassets==0.0.2 (from -r /opt/requirements.txt (line 3))
Downloading tgext.webassets-0.0.2.tar.gz
Collecting dukpy==0.1.0 (from -r /opt/requirements.txt (line 4))
Downloading dukpy-0.1.0.tar.gz (2.0MB)
Collecting WebOb>=1.2 (from TurboGears2==2.3.10->-r /opt/requirements.txt (line 1))
Downloading WebOb-1.7.2-py2.py3-none-any.whl (83kB)
Collecting crank<0.9.0,>=0.8.0 (from TurboGears2==2.3.10->-r /opt/requirements.txt (line 1))
Downloading crank-0.8.1.tar.gz
Collecting repoze.lru (from TurboGears2==2.3.10->-r /opt/requirements.txt (line 1))
Downloading repoze.lru-0.6.tar.gz
Collecting MarkupSafe (from TurboGears2==2.3.10->-r /opt/requirements.txt (line 1))
Downloading MarkupSafe-1.0.tar.gz
Collecting nine (from Kajiki==0.6.3->-r /opt/requirements.txt (line 2))
Downloading nine-1.0.0-py2.py3-none-any.whl
Collecting webassets (from tgext.webassets==0.0.2->-r /opt/requirements.txt (line 3))
Downloading webassets-0.12.1.tar.gz (179kB)
Collecting cssmin (from tgext.webassets==0.0.2->-r /opt/requirements.txt (line 3))
Downloading cssmin-0.2.0.tar.gz
Building wheels for collected packages: TurboGears2, Kajiki, tgext.webassets, dukpy, crank, repoze.lru, MarkupSafe, webassets, cssmin
Running setup.py bdist_wheel for TurboGears2: started
Running setup.py bdist_wheel for TurboGears2: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/51/1d/bb/c9cfdcf2a49f71955d5b66aed0dbd187e58e5d77a9fa34a4af
Running setup.py bdist_wheel for Kajiki: started
Running setup.py bdist_wheel for Kajiki: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/ad/fe/15/33e02c73fead4ea9238fcd31d273accf6fb9d922ec901e20c8
Running setup.py bdist_wheel for tgext.webassets: started
Running setup.py bdist_wheel for tgext.webassets: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/00/f2/09/0378f24bd9151b7a927093546c11685899ebec451b65eb181f
Running setup.py bdist_wheel for dukpy: started
Running setup.py bdist_wheel for dukpy: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/21/29/46/34c303b9dca370a8ccc97a84b094c8089b78edde125b0a1fcb
Running setup.py bdist_wheel for crank: started
Running setup.py bdist_wheel for crank: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/1c/00/54/4dcfd62d8268d7b34ea607bd9f8cb12aa930a7718c8c5fbc02
Running setup.py bdist_wheel for repoze.lru: started
Running setup.py bdist_wheel for repoze.lru: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/b2/cd/b3/7e24400bff83325a01d492940eff6e9579f553f33348323d79
Running setup.py bdist_wheel for MarkupSafe: started
Running setup.py bdist_wheel for MarkupSafe: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/88/a7/30/e39a54a87bcbe25308fa3ca64e8ddc75d9b3e5afa21ee32d57
Running setup.py bdist_wheel for webassets: started
Running setup.py bdist_wheel for webassets: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/9d/cb/c2/340b9b695822b6954840bcb6cd147b3a7cfc2bcd922296e63e
Running setup.py bdist_wheel for cssmin: started
Running setup.py bdist_wheel for cssmin: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/c3/79/88/647f59be446af4e9867362ca6e961cc7f218bd793fbdc351a6
Successfully built TurboGears2 Kajiki tgext.webassets dukpy crank repoze.lru MarkupSafe webassets cssmin
Installing collected packages: WebOb, crank, repoze.lru, MarkupSafe, TurboGears2, nine, Kajiki, webassets, cssmin, tgext.webassets, dukpy
Successfully installed Kajiki-0.6.3 MarkupSafe-1.0 TurboGears2-2.3.10 WebOb-1.7.2 crank-0.8.1 cssmin-0.2.0 dukpy-0.1.0 nine-1.0.0 repoze.lru-0.6 tgext.webassets-0.0.2 webassets-0.12.1
---> 86c189792ae7
Removing intermediate container 8845a0efcee7
Step 8/9 : EXPOSE 8080
---> Running in 9243a87c36e2
---> e7d35d54e66d
Removing intermediate container 9243a87c36e2
Step 9/9 : CMD python /statics/js/app.py
---> Running in 6e3b53cd901d
---> 0d79c4f81f3b
Removing intermediate container 6e3b53cd901d
Successfully built 0d79c4f81f3b
So my first question is: what does step 9 mean? Does it mean the build is attempting to run /statics/js/app.py even before I run the container? That would not work, because I plan to mount the statics volume from the host.
Secondly, if I run the command:
$ docker run -it -v ~/Development/my-Docker-builds/pythonReact/statics/:/statics/ -d ciasto/pythonreact:v2
03d77c87651e752450e3be0aa64a0841c088b32a1db5424ad96c150c949d0366
I get the container ID back, but nothing works! I don't even see app.py's startup trace, or any error message if app.py failed.
So how should I run app.py from the host-mounted volume when I run the container?
You can think of CMD as the startup command for the container. That said, step 9 only records that python /statics/js/app.py should be executed whenever you start the container; nothing is run at build time. Also, since you are using the -d flag you won't see the output directly, so you have to fetch the logs with the docker logs command:
docker logs 03d77c87651e752450e3be0aa64a0841c088b32a1db5424ad96c150c949d0366
The logs should be enough to help you figure out the issue. I hope it helps.
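A small usage sketch to go with that: either drop -d so the output goes straight to your terminal, or follow the logs of the detached container by ID (an unambiguous prefix of the ID is enough):
docker run -it -v ~/Development/my-Docker-builds/pythonReact/statics/:/statics/ ciasto/pythonreact:v2
docker logs -f 03d77c87651e
Also note that because /statics/ is mounted from the host, the container will only find /statics/js/app.py if it exists at ~/Development/my-Docker-builds/pythonReact/statics/js/app.py on the host; if it does not, python exits immediately and the container stops.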
