I am trying to run AWS CLI commands in a Lambda function. I followed the tutorial "How to use AWS CLI within a Lambda function (aws s3 sync from Lambda)" by Ilya Bezdelev and generated a zip file with the awscli package. When I try to run the Lambda function, I get the following error:
START RequestId: d251660b-4998-4061-8886-67c1ddbbc98c Version: $LATEST
[INFO] 2020-06-22T19:15:45.232Z d251660b-4998-4061-8886-67c1ddbbc98c
Running shell command: /opt/aws --version
Traceback (most recent call last):
File "/opt/aws", line 19, in <module>
import awscli.clidriver
ModuleNotFoundError: No module named 'awscli'
What could be the issue here?
Everything in the 'site-packages' folder needs to be directly at the root of the zip (and therefore of the Lambda's /opt/ folder), NOT nested inside a 'site-packages' folder, which is unfortunately what you get when you use the tutorial's commands verbatim.
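For illustration, here is a minimal Python sketch of building the layer zip with everything at the root (the virtualenv path and zip name are assumptions, not from the tutorial):
import os
import zipfile

SITE = "venv/lib/python3.8/site-packages"  # hypothetical virtualenv path

with zipfile.ZipFile("awscli-layer.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for root, _, files in os.walk(SITE):
        for name in files:
            full = os.path.join(root, name)
            # Archive relative to site-packages so packages land at the zip root
            zf.write(full, os.path.relpath(full, SITE))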
As @tvmaynard said, you first need to place all the packages in the same path as the aws script of the AWS CLI, using this command:
cp -r ../${VIRTUAL_ENV_DIR}/lib/python${PYTHON_VERSION}/site-packages/. .
But even after that, you will face another problem: some libraries that the AWS CLI depends on, such as PyYAML, must be installed in the runtime's Python, and installing them requires write access to the Python runtime inside the Lambda, which is not allowed.
Even if you try to work around this by installing PyYAML into /tmp/ and telling the interpreter where to search for it, as follows:
import sys
import subprocess

# Install PyYAML into /tmp/ (the only writable path in Lambda) and wait
# for the install to finish before importing; the original Popen call
# did not wait, so the import could run before the install completed.
subprocess.run(
    "pip install PyYAML -t /tmp/ --no-cache-dir",
    shell=True, check=True, text=True,
    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
)
sys.path.insert(1, '/tmp/')
import yaml
You will be able to use the library by importing it inside the Lambda function's context, as if you had added a layer, but not from the underlying command line: the shell uses the Python interpreter inside the Lambda runtime, which has a fixed default path for the libraries it needs.
You will also pay more money, because this installs PyYAML (and any other needed libraries) every time your Lambda is triggered; in a production environment that adds cost to your bill without generating value.
I am trying to convert an mp3 to a wav file in PyCharm using subprocess:
import subprocess
subprocess.call(['ffmpeg', '-i', 'test.mp3', 'test.wav'])
It returns an error about not finding the file, so I changed 'ffmpeg' to its full path on my PC and it worked.
The problem is that I am making an app, and others might install ffmpeg in a different location (since it is downloaded as a zip and can be unzipped anywhere), but I don't know how to get its full path.
I tried the os module:
import os
print(os.path('ffmpeg.exe'))
but it seems it is not able to get the path of the exe:
Traceback (most recent call last):
File "C:\Users\Percy\PycharmProjects\APP\test3.py", line 8, in <module>
print(os.path('ffmpeg.exe'))
TypeError: 'module' object is not callable
I also tried the shutil module:
import shutil
print(shutil.which('ffmpeg'))
print(shutil.which('ffmpeg.exe'))
but it returns two Nones (probably wrong, since I am 100% sure I have installed ffmpeg):
None
None
I want to ask if there is any way to get the full path of ffmpeg in PyCharm, or any method to make ffmpeg install to a designated path when users download the app.
If you can get "everyone" to install FFmpeg using my ffmpeg-downloader package, then all of you can install FFmpeg by:
pip install ffmpeg-downloader
ffdl install
Then in Python your package could use:
import subprocess as sp  # the original snippet assumed this import
import ffmpeg_downloader as ffdl

sp.run([ffdl.ffmpeg_path, '-i', 'input.mp4', 'output.mkv'])
Alternatively, you can use static-ffmpeg to (dynamically) install FFmpeg to Lib/site-packages. (See the linked GitHub page for how-to.)
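On the shutil.which result from the question: shutil.which only searches the directories on PATH, so it returns None whenever the FFmpeg folder was never added to PATH. A minimal sketch of a workaround, assuming a hypothetical unzip location:
import os
import shutil

# Hypothetical folder where the user unzipped FFmpeg
ffmpeg_dir = r"C:\tools\ffmpeg\bin"

# Prepend it to PATH for this process so shutil.which can find the exe
os.environ["PATH"] = ffmpeg_dir + os.pathsep + os.environ["PATH"]
print(shutil.which("ffmpeg"))  # now prints the full path if the exe is there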
I have two repos that both pip install a package I upload.
In the package I have set up an argparser to take command-line arguments.
However, my current way of using the src script's command-line options is to add two identical scripts to both repos that import the module and call the function I need (the same one defined for the command line).
Is there a way I could call something like
script:
python3 {get-src-path} --options 1
in the .gitlab-ci.yml scripts?
Or even embed Python code like this?
script:
python3
{entered interactive mode}
import <package-name>
package.function()
exit()
{return to bash}
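One possible direction, sketched here under assumptions (the package and function names below are hypothetical, not from the question): if the package declares a console-script entry point, both repos can call a single command from .gitlab-ci.yml without duplicating helper scripts.
# setup.py in the uploaded package
from setuptools import setup

setup(
    name="mypackage",
    packages=["mypackage"],
    entry_points={
        # After `pip install mypackage`, a `mytool` command appears on PATH,
        # so the CI script can simply run: mytool --options 1
        "console_scripts": ["mytool = mypackage.cli:main"],
    },
)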
I am trying to build a container for my Express.js application. The Express.js app makes use of Python via the npm package PythonShell.
I have plenty of Python code in a subfolder of my Express app, and with npm start everything works perfectly.
However, I am new to Docker and I need to containerize the app. My Dockerfile looks like this:
FROM node:18
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3001
CMD ["node", "./bin/www"]
I built the image with docker build . -t blahblah-server and ran it with docker run -p 8080:3001 -d blahblah-server.
I make use of imports at the top of the Python script like this:
import datetime
from pathlib import Path # Used for easier handling of auxiliary file's local path
import pyecma376_2 # The base library for Open Packaging Specifications. We will use the OPCCoreProperties class.
from assi import model
When the Python script is executed (only in the container!), I get the following error message:
/usr/src/app/public/javascripts/service/pythonService.js:12
if (err) throw err;
^
PythonShellError: ModuleNotFoundError: No module named 'pyecma376_2'
at PythonShell.parseError (/usr/src/app/node_modules/python-shell/index.js:295:21)
at terminateIfNeeded (/usr/src/app/node_modules/python-shell/index.js:190:32)
at ChildProcess.<anonymous> (/usr/src/app/node_modules/python-shell/index.js:182:13)
at ChildProcess.emit (node:events:537:28)
at ChildProcess._handle.onexit (node:internal/child_process:291:12)
----- Python Traceback -----
File "/usr/src/app/public/pythonscripts/myPython/wtf.py", line 6, in <module>
import pyecma376_2 # The base library for Open Packaging Specifications. We will use the OPCCoreProperties class. {
traceback: 'Traceback (most recent call last):\n' +
' File "/usr/src/app/public/pythonscripts/myPython/wtf.py", line 6, in <module>\n' +
' import pyecma376_2 # The base library for Open Packaging Specifications. We will use the OPCCoreProperties class.\n' +
"ModuleNotFoundError: No module named 'pyecma376_2'\n",
executable: 'python3',
options: null,
script: 'public/pythonscripts/myPython/wtf.py',
args: null,
exitCode: 1
}
If I comment out the first three imports, I get the same kind of error:
PythonShellError: ModuleNotFoundError: No module named 'assi'
Please note that assi is actually from my own Python code, which is included in the Express.js app directory.
Python seems to be installed in the container correctly. I stepped inside the container via docker exec -it <container id> /bin/bash, and the Python packages are there in the /usr/lib directory.
I really have absolutely no idea how all this works together and why Python doesn't find these modules...
You are trying to use libs that are not in the Python standard library. It seems that you forgot to run pip install when you build the Docker image.
Try adding RUN commands to the Dockerfile that do this for you. Example:
RUN pip3 install pyecma376_2
RUN pip3 install /path/to/assi
Maybe that will solve your problem. Don't forget to check whether Python is already installed in your container (it seems that it is), and if you have both python2 and python3 installed, make sure you use pip3 instead of plain pip.
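For the local assi package specifically, another option is to make its directory importable from the script itself. A sketch under the assumption that assi/ lives somewhere inside the copied app tree (adjust the number of parent hops to the real layout):
# At the top of wtf.py
import sys
from pathlib import Path

# wtf.py sits in public/pythonscripts/myPython/ per the traceback; hop up
# to the directory that contains assi/ (the exact level is an assumption)
sys.path.insert(0, str(Path(__file__).resolve().parents[2]))

from assi import model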
I have a Node.js CLI program called meyda installed (macOS 10.14) using:
sudo npm install --global meyda
From the Terminal I can call the program and it works as expected; like:
meyda --bs=256 --o=apagodis2.csv DczN6842.wav rms
Now, I want to call it from inside a Python script (using Spyder) at the same location. I tried this, but I get an error:
import os
os.system ('/usr/local/bin/meyda --bs=256 --o=apagodis4.csv samples_training/DczN6842.wav rms')
>>> env: node: No such file or directory
I can issue more "traditional" shell commands like this from the same Python script and it works:
os.system ('cp samples_training/DczN6842.wav copy.wav')
I also tried subprocess.call with the same result. I confirmed the executable is at /usr/local/bin/.
To make sure, I also removed all the file arguments and called the program with only the help flag, but I got the same error:
os.system ('/usr/local/bin/meyda -h')
>>> env: node: No such file or directory
Why is the command not found from inside Python but found successfully in the macOS Terminal?
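A likely explanation, offered as an unverified sketch: the env: node: No such file or directory message means meyda's launcher script is started via /usr/bin/env node, and env cannot find node on the PATH that Spyder inherited (GUI-launched apps often get a minimal PATH without /usr/local/bin). Passing an explicit PATH should help:
import os
import subprocess

# Copy the current environment and make sure node's directory is on PATH
# (/usr/local/bin is an assumption -- check with `which node` in Terminal)
env = os.environ.copy()
env["PATH"] = "/usr/local/bin:" + env.get("PATH", "")

subprocess.run(["/usr/local/bin/meyda", "-h"], env=env, check=True)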
I have the following lambda function on AWS
import os
import sys
sys.path.insert(0, '/opt')
def tc1(event, context):
    print("in tc1")
    os.system("pytest /tests/Test_preRequisites.py -s -v")
    os.system("python -m pytest /tests/Test_preRequisites.py -s -v")
When I run this function, the following error is displayed:
Response:
null
Request ID:
"8e8738b7-9h28-4379-b814-688da8c31d58"
Function logs:
START RequestId: 8e8738b7-9h28-4379-b814-688da8c31d58 Version: $LATEST
in tc1
sh: pytest: command not found
/var/lang/bin/python: No module named pytest
END RequestId: 8e8738b7-9h28-4379-b814-688da8c31d58
REPORT RequestId: 8e8738b7-9h28-4379-b814-688da8c31d58 Duration: 38.46 ms Billed Duration: 100 ms Memory Size: 2048 MB Max Memory Used: 57 MB Init Duration: 123.66 ms
From the errors sh: pytest: command not found and /var/lang/bin/python: No module named pytest, I can see that the Lambda function is unable to find the pytest module.
I have tried to run the pytest command and also the python -m pytest command, but both give the same error.
However, I have already added a zip file as a layer and added that layer to this lambda function.
I installed pytest on my local machine to a folder by the command pip install pytest -t C:\Users\admin\dependencies
and then zipped the contents of that folder and uploaded it to the layer on AWS.
Still I am unable to access the pytest module.
This works perfectly fine on my local machine in my local environment. The issue occurs on AWS Lambda only, so the script itself is working fine.
Can anyone please let me know what needs to be added or modified here to get this working?
Thanks.
Place your dependencies in a 'python' directory for Python layers, like this:
pip install pytest -t C:\Users\admin\dependencies\python
then zip up the contents of the 'dependencies' folder as before. The zip file will contain a single directory, 'python', with your dependencies under it.
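Once the layer is importable, you may not need to shell out at all: pytest exposes an in-process entry point, pytest.main. A minimal sketch of the handler using it (the return value handling is illustrative):
import pytest

def tc1(event, context):
    # Runs the tests in-process; pytest.main returns the exit code (0 == all passed)
    exit_code = pytest.main(["/tests/Test_preRequisites.py", "-s", "-v"])
    return {"pytestExitCode": int(exit_code)}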
This is because there's no entry point in the Lambda environment. When you install pytest normally, you get a pytest script thanks to the project's options.entry_points value in its setup.cfg (found here: https://github.com/pytest-dev/pytest/blob/main/setup.cfg).
If you install the package into a virtualenv and navigate to its /bin directory, you'll see a pytest script sitting there. That's what's normally executed when you invoke the pytest command on the CLI. Your Lambda needs a version of that script if you want to be able to shell out to it.
For reference, here's what's in that script:
#!/path/to/venv/bin/python
# -*- coding: utf-8 -*-
import re
import sys

from pytest import console_main

if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(console_main())
I have not verified it myself, but I suspect that changing the shebang to #!/usr/bin/env python in this script would make it work from within the Lambda. Also note that since your dependencies typically end up dumped into the same directory as your code in a Lambda package, you may need to use a different name for the script (because the name pytest is already taken by a directory).
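For example, a renamed copy along those lines (untested, per the caveat above) might look like:
#!/usr/bin/env python
# run_pytest.py -- renamed so it doesn't collide with the pytest/ package
# directory sitting next to it in the Lambda package.
import sys

from pytest import console_main

if __name__ == '__main__':
    # The re.sub in the generated script only strips Windows launcher
    # suffixes (-script.pyw / .exe), so it is omitted here.
    sys.exit(console_main())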