I have the following Lambda function on AWS:
import os
import sys
sys.path.insert(0, '/opt')
def tc1(event, context):
    print("in tc1")
    os.system("pytest /tests/Test_preRequisites.py -s -v")
    os.system("python -m pytest /tests/Test_preRequisites.py -s -v")
When I run this function, the following error is displayed:
Response:
null
Request ID:
"8e8738b7-9h28-4379-b814-688da8c31d58"
Function logs:
START RequestId: 8e8738b7-9h28-4379-b814-688da8c31d58 Version: $LATEST
in tc1
sh: pytest: command not found
/var/lang/bin/python: No module named pytest
END RequestId: 8e8738b7-9h28-4379-b814-688da8c31d58
REPORT RequestId: 8e8738b7-9h28-4379-b814-688da8c31d58 Duration: 38.46 ms Billed Duration: 100 ms Memory Size: 2048 MB Max Memory Used: 57 MB Init Duration: 123.66 ms
From these errors (sh: pytest: command not found and /var/lang/bin/python: No module named pytest), I can understand that the Lambda function is unable to find the pytest module.
I have tried running both the pytest command and the python -m pytest command; both give the same error.
However, I have already uploaded a zip file as a layer and attached that layer to this Lambda function.
I installed pytest on my local machine to a folder by the command pip install pytest -t C:\Users\admin\dependencies
and then zipped the contents of that folder and uploaded it to the layer on AWS.
Still I am unable to access the pytest module.
This works perfectly fine in my local environment; the issue occurs only on AWS Lambda, so the script itself is fine.
Can anyone please let me know what needs to be added or modified here to get this working?
Thanks.
Place your dependencies in a 'python' directory for Python layers, like this:
pip install pytest -t C:\Users\admin\dependencies\python
then zip up the contents of the 'dependencies' folder as before. The zip file will contain a single directory, 'python', with your dependencies under it. Lambda extracts layers under /opt, and for Python runtimes the layer's python/ directory is added to the import path, which is why the folder name matters.
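Once the layer is packaged this way, import pytest works inside the handler process itself. If shelling out still fails, a minimal in-process alternative (a sketch, untested on Lambda) is to call pytest.main directly, which needs no console script at all:

import pytest

def tc1(event, context):
    # Run pytest inside the handler process instead of shelling out
    exit_code = pytest.main(["/tests/Test_preRequisites.py", "-s", "-v"])
    return {"pytest_exit_code": int(exit_code)}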
As for the sh: pytest: command not found error: this is because there's no pytest entry-point script in the Lambda environment. When you install pytest normally, you get a pytest script due to the project's options.entry_points value in its setup.cfg (found here: https://github.com/pytest-dev/pytest/blob/main/setup.cfg).
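Paraphrasing the relevant section of that file, the entry points are declared roughly like this:

[options.entry_points]
console_scripts =
    pytest=pytest:console_main
    py.test=pytest:console_main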
If you install the package into a virtualenv and navigate to the /bin directory, you'll see a pytest script sitting in there. That's what's normally being executed when you invoke the pytest command on the CLI. Your Lambda needs a version of that script, if you want to be able to shell out to it.
For reference, here's what's in that script:
#!/path/to/venv/bin/python
# -*- coding: utf-8 -*-
import re
import sys
from pytest import console_main
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(console_main())
I have not verified it myself, but I suspect that changing the shebang to #!/usr/bin/env python in this script would make it work from within the Lambda. Also note that since your dependencies typically end up dumped into the same directory as your code in a Lambda package, you may need to use a different name for the script (because the name pytest is already taken by a directory).
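For example, an untested sketch of such a renamed wrapper (called run_pytest here to dodge the directory-name clash), with the suggested shebang:

#!/usr/bin/env python
# Hypothetical 'run_pytest' wrapper: same idea as the generated console
# script above, minus the Windows-suffix rewrite it no longer needs.
import sys

from pytest import console_main

if __name__ == '__main__':
    sys.exit(console_main())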
Related
I have two repos that both pip install a package that I upload.
In the package I have set up an argparser for taking command-line arguments.
However, my current method of using the command-line options of the src script is to add two identical scripts to both repos; each imports the module and calls the function I need (the same one defined for the command line).
Is there a way I could call something like
script:
  - python3 {get-src-path} --options 1
in the .gitlab-ci.yml scripts?
Or could I even embed Python code like this?
script:
python3
{entered interactive mode}
import <package-name>
package.function()
exit()
{return to bash}
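One way to avoid the duplicated wrapper scripts (a sketch, assuming you control the package and the callable is importable from its top level) is to give the package a __main__.py, so that python3 -m <package-name> --options 1 works directly from a .gitlab-ci.yml script:

# Hypothetical <package-name>/__main__.py; 'function' stands in for the
# callable that both repos currently wrap in their own scripts.
import argparse

from . import function

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--options", type=int, default=1)
    args = parser.parse_args()
    function(args.options)

if __name__ == "__main__":
    main()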
I am trying to build a container for my Express.js application. The Express.js app makes use of Python via the npm package PythonShell.
I have plenty of Python code, which lives in a subfolder of my Express app, and with npm start everything works perfectly.
However, I am new to Docker and I need to containerize the app. My Dockerfile looks like this:
FROM node:18
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3001
CMD ["node", "./bin/www"]
I built the image with:
docker build . -t blahblah-server
and ran it with:
docker run -p 8080:3001 -d blahblah-server
I use imports at the top of the Python script like this:
import datetime
from pathlib import Path # Used for easier handling of auxiliary file's local path
import pyecma376_2 # The base library for Open Packaging Specifications. We will use the OPCCoreProperties class.
from assi import model
When the Python script is executed (only in the container!), I get the following error message:
/usr/src/app/public/javascripts/service/pythonService.js:12
if (err) throw err;
^
PythonShellError: ModuleNotFoundError: No module named 'pyecma376_2'
at PythonShell.parseError (/usr/src/app/node_modules/python-shell/index.js:295:21)
at terminateIfNeeded (/usr/src/app/node_modules/python-shell/index.js:190:32)
at ChildProcess.<anonymous> (/usr/src/app/node_modules/python-shell/index.js:182:13)
at ChildProcess.emit (node:events:537:28)
at ChildProcess._handle.onexit (node:internal/child_process:291:12)
----- Python Traceback -----
File "/usr/src/app/public/pythonscripts/myPython/wtf.py", line 6, in <module>
import pyecma376_2 # The base library for Open Packaging Specifications. We will use the OPCCoreProperties class.
{
traceback: 'Traceback (most recent call last):\n' +
' File "/usr/src/app/public/pythonscripts/myPython/wtf.py", line 6, in <module>\n' +
' import pyecma376_2 # The base library for Open Packaging Specifications. We will use the OPCCoreProperties class.\n' +
"ModuleNotFoundError: No module named 'pyecma376_2'\n",
executable: 'python3',
options: null,
script: 'public/pythonscripts/myPython/wtf.py',
args: null,
exitCode: 1
}
If I comment the first three imports out, I get the same error:
PythonShellError: ModuleNotFoundError: No module named 'assi'
Please note that assi is actually from my own Python code, which is included in the Express.js app directory.
Python seems to be installed in the container correctly. I stepped inside the container via docker exec -it <container id> /bin/bash, and the Python packages are there in the /usr/lib directory.
I really have absolutely no idea how this all works together and why Python doesn't find these modules...
You are trying to use libraries that are not in the Python standard library. It seems you forgot to run pip install when building the Docker image.
Try adding RUN commands to your Dockerfile that do this for you, for example (the apt-get line covers the case where pip is missing from the base image; /path/to/assi stands for wherever your own package sits in the build context):
RUN apt-get update && apt-get install -y python3 python3-pip
RUN pip3 install pyecma376_2
RUN pip3 install /path/to/assi
Maybe that solves your problem. Don't forget to check whether Python is already installed in your container; it seems that it is. And if you have both python2 and python3 installed, make sure you use pip3 instead of plain pip. (On newer Debian-based images, pip may also require the --break-system-packages flag to install system-wide.)
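If it still fails after that, a quick diagnostic (run it via docker exec -it <container id> python3) shows which interpreter and module search path are actually in effect inside the container:

import sys

print(sys.executable)  # which python3 binary is running
print(sys.path)        # where it searches for modules such as pyecma376_2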
I am trying to run AWS CLI commands on a Lambda function. I referred to How to use AWS CLI within a Lambda function (aws s3 sync from Lambda) :: Ilya Bezdelev and generated a zip file with the awscli package. When I try to run the lambda function I get the following error:
START RequestId: d251660b-4998-4061-8886-67c1ddbbc98c Version: $LATEST
[INFO] 2020-06-22T19:15:45.232Z d251660b-4998-4061-8886-67c1ddbbc98c
Running shell command: /opt/aws --version
Traceback (most recent call last):
File "/opt/aws", line 19, in <module>
import awscli.clidriver
ModuleNotFoundError: No module named 'awscli'
What could be the issue here?
Everything in the 'site-packages' folder needs to be directly in the zip (and subsequently in the Lambda's /opt/ folder), NOT nested inside a 'site-packages' folder, which is unfortunately what the tutorial produces when you use its commands verbatim.
As #tvmaynard said, you first need to put all the packages in the same path as the aws script of the AWS CLI, using this command:
cp -r ../${VIRTUAL_ENV_DIR}/lib/python${PYTHON_VERSION}/site-packages/. .
But even after that, you will face a problem: some libraries that the AWS CLI depends on, such as PyYAML, must be installed in the runtime Python, and installing them requires access to the Python runtime inside the Lambda, which is not allowed.
You can try to work around this by telling the interpreter where to search for the PyYAML library and installing it inside /tmp/, as follows:
import sys
import subprocess

stderr = subprocess.PIPE
stdout = subprocess.PIPE
cmd_1 = "pip install PyYAML -t /tmp/ --no-cache-dir"

# Launch pip and wait for it to finish; without the wait, the import
# below can run before the install completes
proc = subprocess.Popen(
    args=cmd_1, start_new_session=True, shell=True, text=True, cwd=None,
    stdout=stdout, stderr=stderr
)
proc.communicate()

sys.path.insert(1, '/tmp/')
import yaml
This makes the library importable inside the Lambda function context, as if you had added a layer to the Lambda, but not from the underlying command line, which is tied to the Python interpreter inside the Lambda runtime and its fixed library search path.
You will also pay more money: installing this library (and any others you may need) on every invocation adds billed duration which, in a production environment, inflates your bill without generating value.
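As an aside, if the goal is simply to perform AWS operations from the Lambda rather than to run the CLI binary specifically, boto3 ships preinstalled in the Python Lambda runtime and sidesteps packaging awscli entirely. A minimal sketch, using an S3 bucket listing as a stand-in for your actual task:

import boto3

def handler(event, context):
    # boto3 is preinstalled in the Lambda Python runtime; no layer required
    s3 = boto3.client("s3")
    return [bucket["Name"] for bucket in s3.list_buckets()["Buckets"]]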
I have been using the PyCharm/IntelliJ community editions for a while to write and debug Python scripts, but now I'm trying to debug a Python module, and PyCharm parses the command-line instruction incorrectly, causing an execution error; or maybe I'm making a bad configuration.
This is my run/debug configuration:
And this is executed when I run the module (no problems here):
/usr/bin/python3.4 -m histraw
But when I debug, this is the output in the IntelliJ console:
/usr/bin/python3.4 -m /opt/apps/pycharm/helpers/pydev/pydevd.py --multiproc --client 127.0.0.1 --port 57851 --file histraw
/usr/bin/python3.4: Error while finding spec for '/opt/apps/pycharm/helpers/pydev/pydevd.py' (<class 'ImportError'>: No module named '/opt/apps/pycharm/helpers/pydev/pydevd')
Process finished with exit code 1
As you can see, the parameters are parsed incorrectly: after the -m option, an IntelliJ debug script is passed before the module name.
I also tried just putting -m histraw in the Script field, but that doesn't work; that field only accepts Python script paths, not modules.
Any ideas?
There is another way to make it work. You can write a Python script to run your module, then just configure PyCharm to run this script.
import sys
import os
import runpy

# Make the module's parent directory importable
path = os.path.dirname(sys.modules[__name__].__file__)
path = os.path.join(path, '..')
sys.path.insert(0, path)

# Run the module just as `python -m <your module name>` would
runpy.run_module('<your module name>', run_name="__main__", alter_sys=True)
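For example, with the asker's module the last line would read runpy.run_module('histraw', run_name="__main__", alter_sys=True); point the PyCharm run configuration at this bootstrap script instead of at the module.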
Then the debugger works.
In PyCharm 2019.1 (Professional), I'm able to select the run as module option under configurations, as below
I found it easiest to create a bootstrap file (debuglaunch.py) with the following contents.
from {package} import {file with __main__}
if __name__ == '__main__':
    {file with __main__}.main()
For example, to launch locustio in the pycharm debugger, I created debuglaunch.py like this:
from locust import main
if __name__ == '__main__':
    main.main()
And configured PyCharm as follows.
NOTE: I found I was not able to break into the debugger unless I added a breakpoint on main.main(). That may be specific to locustio, however.
The problem has been fixed since PyCharm 4.5.2. See the corresponding issue in the PyCharm tracker:
https://youtrack.jetbrains.com/issue/PY-15230
I'm trying to run code coverage over my program's unit tests. I'm using mock in the tests, which means I have to use Python 3 or later. I've installed coverage.py using pip:
pip install coverage
The installation worked and coverage is working properly. The issue is that when I try to run coverage over my unit tests, it runs with Python 2.6 and fails on import mock, although my script starts with #!/usr/bin/python3:
coverage run ./my_tests.py
Traceback (most recent call last):
File "./my_tests.py", line 9, in module
from unittest.mock import patch
ImportError: No module named mock
Is there a way to configure coverage to run with python3? Is there a version of coverage which works with python3 by default?
You apparently have 2.6 as your default Python. Or at least, you installed the coveragepy module in the 2.6 tree, which put 'coverage' in python26/Scripts, which then runs coveragepy with 2.6. However, the module works with both 2.x and 3.x if you explicitly run it with one or the other instead of just the default (for example, python3 -m coverage run ./my_tests.py, then python3 -m coverage report).
I happened to have 'installed' coveragepy by cloning it into my dev directory. I also wrote a cover.bat for my particular need, which is to test new and patched idlelib files in my Python repository clone before committing them. Here is my file. Of particular relevance to your question are the lines that begin with %py%. I set that to my repository build of 3.4, but you could just as easily point it at an installed 3.4 or even make it an input.
#echo off
rem Usage: cover fileName [test_ suffix] # proper case required by coveragepy
rem filename without .py, 2nd parameter if test is not test_filename
setlocal
set py=34\pcbuild\python_d
set src=idlelib.%1
if "%2" EQU "" set tst=34/Lib/idlelib/idle_test/test_%1.py
if "%2" NEQ "" set tst=34/Lib/idlelib/idle_test/test_%2.py
%py% coveragepy run --pylib --source=%src% %tst%
%py% coveragepy report --show-missing
%py% coveragepy html
htmlcov\34_Lib_idlelib_%1.html
rem Above opens new report; htmlcov\index.html displays report index