I am working on a project that is using AWS CodeBuild to deploy a Serverless (SLS) function that is written in Python.
The deployment works fine within CodeBuild. It successfully creates the function and I can view the Lambda in the Lambda AWS console. Whenever the function is triggered, I get the error seen below:
Runtime.ImportModuleError: Unable to import module 'some/function': attempted relative import with no known parent package
It is extremely frustrating, as I know the file exists at the path listed above. During the CodeBuild run, I can ls into the directory and confirm that it does exist. The function is defined in my serverless.yml file as follows:
functions:
  file-blaster:
    runtime: python3.7
    handler: some/function.function_name
    events:
      - existingS3:
          bucket: some_bucket
          events:
            - s3:ObjectCreated:*
          rules:
            - prefix: ${opt:stage}/some/prefix
Sadly, I haven't been able to crack this one. Has anyone had a similar experience while working with SLS and Python in the cloud?
It seems odd that SLS would build and deploy successfully, but the Lambda itself can't find the function.
This will be a short answer to what is a somewhat longer discussion on Python imports. You can do the research yourself on the hectic and confusing battle between relative and absolute imports as a design choice for a Python project.
The Gist:
It is necessary to understand that the base of the Python import path for SLS functions IS the directory where the serverless.yml file lives (I imagine it is similar to having a main.py that calls the other files referenced as "functions" in the SLS YAML). In my case above, I had not structured the imports as absolute imports, and that was the problem. I switched all of my imports to absolute paths, so that when I moved the package around, it would continue to work.
The error I was given, Runtime.ImportModuleError: Unable to import module 'some/function': attempted relative import with no known parent package, described the actual issue poorly. It should have said that the packages used by some/function could not be found when attempting a relative import, because that was the actual problem that needed fixing.
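To make that concrete, here is a minimal sketch with hypothetical file and function names (helpers.py and do_work are not from my project), laid out next to serverless.yml:

# Hypothetical layout, rooted at the directory containing serverless.yml:
#   serverless.yml
#   some/
#       __init__.py
#       function.py
#       helpers.py

# some/function.py

# Fails in Lambda: the module is loaded as 'some/function' with no known
# parent package, so a relative import has nothing to resolve against.
# from .helpers import do_work

# Works: absolute import from the project root (the serverless.yml directory).
from some.helpers import do_work

def function_name(event, context):
    return do_work(event)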
Hopefully this helps someone else out someday. Let me know if I can provide more information where I haven't already.
I think you need to change your handler property from:
handler: some/function.function_name
to
handler: some/function.{lambda handler name}
For example, if my folder structure is:
- some
- function1.py
then my template will be:
functions:
  file-blaster:
    runtime: python3.7
    handler: some/function1.lambda_handler
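And some/function1.py would then contain the handler function itself; a minimal sketch (the body is just a placeholder):

# some/function1.py

def lambda_handler(event, context):
    # Lambda invokes this function; the handler string is <path/to/file>.<function name>.
    print(event)
    return {"statusCode": 200}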
For more details, check here: https://serverless.com/framework/docs/providers/aws/guide/functions/
Related
I wrote a custom Python package for Ansible to handle business logic for some servers I manage. I have multiple files, and they reference each other by re-importing the package.
So my package named <MyCustomPackage> has functions <Function1>, <Function2>, <Function3>, etc., all in their own files... Some of these functions reference functions in the same package, so to do that those files have:
import MyCustomPackage
at the top. I did it this way instead of a relative import because I'm also unit testing these, and mocking would not work with relative paths because of an __init__.py file in the test directory that was needed for test discovery. The only way I could mock was through importing the package itself. Seemed simple enough.
The problem is with Ansible. These packages are in module_utils. I import them with:
from ansible.module_utils.MyCustomPackage import MyCustomPackage
but when I use the commands I get 'module not found' errors, which I traced back to the import MyCustomPackage statement in the package itself.
So - how should I be structuring my package? Should I try again with relative file imports, or have the package modify the path so it's found with the friendly name?
Any tips would be helpful! Or if someone has a module they've written with Python modules in module_utils and unit tests that they'd be willing to share, that'd be great also!
Many people have problems with relative imports and imports in general in Python because they are ambiguous and surprisingly depend on your current working directory (and other things).
Thus I've created an experimental, new import library: ultraimport
It gives you more control over your imports and lets you do file system based, relative imports.
Given that you have a file function1.py, to import a function from function2.py, you would then write:
import ultraimport
Function2 = ultraimport('__dir__/function2.py', 'Function2')
This will always work, no matter how you run your code. It also does not force you to a specific package structure. You can just have any files you like.
I have a package that is imported from the parent path everywhere, so I have to set the PYTHONPATH environment variable when I want to serve the docs for that package.
I've searched the docs, Stack Overflow, and Google, but couldn't find a solution for configuring that in mkdocs.yml, or for running a piece of Python code to append it to sys.path.
Edit:
handlers:
  python:
    setup_commands:
      - import sys; sys.path.append('..'); print(sys.path)
could be what I'm searching for, but during mkdocs build (or serve) the print is never called.
The solution really is to use setup_commands. Prints don't seem to be shown there, and my actual problem was that a package used by my code wasn't installed. The error message in that case is the same as when the import can't be found.
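For anyone else landing here, a minimal mkdocs.yml sketch, assuming the mkdocstrings plugin is what the handlers block above belongs to (and that '..' really is the parent path you need):

plugins:
  - mkdocstrings:
      handlers:
        python:
          setup_commands:
            - import sys
            - sys.path.append('..')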
I am trying to upload a Python Lambda function via a zip file. I copied all of my Python files into a directory called "lambda", installed all of the libraries directly into that directory, ran chmod -R 755 . to make sure all of those files are executable, and then zipped the directory with zip -r ../analysis.zip .
The file that holds my Lambda function is called "main.py" and the handler function is called "handler", so by AWS Lambda convention I set the handler to main.handler in the AWS Lambda console. I check my CloudWatch logs for this Lambda function and still get an error saying AWS cannot find the main module, and also cannot find a regex._regex module.
This is the official error:
[ERROR] Runtime.ImportModuleError: Unable to import module 'main': No module named 'regex._regex'
Does anyone know what the problem could be? I have deployed AWS Lambda functions before using the same process, and this is the first time I am getting this issue.
Lambda operates on a Python function/method, not on the file, so the handler must point to an actual function/method, not a file.
So within your main.py file there must be a function, e.g. test_function, and your handler has to be main.test_function. The name of the function in AWS is irrelevant to the handler.
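In the asker's case, with the handler set to main.handler, main.py would need something like this minimal sketch:

# main.py

def handler(event, context):
    # Lambda calls this function; "main.handler" = <file name>.<function name>.
    return {"statusCode": 200}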
Hope that helps.
By the description and error message your naming convention seems right.
The error message indicates that you're missing the regex module. However, from your description you seem to have packaged the dependencies correctly.
Usually, it would then be a case of a runtime mismatch. However, I have had struggles with regex and Lambda even when runtimes do match. By default, I don't go above Python 3.6 at the moment; I have struggled with other dependencies on Lambda, such as pickle, on higher versions recently, whilst everything seems to operate fine on 3.6.
I got rid of the regex error on Lambda with Python 3.6 by downloading the tar.gz file from PyPI and running setup.py... rather than pip3 install. It's a bit of a pain, but it worked.
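Another option that may be worth trying (a sketch, not guaranteed for every package or runtime): have pip fetch a prebuilt manylinux wheel that matches the Lambda runtime, so the compiled regex._regex extension is built for Linux rather than your local machine:

# Install regex as a prebuilt Linux wheel for Python 3.6 into the package directory.
# Cross-targeting a platform requires a --target dir plus --only-binary=:all:.
pip3 install \
  --target ./lambda \
  --platform manylinux1_x86_64 \
  --implementation cp \
  --python-version 36 \
  --only-binary=:all: \
  regex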
This has been already asked a number of times, but I have tried everything suggested plus more, and nothing seems to work.
My setup: an application on Lambda, with Python functions, deployed via CodePipeline. This is the full error I get (and all I can see in the logs):
{
  "errorMessage": "Unable to import module 'lambda_functions.function_one': No module named 'lambda_functions'",
  "errorType": "Runtime.ImportModuleError"
}
lambda_functions is a directory, and function_one is the name of the Python file with the handler in it. The full handler path in my template.yml is lambda_functions.function_one.lambda_handler. I do have an __init__.py in that dir.
I installed the AWS SAM tools and I can invoke the function locally fine. I have also downloaded the zipped project from S3 and checked permissions etc.
The logs show that the requirements are installed correctly, but even then, just to make sure, I tried commenting out everything in the function file except for a bare handler, no dependencies at all, and it still fails.
Any ideas on why Lambda can fail to find my module?
I do not think it is supported to put the file in a subdirectory, so you probably have to make sure that the function_one.py file is in the root of the zip file.
I think posting the question here served as rubber duck debugging! After hours of trying to make it work, I have just found the problem: the CodeUri entry in the template was pointing to a full path to an S3 bucket, like:
s3://aws-eu-west-2-575-foo-foo-pipe/622d85cc8
But it just needs to be ./, like this:
Resources:
  getAllItemsFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./
      Handler: function_one.lambda_handler
I hope this helps someone else and they don't have to spend hours trying to debug this. Lambda is nice when it all works well, but man it is hard to debug.
So I have the recommended setup for smaller projects, where you have multiple module YAML files all in the main directory, all sharing source. Like here: https://cloud.google.com/appengine/docs/python/modules/#devserver
I only have 2 modules: the default module, and my backend module for running tasks, pipeline, etc.
Default is on version 22, backend is on version 'uno' (the first and only version of this module).
I cannot get backend to update to version 'dos'. Whenever I test things I am getting 404s, as if the source files don't exist on the backend module. The requests make it to the correct module, but error out.
I have tried to update using: appcfg.py update main_directory app.yaml backend.yaml
But it always looks like it is only doing a 'default module' update; I never see anything about the backend module, even when I try the above command minus app.yaml (which is acting as my default module YAML).
In the developer console I can only see the single version for my backend module. It has not added a second version despite my attempts to add a 'dos' version and a 'v2' version; neither ever "worked".
Anyone else have problems updating a 'backend' module to a new version? Is it the 'all in one directory' setup giving me problems? Am I just not using the right appcfg incantation?
Update 1: My directory structure looks like this
where module1.yaml is app.yaml and module2.yaml is backend.yaml.
Drop the main_directory from the update command:
appcfg.py update app.yaml backend.yaml
Specifying a directory only works for single-module apps; for uploading modules, only the respective modules' .yaml files should be specified:
You can also update a single module or a subset of the app's modules by specifying only the .yaml files for the desired module(s).