I'm trying to set environment variables in AWS Lambda using Python.
Initially I have an environment variable stackname with no value in the Lambda configuration.
import os

def lambda_handler(event, context):
    if os.environ["stackname"]:
        print("Not empty")
        print(os.environ["stackname"])
    else:
        print("its empty")
        os.environ["stackname"] = "mystack"
        print(os.environ["stackname"])
Now I'm seeing weird intermittent behaviour here. On the first invocation I expect it to print
its empty
mystack
and from then on, whenever I execute the Lambda, it should print
Not empty
mystack
For the first couple of invocations it does print
Not empty
mystack
but after a couple more executions the Lambda prints the following, which is weird:
its empty
mystack
Please suggest whether there is a better way to set environment variables that gives consistent output.
AWS Lambda functions run inside an Amazon Linux environment. Sequential invocations of the same Lambda function may run on the same environment or on a different one (by environment I mean the underlying computer, container, etc.).
This means that you cannot reliably set environment variables at run time and expect them to still be there on the next invocation.
A better approach is to store your run-time variables in persistent storage such as DynamoDB.
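For illustration, a minimal sketch of the DynamoDB approach, assuming a table named lambda-config with a string partition key name (both names are hypothetical):

import boto3

table = boto3.resource('dynamodb').Table('lambda-config')

def get_stackname():
    # Returns None until set_stackname() has stored a value
    item = table.get_item(Key={'name': 'stackname'}).get('Item')
    return item['value'] if item else None

def set_stackname(value):
    table.put_item(Item={'name': 'stackname', 'value': value})

Unlike os.environ, the stored value survives across invocations regardless of which execution environment handles the request.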
Why not just use the environment variable support provided by Lambda? You can configure the env vars when you create or update your function, and then reference those vars in your function code. As for why it prints 'its empty', @John Hanley's answer is pretty accurate.
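For illustration, a minimal sketch of reading a variable configured that way (assuming stackname was set under Configuration → Environment variables):

import os

def lambda_handler(event, context):
    # Set once in the Lambda configuration; identical for every
    # invocation, so treat it as read-only at run time.
    stackname = os.environ.get('stackname', '')
    print(stackname)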
You can do it the same way you would on any system, using the os module:

import os
os.environ['variable'] = 'value'
Related
I'm writing a Python 3 script meant to be run from Jenkins; however, I'd like it to print several debug messages only when it runs locally on a developer's PC.
I know a possible solution would be creating an environment variable in the developer's IDE to be passed to the interpreter and then checking for it on start-up:
import os

debug_mode = False
if 'DEBUGMODE' in os.environ:
    # Note: bool() of any non-empty string is True, even 'False'
    debug_mode = bool(os.environ.get('DEBUGMODE'))

print('Script is starting up')
(...)  # Do stuff
if debug_mode:
    print('So many things to do...')
(...)  # Do other stuff
Actually, I don't like forcing the developer to define DEBUGMODE in his/her environment, so I'm wondering if there's any other way for my script to automatically know it's running in a Jenkins job and not in a debugger.
Thanks in advance!
Max
When a Jenkins job executes, it always sets some default environment variables.
In your Python code you can just check whether one (or more) of these variables exists.
You can go for the JENKINS_URL environment variable, as it is quite unique and probably won't be used for any purpose other than the one you want to achieve.
So your code can look like:
import os

debug_mode = 'JENKINS_URL' not in os.environ

print('Script is starting up')
(...)  # Do stuff
if debug_mode:
    print('So many things to do...')
(...)  # Do other stuff
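If you want to be extra safe, a variant (a sketch, not a requirement) can check several of Jenkins's default variables at once; JENKINS_URL, BUILD_NUMBER and JOB_NAME are all set during a job:

import os

# Any one of these being present indicates a Jenkins job environment
running_in_jenkins = any(
    name in os.environ for name in ('JENKINS_URL', 'BUILD_NUMBER', 'JOB_NAME')
)
debug_mode = not running_in_jenkins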
I am trying to create a pipeline and have defined some variables in my GitLab job which will be used by a Python script to populate results.
Is there any way I can reference those variables in my Python script?
For example:
.test-report:
  extends: .test_report_setup
  tags: *def_runners_tags
  variables:
    value-path: ${CI_PROJECT_DIR}/value
Now in my Python script I want to read the variable value-path to fetch or read files located in that directory. (Please note that the variable is a custom variable on GitLab.)

file_path = '<value-path>'  # <-- want the GitLab job variable here
file_in = open(file_path + "id.txt")

Please help me figure out how I can get it in my Python script, as I am a bit stuck on it.
Any suggestion/help on this will be appreciated. Thanks in advance.
You can access environment variables with os.environ or os.getenv. os.environ is a dict with the environment variables and will fail if you attempt to retrieve a key that doesn't exist. os.getenv is basically os.environ.get, which allows you to set a default value if there is no environment variable with that name.
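For example, a minimal sketch (the variable name VALUE_PATH is hypothetical; an underscore name is assumed because hyphenated names are awkward to read from the environment):

import os

file_path = os.environ['VALUE_PATH']         # raises KeyError if unset
file_path = os.getenv('VALUE_PATH', '/tmp')  # returns '/tmp' if unset

with open(os.path.join(file_path, 'id.txt')) as file_in:
    data = file_in.read()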
I've got a Python application that connects to a database, and I would like the db credentials to be different when it's running in a local env (for testing) than within a Lambda function (for production).
Is there any way, from the Python app, to detect that it is running inside the Lambda function?
EDIT 2:
Thanks @MarkB for the update regarding the new feature of custom runtimes.
The approach:
Certain environment variables are set when code runs in AWS, so checking for the existence of such a variable indicates that the code is running in AWS.
However, due to a new feature, my previous take using the AWS_EXECUTION_ENV environment variable does not work in all cases. Per the docs at https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html, the AWS_EXECUTION_ENV environment variable is not defined for custom runtimes, which means that checking for its existence under a custom runtime does not indicate whether the code is running on AWS.
One can instead check for one of the other AWS_* environment variables (see the link above). Which one is right for you might depend on your use case, but Mark's suggestion looks good:
os.environ.get("AWS_LAMBDA_FUNCTION_NAME") is not None
This works for me. The following works as long as you are using a standard AWS runtime environment:
os.environ.get("AWS_EXECUTION_ENV") is not None
EDIT: I find the existence of the context object insufficient for such a check, because you might be mocking it when not running within an AWS Lambda function. Then again, you may be mocking AWS_EXECUTION_ENV as well...
EDIT 2: With the introduction of Lambda custom runtimes, it may be better to check for the AWS_LAMBDA_FUNCTION_NAME environment variable, like so:
os.environ.get("AWS_LAMBDA_FUNCTION_NAME") is not None
EDIT: See the other answer; this is a better solution:
os.environ.get("AWS_EXECUTION_ENV") is not None
Original answer:
How about checking for the existence of the context object in the handler function? http://docs.aws.amazon.com/lambda/latest/dg/python-programming-model-handler-types.html
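For illustration, a minimal sketch of that idea (aws_request_id is a real attribute of the Lambda context object; defaulting context to None for local runs is an assumption):

def lambda_handler(event, context=None):
    # A real Lambda invocation always passes a context with aws_request_id
    in_lambda = context is not None and hasattr(context, 'aws_request_id')
    print('running in Lambda' if in_lambda else 'running locally')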
For unit testing I use the structure:
+ my_function/
  +- __init__.py           <- empty file
  +- code/
  |    +- __init__.py      <- empty file
  |    +- lambda_function.py
  +- unittest/
       +- __init__.py      <- empty file
       +- tests.py         <- from ..code.lambda_function import *
When running unit tests with python -m my_function.unittest.tests, inside lambda_function.py __name__ == 'my_function.code.lambda_function'.
When running in the Lambda runtime, __name__ == 'lambda_function'. Note that you'll get the same value if you run with python -m my_function.code.lambda_function, so you'll always need a wrapper.
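A minimal sketch of the resulting check, assuming the layout above (inside lambda_function.py):

# The Lambda runtime imports this module directly, so __name__ has no
# package prefix there; under the test runner it does.
RUNNING_IN_LAMBDA = (__name__ == 'lambda_function')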
This is what I use:

import os

try:
    region = os.environ['AWS_REGION']
except KeyError:
    # Not in a Lambda environment
    region = "us-east-1"
Because of this bug, it is possible to tell whether you are running inside an AWS Lambda function:
import multiprocessing

def on_lambda():
    # Lambda's execution environment lacks /dev/shm, so creating a
    # multiprocessing.Pool raises an OSError there.
    try:
        multiprocessing.Pool()
        return False
    except OSError:
        return True
I used this to implement context-sensitive metric reporting successfully.
Let's hope they don't fix the bug any time soon!
To use a logarithmic function, I used export to pass a variable $var1 from a bash script to a Python script. After the calculation, I used

os.environ['var1'] = str(result)

to send the result back to the bash script.
However, bash still shows the unmodified value.
You can have a look at the os.putenv(key, value) function here; maybe it could help you.
Although, as noted in the docs:
When putenv() is supported, assignments to items in os.environ are automatically translated into corresponding calls to putenv(); however, calls to putenv() don’t update os.environ, so it is actually preferable to assign to items of os.environ.
EDIT:
moooeeeep thought about it just a minute before me, but the variable you change only applies to the current process. In such a case I can see two solutions:
- Write the data to a file and read it with your bash script
- Call your bash script directly from within Python, so that the bash process inherits your modified variable (see the sketch below)
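A minimal sketch of the second option (followup.sh is a hypothetical script name):

import math
import os
import subprocess

result = math.log(float(os.environ['var1']))
# Changes to os.environ are only visible to this process and its children:
os.environ['var1'] = str(result)
subprocess.run(['bash', 'followup.sh'])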
I need to build a system environment variable, and I used os.putenv(key, value) to set one; then print os.getenv(key) outputs None to the console.
But the console outputs value (from print os.getenv(key) or print os.environ[key]) when I use os.environ[key] = value to set it.
However, the key and the value are not in the dictionary when I print os.environ.
Why can I not build the system environment variable successfully? I use Windows 7 and Python 2.7.5.
If you read the documentation, you will get the answer to why os.putenv does not work:

This mapping is captured the first time the os module is imported, typically during Python startup as part of processing site.py. Changes to the environment made after this time are not reflected in os.environ, except for changes made by modifying os.environ directly.
If the platform supports the putenv() function, this mapping may be used to modify the environment as well as query the environment. putenv() will be called automatically when the mapping is modified.
Note: Calling putenv() directly does not change os.environ, so it's better to modify os.environ.
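To illustrate the difference, a minimal sketch (MYKEY is a hypothetical variable name):

import os

# os.putenv updates the process environment but NOT os.environ,
# and os.getenv reads os.environ, so this still prints None:
os.putenv('MYKEY', 'value')
print(os.getenv('MYKEY'))       # None

# Assigning to os.environ updates the mapping and, where supported,
# calls putenv() under the hood:
os.environ['MYKEY'] = 'value'
print(os.getenv('MYKEY'))       # 'value'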