How to check if a Python app is running within an AWS Lambda function?

I've got a Python application that connects to a database and I would like the db credentials to be different when it's running in local env (for testing) or within a lambda function (for production).
Is there any way, from the Python app, to detect that it is running inside the lambda function?

EDIT 2:
Thanks @MarkB for the update regarding the new feature of custom runtimes.
The Approach:
There are certain environment variables whose value is set when code runs in AWS. Checking for the existence of such variables would indicate that the code is running in AWS.
However, due to a new feature, my previous take on this with the AWS_EXECUTION_ENV environment variable does not work in all cases. According to the docs at https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html, the AWS_EXECUTION_ENV environment variable is not defined for custom runtimes, which means that checking for its existence is not a reliable indicator of whether the code is running on AWS when a custom runtime is used.
One can check for the existence of one of the other AWS_*-specific environment variables (see the link above). Which one is right for you may depend on your use case, but Mark's suggestion looks good:
os.environ.get("AWS_LAMBDA_FUNCTION_NAME") is not None
This works for me. The following would work as long as you are using a standard AWS runtime environment:
os.environ.get("AWS_EXECUTION_ENV") is not None
EDIT: I find the existence of the context object insufficient for such a check because you might be mocking it when not running within an AWS lambda function. Then again, you may be mocking the AWS_EXECUTION_ENV as well ...
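For the original question (choosing different DB credentials locally vs. in Lambda), a minimal sketch along these lines could work; the helper name and the credential values are hypothetical:
import os

def running_on_lambda():
    # Defined by the Lambda runtime (see the docs link above), including for custom runtimes
    return os.environ.get("AWS_LAMBDA_FUNCTION_NAME") is not None

def get_db_credentials():
    # Hypothetical helper: substitute your real secret handling here
    if running_on_lambda():
        return {"host": "prod-db.example.com", "user": "prod_user"}
    return {"host": "localhost", "user": "test_user"}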

EDIT 2: With the introduction of Lambda function custom runtimes, it may be better to check for the AWS_LAMBDA_FUNCTION_NAME environment variable, like so:
os.environ.get("AWS_LAMBDA_FUNCTION_NAME") is not None
EDIT: See the other answer, this is a better solution:
os.environ.get("AWS_EXECUTION_ENV") is not None
Original answer:
How about checking for the existence of the context object in the handler function? http://docs.aws.amazon.com/lambda/latest/dg/python-programming-model-handler-types.html
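A minimal sketch of that idea (keeping in mind the mocking caveat above):
def lambda_handler(event, context=None):
    # AWS Lambda always passes a context object; when calling the handler
    # locally you can omit it and fall into the "not Lambda" branch.
    if context is not None:
        print("Running inside AWS Lambda")
    else:
        print("Running locally")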

For unit testing I use the structure:
+ my_function/
  +- __init__.py                 <- empty file (as are the other __init__.py files)
  +- code/
  |  +- __init__.py
  |  +- lambda_function.py
  +- unittest/
     +- __init__.py
     +- tests.py                 <- contains: from ..code.lambda_function import *
When running unit tests with python -m my_function.unittest.tests, __name__ in lambda_function.py is 'my_function.code.lambda_function'.
When running in the Lambda runtime, __name__ == 'lambda_function'. Note that you'll get the same value if you run with python -m my_function.code.lambda_function, so you'll always need a wrapper.
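As a hedged sketch, the wrapper inside lambda_function.py could branch on __name__ roughly like this (the settings profiles are made up):
# lambda_function.py
IN_LAMBDA = __name__ == 'lambda_function'  # module name used by the Lambda runtime

def lambda_handler(event, context=None):
    # Hypothetical: pick a settings profile depending on how the module was loaded
    settings_profile = "production" if IN_LAMBDA else "local"
    return settings_profile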

This is what I use:
import os

try:
    region = os.environ['AWS_REGION']
except KeyError:
    # Not in Lambda environment
    region = "us-east-1"

Because of this bug, it is possible to tell whether you are running inside an AWS Lambda function.
import multiprocessing

def on_lambda():
    try:
        # Creating a Pool raises an exception in the Lambda environment (the bug referenced above)
        multiprocessing.Pool()
        on_lambda = False
    except Exception:
        on_lambda = True
    return on_lambda
I used this to implement context-sensitive metric reporting successfully.
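For example, one could hypothetically use it to pick a metrics namespace (the names are made up):
# Hypothetical usage: tag metrics differently depending on where the code runs
namespace = "myapp/lambda" if on_lambda() else "myapp/local"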
Let's hope they don't fix the bug any time soon!

Related

Is there a way to automatically run a combination of Python code and pytest tests several times?

I am looking to automate the process where:
I run some python code,
then run a set of tests using pytest
then, if all tests are validated, start the process again with new data.
I am thinking of writing a script that executes the Python code, then calls pytest using pytest.main(), checks via the exit code that all tests passed, and in case of success starts again.
The issue is that it is stated in pytest docs (https://docs.pytest.org/en/stable/usage.html) that it is not recommended to make multiple calls to pytest.main():
Note from pytest docs:
"Calling pytest.main() will result in importing your tests and any modules that they import. Due to the caching mechanism of python’s import system, making subsequent calls to pytest.main() from the same process will not reflect changes to those files between the calls. For this reason, making multiple calls to pytest.main() from the same process (in order to re-run tests, for example) is not recommended."
I was wondering if it is OK to call pytest.main() the way I intend to, or if there is a better way to achieve what I am looking for.
I've made a simple example to make the problem more clear:
import pytest

A = [0]

def some_action(x):
    x[0] += 1

if __name__ == '__main__':
    print('Initial value of A: {}'.format(A))
    for i in range(10):
        if i == 5:
            # one test in test_mock2 that fails
            test_dir = "./tests/functional_tests/test_mock2.py"
        else:
            # two tests in test_mock that pass
            test_dir = "./tests/functional_tests/test_mock.py"
        some_action(A)
        check_tests = int(pytest.main(["-q", "--tb=no", test_dir]))
        if check_tests != 0:
            print('Interrupted at i={} because of tests failures'.format(i))
            break
    if i > 5:
        print('All tests validated, final value of A: {}'.format(A))
    else:
        print('final value of A: {}'.format(A))
In this example, some_action is executed until i reaches 5, at which point the tests fail and the execute/test loop is interrupted. It seems to work fine; I'm only concerned because of the note in the pytest docs quoted above.
The warning applies to the following sequence of events:
1. Run pytest.main on some folder which imports a.py, directly or indirectly.
2. Modify a.py (manually or programmatically).
3. Attempt to rerun pytest.main on the same directory in the same Python process as #1.
The second run in step #3 will not see the changes you made to a.py in step #2. That is because Python does not import a file twice. Instead, it checks whether the file has an entry in sys.modules and uses that entry instead. This is what lets you import large libraries multiple times without incurring a huge penalty every time.
Modifying the values in imported modules is fine. Python binds names to references, so if you bind something (like a new integer value) to the right name, everyone will be able to see it. Your some_action function is a good example of this. Future tests will run with the modified value if they import your script as a module.
The reason that the caveat is there is that pytest is usually used to test code after it has been modified. The warning is simply telling you that if you modify your code, you need to start pytest.main in a new python process to see the changes.
Since you do not appear to be modifying the code of the files in your test and expecting the changes to show up, the caveat you cite does not apply to you. Keep doing what you are doing.
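If you ever do need fresh imports between runs, a hedged workaround is to launch each pytest run in its own process instead of calling pytest.main() repeatedly, for example:
import subprocess
import sys

def run_tests(test_path):
    # Each run gets a fresh interpreter, so code changes between runs are picked up
    result = subprocess.run([sys.executable, "-m", "pytest", "-q", "--tb=no", test_path])
    return result.returncode  # 0 means all tests passed

if run_tests("./tests/functional_tests/test_mock.py") != 0:
    print("Tests failed")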

How to answer an input() call from a console script in pytest?

I'm currently writing a module which uses console_script in setup.py to create scripts at installation time. For performing the tests I use the plugin pytest-console-scripts to execute those scripts. One of the functions I want to test involves an input() call to get an answer from the user ('y'es or 'n'o), but I do not have any idea how to mock this input.
A sample test using pytest-console-scripts looks like:
import pytest

def test_my_function(script_runner):
    # first option is the console script to be run, followed by arguments
    ret = script_runner.run('myscript', '--version')
    assert ret.success
This can be used when the console script does not involve user action. How can this be solved?
Many thanks in advance, regards, Thomas
EDIT: the provided solutions in How to test a function with input call may solve my question only partially. My intention is to test the functionality through the console script, but without importing the module containing the function called through that script, if that is possible.
After investigating a lot more through Google I came across a solution, which worked perfectly for me:
# pip install pytest-mock pytest-console-scripts
...

def test_user_input(script_runner, mocker):
    # optionally use side_effect with whatever value(s) you want to feed
    # to your tested function
    mocker.patch('builtins.input', return_value='<your_expected_input>')
    # use side_effect=<values> instead if you want to provide more than one value
    # Options have to be separated:
    # Example: ('my_prog', '-a', 'val_a', '-b', 'val_b')
    # or: ('my_prog', '-a val_a -b val_b'.split(' '))
    ret = script_runner.run('my_prog')
    assert ret.success
    assert ret.stdout == <whatever>
    # or: assert 'string' in ret.stdout
See https://docs.python.org/3/library/unittest.mock.html#unittest.mock.Mock.side_effect for further possibilities of how to use side_effect.
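For example, if the script asks two questions in a row, something along these lines should work (the answer values are placeholders):
def test_user_input_multiple(script_runner, mocker):
    # Each input() call consumes the next value from side_effect
    mocker.patch('builtins.input', side_effect=['y', 'n'])
    ret = script_runner.run('my_prog')
    assert ret.success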

Disable a Python unit test for automatic execution only?

The decorator @unittest.SkipTest prevents a unittest from being executed automatically when running the unit tests in a test class. Unfortunately, it also makes the individual execution in PyCharm (by right-clicking on the function and selecting Run/Debug for this test method) fail with a TypeError: don't know how to make test from: <function .... at 0x7fbc8e1234c0>
Is there a way to disable a unit test from automatic (bulk) execution
when running the complete test class, but leaving it executable by hand (preferably in PyCharm) such that it can be run without the need to make any changes in the test file?
I thought
@unittest.skipUnless(condition, reason)
might possibly come in handy, but could not come up with a condition that is satisfied only when the test is
launched by hand. Any suggestions appreciated!
Have you tried including a parameter that is set when run through CI/CD? You could have the default value set to false, and then CI/CD sets it to true. This would give you the condition you are looking for.
As far as I'm aware, the only way to differentiate between CI/CD runs and IDE runs is through some intermediary parameter that you must set.
Edit:
Try setting a custom build configuration in PyCharm for that specific test. You could have that build configuration pass in a parameter to your testsuite. At that point you would have a proper condition to have this test not be skipped when you run tests using the command line vs PyCharm's integrated test runner.
For simplicity, you'll want the default value of the parameter to be to skip, and only set the Boolean value to not skip by passing in True to that param in the special build config in PyCharm.
See: https://www.jetbrains.com/help/idea/build-configuration.html
I would try to control this with a parameter that is set only in PyCharm or another IDE.
That is, you could use skipUnless(...) with a condition relating to an environment variable defined in your PyCharm test configuration.
Here is a complete example:
import os
import unittest

def is_prime(n):
    """ Return whether `n` is prime. """
    return n == 2 or not (n % 2 == 0 or any(n % i == 0 for i in range(3, n, 2)))

class IsPrimeTest(unittest.TestCase):

    @unittest.skipUnless(os.getenv('MYPROJECT_DEVELOPMENT_TEST'), reason="Lengthy test")
    def test_is_prime(self):
        self.assertTrue(is_prime(2))
        self.assertTrue(is_prime(3))
        self.assertFalse(is_prime(4))
        self.assertTrue(is_prime(5))
When run from the shell, the tests are indeed skipped:
$ python3 -m unittest test_example.py
s
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK (skipped=1)
If you select "Edit Configurations...", you can set the specific environment variable in the test configuration, as shown in the screenshot below. With this change, the test in question is actually run in PyCharm (not shown).
Addendum from my comment below:
One helpful option may be to create multiple test "Targets" in PyCharm, using the "+" button (the upper left corner of my screenshot). Then you could have one target that includes the test, and one target that does not, naming them "Standard unit tests" and "All unit tests", for example. To switch between them, you could choose each in a certain dropdown next to a green "play" button (toolbar in my main IDE window). This seems pretty ergonomic and simple.

Python unit tests not discovered in VSCode

I've written a Python test file called scraping_test.py, with a single unittest test class called TestScrapingUtils:
"""Tests for the scraping app"""
import unittest
from bs4 import BeautifulSoup as bs4
from mosque_scraper.management.commands import scraping_utils
from mosque_scraper.selectors import MOSQUE_INFO_ROWS_SELECTOR
class TestScrapingUtils(unittest.TestCase):
"""Test scraping_utils.py """
def setup(self):
"""Setup McSetupface."""
pass
def test_get_keys_from_row(self):
""" Test that we extract the correct keys from the supplied rows."""
test_page_name = "test_page.html"
with open(test_page_name) as test_page_file:
test_mosque = bs4(test_page_file, 'html.parser')
rows = test_mosque.select(MOSQUE_INFO_ROWS_SELECTOR)
field_dict = scraping_utils.get_fields_from_rows(rows)
self.assertDictEqual(field_dict, {})
My settings for unit tests are:
{
    "python.unitTest.unittestEnabled": true,
    "python.unitTest.unittestArgs": [
        "-v",
        "-s",
        ".",
        "-p",
        "*test.py"
    ]
}
It looks like it should work, but when I click to run the tests in VSCode it says that no tests were discovered:
No tests discovered, please check the configuration settings for the tests.
How do I make it work?
You have to run it once using the shortcut Shift+Ctrl+P and typing "Python: Run All Unit Tests".
It won't show up in the editor until it has been successfully executed at least once, or until you use the discover unit tests command.
However, one thing that has caught me many times is that the Python file has to be a valid Python file. The IntelliSense in VS Code for Python is not as sophisticated as it is for JavaScript or TypeScript, and it won't highlight every syntax error. You can verify this by forcing it to run all unit tests and observing the Python Test Log window.
What caught me is that the __init__.py file must be created in every subdirectory, from the root folder specified with -s option (in the example, the current directory ".") to the subdirectory where the test module is located. Only then was I able to discover tests successfully.
In the question example, both project_dir/ and project_dir/scraping_app/ should contain __init__.py. This is assuming that settings.json is located in project_dir/.vscode and the tests are run from project_dir/ directory.
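To illustrate, the resulting layout would look roughly like this (directory names assumed from the question):
project_dir/
  __init__.py
  .vscode/
    settings.json
  scraping_app/
    __init__.py
    scraping_test.py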
Edit: Alternatively, use "-s", "./scraping_app/" as the root test directory so you don't have to put __init__.py to project_dir/.
Instead of the file name 'scraping_test.py' it should be 'test_scraping.py': the file name has to start with the 'test' prefix.
I had the same error with a slightly different configuration. (I am posting this here because this is the question that comes up when you search for this error.)
In addition to what was said above, it is also important to not use periods in the test file names (e.g. use module_test.py instead of module.test.py).
You can add the DJANGO_SETTINGS_MODULE variable and django.setup() inside the __init__.py file of tests package.
import os
import django

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'your_app.settings')
django.setup()
In my case, the problem was that my test was importing a module which was reading an environment variable using os.environ['ENV_NAME']. If the variable does not exist, it throws an error. But VS Code does not log anything (or at least I couldn't find it).
So, the reason was that my .env file was NOT in the workspace root. So I had to add "python.envFile": "${workspaceFolder}/path/to/.env" to the settings.json file.
After that, the test was discovered successfully.
Also had this issue.
For me, the fix was to make sure there are no errors and to comment out all code in files that rely on pytest, just for the initial load.
Another issue that causes the unit tests not to be discovered is using a conda environment that contains an explicit dependency on the conda package itself. This happens when the environment.yml contains the line:
- conda
Removing this line and creating the environment from scratch makes the unit tests discoverable. I have created a bug report in Github for this: https://github.com/microsoft/vscode-python/issues/19643
(This is my second solution to this issue; I decided to create another answer since this is entirely different from the previous one.)
This is my first time using unittest in VS Code. I found that the file names cannot contain spaces or dots and cannot start with numbers.
For the dots, I guess everything after a dot is treated as the file extension by unittest.
For the spaces, I guess the filename is not wrapped in quotes.
For me, discovering the unit tests did the trick:
Shift+Ctrl+P and execute "Python: Discover unit tests".
After running this I get the "Run Test | Debug Test" links over each test function.

How to set environment variables in AWS lambda using Python

I'm trying to set environment variables in AWS Lambda using Python.
Initially I have an environment variable stackname with no value in the Lambda configuration.
import os

def lambda_handler(event, context):
    if os.environ["stackname"]:
        print("Not empty")
        print(os.environ["stackname"])
    else:
        print("its empty")
        os.environ["stackname"] = "mystack"
        print(os.environ["stackname"])
Now I'm seeing weird intermittent behaviour here. For the first invocation I expect it to print
its empty
mystack
and from then on, whenever I execute the Lambda, it should print
Not empty
mystack
For the initial couple of times it prints
Not empty
mystack
but after a couple more executions the Lambda prints the following, which is weird:
its empty
mystack
Please suggest if there is any other, better way to set environment variables that gives consistent output.
AWS Lambda functions run inside of an Amazon Linux environment. Sequential invocations of the same Lambda function may result in the function running on the same environment or a different environment. By environment I mean computer, container, etc.
This means that you cannot reliably set environment variables and expect them to be there on the next invocation.
A better approach is to store your run-time variables in persistent storage such as DynamoDB.
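A hedged sketch of that approach with boto3 (the table name 'app-config' and its key schema are assumptions):
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('app-config')  # hypothetical table with a 'name' partition key

def set_runtime_variable(name, value):
    table.put_item(Item={'name': name, 'value': value})

def get_runtime_variable(name, default=None):
    item = table.get_item(Key={'name': name}).get('Item')
    return item['value'] if item else default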
Why not just use the environment variable support provided by Lambda? You can configure the env vars when you create or update your function, and then reference those vars in your function code. Regarding why it prints out 'its empty', @John Hanley's answer is pretty accurate.
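A tiny sketch of reading such a configured variable (the default value here is made up):
import os

# 'stackname' is configured on the function via the Lambda console or template
stackname = os.environ.get('stackname', 'default-stack')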
You can set it as you would on any system, using os:
import os

os.environ['variable'] = 'value'
