AWS Lambda is not reading environment variables - python

I am writing a Python script to query the Qualys API for vulnerability metadata. I am executing it as a Lambda function in AWS. I have set my environment variables in the console, but when I execute the function, I get the following error:
module initialization error: name 'QUALYS_USERNAME' is not defined
I am reading the variables with the os module in my handler function:
import os
import requests
import time
import lxml
import tinys3
from lxml import etree

def lambda_handler(event, context):
    QUALYS_USERNAME = os.environ('QUALYS_USERNAME')
    QUALYS_PASSWORD = os.environ('QUALYS_PASSWORD')
    ACCESS_KEY = os.environ('ACCESS_KEY')
    SECRET_KEY = os.environ('SECRET_KEY')

s = requests.Session()
s.headers.update({'X-Requested-With': QUALYS_USERNAME})

def login(s):
    payload = {'action': 'login', 'username': QUALYS_USERNAME,
               'password': QUALYS_PASSWORD}
    r = s.post('https://qualysapi.qualys.com/api/2.0/fo/session/',
               data=payload)

def launchReport(s, polling_delay=120):
    payload = {'action': 'launch', 'template_id': 'X',
               'output_format': 'xml', 'report_title': 'X'}
    r = s.post('https://qualysapi.qualys.com/api/2.0/fo/report/', data=payload)
    global extract_id
    extract_id = etree.fromstring(r.content).find('.//VALUE').text
    print("Report ID = %s" % extract_id)
    time.sleep(polling_delay)
    return extract_id

def bucket_upload(s):
    conn = tinys3.Connection(ACCESS_KEY, SECRET_KEY)
    payload = {'action': 'fetch', 'id': extract_id}
    r = s.post('https://qualysapi.qualys.com/api/2.0/fo/report/',
               data=payload)
    os.chdir('/tmp')
    with open(extract_id + '.xml', 'w') as file:
        file.write(r.content)
    f = open(extract_id + '.xml', 'rb')
    conn.upload(extract_id + '.xml', f, 'X-bucket')

login(s)
launchReport(s)
bucket_upload(s)
Here are my defined environment variables in Lambda:
[screenshot: environment variables set in the Lambda console]
I am not sure why I am getting this error.

You need to access the environment variables as a dictionary, not as a function call:
QUALYS_USERNAME = os.environ["QUALYS_USERNAME"]
QUALYS_PASSWORD = os.environ["QUALYS_PASSWORD"]
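If a variable may be absent, os.environ.get("QUALYS_USERNAME") returns None instead of raising a KeyError.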

Potential cause: the inline code at module level, outside of the functions, runs during module initialization, before the event handler is invoked, so QUALYS_USERNAME is referenced before lambda_handler has ever assigned it.
Unrelated: rather than using environment variables for credentials, you should use IAM roles for AWS credentials and Parameter Store for other, non-AWS credentials.
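As a rough sketch of the Parameter Store approach, assuming the credentials were stored as SecureString parameters (the parameter paths below are made up for illustration):
import boto3

ssm = boto3.client('ssm')

def get_param(name):
    # WithDecryption=True is required for SecureString parameters
    return ssm.get_parameter(Name=name, WithDecryption=True)['Parameter']['Value']

# hypothetical parameter paths; use whatever names you created
QUALYS_USERNAME = get_param('/qualys/username')
QUALYS_PASSWORD = get_param('/qualys/password')
With an IAM execution role attached to the function, boto3 picks up AWS credentials automatically, so ACCESS_KEY and SECRET_KEY would not need to be stored at all.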

Building Lambda Function in yaml - Issue

I'm building a CloudFormation deployment that includes a Lambda function written in Python 3.9. However, when I build the function, it will not let me keep the single quotes. This hasn't been an issue for most of the script, since I simply import json and double quotes (") work fine, but one section requires the single quotes.
Here is the code:
import boto3
import json

def lambda_handler(event, context):
    client = client_obj()
    associated = associated_list(client)
    response = client.list_resolver_query_log_configs(
        MaxResults=1,
    )
    config = response['ResolverQueryLogConfigs'][0]['Id']
    ec2 = boto3.client('ec2')
    vpc = ec2.describe_vpcs()
    vpcs = vpc['Vpcs']
    for v in vpcs:
        if v['VpcId'] not in associated:
            client.associate_resolver_query_log_config(
                ResolverQueryLogConfigId=f"{config}",
                ResourceId=f"{v['VpcId']}"
            )
        else:
            print(f"{v['VpcId']} is already linked.")

def client_obj():
    client = boto3.client('route53resolver')
    return client

def associated_list(client_object):
    associated = list()
    assoc = client_object.list_resolver_query_log_config_associations()
    for element in assoc['ResolverQueryLogConfigAssociations']:
        associated.append(element['ResourceId'])
    return associated
Any section that includes f"{v['VpcId']}" requires the single quotes inside the [] for the script to run properly. Since YAML requires the script to be wrapped in single quotes for packaging, how can I fix this?
Example in YAML from another script:
CreateIAMUser:
  Type: 'AWS::Lambda::Function'
  Properties:
    Code:
      ZipFile: !Join
        - |+

        - - import boto3
          - 'import json'
          - 'from botocore.exceptions import ClientError'
          - ''
          - ''
          - 'def lambda_handler(event, context):'
          - '    iam_client = boto3.client("iam")'
          - ''
          - '    account_id = boto3.client("sts").get_caller_identity()["Account"]'
          - ''
I imagine I could re-arrange the script to avoid this, but I would like to use this opportunity to learn something new if possible.
Not sure what you are trying to do, but usually you just use the pipe (a literal block scalar) in YAML for that:
Code:
  ZipFile: |
    import boto3
    import json

    def lambda_handler(event, context):
        client = client_obj()
        associated = associated_list(client)
        response = client.list_resolver_query_log_configs(
            MaxResults=1,
        )
        config = response['ResolverQueryLogConfigs'][0]['Id']
        ec2 = boto3.client('ec2')
        vpc = ec2.describe_vpcs()
        vpcs = vpc['Vpcs']
        for v in vpcs:
            if v['VpcId'] not in associated:
                client.associate_resolver_query_log_config(
                    ResolverQueryLogConfigId=f"{config}",
                    ResourceId=f"{v['VpcId']}"
                )
            else:
                print(f"{v['VpcId']} is already linked.")

    def client_obj():
        client = boto3.client('route53resolver')
        return client

    def associated_list(client_object):
        associated = list()
        assoc = client_object.list_resolver_query_log_config_associations()
        for element in assoc['ResolverQueryLogConfigAssociations']:
            associated.append(element['ResourceId'])
        return associated
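With a literal block scalar, everything indented under the | is taken verbatim, newlines and both kinds of quotes included, so none of the per-line quoting and !Join machinery is needed.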

Python AttributeError: module 'requests' has no attribute 'requestURL'

I would like to run the code from the module requests_with_caching.py, which uses Python's requests module. I have two .py files in the same folder (requests_with_caching.py and test.py). The requests library is installed.
I get an AttributeError: module 'requests' has no attribute 'requestURL'.
I don't get what I'm missing...
import requests
import json

PERMANENT_CACHE_FNAME = "permanent_cache.txt"
TEMP_CACHE_FNAME = "this_page_cache.txt"

def _write_to_file(cache, fname):
    with open(fname, 'w') as outfile:
        outfile.write(json.dumps(cache, indent=2))

def _read_from_file(fname):
    try:
        with open(fname, 'r') as infile:
            res = infile.read()
            return json.loads(res)
    except:
        return {}

def add_to_cache(cache_file, cache_key, cache_value):
    temp_cache = _read_from_file(cache_file)
    temp_cache[cache_key] = cache_value
    _write_to_file(temp_cache, cache_file)

def clear_cache(cache_file=TEMP_CACHE_FNAME):
    _write_to_file({}, cache_file)

def make_cache_key(baseurl, params_d, private_keys=["api_key"]):
    """Makes a long string representing the query.
    Alphabetize the keys from the params dictionary so we get the same order each time.
    Omit keys with private info."""
    alphabetized_keys = sorted(params_d.keys())
    res = []
    for k in alphabetized_keys:
        if k not in private_keys:
            res.append("{}-{}".format(k, params_d[k]))
    return baseurl + "_".join(res)

def get(baseurl, params={}, private_keys_to_ignore=["api_key"], permanent_cache_file=PERMANENT_CACHE_FNAME, temp_cache_file=TEMP_CACHE_FNAME):
    full_url = requests.requestURL(baseurl, params)
    cache_key = make_cache_key(baseurl, params, private_keys_to_ignore)
    # Load the permanent and page-specific caches from files
    permanent_cache = _read_from_file(permanent_cache_file)
    temp_cache = _read_from_file(temp_cache_file)
    if cache_key in temp_cache:
        print("found in temp_cache")
        # make a Response object containing text from the cache, and the full_url that would have been fetched
        return requests.Response(temp_cache[cache_key], full_url)
    elif cache_key in permanent_cache:
        print("found in permanent_cache")
        # make a Response object containing text from the cache, and the full_url that would have been fetched
        return requests.Response(permanent_cache[cache_key], full_url)
    else:
        print("new; adding to cache")
        # actually request it
        resp = requests.get(baseurl, params)
        # save it
        add_to_cache(temp_cache_file, cache_key, resp.text)
        return resp
And test.py:
import requests_with_caching

# it's not found in the permanent cache
res = requests_with_caching.get("https://api.datamuse.com/words?rel_rhy=happy", permanent_cache_file="datamuse_cache.txt")
print(res.text[:100])
# this time it will be found in the temporary cache
res = requests_with_caching.get("https://api.datamuse.com/words?rel_rhy=happy", permanent_cache_file="datamuse_cache.txt")
# This one is in the permanent cache.
res = requests_with_caching.get("https://api.datamuse.com/words?rel_rhy=funny", permanent_cache_file="datamuse_cache.txt")
The module requests_with_caching.py was written for Runestone by the University of Michigan for their Coursera course Data Collection and Processing with Python.
This module imports the requests module and uses a special requests method called requestURL().
The thing is, the requests.requestURL() method used in the requests_with_caching module is particular to Runestone.
In fact, the entire requests module was rewritten for Runestone, because Runestone can't make real API requests.
Take a look at Runestone's version of requests, found in its src/lib directory. You will notice it is different from the requests module that pip install requests placed in your Python environment's site-packages folder.
You can view Runestone's rewritten requests module by running this in Runestone:
with open('src/lib/requests.py', 'r') as f:
    module = f.read()
print(module)
I'd suggest looking at how Runestone's requests.requestURL() function was written and modifying your copy of requests_with_caching.py to add that custom function.
There will be other changes to make as well to get requests_with_caching.py working in your local Python environment.
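For the requestURL() call specifically, here is a minimal local stand-in, assuming (a guess, not Runestone's actual implementation) that the helper simply returns the full URL that would be fetched. It uses the real requests library's PreparedRequest to encode the query string:
from requests.models import PreparedRequest

def request_url(baseurl, params):
    # Build the base URL plus the encoded query string, i.e. the URL
    # that requests.get(baseurl, params) would actually fetch
    req = PreparedRequest()
    req.prepare_url(baseurl, params)
    return req.url
Note that the requests.Response(text, url) calls in the cached branches also do not match the real library (the real requests.Response() takes no arguments), so those lines would need a similar rewrite.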

No results from API call in python

My API call in Python returns no results. It exits with code 0, but nothing is displayed. Is there something I am missing? I am still new to Python and got the code from a YouTube tutorial. I am using my own API key. Here is the code:
#!/usr/bin/env python
# Learn how this works here: http://youtu.be/pxofwuWTs7c

import urllib.request
import json

locu_api = 'XXXXXXXXXXXX'

def locu_search(query):
    api_key = locu_api
    url = 'https://api.locu.com/v1_0/venue/search/?api_key=' + api_key
    locality = query.replace(' ', '%20')
    final_url = url + "&locality=" + locality + "&category=restaurant"
    json_obj = urllib2.urlopen(final_url)
    data = json.load(json_obj)
    for item in data['objects']:
        print(item['name'], item['phone'])
Your script defines the function locu_search, but you never call it; the script therefore terminates successfully, having successfully done nothing of any value.
You need to call your function after it is defined, like:
def locu_search(query):
    # snip

locu_search('San Francisco')
You need to call your function first:
locu_search('.....')
If there is no explicit exit(n), exit(0) is assumed.
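One more thing to watch for once the call is added: the script imports urllib.request but then uses urllib2.urlopen, which is the Python 2 name and will raise a NameError. A minimal Python 3 sketch of the same function (endpoint and parameters taken from the question, the API key still a placeholder):
import urllib.parse
import urllib.request
import json

locu_api = 'XXXXXXXXXXXX'

def locu_search(query):
    # urlencode takes care of escaping spaces, so no manual %20 replacement
    params = urllib.parse.urlencode({
        'api_key': locu_api,
        'locality': query,
        'category': 'restaurant',
    })
    final_url = 'https://api.locu.com/v1_0/venue/search/?' + params
    with urllib.request.urlopen(final_url) as json_obj:
        data = json.load(json_obj)
    for item in data['objects']:
        print(item['name'], item['phone'])

locu_search('San Francisco')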

Google SafeBrowsing API: always getting an error

I am using the Google Safe Browsing API. I tested this simple code:
from safebrowsinglookup import SafebrowsinglookupClient

class TestMe:
    def testMe(self):
        self.key = 'my_Valid_Key_Here'
        self.client = SafebrowsinglookupClient(self.key)
        self.response = self.client.lookup('http://www.google.com')
        print(self.response)

if __name__ == "__main__":
    TM = TestMe()
    TM.testMe()
I always get this, whatever website I test:
{'website_I_tried','error'}
Note that I had to change some lines in the source code after installing this API, because it was written in Python 2 and I am using Python 3.4.1. How can I resolve this problem?
Update:
To understand why the above problem occurred, I ran this code:
from safebrowsinglookup import SafebrowsinglookupClient

class TestMe:
    def testMe(self):
        self.key = 'my_key_here'
        self.client = SafebrowsinglookupClient(self.key, debug=1)
        urls = ['http://www.google.com/', 'http://addonrock.ru/Debugger.js/']
        self.results = self.client.lookup(*urls)
        print(self.results['http://www.google.com/'])

if __name__ == "__main__":
    TM = TestMe()
    TM.testMe()
Now, I got this message:
BODY:
2
http://www.google.com/
http://addonrock.ru/Debugger.js/
URL: https://sb-ssl.google.com/safebrowsing/api/lookup?client=python&apikey=ABQIAAAAAU6Oj8JFgQpt0AXtnVwBYxQYl9AeQCxMD6irIIDtWpux_GHGQQ&appver=0.1&pver=3.0
Unexpected server response
name 'urllib2' is not defined
error
error
The library doesn't support Python 3.x.
In this case, you can either make it support Python 3 (there is also an open pull request for Python 3 compatibility), or make the request to the Google Safe Browsing API manually.
Here's an example using requests:
import requests

key = 'your key here'
URL = "https://sb-ssl.google.com/safebrowsing/api/lookup?client=api&apikey={key}&appver=1.0&pver=3.0&url={url}"

def is_safe(key, url):
    response = requests.get(URL.format(key=key, url=url))
    return response.text != 'malware'

print(is_safe(key, 'http://addonrock.ru/Debugger.js/'))  # prints False
print(is_safe(key, 'http://google.com'))  # prints True
Just the same, but without third-party packages (using urllib.request):
from urllib.request import urlopen

key = 'your key here'
URL = "https://sb-ssl.google.com/safebrowsing/api/lookup?client=python&apikey={key}&appver=1.0&pver=3.0&url={url}"

def is_safe(key, url):
    response = urlopen(URL.format(key=key, url=url)).read().decode("utf8")
    return response != 'malware'

print(is_safe(key, 'http://addonrock.ru/Debugger.js/'))  # prints False
print(is_safe(key, 'http://google.com'))  # prints True

deadline = None after using urlfetch.set_default_fetch_deadline(n)

I'm working on a web application with Python and Google App Engine.
I tried to set the default URLFetch deadline globally, as suggested in a previous thread:
https://stackoverflow.com/a/14698687/2653179
urlfetch.set_default_fetch_deadline(45)
However, it doesn't work: when I print its value in one of the functions, urlfetch.get_default_fetch_deadline() is None.
Here is main.py:
from google.appengine.api import users
import webapp2
import jinja2
import random
import string
import hashlib
import CQutils
import time
import os
import httpRequests
import logging
from google.appengine.api import urlfetch

urlfetch.set_default_fetch_deadline(45)
...
class Del(webapp2.RequestHandler):
    def get(self):
        id = self.request.get('id')
        ext = self.request.get('ext')
        user_id = httpRequests.advance(id, ext)
        d2 = urlfetch.get_default_fetch_deadline()
        logging.debug("value of deadline = %s", d2)
Prints in the Log console:
DEBUG 2013-09-05 07:38:21,654 main.py:427] value of deadline = None
The function being called in httpRequests.py:
def advance(id, ext=None):
    url = "http://localhost:8080/api/" + id + "/advance"
    if ext is None:
        ext = ""
    params = urllib.urlencode({'ext': ext})
    result = urlfetch.fetch(url=url,
                            payload=params,
                            method=urlfetch.POST,
                            headers={'Content-Type': 'application/x-www-form-urlencoded'})
    if result.status_code == 200:
        return result.content
I know this is an old question, but I recently ran into this issue.
The setting is stored in a thread-local, meaning that if your application is set to thread-safe and a request is handled in a different thread than the one you set the default deadline in, the value can be lost. For me, the solution was to set the deadline before every request, as part of the middleware chain.
This is not documented, and required looking through the source to figure out.
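A minimal sketch of that middleware approach, assuming a plain WSGI pipeline (the names deadline_middleware and wrapped are illustrative, not from the original answer):
from google.appengine.api import urlfetch

def deadline_middleware(app, deadline=45):
    # Wrap a WSGI app so the default deadline is set in the thread
    # that actually handles each request.
    def wrapped(environ, start_response):
        # set_default_fetch_deadline stores its value in a thread-local,
        # so it has to run here, not at module import time
        urlfetch.set_default_fetch_deadline(deadline)
        return app(environ, start_response)
    return wrapped

# usage, assuming an existing webapp2 application:
# app = deadline_middleware(webapp2.WSGIApplication(routes), deadline=45)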
