For a personal project I'm trying to write an AWS Lambda function in Python 3.9 that will delete a newly created user if the creator is not myself. For this, the logs in CloudWatch Logs will trigger my Lambda (via CloudTrail and EventBridge), so I receive the JSON request as the event in:
def lambda_handler(event, context)
But I'm having trouble parsing it...
If I print the event, I get this:
{'version': '1.0', 'invokingEvent': '{
    "configurationItemDiff": {
        "changedProperties": {},
        "changeType": "CREATE"
    },
    "configurationItem": {
        "relatedEvents": [],
        "relationships": [],
        "configuration": {
            "path": "/",
            "userName": "newUser",
            "userId": "xxx",
            "arn": "xxx",
            "createDate": "2022-11-23T09:02:49.000Z",
            "userPolicyList": [],
            "groupList": [],
            "attachedManagedPolicies": [],
            "permissionsBoundary": null,
            "tags": []
        },
        "supplementaryConfiguration": {},
        "tags": {},
        "configurationItemVersion": "1.3",
        "configurationItemCaptureTime": "2022-11-23T09:04:40.659Z",
        "configurationStateId": 1669194280659,
        "awsAccountId": "141372946428",
        "configurationItemStatus": "ResourceDiscovered",
        "resourceType": "AWS::IAM::User",
        "resourceId": "xxx",
        "resourceName": "newUser",
        "ARN": "arn:aws:iam::xxx:user/newUser",
        "awsRegion": "global",
        "availabilityZone": "Not Applicable",
        "configurationStateMd5Hash": "",
        "resourceCreationTime": "2022-11-23T09:02:49.000Z"
    },
    "notificationCreationTime": "2022-11-23T09:04:41.317Z",
    "messageType": "ConfigurationItemChangeNotification",
    "recordVersion": "1.3"
}', 'ruleParameters': '{
    "badUser": "arn:aws:iam::xxx:user/badUser"
}', 'resultToken': 'xxx=', 'eventLeftScope': False, 'executionRoleArn': 'arn:aws:iam::xxx:role/aws-service-role/config.amazonaws.com/AWSServiceRoleForConfig', 'configRuleArn': 'arn:aws:config:eu-west-1:xxx:config-rule/config-rule-q3nmvt', 'configRuleName': 'UserCreatedRule', 'configRuleId': 'config-rule-q3nmvt', 'accountId': 'xxx'}
For my purpose, I'd like to get the "changeType": "CREATE" value: if it is CREATE, I check the creator, and if the creator is not myself, I delete newUser.
The weird thing is that when I copy/paste that event into VS Code and format it as a .json document, it reports errors (on line 1, for example, version and invokingEvent should be double-quoted, but well).
For now I'm only trying to reach and print the
"changeType": "CREATE"
value by doing:
import json
import boto3
import logging

iam = boto3.client('iam')

def lambda_handler(event, context):
    """
    Triggered if a user is created
    Check the creator - if not myself :
    - delete new user and remove from groups if necessary
    """
    try:
        print(event['invokingEvent']["configurationItemDiff"]["changeType"])
    except Exception as e:
        print("Error because :")
        print(e)
And I get the error string indices must be integers - it happens on ["configurationItemDiff"].
I understand the error already (I'm new to Python though, so maybe not completely) and have tried many things, like:
print(event['invokingEvent']['configurationItemDiff']): swapping double quotes for single quotes, but that doesn't change anything
print(event['invokingEvent'][0]): but that gives me { (the first character), and [2] gives me c, not the whole value.
At this point I'm stuck and need help, because I can't find any solution for this. I don't use SNS - maybe I should? Because I saw that with it the JSON document would not be the same, and you can access it through ["Records"][...]? I don't know, please help.
What you are printing is a Python dict. It looks sort of like JSON but is not JSON; it is the representation of a Python dict. That means it will have True / False instead of true / false, ' instead of ", etc.
You could do print(json.dumps(event)) instead.
Anyway, the actual problem is that invokingEvent is yet another JSON document, but in its string form: you need to json.loads that nested JSON string. You can see that because the value after invokingEvent is wrapped in another set of '...', so it is a string, not an already-parsed dict.
invoking_event = json.loads(event['invokingEvent'])
change_type = invoking_event["configurationItemDiff"]["changeType"]
ruleParameters is another nested JSON string which would need parsing first if you wanted to use it.
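Putting it together, a minimal sketch of the handler (the actual deletion logic is left out; the field names are taken from the event shown in the question):

import json

def lambda_handler(event, context):
    # 'invokingEvent' and 'ruleParameters' arrive as JSON strings, so parse them first
    invoking_event = json.loads(event['invokingEvent'])
    rule_parameters = json.loads(event['ruleParameters'])

    change_type = invoking_event['configurationItemDiff']['changeType']
    user_name = invoking_event['configurationItem']['configuration']['userName']

    if change_type == 'CREATE':
        print(f"{user_name} was created; check the creator and delete if necessary")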
I'm working with the Azure CLI to script out a storage upgrade as well as add a policy, all in a Python script. However, when I run the script I'm getting some expected and some very NOT expected output.
What I'm using so far:
from azure.cli.core import get_default_cli

def az_cli(args_str):
    args = args_str.split()
    cli = get_default_cli()
    cli.invoke(args)
    if cli.result.result:
        return cli.result.result
    elif cli.result.error:
        raise cli.result.error
    return True
sas = az_cli("storage account list --query [].{Name:name,ResourceGroup:resourceGroup,Kind:kind}")
print(sas)
Using this SO article as a reference, I'm pretty easily making Azure CLI calls; however, my output is the following:
[
    {
        "Kind": "StorageV2",
        "Name": "TestStorageName",
        "ResourceGroup": "my_test_RG"
    },
    {
        "Kind": "Storage",
        "Name": "TestStorageName2",
        "ResourceGroup": "my_test_RG_2"
    }
]
[OrderedDict([('Name', 'TestStorageName'), ('ResourceGroup', 'my_test_RG'), ('Kind', 'StorageV2')]), OrderedDict([('Name', 'TestStorageName2'), ('ResourceGroup', 'my_test_RG_2'), ('Kind', 'Storage')])]
I appear to be getting two arrays back, and I'm unsure of the cause. I assume it has to do with using --query to narrow down the output, but I'm at a loss as to why it then repeats itself. The expected result would be just the first part, which is in JSON format. I have also tried tsv output, with the same results. I appreciate any insight!
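For what it's worth, a likely explanation: cli.invoke() prints the formatted result to stdout itself, just as the az command would in a terminal (that is the first, JSON-formatted block), while the second block is the print(sas) call showing the returned Python objects. A sketch of one common workaround, assuming the out_file argument that invoke() in azure-cli-core accepts, is to redirect the CLI's own printing to os.devnull:

import os
from azure.cli.core import get_default_cli

def az_cli(args_str):
    args = args_str.split()
    cli = get_default_cli()
    # Redirect the CLI's own formatted output so only the returned
    # Python objects reach the caller
    with open(os.devnull, 'w') as devnull:
        cli.invoke(args, out_file=devnull)
    if cli.result.result:
        return cli.result.result
    elif cli.result.error:
        raise cli.result.error
    return True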
I want to ask for your help.
I need to browse a folder of JSON files and search for one attribute with a specific value.
Here is my code:
def listDir():
    fileNames = os.listdir(FOLDER_PATH)
    for fileName in fileNames:
        print(fileName)
        with open('C:\\Users\\Kamčo\\Desktop\\UBLjsons\\' + fileName, 'r') as json_file:
            data = json.load(json_file)
            data_json = json.dumps(data, indent=2, sort_keys=True)
            print(data_json)
            for line in data_json:
                if line['ID'] == 'Kamilko':
                    print("try")  # just to be sure it accessed to this
I am getting this error: TypeError: string indices must be integers.
I also tried to search for a solution here, but it didn't help me.
Here is my JSON
{
    "Invoice": {
        "#xmlns": "urn:oasis:names:specification:ubl:schema:xsd:Invoice-2",
        "#xmlns:cac": "urn:oasis:names:specification:ubl:schema:xsd:CommonAggregateComponents-2",
        "#xmlns:cbc": "urn:oasis:names:specification:ubl:schema:xsd:CommonBasicComponents-2",
        "ID": "Kamilko",
        "IssueDate": "2020-02-09",
        "OrderReference": {
            "ID": "22"
        },
        "InvoiceLine": {
            "Price": {
                "PriceAmount": {
                    "#currencyID": "EUR",
                    "#text": "23.50"
                },
                "BaseQuantity": {
                    "#unitCode": "C62",
                    "#text": "1"
                }
            }
        }
    }
}
Do you have any idea how to do it?
You've loaded your file in using json.load(...). That converts the JSON data into a Python dictionary that you can use to access elements:
if data["Invoice"]["OrderReference"]["ID"] == "22":
    print("try")
(Note the comparison against the string "22" - in this JSON all the values, including the IDs, are strings, not numbers.)
Note that you might want to check the relevant keys exist along the way, in case the structure of your file varies, or you could catch the KeyError that'll come up if the key doesn't exist, using try/except.
Some more background:
When you then call json.dumps(...), you're taking that handy python structure and converting it back into a hard-to-understand string again. I don't think you want or need to do this.
The specific error you have is because dumps has created a string. You're then iterating over that string character by character, and trying to index each character (itself a string) with line['ID']. Strings can only be indexed using integers, e.g. mystr[4], so Python doesn't understand what you're asking it to do with ['ID'].
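Putting it together, a minimal sketch of the search loop (assuming every file shares the structure shown in the question; adjust the key path otherwise):

import json
import os

def listDir():
    for fileName in os.listdir(FOLDER_PATH):
        with open(os.path.join(FOLDER_PATH, fileName), 'r') as json_file:
            data = json.load(json_file)  # a dict now, not a string
            # the attribute you want sits at Invoice -> ID
            if data["Invoice"]["ID"] == "Kamilko":
                print(fileName, "matches")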
I'm fairly new to Python programming, and have thus far been reverse-engineering code that previous developers have made, or cobbling together some functions on my own.
The script itself works. To cut a long story short, it's designed to parse a CSV and to (a) create and/or update the contacts found in the CSV, and (b) correctly assign each contact to their associated company, all using the HubSpot API. To achieve this I've also imported requests and csvmapper.
I had the following questions:
How can I improve this script to make it more Pythonic?
What is the best way to make this script run on a remote server, keeping in mind that requests and csvmapper probably aren't installed on that server, and that I most likely won't have permission to install them - what is the best way to "package" this script, or to upload requests and csvmapper to the server?
Any advice much appreciated.
#!/usr/bin/env python
# -*- coding: utf-8 -*-

from __future__ import print_function
import sys, os.path, requests, json, csv, csvmapper, glob, shutil
from time import sleep

major, minor, micro, release_level, serial = sys.version_info

# Client Portal ID
portal = "XXXXXX"
# Client API Key
hapikey = "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"

# This attempts to find any file in the directory that starts with "note" and ends with ".CSV"
# Server Version
# findCSV = glob.glob('/home/accountName/public_html/clientFolder/contact*.CSV')
# Local Testing Version
findCSV = glob.glob('contact*.CSV')

for i in findCSV:
    theCSV = i
    csvfileexists = os.path.isfile(theCSV)

    # Prints a confirmation if file exists, prints instructions if it doesn't.
    if csvfileexists:
        print ("\nThe \"{csvPath}\" file was found ({csvSize} bytes); proceeding with sync ...\n".format(csvSize=os.path.getsize(theCSV), csvPath=os.path.basename(theCSV)))
    else:
        print ("File not found; check the file name to make sure it is in the same directory as this script. Exiting ...")
        sys.exit()

    # Begin the CSVmapper mapping... This creates a virtual "header" row - the CSV therefore does not need a header row.
    mapper = csvmapper.DictMapper([
        [
            {'name':'account'},    #"Org. Code"
            {'name':'id'},         #"Hubspot Ref"
            {'name':'company'},    #"Company Name"
            {'name':'firstname'},  #"Contact First Name"
            {'name':'lastname'},   #"Contact Last Name"
            {'name':'job_title'},  #"Job Title"
            {'name':'address'},    #"Address"
            {'name':'city'},       #"City"
            {'name':'phone'},      #"Phone"
            {'name':'email'},      #"Email"
            {'name':'date_added'}  #"Last Update"
        ]
    ])

    # Parse the CSV using the mapper
    parser = csvmapper.CSVParser(os.path.basename(theCSV), mapper)
    # Build the parsed object
    obj = parser.buildObject()

    def contactCompanyUpdate():
        # Open the CSV, use commas as delimiters, store it in a list called "data", then find the length of that list.
        with open(os.path.basename(theCSV),"r") as f:
            reader = csv.reader(f, delimiter = ",", quotechar="\"")
            data = list(reader)
            # For every row in the CSV ...
            for row in range(0, len(data)):
                # Set up the JSON payload ...
                payload = {
                    "properties": [
                        {
                            "name": "account",
                            "value": obj[row].account
                        },
                        {
                            "name": "id",
                            "value": obj[row].id
                        },
                        {
                            "name": "company",
                            "value": obj[row].company
                        },
                        {
                            "property": "firstname",
                            "value": obj[row].firstname
                        },
                        {
                            "property": "lastname",
                            "value": obj[row].lastname
                        },
                        {
                            "property": "job_title",
                            "value": obj[row].job_title
                        },
                        {
                            "property": "address",
                            "value": obj[row].address
                        },
                        {
                            "property": "city",
                            "value": obj[row].city
                        },
                        {
                            "property": "phone",
                            "value": obj[row].phone
                        },
                        {
                            "property": "email",
                            "value": obj[row].email
                        },
                        {
                            "property": "date_added",
                            "value": obj[row].date_added
                        }
                    ]
                }
                nameQuery = "{first} {last}".format(first=obj[row].firstname, last=obj[row].lastname)
                # Get a list of all contacts for a certain company.
                contactCheck = "https://api.hubapi.com/contacts/v1/search/query?q={query}&hapikey={hapikey}".format(hapikey=hapikey, query=nameQuery)
                # Convert the payload to JSON and assign it to a variable called "data"
                data = json.dumps(payload)
                # Defined the headers content-type as 'application/json'
                headers = {'content-type': 'application/json'}
                contactExistCheck = requests.get(contactCheck, headers=headers)
                for i in contactExistCheck.json()[u'contacts']:
                    # ... Get the canonical VIDs
                    canonicalVid = i[u'canonical-vid']
                    if canonicalVid:
                        print ("{theContact} exists! Their VID is \"{vid}\"".format(theContact=obj[row].firstname, vid=canonicalVid))
                        print ("Attempting to update their company...")
                        contactCompanyUpdate = "https://api.hubapi.com/companies/v2/companies/{companyID}/contacts/{vid}?hapikey={hapikey}".format(hapikey=hapikey, vid=canonicalVid, companyID=obj[row].id)
                        doTheUpdate = requests.put(contactCompanyUpdate, headers=headers)
                        if doTheUpdate.status_code == 200:
                            print ("Attempt Successful! {theContact}'s has an updated company.\n".format(theContact=obj[row].firstname))
                            break
                        else:
                            print ("Attempt Failed. Status Code: {status}. Company or Contact not found.\n".format(status=doTheUpdate.status_code))

    def createOrUpdateClient():
        # Open the CSV, use commas as delimiters, store it in a list called "data", then find the length of that list.
        with open(os.path.basename(theCSV),"r") as f:
            reader = csv.reader(f, delimiter = ",", quotechar="\"")
            data = list(reader)
            # For every row in the CSV ...
            for row in range(0, len(data)):
                # Set up the JSON payload ...
                payloadTest = {
                    "properties": [
                        {
                            "property": "email",
                            "value": obj[row].email
                        },
                        {
                            "property": "firstname",
                            "value": obj[row].firstname
                        },
                        {
                            "property": "lastname",
                            "value": obj[row].lastname
                        },
                        {
                            "property": "website",
                            "value": None
                        },
                        {
                            "property": "company",
                            "value": obj[row].company
                        },
                        {
                            "property": "phone",
                            "value": obj[row].phone
                        },
                        {
                            "property": "address",
                            "value": obj[row].address
                        },
                        {
                            "property": "city",
                            "value": obj[row].city
                        },
                        {
                            "property": "state",
                            "value": None
                        },
                        {
                            "property": "zip",
                            "value": None
                        }
                    ]
                }
                # Convert the payload to JSON and assign it to a variable called "data"
                dataTest = json.dumps(payloadTest)
                # Defined the headers content-type as 'application/json'
                headers = {'content-type': 'application/json'}
                #print ("{theContact} does not exist!".format(theContact=obj[row].firstname))
                print ("Attempting to add {theContact} as a contact...".format(theContact=obj[row].firstname))
                createOrUpdateURL = 'http://api.hubapi.com/contacts/v1/contact/createOrUpdate/email/{email}/?hapikey={hapikey}'.format(email=obj[row].email,hapikey=hapikey)
                r = requests.post(createOrUpdateURL, data=dataTest, headers=headers)
                if r.status_code == 409:
                    print ("This contact already exists.\n")
                elif (r.status_code == 200) or (r.status_code == 202):
                    print ("Success! {firstName} {lastName} has been added.\n".format(firstName=obj[row].firstname,lastName=obj[row].lastname, response=r.status_code))
                elif r.status_code == 204:
                    print ("Success! {firstName} {lastName} has been updated.\n".format(firstName=obj[row].firstname,lastName=obj[row].lastname, response=r.status_code))
                elif r.status_code == 400:
                    print ("Bad request. You might get this response if you pass an invalid email address, if a property in your request doesn't exist, or if you pass an invalid property value.\n")
                else:
                    print ("Contact Marko for assistance.\n")

    if __name__ == "__main__":
        # Run the Create or Update function
        createOrUpdateClient()
        # Give the previous function 5 seconds to take effect.
        sleep(5.0)
        # Run the Company Update function
        contactCompanyUpdate()
        print("Sync complete.")
        print("Moving \"{something}\" to the archive folder...".format(something=theCSV))
        # Cron version
        #shutil.move( i, "/home/accountName/public_html/clientFolder/archive/" + os.path.basename(i))
        # Local version
        movePath = "archive/{thefile}".format(thefile=theCSV)
        shutil.move( i, movePath )
        print("Move successful! Exiting...\n")
    sys.exit()
I'll just go from top to bottom. The first rule is, do what's in PEP 8. It's not the ultimate style guide, but it's certainly a reference baseline for Python coders, and that's more important, especially when you're getting started. The second rule is, make it maintainable. A couple of years from now, when some other new kid comes through, it should be easy for her to figure out what you were doing. Sometimes that means doing things the long way, to reduce errors. Sometimes it means doing things the short way, to reduce errors. :-)
#!/usr/bin/env python
# -*- coding: utf-8 -*-
Two things: you got the encoding right, per PEP 8. And conventions for writing good documentation strings (a.k.a. "docstrings") are immortalized in PEP 257. You've got a program that does something, but you don't document what.
from __future__ import print_function
import sys, os.path, requests, json, csv, csvmapper, glob, shutil
from time import sleep
major, minor, micro, release_level, serial = sys.version_info
Per PEP 8: put your import module statements one per line.
Per Austin: make your paragraphs have separate subjects. You've got some imports right next to some version info stuff. Insert a blank line. Also, DO SOMETHING with the data! Or you didn't need it to be right here, did you?
# Client Portal ID
portal = "XXXXXX"
# Client API Key
hapikey = "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
You've obscured these in more ways than one. WTF is a hapikey? I think you mean Hubspot_API_key. And what does portal do?
One piece of advice: the more "global" a thing is, the more "formal" it should be. If you have a for loop, it's okay to call one of the variables i. If you have a piece of data that is used throughout a function, call it obj or portal. But if you have a piece of data that is used globally, or is a class variable, make it put on a tie and a jacket so everyone can recognize it: make it Hubspot_api_key instead of client_api_key. Maybe even Hubspot_client_api_key if there are more than one API. Do the same with portal.
# This attempts to find any file in the directory that starts with "note" and ends with ".CSV"
# Server Version
# findCSV = glob.glob('/home/accountName/public_html/clientFolder/contact*.CSV')
It didn't take long for the comments to become lies. Just delete them if they aren't true.
# Local Testing Version
findCSV = glob.glob('contact*.CSV')
This is the kind of thing that you should create a function for. Just create a simple function called "get_csv_files" or whatever, and have it return a list of filenames. That decouples you from glob, and it means you can make your test code data driven (pass a list of filenames into a function, or pass a single file into a function, instead of asking it to search for them). Also, those glob patterns are exactly the kind of thing that go in a config file, or a global variable, or get passed as command line arguments.
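A minimal sketch of what that might look like (the function name and the default pattern are just suggestions):

import glob

def get_csv_files(pattern='contact*.CSV'):
    """Return the list of contact CSV files to process."""
    return glob.glob(pattern)

Test code can then bypass the filesystem search entirely by handing a known list of filenames to whatever consumes this.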
for i in findCSV:
I'll bet typing CSV in upper case all the time is a pain. And what does findCSV mean? Read that line, and figure out what that variable should be called. Maybe csv_files? Or new_contact_files? Something that demonstrates that there is a collection of things.
    theCSV = i
    csvfileexists = os.path.isfile(theCSV)
Now what does i do? You had this nice small variable name, in a BiiiiiiG loop. That was a mistake, since if you can't see a variable's entire scope all on one page, it probably needs a somewhat longer name. But then you created an alias for it. Both i and theCSV refer to the same thing. And ... I don't see you using i again. So maybe your loop variable should be theCSV. Or maybe it should be the_csv to make it easier to type. Or just csvname.
# Prints a confirmation if file exists, prints instructions if it doesn't.
This seems a little needless. If you're using glob to get filenames, they pretty much are going to exist. (If they don't, it's because they were deleted between the time you called glob and the time you tried to open them. That's possible, but rare. Just continue or raise an exception, depending.)
    if csvfileexists:
        print ("\nThe \"{csvPath}\" file was found ({csvSize} bytes); proceeding with sync ...\n".format(csvSize=os.path.getsize(theCSV), csvPath=os.path.basename(theCSV)))
In this code, you use the value of csvfileexists. But that's the only place you use it. In this case, you can probably move the call to os.path.isfile() into the if statement and get rid of the variable.
    else:
        print ("File not found; check the file name to make sure it is in the same directory as this script. Exiting ...")
        sys.exit()
Notice that in this case, when there is an actual problem, you didn't print the file name? How helpful was that?
Also, remember the part where you're on a remote server? You should consider using Python's logging module to record these messages in a useful manner.
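A sketch of what that could look like with the standard logging module (the file name, format, and the csvname variable are illustrative, not from the original script):

import logging
import os.path

logging.basicConfig(filename='contact_sync.log', level=logging.INFO,
                    format='%(asctime)s %(levelname)s %(message)s')
log = logging.getLogger(__name__)

# on success
log.info('Found "%s" (%d bytes); proceeding with sync',
         os.path.basename(csvname), os.path.getsize(csvname))
# on failure
log.error('File "%s" not found; exiting', csvname)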
# Begin the CSVmapper mapping... This creates a virtual "header" row - the CSV therefore does not need a header row.
mapper = csvmapper.DictMapper([
[
{'name':'account'}, #"Org. Code"
{'name':'id'}, #"Hubspot Ref"
{'name':'company'}, #"Company Name"
{'name':'firstname'}, #"Contact First Name"
{'name':'lastname'}, #"Contact Last Name"
{'name':'job_title'}, #"Job Title"
{'name':'address'}, #"Address"
{'name':'city'}, #"City"
{'name':'phone'}, #"Phone"
{'name':'email'}, #"Email"
{'name':'date_added'} #"Last Update"
]
])
You're creating an object with a bunch of data. This would be a good place for a function. Define a make_csvmapper() function to do all this for you, and move it out of line.
Also, note that the standard csv module has most of the functionality you are using. I don't think you actually need csvmapper.
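For instance, csv.DictReader can provide the same virtual header row through its fieldnames argument, which would remove the csvmapper dependency entirely (a sketch, reusing the column names from the script):

import csv

FIELDS = ['account', 'id', 'company', 'firstname', 'lastname', 'job_title',
          'address', 'city', 'phone', 'email', 'date_added']

with open(theCSV, 'r') as f:
    for row in csv.DictReader(f, fieldnames=FIELDS):
        print(row['firstname'], row['email'])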
    # Parse the CSV using the mapper
    parser = csvmapper.CSVParser(os.path.basename(theCSV), mapper)
    # Build the parsed object
    obj = parser.buildObject()
Here's another chance for a function. Maybe instead of making a csv mapper, you could just return the obj?
    def contactCompanyUpdate():
At this point, things get fishy. You have these function definitions indented, but I don't think you need them. Is that a stackoverflow problem, or does your code really look like this?
        # Open the CSV, use commas as delimiters, store it in a list called "data", then find the length of that list.
        with open(os.path.basename(theCSV),"r") as f:
No, apparently it really looks like this. Because you're using theCSV inside this function when you don't really need to. Please consider using formal function parameters instead of just grabbing outer-scope objects. Also, why are you using basename on the csv file? If you obtained it using glob, doesn't it already have the path you want?
            reader = csv.reader(f, delimiter = ",", quotechar="\"")
            data = list(reader)
            # For every row in the CSV ...
            for row in range(0, len(data)):
Here you forced data to be a list of rows obtained from reader, and then started iterating over them. Just iterate over reader directly, like: for row in reader: BUT WAIT! You're actually iterating over a CSV file that you have already opened, in your obj variable. Just pick one, and iterate over it. You don't need to open the file twice for this.
                # Set up the JSON payload ...
                payload = {
                    "properties": [
                        {
                            "name": "account",
                            "value": obj[row].account
                        },
                        {
                            "name": "id",
                            "value": obj[row].id
                        },
                        {
                            "name": "company",
                            "value": obj[row].company
                        },
                        {
                            "property": "firstname",
                            "value": obj[row].firstname
                        },
                        {
                            "property": "lastname",
                            "value": obj[row].lastname
                        },
                        {
                            "property": "job_title",
                            "value": obj[row].job_title
                        },
                        {
                            "property": "address",
                            "value": obj[row].address
                        },
                        {
                            "property": "city",
                            "value": obj[row].city
                        },
                        {
                            "property": "phone",
                            "value": obj[row].phone
                        },
                        {
                            "property": "email",
                            "value": obj[row].email
                        },
                        {
                            "property": "date_added",
                            "value": obj[row].date_added
                        }
                    ]
                }
Okay, that was a LOOOONG span of code that didn't do much. At the least, tighten those inner dicts up to one line each. But better still, write a function to create your dictionary in the format you want. You can use getattr to pull the data by name from obj.
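Something along these lines (a sketch; note that the original payload mixes "name" and "property" keys for the first three entries - check which key the API actually expects):

PROPERTY_NAMES = ['account', 'id', 'company', 'firstname', 'lastname',
                  'job_title', 'address', 'city', 'phone', 'email', 'date_added']

def build_payload(record):
    """Build the HubSpot properties payload from one parsed CSV record."""
    return {
        'properties': [
            {'property': name, 'value': getattr(record, name)}
            for name in PROPERTY_NAMES
        ]
    }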
                nameQuery = "{first} {last}".format(first=obj[row].firstname, last=obj[row].lastname)
                # Get a list of all contacts for a certain company.
                contactCheck = "https://api.hubapi.com/contacts/v1/search/query?q={query}&hapikey={hapikey}".format(hapikey=hapikey, query=nameQuery)
                # Convert the payload to JSON and assign it to a variable called "data"
                data = json.dumps(payload)
                # Defined the headers content-type as 'application/json'
                headers = {'content-type': 'application/json'}
                contactExistCheck = requests.get(contactCheck, headers=headers)
Here you're encoding details of the API into your code. Consider pulling them out into functions. (That way, you can come back later and build a module of them, to re-use in your next program.) Also, beware of comments that don't actually tell you anything. And feel free to pull that together as a single paragraph, since it's all in service of the same key thing - making an API call.
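For instance (a sketch; the URL and parameters are lifted straight from the code above):

import requests

def search_contacts(api_key, query):
    """Call the HubSpot contact search endpoint and return the parsed JSON."""
    url = 'https://api.hubapi.com/contacts/v1/search/query'
    return requests.get(url, params={'q': query, 'hapikey': api_key}).json()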
                for i in contactExistCheck.json()[u'contacts']:
                    # ... Get the canonical VIDs
                    canonicalVid = i[u'canonical-vid']
                    if canonicalVid:
                        print ("{theContact} exists! Their VID is \"{vid}\"".format(theContact=obj[row].firstname, vid=canonicalVid))
                        print ("Attempting to update their company...")
                        contactCompanyUpdate = "https://api.hubapi.com/companies/v2/companies/{companyID}/contacts/{vid}?hapikey={hapikey}".format(hapikey=hapikey, vid=canonicalVid, companyID=obj[row].id)
                        doTheUpdate = requests.put(contactCompanyUpdate, headers=headers)
                        if doTheUpdate.status_code == 200:
                            print ("Attempt Successful! {theContact}'s has an updated company.\n".format(theContact=obj[row].firstname))
                            break
                        else:
                            print ("Attempt Failed. Status Code: {status}. Company or Contact not found.\n".format(status=doTheUpdate.status_code))
I'm not sure if this last bit should be an exception or not. Is an "Attempt Failed" normal behavior, or does it mean that something is broken?
At any rate, please look into the API you are using. I'd bet there is some more information available for minor failures. (Major failures would be the internet is broken or their server is offline.) They might provide an "errors" or "error" field in their return JSON, for example. Those should be logged or printed with your failure message.
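A sketch of surfacing that extra detail in the failure branch (whether the body is JSON, and which field carries the message, depends on the API, so treat the field names as assumptions):

if doTheUpdate.status_code != 200:
    try:
        details = doTheUpdate.json()  # may carry a 'message' or 'errors' field
    except ValueError:
        details = doTheUpdate.text
    print("Attempt Failed. Status Code: {status}. Details: {details}\n".format(
        status=doTheUpdate.status_code, details=details))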
    def createOrUpdateClient():
Mostly this function has the same issues as the previous one.
                else:
                    print ("Contact Marko for assistance.\n")
Except here. Never put your name in someplace like this. Or you'll still be getting calls on this code 10 years from now. Put your department name ("IT Operations") or a support number. The people who need to know will already know. And the people who don't need to know can just notify the people that already know.
    if __name__ == "__main__":
        # Run the Create or Update function
        createOrUpdateClient()
        # Give the previous function 5 seconds to take effect.
        sleep(5.0)
        # Run the Company Update function
        contactCompanyUpdate()
        print("Sync complete.")
        print("Moving \"{something}\" to the archive folder...".format(something=theCSV))
        # Cron version
        #shutil.move( i, "/home/accountName/public_html/clientFolder/archive/" + os.path.basename(i))
        # Local version
        movePath = "archive/{thefile}".format(thefile=theCSV)
        shutil.move( i, movePath )
        print("Move successful! Exiting...\n")
This was awkward. You might consider taking some command line arguments and using them to determine your behavior.
sys.exit()
And don't do this. Never put an exit() at module scope, because it means you can't possibly import this code. Maybe someone wants to import it to parse the docstrings. Or maybe they want to borrow some of those API functions you wrote. Too bad! sys.exit() means always having to say "Oh, sorry, I'll have to do that for you." Put it at the bottom of your actual __name__ == "__main__" code. Or, since you aren't actually passing a value, just remove it entirely.
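Pulling those last points together, the tail of the module might look something like this (a sketch; main and the helper names are hypothetical stand-ins for the refactored functions suggested above):

def main():
    for csvname in get_csv_files():
        create_or_update_clients(csvname)  # hypothetical refactor of createOrUpdateClient
        sleep(5.0)                         # give the previous call time to take effect
        update_contact_companies(csvname)  # hypothetical refactor of contactCompanyUpdate
        archive(csvname)                   # hypothetical: move the file to the archive folder

if __name__ == "__main__":
    main()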
How can one access the global parameters ("GlobalParameters") sent from a web service in a Python script on Azure ML?
I tried:
if 'GlobalParameters' in globals():
    myparam = GlobalParameters['myparam']
but with no success.
EDIT: Example
In my case, I'm sending a sound file over the web service (as a list of samples). I would also like to send a sample rate and the number of bits per sample. I've successfully configured the web service (I think) to take these parameters, so the GlobalParameters now look like:
"GlobalParameters": {
"sampleRate": "44100",
"bitsPerSample": "16",
}
However, I cannot access these variables from the Python script, neither as GlobalParameters["sampleRate"] nor as sampleRate. Is it possible? Where are they stored?
Based on our understanding of your question, there may be a misconception here: Azure ML parameters are not "Global Parameters"; as a matter of fact they are just parameter substitutions tied to a particular module. So in effect there are no global parameters that are accessible throughout the experiment you have mentioned. That being the case, we think the experiment below accomplishes what you are asking for:
Add an "Enter Data" module to the experiment and add data in CSV format. Then, for the Data field, click the parameter option to create a web service parameter. Add in the CSV data, which will be substituted by the data passed from the client application.
Add an "Execute Python Script" module and hook up the "Enter Data" output to the "Execute Python Script" input1. Add Python code to take dataframe1 and put its contents into a Python list. Once you have it in a list you can use it anywhere in your Python code.
Python code snippet
def azureml_main(dataframe1 = None, dataframe2 = None):
    import pandas as pd
    global_list = []
    for g in dataframe1["Col3"]:
        global_list.append(g)
    df_global = pd.DataFrame(global_list)
    print('Input pandas.DataFrame:\r\n\r\n{0}'.format(df_global))
    return [df_global]
Once you publish your experiment, you can pass new values in the "Data" section below; these will be substituted for the "Enter Data" values in the experiment.
data = {
    "Inputs": {
        "input1": {
            "ColumnNames": ["Col1", "Col2", "Col3"],
            "Values": [ [ "0", "value", "0" ], [ "0", "value", "0" ] ]
        }
    },
    "GlobalParameters": {
        "Data": "1,sampleRate,44500\\n2,bitsPerSample,20"
    }
}
Please feel free to let us know if this makes sense.
The GlobalParameters parameter cannot be used in a Python script. It is used to override certain parameters in other modules.
If you, for example, take the 'Split Data' module, you'll find an option to turn a parameter into a web service parameter:
Once you click that, a new section appears titled "Web Service Parameters". There you can change the default parameter name to one of your choosing.
If you deploy your project as a web service, you can override that parameter by putting it in the GlobalParameters parameter:
"GlobalParameters": {
"myFraction": 0.7
}
I hope that clears things up a bit.
Although it is not possible to use GlobalParameters in the Python script (see my previous answer), you can however hack/abuse the second input of the Python script to pass in other parameters. In my example I call them metadata parameters.
To start, I added:
a Web service input module with the name "realdata" (for your real data, of course)
a Web service input module with the name "metadata" (we will abuse this one to pass parameters to our Python script)
a Web service output module with the name "computedMetadata"
Connect the modules as follows:
As you can see, I also added a real data set (Restaurant ratings) as well as a dummy metadata CSV (the Enter Data Manually module).
In this manual data you have to predefine your metadata parameters as if they were a CSV with a header and only a single row to hold the data:
In the example both sampleRate and bitsPerSample are set to 0.
My Python script then takes in that fake CSV as metadata, does a dummy calculation with it, and returns the result as a single-column DataFrame:
import pandas as pd

def azureml_main(realdata = None, metadata = None):
    theSum = metadata["sampleRate"][0] + metadata["bitsPerSample"][0]
    outputString = "The sum of the sampleRate and the bitsPerSecond is " + str(theSum)
    print(outputString)
    return pd.DataFrame([outputString])
I then published this as a web service and called it using Node.js like this:
httpreq.post('https://ussouthcentral.services.azureml.net/workspaces/xxx/services/xxx', {
    headers: {
        Authorization: 'Bearer xxx'
    },
    json: {
        "Inputs": {
            "realdata": {
                "ColumnNames": ["userID", "placeID", "rating"],
                "Values": [
                    ["100", "101", "102"],
                    ["200", "201", "202"]
                ]
            },
            "metadata": {
                "ColumnNames": ["sampleRate", "bitsPerSample"],
                "Values": [
                    [44100, 16]
                ]
            }
        },
        "GlobalParameters": {}
    }
}, (err, res) => {
    if (err) return console.log(err);
    console.log(JSON.parse(res.body));
});
The output was as expected:
{ Results:
   { computedMetadata:
      { type: 'table',
        value:
         { ColumnNames: [ '0' ],
           ColumnTypes: [ 'String' ],
           Values: [ [ 'The sum of the sampleRate and the bitsPerSecond is 44116' ] ] } } } }
Good luck!