Traceback (most recent call last):
File "/Users/jondevereux/Desktop/Data reporting/kpex_code/1PD/api_python_publisher_1PD.py", line 40, in <module>
username = parser.get('api_samples', 'username')
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ConfigParser.py", line 607, in get
raise NoSectionError(section)
ConfigParser.NoSectionError: No section: 'api_samples'
The config file is in the correct directory (same as the .py file) and has the appropriate section api_samples:
[api_samples]
authentication_url = https://crowdcontrol.lotame.com/auth/v1/tickets
api_url = https://api.lotame.com/2/
username = xxx
password = xxx
The script works on a co-worker's PC but not on mine. I had to use pip to install requests - I'm wondering if I'm missing something else?
Code is as follows:
# Set up the libs we need
import requests
import sys
import csv
import json
from ConfigParser import SafeConfigParser # used to get information from a config file
reload(sys)
sys.setdefaultencoding('utf-8')
'''
Now let's get what we need from our config file, including the username and password
We are assuming we have a config file called config.config in the same directory
where this python script is run, where the config file looks like:
[api_samples]
authentication_url = https://crowdcontrol.lotame.com/auth/v1/tickets
api_url = https://api.lotame.com/2/
username = USERNAME_FOR_API_CALLS
password = PASSWORD
'''
# Set up our Parser and get the values - usernames and password should never be in code!
parser = SafeConfigParser()
parser.read('config.cfg')
username = parser.get('api_samples', 'username')
password = parser.get('api_samples', 'password')
authentication_url = parser.get('api_samples', 'authentication_url')
base_api_url = parser.get('api_samples', 'api_url')
# OK, all set with our parameters, let's get ready to make our call to get a Ticket Granting Ticket
# Add the username and password to the payload (requests encodes for us, no need to urlencode)
payload = {'username': username,
'password': password}
# We want to set some headers since we are going to post some url encoded params.
headers = {"Content-type": "application/x-www-form-urlencoded", "Accept": "text/plain", "User-Agent":"python" }
# Now, let's make our Ticket Granting Ticket request. We get the location from the response header
tg_ticket_location = requests.post(authentication_url, data=payload).headers['location']
# Let's take a look at what a Ticket Granting Ticket looks like:
# print ('Ticket Granting Ticket - %s \n') % (tg_ticket_location[tg_ticket_location.rfind('/') + 1:])
# Now we have our Ticket Granting Ticket, we can get a service ticket for the service we want to call
# The first service call will be to get information on behavior id 5990.
# service_call = base_api_url + 'behaviors/5990'
# Add the service call to the payload and get the ticket
#payload = {'service': service_call}
#service_ticket = requests.post( tg_ticket_location, data=payload ).text
# Let's take a look at the service ticket
#print ('Here is our Service Ticket - %s \n') % ( service_ticket )
'''
Now let's make our call to the service ... remember we need to be quick about it because
we only have 10 seconds to do it before the Service Ticket expires.
A couple of things to note:
JSON is the default response, and it is what we want, so we don't need to specify
like {'Accept':'application/json'}, but we will anyway because it is a good practice.
We don't need to pass any parameters to this call, so we just add the parameter
notation and then 'ticket=[The Service Ticket]'
'''
headers = {'Accept':'application/json'}
#behavior_info = requests.get( ('%s?ticket=%s') % (service_call, service_ticket), headers=headers)
# Let's print out our JSON to see what it looks like
# requests supports JSON on its own, so no other package is needed for this
# print ('Behavior Information: \n %s \n') % (behavior_info.json() )
'''
Now let's get the names and IDs of some audiences
We can reuse our Ticket Granting Ticket for a 3 hour period ( we haven't passed that yet),
so let's use it to get a service ticket for the audiences service call.
Note that here we do have a parameter that is part of the call. That needs to be included
in the Service Ticket request.
We plan to make a call to the audience service to get the first 10 audiences in the system
ascending by audience id. We don't need to pass the sort order, because it defaults to ascending
'''
# Set up our call and get our new Service Ticket, we plan to sort by id
# Please insert audiences ID below:
audienceids = ['243733','243736','241134','242480','240678','242473','242483','241119','243732','242492','243784','242497','242485','243785','242486','242487','245166','245167','245168','245169','245170','245171','240860']
f = open("publisher_report_1PD.csv", 'w+')
title_str = ['1PD % Contribution','audienceId','publisherName','audienceName']
print >> f,(title_str)
for audience_id in audienceids:
    service_call = base_api_url + 'reports/audiences/' + audience_id + '/publisher?stat_interval=LAST_MONTH&page_count=100&page_num=1&sort_attr=audienceName&inc_network=false&sort_order=ASC'
    payload = {'service': service_call}
    # Let's get the new Service Ticket, we can print it again to see it is a new ticket
    service_ticket = requests.post( tg_ticket_location, data=payload ).text
    #print ('Here is our new Service Ticket - %s \n') % ( service_ticket )
    # Use the new ticket to query the service, remember we did have a parameter this time,
    # so we need to append '&ticket=[The Service Ticket]' to the parameter list
    audience_list = requests.get( ('%s&ticket=%s') % (service_call, service_ticket)).json()
    #print audience_list
    # create an array to hold the audiences, pull out the details we want, and print it out
    audiences = []
    for ln in audience_list['stats']:
        audiences.append({ 'audienceId': ln['audienceId'], 'audienceName': ln['audienceName'], 'publisherName': ln['publisherName'], '1PD % Contribution': ln['percentOfAudience']})
    for ii in range( 0, len(audiences) ):
        data = audiences[ii]
        data_str = json.dumps(data)
        result = data_str.replace("\"","")
        result1 = result.replace("{1PD % Contribution:","")
        result2 = result1.replace("publisherName: ","")
        result3 = result2.replace("audienceName: ","")
        result4 = result3.replace("audienceId: ","")
        result5 = result4.replace("}","")
        print >> f,(result5)
# Once we are done with the Ticket Granting Ticket we should clean it up
remove_tgt = requests.delete( tg_ticket_location )
print ( 'Status for closing TGT - %s') % (remove_tgt.status_code)
i = input('YAY! Gotcha!!')
I see only one reason for your problem: you run the script from a different folder, so the script looks for config.cfg in that other folder.
You can get the full path to the folder containing the script
import os
script_folder = os.path.dirname(os.path.realpath(__file__))
and build the full path to config.cfg
parser.read( os.path.join(script_folder, 'config.cfg') )
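Putting that together, a minimal sketch (the file name config.cfg and the section name are taken from your script) that also fails loudly when the file isn't found - which is what silently produces the NoSectionError:
import os
from ConfigParser import SafeConfigParser

script_folder = os.path.dirname(os.path.realpath(__file__))

parser = SafeConfigParser()
read_files = parser.read(os.path.join(script_folder, 'config.cfg'))
if not read_files:
    # read() returns the list of files it could parse; an empty list means the
    # config was never loaded, which is why get() then raises NoSectionError
    raise IOError('config.cfg not found next to the script')

username = parser.get('api_samples', 'username')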
Related
I have been working with the FHIR REST API for a while but haven't had any experience with Python. As my first Python project I am attempting to create a simple Python script that can read from and write to an open API. I am able to read, but I am stuck on creating a successful POST due to the following error: [TypeError("unhashable type: 'dict'")]. I don't fully understand how the Python dictionary works; I attempted to use a tuple instead but get the same error.
import requests #REST Access to FHIR Server
print('Search patient by MRN to find existing appointment')
MRN = input("Enter patient's MRN -try CT12181 :")
url = 'http://hapi.fhir.org/baseR4/Patient?identifier='+MRN
print('Searching for Patient by MRN...#'+url)
response = requests.get(url)
json_response = response.json()
try:
    key='entry'
    EntryArray=json_response[key]
    FirstEntry=EntryArray[0]
    key='resource'
    resource=FirstEntry['resource']
    id=resource['id']
    PatientServerId= id
    patientName = resource['name'][0]['given'][0] + ' ' +resource['name'][0]['family']
    print('Patient Found')
    print('Patient Id:'+id)
    #Searching for appointments
    url='http://hapi.fhir.org/baseR4/Appointment?patient='+id #fhir server endpoint
    #Print appointment data
    print('Now Searching for Appointments...#'+url)
    appt_response = requests.get(url).json()
    key='entry'
    EntryArray=appt_response[key]
    print (f'Appointment(s) found for the patient {patientName}')
    for entry in EntryArray:
        appt=entry['resource']
        # print('-------------------------')
        # Date=appt['start']
        # Status=appt['status']
        # print(appt_response)
        #print ('AppointmentStartDate/Time: ' ,appt['start'])
        print ('Status: ' ,appt['status'])
        print ('ID: ' ,appt['id'])
    print('Search for open general practice slot?')
    option = input('Enter yes or no: ')
    while not(option == 'yes'):
        print('Please search a different patient')
        option = input('Enter yes or no: ')
    url = 'http://hapi.fhir.org/baseR4/Slot?service-type=57' #fhir server endpoint
    print('Searching for General Practice Slot...#'+url)
    slot_response = requests.get(url).json()
    key='entry'
    EntryArray=slot_response[key]
    print ('Slot(s) found for the service type General Practice')
    for entry in EntryArray:
        slot=entry['resource']
        #print('-------------------------')
        #slotDate=slot['start']
        #slotStatus=slot['status']
        print (f'SlotID: ' +slot['id'])
        #print (f'Status: ' +slot['status'])
    print('Book a slot?')
    option = input('Enter yes or no: ')
    while not(option == 'yes'):
        print('Please search a different patient')
        option = input('Enter yes or no: ')
    #Book slot
    slotID = input("Enter slot ID :")
    url = 'http://hapi.fhir.org/baseR4/Appointment' #fhir server endpoint
    print('Booking slot...#'+url)
    headers = {"Content-Type": "application/fhir+json;charset=utf-8"}
    data = {{"resourceType": "Appointment","status": "booked","slot": tuple({"reference":"Slot/104602"}),"participant": tuple({"actor": {"reference":"Patient/1229151","status": "accepted"}}),"reasonCode": tuple({"text": "I have a cramp"})}}
    #fhir server json header content
    # headers = {"Content-Type": "application/fhir+json;charset=utf-8"}
    response = requests.post(url=url,headers=headers,data=data)
    print(response)
    print(response.json())
except Exception as e:
    print ('error' ,[e])
I was expecting the JSON data to successfully write to the API. I am able to use the same JSON data in Postman to make the call, but I am not as familiar with how this should work within Python.
It looks like the Appointment POST endpoint accepts a simple payload like:
{
"resourceType": "Appointment"
}
Which then returns a corresponding ID, according to the API docs.
This differs from what you seem to be attempting in your code, where you try to pass other details to this endpoint:
data = {{"resourceType": "Appointment","status": "booked","slot": tuple({"reference":"Slot/104602"}),"participant": tuple({"actor": {"reference":"Patient/1229151","status": "accepted"}}),"reasonCode": tuple({"text": "I have a cramp"})}}
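Incidentally, that is where the TypeError comes from: the doubled braces make the outer {} a set literal containing a dict, and dicts aren't hashable, so Python fails before any request is even sent:
>>> {{"resourceType": "Appointment"}}   # outer braces build a set, not a dict
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'dict'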
However, to make a POST request to the endpoint as documented in the docs, perhaps try the json argument to requests.post. Something along the lines of:
>>> import requests
>>> headers = {"Content-Type": "application/fhir+json;charset=utf-8"}
>>> json_payload = {
... "resourceType": "Appointment"
... }
>>> url = 'http://hapi.fhir.org/baseR4/Appointment'
>>> r = requests.post(url, headers=headers, json=json_payload)
>>> r
<Response [201]>
>>> r.json()
{'resourceType': 'Appointment', 'id': '2261980', 'meta': {'versionId': '1', 'lastUpdated': '2022-03-25T23:40:42.621+00:00'}}
>>>
If you're already familiar with this API, then perhaps this might help. I suspect you then need to send another POST or PATCH request to another endpoint, using the ID returned in your first request to enter the relevant data.
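For instance, a standard FHIR update is a PUT of the full resource to Appointment/[id]. A rough, untested sketch reusing the names from the session above and the references from your original payload (the field values are assumptions, not checked against the HAPI docs):
appt_id = r.json()['id']                      # id returned by the first POST
update_url = 'http://hapi.fhir.org/baseR4/Appointment/' + appt_id
updated = {
    'resourceType': 'Appointment',
    'id': appt_id,
    'status': 'booked',
    'participant': [{'actor': {'reference': 'Patient/1229151'}, 'status': 'accepted'}],
    'reasonCode': [{'text': 'I have a cramp'}],
}
r2 = requests.put(update_url, headers=headers, json=updated)
print(r2.status_code, r2.json())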
I'm trying to run a simple search via the Python SDK (Python 3.8.5, splunk-sdk 1.6.14). The examples presented on dev.splunk.com are clear, but something goes wrong when I run a search with my own parameters.
The code is as simple as this:
search_kwargs_params = {
    "exec_mode": "blocking",
    "earliest_time": "2020-09-04T06:57:00.000-00:00",
    "latest_time": "2020-11-08T07:00:00.000-00:00",
}
search_query = 'search index=qwe1 trace=111-aaa-222 action=Event.OpenCase'
job = self.service.jobs.create(search_query, **search_kwargs_params)
for result in results.ResultsReader(job.results()):
    print(result)
But the search returns no results. When I run the same query manually in the Splunk web GUI it works fine.
I've also tried putting all the parameters in the 'search_kwargs_params' dictionary and widening the search time period; that returned some results, but they don't correspond to what I get in the GUI.
Can someone advise?
This worked for me. You may also try this:
import requests
import time
import json
scheme = 'https'
host = '<your host>'
username = '<your username>'
password = '<your password>'
unique_id = '2021-03-22T18-43-00' #You may give any unique identifier here
search_query = 'search <your splunk query>'
post_data = { 'id' : unique_id,
              'search' : search_query,
              'earliest_time' : '1',
              'latest_time' : 'now',
            }
#'earliest_time' : '1', 'latest_time' : 'now'
#This will run the search query for all time
splunk_search_base_url = scheme + '://' + host + '/servicesNS/{}/search/search/jobs'.format(username)
resp = requests.post(splunk_search_base_url, data = post_data, verify = False, auth = (username, password))
print(resp.text)
is_job_completed = ''
while(is_job_completed != 'DONE'):
    time.sleep(5)
    get_data = {'output_mode' : 'json'}
    job_status_base_url = scheme + '://' + host + '/servicesNS/{}/search/search/jobs/{}'.format(username, unique_id)
    resp_job_status = requests.post(job_status_base_url, data = get_data, verify = False, auth = (username, password))
    resp_job_status_data = resp_job_status.json()
    is_job_completed = resp_job_status_data['entry'][0]['content']['dispatchState']
    print("Current job status is {}".format(is_job_completed))
splunk_summary_base_url = scheme + '://' + host + '/servicesNS/{}/search/search/jobs/{}/results?count=0'.format(username, unique_id)
splunk_summary_results = requests.get(splunk_summary_base_url, data = get_data, verify = False, auth = (username, password))
splunk_summary_data = splunk_summary_results.json()
#Print the results in python format (strings will be in single quotes)
for data in splunk_summary_data['results']:
    print(data)
print('status code...')
print(splunk_summary_results.status_code)
print('raise for status...')
print(splunk_summary_results.raise_for_status())
print('Results as JSON : ')
#Print the results in valid JSON format (Strings will be in double quotes)
#To get complete json data:
print(json.dumps(splunk_summary_data))
#To get only the relevant json data:
print(json.dumps(splunk_summary_data['results']))
Cheers!
You may also like to have a look at this very handy tutorial. https://www.youtube.com/watch?v=mmTzzp2ldgU
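If you would rather stay with the SDK, here is a rough sketch of the same idea (host, port and credentials are placeholders). Printing the job's eventCount/resultCount is a quick way to see whether the search matched anything at all, and count=0 asks for every result row rather than the default page size:
import splunklib.client as client
import splunklib.results as results

service = client.connect(host="your-splunk-host", port=8089,
                         username="admin", password="changeme")

search_kwargs = {
    "exec_mode": "blocking",
    "earliest_time": "2020-09-04T06:57:00.000-00:00",
    "latest_time": "2020-11-08T07:00:00.000-00:00",
}
job = service.jobs.create(
    "search index=qwe1 trace=111-aaa-222 action=Event.OpenCase", **search_kwargs)

# blocking mode returns only when the job is done; these counters show what it saw
print("eventCount:", job["eventCount"], "resultCount:", job["resultCount"])

for result in results.ResultsReader(job.results(count=0)):
    print(result)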
I'm struggling to get a Lambda function working. I have a Python script that accesses the Twitter API, pulls information, and exports that information into an Excel sheet. I'm trying to transfer the Python script over to AWS Lambda, and I'm having a lot of trouble.
What I've done so far: created an AWS account, set up S3 with a bucket, and poked around trying to get things to work.
I think the main area where I'm struggling is how to go from a Python script that I execute via the local CLI to Lambda-capable code. I'm not sure I understand how the lambda_handler function works, what the event or context arguments actually mean (despite watching a half dozen tutorial videos), or how to integrate my existing functions into Lambda in the context of the lambda_handler. I'm just very confused and hoping someone might be able to help me get some clarity!
Code that I'm using to pull twitter data (just a sample):
import time
import datetime
import keys
import pandas as pd
from twython import Twython, TwythonError
import pymysql
def lambda_handler(event, context):
    def oauth_authenticate():
        twitter_oauth = Twython(keys.APP_KEY, keys.APP_SECRET, oauth_version=2)
        ACCESS_TOKEN = twitter_oauth.obtain_access_token()
        twitter = Twython(keys.APP_KEY, access_token = ACCESS_TOKEN)
        return twitter

    def get_username():
        """
        Prompts for the screen name of the targeted account
        """
        username = input("Enter the Twitter screenname you'd like information on. Do not include '#':")
        return username

    def get_user_followers(username):
        """
        Returns data on all accounts following the targeted user.
        WARNING: The number of followers can be huge, and the data isn't very valuable
        """
        #username = get_username()
        #import pdb; pdb.set_trace()
        twitter = oauth_authenticate()
        datestamp = str(datetime.datetime.now().strftime("%Y-%m-%d"))
        target = twitter.lookup_user(screen_name = username)
        for y in target:
            target_id = y['id_str']
        next_cursor = -1
        index = 0
        followersdata = {}
        while next_cursor:
            try:
                get_followers = twitter.get_followers_list(screen_name = username,
                                                           count = 200,
                                                           cursor = next_cursor)
                for x in get_followers['users']:
                    followersdata[index] = {}
                    followersdata[index]['screen_name'] = x['screen_name']
                    followersdata[index]['id_str'] = x['id_str']
                    followersdata[index]['name'] = x['name']
                    followersdata[index]['description'] = x['description']
                    followersdata[index]['date_checked'] = datestamp
                    followersdata[index]['targeted_account_id'] = target_id
                    index = index + 1
                next_cursor = get_followers["next_cursor"]
            except TwythonError as e:
                print(e)
                remainder = (float(twitter.get_lastfunction_header(header = 'x-rate-limit-reset')) \
                             - time.time())+1
                print("Rate limit exceeded. Waiting for:", remainder/60, "minutes")
                print("Current Time is:", time.strftime("%I:%M:%S"))
                del twitter
                time.sleep(remainder)
                twitter = oauth_authenticate()
                continue
        followersDF = pd.DataFrame.from_dict(followersdata, orient = "index")
        followersDF.to_excel("%s-%s-follower list.xlsx" % (username, datestamp),
                             index = False, encoding = 'utf-8')
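For what it's worth, a minimal sketch of the handler side (not your actual deployment): keep get_user_followers as a module-level function, take the screen name from the event payload instead of input() (Lambda is non-interactive), and write the spreadsheet under /tmp, the only writable path in Lambda. The "username" key in the event is an assumption:
import json

def lambda_handler(event, context):
    # event is the JSON payload the caller invokes the function with;
    # context carries runtime metadata (request id, remaining time, ...)
    username = event.get("username", "some_account")   # assumed event key
    get_user_followers(username)   # assumes the function lives at module level
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": username}),
    }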
I am using the script below to collect inventory information from servers and send it to a product called Device42. The script currently works; however, one of the APIs that I'm trying to add uses PUT instead of POST. I'm not a programmer and just started using Python with this script. The script runs under IronPython. Can the PUT method be used in this script?
"""
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""
##################################################
# a sample script to show how to use
# /api/ip/add-or-update
# /api/device/add-or-update
#
# requires ironPython (http://ironpython.codeplex.com/) and
# powershell (http://support.microsoft.com/kb/968929)
##################################################
import clr
clr.AddReference('System.Management.Automation')
from System.Management.Automation import (
PSMethod, RunspaceInvoke
)
RUNSPACE = RunspaceInvoke()
import urllib
import urllib2
import traceback
import base64
import math
import ssl
import functools
BASE_URL='https://device42_URL'
API_DEVICE_URL=BASE_URL+'/api/1.0/devices/'
API_IP_URL =BASE_URL+'/api/1.0/ips/'
API_PART_URL=BASE_URL+'/api/1.0/parts/'
API_MOUNTPOINT_URL=BASE_URL+'/api/1.0/device/mountpoints/'
API_CUSTOMFIELD_URL=BASE_URL+'/api/1.0/device/custom_field/'
USER ='usernme'
PASSWORD ='password'
old_init = ssl.SSLSocket.__init__
#functools.wraps(old_init)
def init_with_tls1(self, *args, **kwargs):
    kwargs['ssl_version'] = ssl.PROTOCOL_TLSv1
    old_init(self, *args, **kwargs)
ssl.SSLSocket.__init__ = init_with_tls1
def post(url, params):
    """
    http post with basic-auth
    params is dict like object
    """
    try:
        data= urllib.urlencode(params) # convert to ascii chars
        headers = {
            'Authorization' : 'Basic '+ base64.b64encode(USER + ':' + PASSWORD),
            'Content-Type' : 'application/x-www-form-urlencoded'
        }
        req = urllib2.Request(url, data, headers)
        print '---REQUEST---',req.get_full_url()
        print req.headers
        print req.data
        response = urllib2.urlopen(req)
        print '---RESPONSE---'
        print response.getcode()
        print response.info()
        print response.read()
    except urllib2.HTTPError as err:
        print '---RESPONSE---'
        print err.getcode()
        print err.info()
        print err.read()
    except urllib2.URLError as err:
        print '---RESPONSE---'
        print err
def to_ascii(s):
    # ignore non-ascii chars
    return s.encode('ascii','ignore')
def wmi(query):
    return [dict([(prop.Name, prop.Value) for prop in psobj.Properties]) for psobj in RUNSPACE.Invoke(query)]
def closest_memory_assumption(v):
    return int(256 * math.ceil(v / 256.0))
def add_or_update_device():
    computer_system = wmi('Get-WmiObject Win32_ComputerSystem -Namespace "root\CIMV2"')[0] # take first
    bios = wmi('Get-WmiObject Win32_BIOS -Namespace "root\CIMV2"')[0]
    operating_system = wmi('Get-WmiObject Win32_OperatingSystem -Namespace "root\CIMV2"')[0]
    environment = wmi('Get-WmiObject Win32Reg_ESFFarmNode -Namespace "root\CIMV2"')[0]
    mem = closest_memory_assumption(int(computer_system.get('TotalPhysicalMemory')) / 1047552)
    dev_name = to_ascii(computer_system.get('Name')).upper()
    fqdn_name = to_ascii(computer_system.get('Name')).upper() + '.' + to_ascii(computer_system.get('Domain')).lower()
    device = {
        'memory' : mem,
        'os' : to_ascii(operating_system.get('Caption')),
        'osver' : operating_system.get('OSArchitecture'),
        'osmanufacturer': to_ascii(operating_system.get('Manufacturer')),
        'osserial' : operating_system.get('SerialNumber'),
        'osverno' : operating_system.get('Version'),
        'service_level' : environment.get('Environment'),
        'notes' : 'Test w/ Change to Device name collection'
    }
    devicedmn = ''
    for dmn in ['Domain1', 'Domain2', 'Domain3', 'Domain4', 'Domain5']:
        if dmn == to_ascii(computer_system.get('Domain')).strip():
            devicedmn = 'Domain'
            device.update({ 'name' : fqdn_name, })
            break
    if devicedmn != 'Domain':
        device.update({
            'name': dev_name,
        })
    manufacturer = ''
    for mftr in ['VMware, Inc.', 'Bochs', 'KVM', 'QEMU', 'Microsoft Corporation', 'Xen']:
        if mftr == to_ascii(computer_system.get('Manufacturer')).strip():
            manufacturer = 'virtual'
            device.update({ 'manufacturer' : 'vmware', })
            break
    if manufacturer != 'virtual':
        device.update({
            'manufacturer': to_ascii(computer_system.get('Manufacturer')).strip(),
            'hardware': to_ascii(computer_system.get('Model')).strip(),
            'serial_no': to_ascii(bios.get('SerialNumber')).strip(),
            'type': 'Physical',
        })
    cpucount = 0
    for cpu in wmi('Get-WmiObject Win32_Processor -Namespace "root\CIMV2"'):
        cpucount += 1
        cpuspeed = cpu.get('MaxClockSpeed')
        cpucores = cpu.get('NumberOfCores')
    if cpucount > 0:
        device.update({
            'cpucount': cpucount,
            'cpupower': cpuspeed,
            'cpucore': cpucores,
        })
    hddcount = 0
    hddsize = 0
    for hdd in wmi('Get-WmiObject Win32_LogicalDisk -Namespace "root\CIMV2" | where{$_.Size -gt 1}'):
        hddcount += 1
        hddsize += hdd.get('Size') / 1073741742
    if hddcount > 0:
        device.update({
            'hddcount': hddcount,
            'hddsize': hddsize,
        })
    post(API_DEVICE_URL, device)
    for hdd in wmi('Get-WmiObject Win32_LogicalDisk -Namespace "root\CIMV2" | where{$_.Size -gt 1}'):
        mountpoint = {
            'mountpoint' : hdd.get('Name'),
            'label' : hdd.get('Caption'),
            'fstype' : hdd.get('FileSystem'),
            'capacity' : hdd.get('Size') / 1024 / 1024,
            'free_capacity' : hdd.get('FreeSpace') / 1024 / 1024,
            'device' : dev_name,
            'assignment' : 'Device',
        }
        post(API_MOUNTPOINT_URL, mountpoint)
    network_adapter_configuration = wmi('Get-WmiObject Win32_NetworkAdapterConfiguration -Namespace "root\CIMV2" | where{$_.IPEnabled -eq "True"}')
    for ntwk in network_adapter_configuration:
        for ipaddr in ntwk.get('IPAddress'):
            ip = {
                'ipaddress' : ipaddr,
                'macaddress' : ntwk.get('MACAddress'),
                'label' : ntwk.get('Description'),
                'device' : dev_name,
            }
            post(API_IP_URL, ip)
def main():
    try:
        add_or_update_device()
    except:
        traceback.print_exc()
if __name__ == "__main__":
    main()
OK, first things first: you need to understand the difference between PUT and POST. I would write it out, but another member of the community gave a very good description of the two here.
Now, yes, you can use requests with that script. Here is an example of using the Requests library in Python. To install requests (assuming you have pip installed), run:
pip install requests
Now, let's go through some examples of using the Requests library; the documentation can be found here.
HTTP GET request: you call the get function from the requests library and pass the URL as a parameter. The call returns a response object, and since GET generally returns something, the body is in the object's text attribute, which you can print.
r = requests.get('http://urlhere.com/apistuffhere')
print(r.text)
HTTP POST: posting to a URL will, depending on how the API was set up, return something (it generally does, for error handling), and you also have to pass in parameters. Here is an example of a POST request for a new user entry. Again, you can print the text from the response object to check the reply from the API.
payload = {'username': 'myloginname', 'password': 'passwordhere'}
r = requests.post('https://testlogin.com/newuserEntry', params=payload)
print(r.text)
Alternatively, you can print just r; a 200 response means the request was successful.
For PUT: keep in mind that PUT responses are not cacheable. You send data to the PUT URL with the same syntax as POST, and you can check the returned status code to see whether there was an error. I have not tried printing the text of a PUT response with the Requests library, as I don't use PUT in any API I write.
requests.put('http://urlhere.com/putextension')
Now, to implement this in your code: you already have the base of the URL, so in your POST for the login just do:
payload = {'username': USERNAME, 'passwd':PASSWORD}
r = requests.post('https://loginurlhere.com/', params=payload)
#check response by printing text
print (r.text)
As for PUTting data to an extension of your API, let's assume you already have a payload variable ready with the info you need, for example for the API device extension:
requests.put(API_DEVICE, params=payload)
And that should PUT to the url. If you have any questions comment below and I can answer them if you would like.
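If requests is available in your IronPython environment, here is a hedged sketch of a put() helper shaped like the script's existing post() helper (the payload keys for the custom-field call are placeholders, not taken from the Device42 docs):
import requests

def put(url, params):
    # HTTP PUT with basic auth, mirroring the existing post() helper
    r = requests.put(url, data=params, auth=(USER, PASSWORD), verify=False)
    print('---RESPONSE---')
    print(r.status_code)
    print(r.text)

# e.g. for an endpoint that expects PUT:
put(API_CUSTOMFIELD_URL, {'name': dev_name, 'key': 'SomeField', 'value': 'SomeValue'})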
My standard answer would be to replace urllib2 with the Requests package. It makes doing HTTP work a lot easier.
But take a look at this SO answer for a 'hack' to get PUT working.
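The usual 'hack' is to override the request method on a urllib2.Request, which drops straight into the structure of the script's existing post() helper, e.g.:
import urllib
import urllib2
import base64

def put(url, params):
    data = urllib.urlencode(params)
    headers = {
        'Authorization': 'Basic ' + base64.b64encode(USER + ':' + PASSWORD),
        'Content-Type': 'application/x-www-form-urlencoded',
    }
    req = urllib2.Request(url, data, headers)
    req.get_method = lambda: 'PUT'   # urllib2 defaults to POST when data is set; force PUT
    return urllib2.urlopen(req)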
I modified the code published on smbrown.wordpress.com, which extracts the top tracks using the Last.fm API, as below:
#!/usr/bin/python
import time
import pylast
import re
from md5 import md5
user_name = '*******'
user_password = '*******'
password_hash = pylast.md5("*******")
api_key = '***********************************'
api_secret = '****************************'
top_tracks_file = open('top_tracks_wordle.txt', 'w')
network = pylast.LastFMNetwork(api_key = api_key, api_secret = api_secret, username = user_name, password_hash = password_hash)
# to make the output more interesting for wordle viz.
# run against all periods. if you just want one period,
# delete the others from this list
time_periods = ['PERIOD_12MONTHS', 'PERIOD_6MONTHS', 'PERIOD_3MONTHS', 'PERIOD_OVERALL']
# time_periods = ['PERIOD_OVERALL']
#####
## shouldn't have to edit anything below here
#####
md5_user_password = md5(user_password).hexdigest()
sg = pylast.SessionKeyGenerator(network) #api_key, api_secret
session_key = sg.get_session_key(user_name, md5_user_password)
user = pylast.User(user_name, network) #api_key, api_secret, session_key
top_tracks = []
for time_period in time_periods:
    # by default pylast returns a seq in the format:
    # "Item: Andrew Bird - Fake Palindromes, Weight: 33"
    tracks = user.get_top_tracks(period=time_period)
    # regex that tries to pull out only the track name (
    # for the ex. above "Fake Palindromes"
    p = re.compile('.*[\s]-[\s](.*), Weight: [\d]+')
    for track in tracks:
        m = p.match(str(track))
        **track = m.groups()[0]** <-----------Here---------------
        top_tracks.append(track)
    # be nice to last.fm's servers
    time.sleep(5)
top_tracks = "\n".join(top_tracks)
top_tracks_file.write(top_tracks)
top_tracks_file.close()
When the script runs to the position marked by "<-----------Here---------------", I get an error message: "... line 46, in
track = m.groups()[0]
AttributeError: 'NoneType' object has no attribute 'groups'"
I've been stuck here for over a day and don't know what to do next. Can anyone give me some clue about this problem?
Apparently some track names do not match your regex, so match() returns None. Catch the exception and examine track.
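Rather than catching the AttributeError, you can also just test for None; a minimal adjustment along those lines, keeping the rest of the loop unchanged:
for track in tracks:
    m = p.match(str(track))
    if m is None:
        # print the raw item so you can see which track names break the regex
        print('no match for: %s' % track)
        continue
    top_tracks.append(m.groups()[0])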