How to add multiple responses into a single JSON object - Python

I am making requests to another site from my Flask API; basically, my Flask API is a proxy. Initially I substitute the parameters with the known company ID and get all the worker IDs. Given the worker IDs, I make another request, which gets me each worker's details. However, with the code below I am only getting the last response, which means only the details of the last worker. You can ignore the j==1 for now; I added it for testing purposes.
tempDict = {}
updateDic = {}
dictToSend = {}
j = 0
# i = companyid
# id = workerid
# I make several calls to url2 depending on the number of employee ids in number
for id in number:
    url2 = "someurl/" + str(i) + "/contractors/" + str(id)
    r = requests.get(url2, headers={'Content-type': 'application/json', "Authorization": authenticate, 'Accept': 'application/json'})
    print("id" + str(id))
    print(url2)
    loadJsonResponse2 = json.loads(r.text)
    print(loadJsonResponse2)
    key = i
    tempDict.update(loadJsonResponse2)
    # I want to have all of their details and add the company number before
    print(tempDict)
    if(j == 1):
        dictToSend[key] = tempDict
        return jsonify(dictToSend)
    j = j + 1
return jsonify(dictToSend)
So I have all the worker IDs, and I request the other URL to get all their details. The response is in JSON format. However, I am only getting the last response with the above code. I added the j==1 check because I wanted to inspect the return value.
dictToSend[key] = tempDict
return jsonify(dictToSend)
The key is the company ID, so that I can identify which company each worker is from.
How can I concatenate all the JSON responses and, at the end, add a key like "5": {concatenation of all JSON responses}?
Thank you,

Your key for the JSON object is:
# i = companyid
.
.
.
key = i
.
.
.
# You are adding all your responses under companyid;
# better to make a key from companyid and workerid:
# key = str(companyid) + ":" + str(workerid)
dictToSend[key] = tempDict
And here:
# you may not need this, since there is already a loop iterating over workerid
if(j == 1):
    dictToSend[key] = tempDict
    return jsonify(dictToSend)
j = j + 1

# then the only useful line would be
dictToSend[key] = tempDict
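Putting that advice together, here is a minimal sketch of the corrected loop. The `fetch` callable is a hypothetical stand-in for the `requests.get` + `json.loads` pair in the question, so the aggregation logic can be shown on its own:

```python
def collect_workers(company_id, worker_ids, fetch):
    """Aggregate one response per worker into a single dict.

    `fetch` is a hypothetical stand-in for whatever returns the parsed
    JSON for one worker (requests.get + json.loads in the question).
    """
    dict_to_send = {}
    for worker_id in worker_ids:
        details = fetch(company_id, worker_id)  # a fresh dict each pass
        # key on company AND worker, so entries don't overwrite each other
        dict_to_send[str(company_id) + ":" + str(worker_id)] = details
    return dict_to_send
```

In the Flask route you would then call jsonify on the returned dict once, after the loop, instead of returning inside it.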

Related

How do I get individual userId, id, title and posts from jsonplaceholder.typicode.com

I need help with an assignment regarding API calls. Here are the parameters:
1. Use Python to GET from an API (https://jsonplaceholder.typicode.com):
import requests, json
url = 'https://jsonplaceholder.typicode.com/posts'
r = requests.get(url)
data = json.loads(r.text)
2. Capture the list of dictionaries from one endpoint and reverse sort:
for item in reversed(data):
    print(item)
3. Print a post for a specific userId:
print(data[0])
4. Print a post from a specific userId that only prints out the title, id, or post.
When I try:
print(data[0].id[1])
I get a 'dict' object has no attribute 'id'. Any help is appreciated. I just need help with question 4.
To access a dictionary element in Python you use dictionary['<key>'] (not dictionary.<key>). Try:
print(data[0]['id'])
print(data[0]['userId'])
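For question 4 specifically, a short sketch (using inline sample records shaped like the /posts response, since the live values vary) can filter by userId and pick out a single field:

```python
# sample records shaped like the /posts response (hypothetical values)
data = [
    {"userId": 1, "id": 1, "title": "first post", "body": "hello"},
    {"userId": 2, "id": 11, "title": "another post", "body": "world"},
]

# titles of every post belonging to userId 1
titles = [post["title"] for post in data if post["userId"] == 1]
print(titles)  # → ['first post']
```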

Python - save multiple responses from multiple requests

I am pulling JSON data from an API, and I am looking to pass in a different parameter for each request and save each response.
My current code
# create an empty list to store each account id
accounts = []
# store in accounts list every id
for each in allAccounts['data']:
    accounts.append(each['id'])
# for each account, build a new url with that account id
for id in accounts:
    urlAccounts = 'https://example.somewebsite.ie:000/v12345/accounts/' + id + '/users'
I save a response and print out the values.
accountReq = requests.get(urlAccounts, headers=headers)
allUsers = accountReq.json()
for each in allUsers['data']:
    print(each['username'], " " + each['first_name'])
This is fine and it works, but I only store the first ID's response.
How do I store the responses from all of the requests? Essentially, I'm looking to send multiple requests where the ID changes every time, and to save each response.
I'm using Python version 3.10.4.
Here is my code for this, in case anyone stumbles across this later.
# list of each api url to use
link = []
# for every id in accounts, add a new url to the link list
for i in accounts:
    link.append('https://example.somewebsite.ie:000/v12345/accounts/' + i + '/users')
# create a list with all the different responses
accountReq = []
for i in link:
    accountReq.append(requests.get(i, headers=headers).json())
# write to a txt file
with open('masterSheet.txt', 'x') as f:
    # for every response
    for each in accountReq:
        # get each account's data
        account = each['data']
        # for each account's data, get each user's name and username
        for data in account:
            sheet = (data['username'] + " " + " ", data['first_name'], data['last_name'])
            f.write(str(sheet) + "\n")
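As a variation on the above, keeping each response keyed by its account id (a dict instead of a parallel list) makes it easy to tell which response came from which account. `fetch` below is a hypothetical stand-in for the `requests.get(url, headers=headers).json()` call:

```python
def fetch_all_users(account_ids, fetch):
    """Collect one parsed JSON response per account, keyed by account id.

    `fetch` is a hypothetical stand-in for building the account URL and
    calling requests.get(...).json() on it.
    """
    return {acc_id: fetch(acc_id) for acc_id in account_ids}
```

Looking up `result[some_id]` then gives back exactly the response for that account, with no index bookkeeping.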

How to get a JSON that varies each request by its number

I made a request to the Instagram v1 API, and it gives back the response in JSON like this:
The JSON data on pastebin.com
I noticed that I can get the number of IDs and the IDs by:
IDs = response['reels'][ide]["media_ids"]
count = response['reels'][ide]["media_count"]
I don't know where I can use these IDs to help extract the story URLs, and I don't know how to use them to get the media URLs, because the structure changes with the number of stories. If there is another way to extract them, that may also solve my problem.
The "url" key is not unique; it's used in other values too.
Assuming the "media URLs" are the values associated with a key "url" then you can just do this:
import json

def print_url(jdata):
    if isinstance(jdata, list):
        for v in jdata:
            print_url(v)
    elif isinstance(jdata, dict):
        if (url := jdata.get('url')):
            print(url)
        else:
            print_url(list(jdata.values()))

with open('instagram.json', encoding='utf-8') as data:
    print_url(json.load(data))
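A small variation on the same walk returns the URLs as a list instead of printing them, which makes the result easier to reuse and test:

```python
def collect_urls(jdata):
    """Recursively gather every value stored under a 'url' key.

    Mirrors print_url above: a dict that has a 'url' key contributes
    only that value and is not searched any deeper.
    """
    found = []
    if isinstance(jdata, list):
        for v in jdata:
            found.extend(collect_urls(v))
    elif isinstance(jdata, dict):
        if jdata.get('url'):
            found.append(jdata['url'])
        else:
            for v in jdata.values():
                found.extend(collect_urls(v))
    return found
```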

Issues dynamically changing Python 'Requests' header to iterate through API URL endpoints

My issue is as follows:
I am attempting to pull down a list of all email address entries from an API. The data set is so large that it spans multiple API 'pages' with unique URLs. The page number can be specified as a parameter in the API request URL. I wrote a loop to collect email information from an API page, add the email addresses to a list, add 1 to the page number, and repeat the process for up to 30 pages. Unfortunately, the loop seems to query the same page 30 times, producing duplicates. I feel like I'm missing something simple (beginner here), but please let me know if anyone can help. Code is below:
import requests
import json

number = 1
user_list = []
parameters = {'page': number, 'per_page': 50}
response = requests.get('https://api.com/profiles.json', headers=headers, params=parameters)
while number <= 30:
    formatted_data = response.json()
    profiles = formatted_data['profiles']
    for dict in profiles:
        user_list.append(dict['email'])
    number = number + 1
print(sorted(user_list))
The parameters dictionary captured the value of number at the moment it was created; rebinding number afterwards does not change what is stored in the dictionary.
That means you need to update the dictionary after every iteration.
You also need to place requests.get() inside your loop to get different results:
import requests
import json

number = 1
user_list = []
parameters = {'page': number, 'per_page': 50}
while number <= 30:
    response = requests.get('https://api.com/profiles.json', headers=headers, params=parameters)
    formatted_data = response.json()
    profiles = formatted_data['profiles']
    for dict_ in profiles:  # avoid shadowing the built-in name dict
        user_list.append(dict_['email'])
    number = number + 1
    parameters['page'] = number
print(sorted(user_list))
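The same pagination can also be written with a for loop that builds a fresh params dict on each pass, which avoids the stale-dictionary problem entirely. `fetch` here is a hypothetical stand-in for the `requests.get('https://api.com/profiles.json', headers=headers, params=params).json()` call:

```python
def collect_emails(pages, per_page, fetch):
    """Walk pages 1..pages, collecting the 'email' field of each profile.

    `fetch` is a hypothetical stand-in taking the params dict and
    returning the parsed JSON response for that page.
    """
    emails = []
    for page in range(1, pages + 1):
        # a new params dict per request, so the page number can't go stale
        data = fetch({'page': page, 'per_page': per_page})
        emails.extend(profile['email'] for profile in data['profiles'])
    return emails
```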

Writing code using graph APIs

I am extremely new to Python, scripting, and APIs; I am just learning. I came across a very cool piece of code which uses the Facebook API to reply to birthday wishes.
I will add my questions, numbered, so that it will be easier for someone else later too. I hope this question will clear up lots of newbie doubts.
1) Talking about APIs, what format are they usually in? Is it a library file which we need to download and later import? For instance, for the Twitter API, do we need to import twitter?
Here is the code :
import requests
import json

AFTER = 1353233754
TOKEN = ' <insert token here> '

def get_posts():
    """Returns dictionary of id, first names of people who posted on my wall
    between start and end time"""
    query = ("SELECT post_id, actor_id, message FROM stream WHERE "
             "filter_key = 'others' AND source_id = me() AND "
             "created_time > 1353233754 LIMIT 200")
    payload = {'q': query, 'access_token': TOKEN}
    r = requests.get('https://graph.facebook.com/fql', params=payload)
    result = json.loads(r.text)
    return result['data']

def commentall(wallposts):
    """Comments thank you on all posts"""
    # TODO convert to batch request later
    for wallpost in wallposts:
        r = requests.get('https://graph.facebook.com/%s' %
                         wallpost['actor_id'])
        url = 'https://graph.facebook.com/%s/comments' % wallpost['post_id']
        user = json.loads(r.text)
        message = 'Thanks %s :)' % user['first_name']
        payload = {'access_token': TOKEN, 'message': message}
        s = requests.post(url, data=payload)
        print("Wall post %s done" % wallpost['post_id'])

if __name__ == '__main__':
    commentall(get_posts())
Questions:
2) Importing json: why is json imported here? To give a structured reply?
3) What are 'AFTER' and the empty variable 'TOKEN' here?
4) What are the variables 'query' and 'payload' inside the get_posts() function?
5) Please explain roughly what each method and function does.
I know I am extremely naive, but this could be a good start. With a little hint, I can carry on. If explaining the whole code is too boring, I understand; in that case, please tell me how a script links to an API after it is written, i.e. how the script communicates with the desired API.
This is not my code; I copied it from a source.
json is needed to interpret the data that the web service sends back via HTTP.
The 'AFTER' variable is meant to be used so that all posts after that timestamp are assumed to be birthday wishes.
To make the program work, you need a token, which you can obtain from the Graph API Explorer with the appropriate permissions.
