Looking for samples of making multiple API calls in Python - python

I am new to and learning Python. As part of my learning, I am trying to do an API integration. I am getting a result, but it's limited to 100 records, while totalresults is around 7000. Is there a way I can call the API multiple times to bring the entire result set into CSV format? I am adding my code below and am not sure how to proceed further.
import requests
import pandas as pd
resp = requests.get('apipath' + '?company=XXXX', auth=('XXXXX', 'XXXXXX'))
data = resp.json()
pd.DataFrame(data["items"]).to_csv('dict_file.csv', header=True)
Please help.

You'll need to check the API documentation, but generally there will be a parameter like "maxResults" (or similar) that you can add to the URL to retrieve more than the default number of results.
Your request (modifying the query string in the URL) would look something like this:
resp = requests.get('apipath' + '?company=XXXX&maxResults=1000', auth=('XXXXX', 'XXXXXX'))
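If the API caps the page size, you will usually have to page through the results instead. A minimal sketch, assuming the API exposes hypothetical startAt/maxResults parameters and reports the total as totalResults (check your API's documentation for the actual names):
import requests
import pandas as pd

base_url = 'apipath'
auth = ('XXXXX', 'XXXXXX')
page_size = 100
items = []
start = 0

while True:
    resp = requests.get(
        base_url,
        params={'company': 'XXXX', 'startAt': start, 'maxResults': page_size},
        auth=auth,
    )
    data = resp.json()
    items.extend(data['items'])          # accumulate this page
    start += page_size
    if start >= data['totalResults']:    # stop once every record has been fetched
        break

pd.DataFrame(items).to_csv('dict_file.csv', header=True)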

Related

Grafana: How to use the selected range of time in a query?

I'm using Grafana with the JSON API data source and I'd like to build a query that depends on the time range selected in the upper right corner of the screen.
Is there any variable (or something like that) with which I can send the selected time range from Grafana and receive it in my backend?
In other words, if I select 24h in Grafana, I'd like to use that in my query and return only data from this period.
I tried to read the request from Grafana, which should contain the time range. However, I got the error: Failed to decode JSON object: Expecting value: line 1 column 1 (char 0).
It's possible that I misunderstood something and it doesn't work that way.
This is my /query endpoint:
from flask import Flask, request

app = Flask(__name__)

@app.route('/query', methods=['POST', 'GET'])
def query():
    req = request.get_json()  # <- this is where it failed
    range = req['request']['range']
    json_data = get_from_database(range)
    return json_data
Are there any other options, like sending the time range in the URL (with these variables: ${__from} and ${__to})?
You can use the global variables $__from and $__to. As explained in the docs, ${__from} will give you a Unix millisecond epoch, but there are also other format options.
How to use variables in the JSON API plugin is explained in the docs. You can either use them as params, which will result in a URL like /query?range_start=${__from}&range_end=${__to}, or use them directly in your path like /query/${__from}/${__to}.
For retrieving them using Python: you will find a lot on that topic on SO. Basically, I think you don't need .get_json() (it will not work if the request is not application/json). If you send them as params, use request.args.get('range_start') to get the value (short explanation).
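A minimal sketch of that /query endpoint, assuming the range is sent as the range_start/range_end query params shown above (get_from_database stands in for your own lookup):
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/query', methods=['GET', 'POST'])
def query():
    # Grafana interpolates ${__from}/${__to} into Unix millisecond epochs
    range_start = int(request.args.get('range_start'))
    range_end = int(request.args.get('range_end'))
    json_data = get_from_database(range_start, range_end)  # your own query logic
    return jsonify(json_data)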

How to get PartitionKeyRangeId of a container in Azure Cosmos Python SDK

I am trying to implement the pull model to query the change feed using the Azure Cosmos Python SDK. I found that, to parallelise the querying process, the official documentation mentions a FeedRange value and creating a FeedIterator to iterate through each range of partition key values obtained from the FeedRange.
Currently my code snippet to query the change feed looks like this, and it is pretty straightforward:
# function to get items from change feed based on a condition
def get_response(container_client, condition, last_continuation_token=None):
    # Historical data read
    if condition:
        response = container_client.query_items_change_feed(
            is_start_from_beginning=True,
            # partition_key_range_id=0
        )
    # reading from a checkpoint
    else:
        response = container_client.query_items_change_feed(
            is_start_from_beginning=False,
            continuation=last_continuation_token
        )
    return response
The problem with this approach is the efficiency when getting all the items from the beginning (historical data read). I tried this method with a pretty small dataset of 500 items and the response took around 60 seconds. When dealing with millions or even billions of items, the response might take too long to return.
Would querying the change feed in parallel for each partition key range save time?
If yes, how do I get the PartitionKeyRangeId in the Python SDK?
Are there any problems I need to consider when implementing this?
I hope I make sense!
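Not a definitive answer, but a sketch of the direction the docs describe: newer versions of the azure-cosmos SDK work with feed ranges rather than raw PartitionKeyRangeIds. Assuming your SDK version exposes read_feed_ranges() and a feed_range parameter on query_items_change_feed (the exact names may differ between versions, so check the SDK reference), one possible approach is to fan the ranges out across threads:
from concurrent.futures import ThreadPoolExecutor

def read_range(container_client, feed_range):
    # pull the change feed for just this slice of the partition key space
    return list(container_client.query_items_change_feed(
        feed_range=feed_range,
        is_start_from_beginning=True,
    ))

def read_all_parallel(container_client):
    feed_ranges = list(container_client.read_feed_ranges())
    with ThreadPoolExecutor(max_workers=max(1, len(feed_ranges))) as pool:
        results = pool.map(lambda fr: read_range(container_client, fr), feed_ranges)
    return [item for chunk in results for item in chunk]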

transform JSON file to be usable

Long story short, I query the Spotify API, which returns JSON with data about the newest albums. How do I get specific info from that, such as every band name or every album title? I've tried a lot of ways I found on the internet to get that info and nothing seems to work for me, and after a couple of hours I'm getting frustrated.
The JSON data is on jsfiddle.
Here is the request:
import requests

endpoint = "https://api.spotify.com/v1/browse/new-releases"
lookup_url = f"{endpoint}"
r = requests.get(lookup_url, headers=headers)  # headers holds the Authorization token
print(r.json())
When you make this request, as the comments have mentioned, you get a dictionary whose keys and values you can then access. For example, if you want to get the album_type you could do the following:
print(data["albums"]["items"][0]["album_type"])
Since items contains a list, you need to get the first value with index 0 and then access album_type.
Output:
single
Here is a link to the code I used with your json.
I suggest you look into how to deal with JSON data in Python; this is a good place to start.
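To pull out every album title and band name rather than just the first item, you can loop over the items list. A small sketch, assuming the usual shape of the Spotify new-releases response where each item has a name and an artists list:
data = r.json()
for album in data["albums"]["items"]:
    title = album["name"]
    artists = ", ".join(artist["name"] for artist in album["artists"])
    print(f"{artists} - {title}")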
I copied the data from the jsfiddle link.
Now try the following code:
import ast
pyobj = ast.literal_eval(str_cop_from_src)
Later you can try the keys:
pyobj["albums"]["items"][0]["album_type"]
pyobj will be a Python dictionary with all the data.

String indices must be integers Giphy

I'm trying to get a url from the object data, but it isn't working. The program stops on line 4. The code is below.
My code:
import requests
gifs = str(requests.get("https://api.giphy.com/v1/gifs/random?api_key=APIKEY"))
dump = json.dumps(gifs)
json.loads(dump['data']['url'])
Your description is not clear enough. You expect to read JSON and select a field from it?
I recommend you check this section of the requests quickstart guide, since I suspect you want to read the response as JSON and extract some fields.
Maybe something like this might help:
r = requests.get('http://whatever.com')
url = r.json()['url']
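For the Giphy random endpoint specifically, something along these lines should work. It's a sketch that assumes the response keeps the data/url structure your own code references, and it passes the API key via params instead of hard-coding it into the URL:
import requests

r = requests.get(
    "https://api.giphy.com/v1/gifs/random",
    params={"api_key": "APIKEY"},
)
gif = r.json()               # parse the response body as JSON, no json.dumps needed
url = gif["data"]["url"]
print(url)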

Python Requests - add text at the beginning of query string

When sending a GET request through python-requests, I need to add something specific at the beginning of the query string. I have tried passing the data in through dicts and JSON strings with no luck.
The request as it appears when produced by requests:
/apply/.../explain?%7B%22......
The request as it appears when produced by their interactive API documentation (Swagger):
/apply/.../explain?record=%7B%22....
Where the key-value pairs of my data follow the excerpt above.
Ultimately, I think the missing piece is the record= that gets produced by their documentation. It is the only piece that is different from what is produced by Requests.
At the moment I've got it set up something like this:
import requests
s = requests.Session()
s.auth = requests.auth.HTTPBasicAuth(username,password)
s.verify = certificate_path
# with data below being a dictionary of the values I need to pass.
r = s.get(url,data=data)
I am trying to include an image of the documentation below, but don't yet have enough reputation to do so:
apply/model/explain documentation
'GET' requests don't send data in the body; that's for 'POST' and friends.
You can send the query string arguments using the params kwarg instead:
>>> params = {'record': '{"'}
>>> response = requests.get('http://www.example.com/explain', params=params)
>>> response.request.url
'http://www.example.com/explain?record=%7B%22'
From the comments, I felt the need to explain this. Suppose you have a URL like this:
http://example.com/sth?key=value&anotherkey=anothervalue
To call it with python requests, you only have to write:
response = requests.get('http://example.com/sth', params={
    'key': 'value',
    'anotherkey': 'anothervalue',
})
Keep in mind that if your values or keys contain any special characters, they will be percent-escaped; that's the reason for the %7B%22 part of the URL in your question.
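In your case, that means serialising the record yourself and letting requests build the query string. A sketch, assuming record is the parameter name shown in the Swagger docs and data is your dictionary of values:
import json
import requests

s = requests.Session()
s.auth = requests.auth.HTTPBasicAuth(username, password)
s.verify = certificate_path

# JSON-encode the payload and send it as the value of the 'record' parameter
r = s.get(url, params={'record': json.dumps(data)})
# resulting request: /apply/.../explain?record=%7B%22...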
