TypeError in Python Program
Hello, I am writing a program that prompts for a location, contacts a web service, retrieves JSON from that service, parses the data, and extracts the first place_id from the JSON.
I am trying to find the place_id for: Shanghai Jiao Tong University
I have my code written, but I just can't get it to work. It must be a small error, because when I run it I get this message:
place_id = process_json['results'][0]['place_id']
TypeError: list indices must be integers or slices, not str
Here is my code
import urllib.request, urllib.parse, urllib.error
import json

serviceurl = 'http://py4e-data.dr-chuck.net/geojson??'

while True:
    location = input('Enter location: ')
    if len(location) < 1: break

    url = serviceurl + urllib.parse.urlencode({'address': location})

    print('Retrieving', url)
    data = urllib.request.urlopen(url)
    read_data = data.read().decode()
    print('Retrieved', len(read_data), 'characters')

    try:
        process_json = json.loads(read_data)
    except:
        process_json = None

    place_id = process_json['results'][0]['place_id']
    print('Place id:', place_id)
The problem here is that you're treating a list like a dictionary. A list is, as the name implies, a sequence of items with an incrementing integer index: 0, 1, 2, and so on. A dictionary is similar, except its entries are looked up by a named key.
Your code isn't working because the JSON returned from the URL is a list. It looks like this:
[
"AGH University of Science and Technology",
"Academy of Fine Arts Warsaw Poland",
"American University in Cairo",
"Arizona State University",
"Athens Information Technology",
"BITS Pilani",
]
It seems you're trying to find the place_id of a university, but there is no place_id anywhere in the data you're searching. If there were, your approach would be correct, although it still would not account for the user typing something other than the exact name of the university.
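To make the distinction concrete, here is a minimal sketch (with made-up sample data) that reproduces the exact error from the question:

```python
# A list is indexed by integers; a dict is indexed by keys.
results_list = ["alpha", "beta"]
results_dict = {"results": [{"place_id": "abc123"}]}

print(results_list[0])                         # integer index into a list: OK
print(results_dict["results"][0]["place_id"])  # key, then index, then key: OK

try:
    results_list["results"]                    # string index into a list
except TypeError as err:
    print(err)  # list indices must be integers or slices, not str
```

So the first thing to check is what `json.loads` actually returned: if the top-level value is a list, there is no `['results']` key to look up.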
Related
Tweepy (API V2) - Convert Response into dictionary
I want to get information about the people followed by the Twitter account "POTUS" as a dictionary. My code:

import tweepy, json

client = tweepy.Client(bearer_token=x)
id = client.get_user(username="POTUS").data.id
users = client.get_users_following(
    id=id,
    user_fields=['created_at', 'description', 'entities', 'id', 'location', 'name',
                 'pinned_tweet_id', 'profile_image_url', 'protected', 'public_metrics',
                 'url', 'username', 'verified', 'withheld'],
    expansions=['pinned_tweet_id'],
    max_results=13)

This query returns the type "Response", which in turn stores the type "User":

Response(data=[<User id=7563792 name=U.S. Men's National Soccer Team username=USMNT>, <User id=1352064843432472578 name=White House COVID-19 Response Team username=WHCOVIDResponse>, <User id=1351302423273472012 name=Kate Bedingfield username=WHCommsDir>, <User id=1351293685493878786 name=Susan Rice username=AmbRice46>, ..., <User id=1323730225067339784 name=The White House username=WhiteHouse>], includes={}, errors=[], meta={'result_count': 13})

I've tried ._json and .json() but neither worked. Does anyone have an idea how I can convert this Response into a dictionary object to work with? Thanks in advance
Found the solution! Adding return_type=dict to the client will return everything as a dictionary:

client = tweepy.Client(bearer_token=x, return_type=dict)

However, you then have to change the line that gets the user ID a bit:

id = client.get_user(username="POTUS")['data']['id']
You can do

previous_cursor, next_cursor = None, 0
while previous_cursor != next_cursor:
    followed_data = api.get_friend_ids(username="POTUS", cursor=next_cursor)
    previous_cursor, next_cursor = next_cursor, followed_data["next_cursor"]
    followed_ids = followed_data["id"]  # this is a list
    # do something with followed_ids, like writing them to a file

to get the user ids of the followed accounts. If you want the usernames and not the ids, you can do something very similar with api.get_friends(), but this returns fewer items at a time, so if you plan to follow those accounts, using the ids will probably be quicker.
Issue with 'else' sequence using Spotipy/Spotify API
My team and I (newbies to Python) have written the following code to generate Spotify songs related to a specific city and related terms. If the user inputs a city that is not in our CITY_KEY_WORDS list, it tells the user that the input will be added to a requests file, and then writes the input to the file. The code is as follows:

from random import shuffle
from typing import Any, Dict, List

import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

sp = spotipy.Spotify(
    auth_manager=SpotifyClientCredentials(client_id="", client_secret="")
)

CITY_KEY_WORDS = {
    'london': ['big ben', 'fuse'],
    'paris': ['eiffel tower', 'notre dame', 'louvre'],
    'manhattan': ['new york', 'new york city', 'nyc', 'empire state', 'wall street', ],
    'rome': ['colosseum', 'roma', 'spanish steps', 'pantheon', 'sistine chapel', 'vatican'],
    'berlin': ['berghain', 'berlin wall'],
}


def main(city: str, num_songs: int) -> List[Dict[str, Any]]:
    if city in CITY_KEY_WORDS:
        """Searches Spotify for songs that are about `city`.
        Returns at most `num_songs` tracks."""
        results = []
        # Search for songs that have `city` in the title
        results += sp.search(city, limit=50)['tracks']['items']  # 50 is the maximum Spotify's API allows
        # Search for songs that have key words associated with `city`
        if city.lower() in CITY_KEY_WORDS.keys():
            for related_term in CITY_KEY_WORDS[city.lower()]:
                results += sp.search(related_term, limit=50)['tracks']['items']
        # Shuffle the results so that they are not ordered by key word and return at most `num_songs`
        shuffle(results)
        return results[: num_songs]
    else:
        print("Unfortunately, this city is not yet in our system. "
              "We will add it to our requests file.")
        with open('requests.txt', 'r') as text_file:
            request = text_file.read()
        request = request + city + '\n'
        with open('requests.txt', 'w+') as text_file:
            text_file.write(request)


def display_tracks(tracks: List[Dict[str, Any]]) -> None:
    """Prints the name, artist and URL of each track in `tracks`"""
    for num, track in enumerate(tracks):
        # Print the relevant details
        print(f"{num + 1}. {track['name']} - {track['artists'][0]['name']} "
              f"{track['external_urls']['spotify']}")


if __name__ == '__main__':
    city = input("Virtual holiday city? ")
    number_of_songs = input("How many songs would you like? ")
    tracks = main(city, int(number_of_songs))
    display_tracks(tracks)

The code runs fine for the "if" branch (when someone enters a city we have listed). But when the else branch runs, errors come up after its actions have executed fine (it prints the message and writes the user's input to the file). The errors are:

Traceback (most recent call last):
  File "...", line 48, in <module>
    display_tracks(tracks)
  File "...", line 41, in display_tracks
    for num, track in enumerate(tracks):
TypeError: 'NoneType' object is not iterable

Please excuse my lack of knowledge, but could someone help with this issue? We would also like to create a playlist of the songs at the end, but have been facing difficulties with that.
Your main function does not have a return statement in the else clause, and that causes tracks to be None. Iterating over tracks when it's None is what causes the error. There are a few things you can do to improve the code:

- separation of concerns: the main function is doing two different things, checking the input and fetching the tracks
- do .lower() once in the beginning so you don't have to repeat it
- follow documentation conventions
- check the response before using it
- some code cleaning

See below for the changes suggested above:

def fetch_tracks(city: str, num_songs: int) -> List[Dict[str, Any]]:
    """Searches Spotify for songs that are about `city`.

    :param city: TODO: TBD
    :param num_songs: TODO: TBD
    :return: at most `num_songs` tracks.
    """
    results = []
    for search_term in [city, *CITY_KEY_WORDS[city]]:
        response = sp.search(search_term, limit=50)
        if response and 'tracks' in response and 'items' in response['tracks']:
            results += response['tracks']['items']
    # Shuffle the results so that they are not ordered by key word and return
    # at most `num_songs`
    shuffle(results)
    return results[: num_songs]


def display_tracks(tracks: List[Dict[str, Any]]) -> None:
    """Prints the name, artist and URL of each track in `tracks`"""
    for num, track in enumerate(tracks):
        # Print the relevant details
        print(
            f"{num + 1}. {track['name']} - {track['artists'][0]['name']} "
            f"{track['external_urls']['spotify']}")


def main():
    city = input("Virtual holiday city? ")
    city = city.lower()
    # Check the input city and handle unsupported cities.
    if city not in CITY_KEY_WORDS:
        print("Unfortunately, this city is not yet in our system. "
              "We will add it to our requests file.")
        with open('requests.txt', 'a') as f:
            f.write(f"{city}\n")
        exit()
    number_of_songs = input("How many songs would you like? ")
    tracks = fetch_tracks(city, int(number_of_songs))
    display_tracks(tracks)


if __name__ == '__main__':
    main()
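The response check in that refactor can be exercised in isolation with a stubbed response dict; `extract_items` and the sample data here are hypothetical stand-ins, no Spotify call involved:

```python
def extract_items(response):
    # Same guard as in the refactor: only index in when every level exists.
    if response and 'tracks' in response and 'items' in response['tracks']:
        return response['tracks']['items']
    return []

print(extract_items({"tracks": {"items": [{"name": "London Calling"}]}}))  # one item
print(extract_items(None))         # [] -- a failed call doesn't crash the caller
print(extract_items({"tracks": {}}))  # [] -- partial responses are tolerated too
```

Returning an empty list for bad responses keeps the calling code free of None checks.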
When your if-statement executes, you return a list of items and feed them into the display_tracks() function. But what happens when the else branch executes? You add the request to your text file but do not return anything (i.e. you return None), and feed that into display_tracks(). display_tracks then iterates over this None value, throwing your exception. You only want to show the tracks if there actually are any tracks to display. One way to do this would be to move the call to display_tracks() into your main function, but then the same error would be thrown if no tracks are found for your search terms. Another solution would be to first check whether your tracks are non-empty, or to catch the TypeError with something like:

tracks = main(city, int(number_of_songs))
try:
    display_tracks(tracks)
except TypeError:
    pass
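The root cause reduces to a few lines; `lookup` here is a hypothetical stand-in for the question's main function:

```python
def lookup(city, known=("london", "paris")):
    if city in known:
        return [f"song about {city}"]
    # No return statement in this branch, so Python implicitly returns None.

tracks = lookup("berlin")
print(tracks)  # None

# Guard before iterating so the missing return doesn't crash the caller:
for track in tracks or []:
    print(track)
```

`tracks or []` substitutes an empty list when tracks is None (or empty), which is a lighter-weight guard than catching TypeError.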
Correcting to the correct URL
I have written a simple script that reads keywords from a file and uses them to build URLs for a JSON API. Below is the script:

import urllib2
import json

f1 = open('CatList.text', 'r')
f2 = open('SubList.text', 'w')

lines = f1.read().splitlines()
for line in lines:
    url = 'https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=' + line + '&cmlimit=100'
    json_obj = urllib2.urlopen(url)
    data = json.load(json_obj)
    for item in data['query']:
        for i in data['query']['categorymembers']:
            print i['title']
            print '-----------------------------------------'
            f2.write((i['title']).encode('utf8') + "\n")

The program first reads CatList, which provides the list of keywords used in the URL. Here is a sample of what CatList.text contains:

Category:Branches of geography
Category:Geography by place
Category:Geography awards and competitions
Category:Geography conferences
Category:Geography education
Category:Environmental studies
Category:Exploration
Category:Geocodes
Category:Geographers
Category:Geographical zones
Category:Geopolitical corridors
Category:History of geography
Category:Land systems
Category:Landscape
Category:Geography-related lists
Category:Lists of countries by geography
Category:Navigation
Category:Geography organizations
Category:Places
Category:Geographical regions
Category:Surveying
Category:Geographical technology
Category:Geography terminology
Category:Works about geography
Category:Geographic images
Category:Geography stubs

My program takes each keyword and places it in the URL.
However, I am not able to get the result. I checked the code by printing the URL:

import urllib2
import json

f1 = open('CatList.text', 'r')
f2 = open('SubList2.text', 'w')

lines = f1.read().splitlines()
for line in lines:
    url = 'https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=' + line + '&cmlimit=100'
    json_obj = urllib2.urlopen(url)
    data = json.load(json_obj)
    f2.write(url + '\n')

The result I get in SubList2 is as follows:

https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Branches of geography&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Geography by place&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Geography awards and competitions&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Geography conferences&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Geography education&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Environmental studies&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Exploration&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Geocodes&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Geographers&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Geographical zones&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Geopolitical corridors&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:History of geography&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Land systems&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Landscape&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Geography-related lists&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Lists of countries by geography&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Navigation&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Geography organizations&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Places&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Geographical regions&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Surveying&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Geographical technology&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Geography terminology&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Works about geography&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Geographic images&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Geography stubs&cmlimit=100

This shows that the URL is built correctly. But when I run the full code it is not able to get the correct result.
One thing I notice is that when I place a link in the address bar, for example:

https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Branches of geography&cmlimit=100

it gives the correct result, because the address bar auto-corrects it to:

https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Branches%20of%20geography&cmlimit=100

I believe that if %20 is inserted in place of each space in "Category:Branches of geography", my script will be able to get the correct JSON items. Problem: I am not sure how to modify the statement in the code above to replace the blank spaces in CatList with %20. Please forgive the bad formatting and the long post; I am still trying to learn Python. Thank you for helping me.

Edit: Thank you Tim. Your solution works:

url = 'https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=' + urllib2.quote(line) + '&cmlimit=100'

It prints the correct result:

https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3ABranches%20of%20geography&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3AGeography%20by%20place&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3AGeography%20awards%20and%20competitions&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3AGeography%20conferences&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3AGeography%20education&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3AEnvironmental%20studies&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3AExploration&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3AGeocodes&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3AGeographers&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3AGeographical%20zones&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3AGeopolitical%20corridors&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3AHistory%20of%20geography&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3ALand%20systems&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3ALandscape&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3AGeography-related%20lists&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3ALists%20of%20countries%20by%20geography&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3ANavigation&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3AGeography%20organizations&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3APlaces&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3AGeographical%20regions&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3ASurveying&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3AGeographical%20technology&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3AGeography%20terminology&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3AWorks%20about%20geography&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3AGeographic%20images&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3AGeography%20stubs&cmlimit=100
Use urllib.quote() to percent-encode special characters in a URL.

Python 2:

import urllib
line = 'Category:Branches of geography'
url = 'https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=' + urllib.quote(line) + '&cmlimit=100'

https://docs.python.org/2/library/urllib.html#urllib.quote

Python 3:

import urllib.parse
line = 'Category:Branches of geography'
url = 'https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=' + urllib.parse.quote(line) + '&cmlimit=100'

https://docs.python.org/3.5/library/urllib.parse.html#urllib.parse.quote
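For reference, Python 3's standard library offers two ways to encode this: quote() for a single component (as in the fix above), and urlencode() for the whole query string. The encodings differ (%20 vs +) but both are valid:

```python
from urllib.parse import quote, urlencode

line = 'Category:Branches of geography'
base = 'https://en.wikipedia.org/w/api.php'

# quote() percent-encodes one component; by default only '/' is left as-is,
# so ':' becomes %3A and spaces become %20.
print(quote(line))  # Category%3ABranches%20of%20geography

# urlencode() builds the full query string from a dict (spaces become '+').
params = {'action': 'query', 'format': 'json', 'list': 'categorymembers',
          'cmtitle': line, 'cmlimit': 100}
url = base + '?' + urlencode(params)
print(url)
```

urlencode() also spares you from concatenating parameters by hand, which avoids this class of bug entirely.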
Get the highest value of a specific field from an API response in Python
I make a GET to a API I got this back {"status":200,"message":"Success","data":[{"email_address":"admin#nyunets.com","password":"admin","account_id":1000,"account_type":"admin","name_prefix":null,"first_name":null,"middle_names":null,"last_name":"Admin","name_suffix":null,"non_person_name":false,"dba":"","display_name":"Admin","address1":"111 Park Ave","address2":"Floor 4","address3":"Suite 4011","city":"New York","state":"NY","postal_code":"10022","nation_code":"USA","phone1":"212-555-1212","phone2":"","phone3":"","time_zone_offset_from_utc":-5,"customer_type":2,"last_updated_utc_in_secs":1446127072},{"email_address":"mhn#nyu.com","password":"nyu123","account_id":1002,"account_type":"customer","name_prefix":"","first_name":"MHN","middle_names":"","last_name":"User","name_suffix":"","non_person_name":false,"dba":"","display_name":"MHNUser","address1":"3101 Knox St","address2":"","address3":"","city":"Dallas","state":"TX","postal_code":"75205","nation_code":"USA","phone1":"8623875097","phone2":"","phone3":"","time_zone_offset_from_utc":-5,"customer_type":2,"last_updated_utc_in_secs":1461166172},{"email_address":"mhn1#nyu.com","password":"nyu123","account_id":1004,"account_type":"customer","name_prefix":"","first_name":"MHN1","middle_names":"","last_name":"User","name_suffix":"","non_person_name":false,"dba":"","display_name":"MHN1User","address1":"1010 Rosedale Shopping Center","address2":"","address3":"","city":"Roseville","state":"MN","postal_code":"55113","nation_code":"USA","phone1":"8279856982","phone2":"","phone3":"","time_zone_offset_from_utc":-5,"customer_type":2,"last_updated_utc_in_secs":1461166417},{"email_address":"location#nyu.com","password":"nyu123","account_id":1005,"account_type":"customer","name_prefix":"","first_name":"BB","middle_names":"","last_name":"HH","name_suffix":"","non_person_name":false,"dba":"","display_name":"BBHH","address1":"9906 Beverly Dr","address2":"9906 Beverly Dr","address3":"","city":"Beverly 
Hills","state":"CA","postal_code":"90210","nation_code":"90210","phone1":"3105559906","phone2":"","phone3":"","time_zone_offset_from_utc":-5,"customer_type":1,"last_updated_utc_in_secs":1461167224},{"email_address":"mbn1#nyu.com","password":"nyu123","account_id":1003,"account_type":"customer","name_prefix":"","first_name":"MBN1","middle_names":"","last_name":"User","name_suffix":"","non_person_name":false,"dba":"","display_name":"MBN1User","address1":"3200 S Las Vegas Blvd","address2":"","address3":"","city":"Las Vegas","state":"NV","postal_code":"89109","nation_code":"USA","phone1":"9273597497","phone2":"","phone3":"","time_zone_offset_from_utc":-5,"customer_type":1,"last_updated_utc_in_secs":1461593233},{"email_address":"mbn#nyu.com","password":"nyu123","account_id":1001,"account_type":"customer","name_prefix":"","first_name":"MBN","middle_names":"","last_name":"User","name_suffix":"","non_person_name":false,"dba":"","display_name":"MBNUser","address1":"300 Concord Road","address2":"","address3":"","city":"Billerica","state":"MA","postal_code":"01821","nation_code":"USA","phone1":"8127085695","phone2":"","phone3":"","time_zone_offset_from_utc":-5,"customer_type":1,"last_updated_utc_in_secs":1461784499},{"email_address":"usermbn#nyu.com","password":"nyu123","account_id":1006,"account_type":"customer","name_prefix":"","first_name":"User","middle_names":"","last_name":"MBN","name_suffix":"","non_person_name":false,"dba":"","display_name":"UserMBN","address1":"75 Saint Alphonsus 
Street","address2":"","address3":"","city":"Boston","state":"MA","postal_code":"01821","nation_code":"USA","phone1":"8127085695","phone2":"","phone3":"","time_zone_offset_from_utc":-5,"customer_type":1,"last_updated_utc_in_secs":1462285561},{"email_address":"emile.barnaby#example.com","password":"nyu123","account_id":2000,"account_type":"customer","name_prefix":"","first_name":"emile","middle_names":"","last_name":"barnaby","name_suffix":"","non_person_name":false,"dba":"","display_name":"emilebarnaby","address1":"300 Concord Rd","address2":"","address3":"","city":"8239grandmaraisave","state":"manitoba","postal_code":"56798","nation_code":"USA","phone1":"414-140-1435","phone2":"414-140-1435","phone3":"414-140-1435","time_zone_offset_from_utc":-5,"customer_type":1,"last_updated_utc_in_secs":1462211572}]}

I have:

import requests
import json

url = "http://api/users"
accounts = requests.get(url).json()
data = json.loads(accounts)

object_with_max_account_id = max(accounts['data'], key=lambda x: x['account_id'])
print(object_with_max_account_id['account_id'])

The goal is to get the highest account_id out of it.
Usually we like to see what OPs have tried themselves; this one is pretty straightforward:

import requests

url = "http://api/users"
accounts = requests.get(url).json()

object_with_max_account_id = max(accounts['data'], key=lambda x: x['account_id'])
print(object_with_max_account_id['account_id'])
>> 2000

Note that .json() already parses the response, so the extra json.loads(accounts) in your code is unnecessary (and would fail, since accounts is a dict, not a string).
Edit: Apparently, you first need to parse your input as JSON. Check out simplejson:

import simplejson as json
data_obj = json.loads(data)

The s in loads means "load from string". Then, if you want to loop through the accounts yourself, how about something like:

maxID = -1
for account in data_obj['data']:
    if account['account_id'] > maxID:
        maxID = account['account_id']
print "Max ID is %d" % maxID
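The max(..., key=...) one-liner from the first answer can be checked on a trimmed-down copy of the response (sample data reduced to the relevant field):

```python
# Trimmed stand-in for the API response: only account_id is kept.
accounts = {"status": 200, "data": [
    {"account_id": 1000}, {"account_id": 1002}, {"account_id": 2000},
    {"account_id": 1003}, {"account_id": 1006},
]}

# key= tells max() which field to compare; the whole dict is returned.
top = max(accounts["data"], key=lambda acct: acct["account_id"])
print(top["account_id"])  # 2000

# Equivalent if only the id itself is needed:
print(max(acct["account_id"] for acct in accounts["data"]))  # 2000
```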
Bitcoin: parsing Blockchain API JSON in PyQT
The following link provides JSON data about a BTC address: https://blockchain.info/address/1GA9RVZHuEE8zm4ooMTiqLicfnvymhzRVm?format=json. The bitcoin address can be viewed here: https://blockchain.info/address/1GA9RVZHuEE8zm4ooMTiqLicfnvymhzRVm

As you can see, the first transaction, on 2014-10-20 19:14:22, had 10 inputs from 10 addresses. I want to retrieve these addresses using the API, but have been struggling to get this to work. The following code only retrieves the first address instead of all 10; I know it has to do with the JSON structure, but I can't figure it out.

import json
import urllib2
import sys

# Random BTC address (user input)
btc_adress = ("1GA9RVZHuEE8zm4ooMTiqLicfnvymhzRVm")

# API call to blockchain
url = "https://blockchain.info/address/" + (btc_adress) + "?format=json"
json_obj = urllib2.urlopen(url)
data = json.load(json_obj)

# Put tx's into a list
txs_list = []
for txs in data["txs"]:
    txs_list.append(txs)

# Cut the list down to 5 recent transactions
listcutter = len(txs_list)
if listcutter >= 5:
    del txs_list[5:listcutter]

# Get number of inputs for tx
recent_tx_1 = txs_list[1]
total_inputs_tx_1 = len(recent_tx_1["inputs"])

The block below needs to put all 10 input addresses in the list output_adress. It only does so for the first one:

output_adress = []
output_adress.append(recent_tx_1["inputs"][0]["prev_out"]["addr"])
print output_adress

Your help is always appreciated, thanks in advance.
Because you only add one address to it. Change it to this:

output_adress = []
for i in xrange(len(recent_tx_1["inputs"])):
    output_adress.append(recent_tx_1["inputs"][i]["prev_out"]["addr"])
print output_adress
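Equivalently, a list comprehension avoids the manual index loop. The structure below is a trimmed, made-up stand-in for one transaction from the API response (the addresses are fake sample values):

```python
# Hypothetical sample shaped like one tx: each input has a prev_out dict
# whose "addr" field holds the spending address.
recent_tx_1 = {"inputs": [
    {"prev_out": {"addr": "sample-addr-1"}},
    {"prev_out": {"addr": "sample-addr-2"}},
    {"prev_out": {"addr": "sample-addr-3"}},
]}

# One entry per input, not just inputs[0]:
output_adress = [inp["prev_out"]["addr"] for inp in recent_tx_1["inputs"]]
print(output_adress)
```

Iterating over the list directly (rather than over indices) is the idiomatic form and works unchanged in Python 2 and 3.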