Pyrebase's get() method returns an OrderedDict, and I was wondering how I would parse it to get the value.
Here's how and when I use Pyrebase's get() method:
pyre_game = db.child("games/data").order_by_child("id").equal_to(
game_object).limit_to_first(1).get()
And when I call
pyre_game.val()
Here's what I get in the console:
OrderedDict([('-LKYjwhuEMjwadDcfWAl', {'category': 'Main game', 'cover': {'cloudinary_id': 'eohx6zgumfvvjlqgaac6', 'url': '//images.igdb.com/igdb/image/upload/t_thumb/eohx6zgumfvvjlqgaac6.jpg'}, 'developers': [16083], 'first_release_date': 1532563200000, 'genres': [9, 14, 32], 'id': 105176, 'name': 'Arcane Golf', 'platforms': [6], 'release_dates': [{'category': 0, 'date': 1532563200000, 'human': '2018-Jul-26', 'm': 7, 'platform': 6, 'region': 8, 'y': 2018}], 'screenshots': [{'cloudinary_id': 'tgdsmj4ybqndrq9xrxe7', 'url': '//images.igdb.com/igdb/image/upload/t_thumb/tgdsmj4ybqndrq9xrxe7.jpg'}, {'cloudinary_id': 'ryxzsrfw8zrlfa1fwuxz', 'url': '//images.igdb.com/igdb/image/upload/t_thumb/ryxzsrfw8zrlfa1fwuxz.jpg'}, {'cloudinary_id': 'krlxlyg3r46w3mrsrozx', 'url': '//images.igdb.com/igdb/image/upload/t_thumb/krlxlyg3r46w3mrsrozx.jpg'}, {'cloudinary_id': 'xkofnlley4atbqbpc4em', 'url': '//images.igdb.com/igdb/image/upload/t_thumb/xkofnlley4atbqbpc4em.jpg'}, {'cloudinary_id': 'atr178vq39rcksei1bhd', 'url': '//images.igdb.com/igdb/image/upload/t_thumb/atr178vq39rcksei1bhd.jpg'}, {'cloudinary_id': 'qo8znn18apizvlzbzec5', 'url': '//images.igdb.com/igdb/image/upload/t_thumb/qo8znn18apizvlzbzec5.jpg'}], 'summary': 'Arcane Golf is a miniature golf puzzle game set in a fantasy world full of dungeons, dangers, gems, and geometry. Play across 200 levels set in 4 unique courses inspired by classic adventure games!', 'updated_at': 1533116562596, 'videos': [{'name': 'Trailer', 'video_id': 'khDsYapla0M'}], 'websites': [{'category': 8, 'url': 'https://www.instagram.com/gold5games'}, {'category': 5, 'url': 'https://twitter.com/Gold5Games'}, {'category': 13, 'url': 'https://store.steampowered.com/app/897800'}]})])
How would I parse this to get the value? The value is everything inside the {}; it starts with a 'category' key.
This should work:
for x in pyre_game.each():
    print(x.key(), x.val())
Alternatively, you can cast the OrderedDict to a plain dict and use it like any other dictionary:
pyre_game = dict(db.child("games/data").order_by_child("id").equal_to(game_object).limit_to_first(1).get().val())
print(pyre_game)
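If you only need the inner record itself (and not the Firebase push key), you can pull the first value out of the OrderedDict directly. This is a minimal sketch that simulates the result with a hand-built OrderedDict, assuming the query matched exactly one game:

```python
from collections import OrderedDict

# Simulated Pyrebase result: one push key mapping to the game record
result = OrderedDict([
    ('-LKYjwhuEMjwadDcfWAl',
     {'category': 'Main game', 'id': 105176, 'name': 'Arcane Golf'}),
])

# Grab the first (and only) value without needing to know the push key
game = next(iter(result.values()))
print(game['name'])  # Arcane Golf
```

The same `next(iter(...))` pattern works on the real `pyre_game.val()` as long as the query returned at least one record.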
Related
How do I use more than one list index? I want to search through more than just index 0 without writing another line that then searches index 1; that feels like a workaround.
This is my current code; I'm using an API whose response I load as JSON:
>result = response.json()
>{'exercises': [{'tag_id': 317, 'user_input': 'run', 'duration_min': 30, 'met': 9.8, 'nf_calories': 842.8, 'photo': {'highres': 'https://d2xdmhkmkbyw75.cloudfront.net/exercise/317_highres.jpg', 'thumb': 'https://d2xdmhkmkbyw75.cloudfront.net/exercise/317_thumb.jpg', 'is_user_uploaded': False}, 'compendium_code': 12050, 'name': 'running', 'description': None, 'benefits': None}, {'tag_id': 814, 'user_input': 'bike', 'duration_min': 1, 'met': 6.8, 'nf_calories': 19.49, 'photo': {'highres': None, 'thumb': None, 'is_user_uploaded': False}, 'compendium_code': 1020, 'name': 'bicycling', 'description': None, 'benefits': None}]}
Then I'm trying to get the 'name', but there are two instances of 'name' and I can only use 0, 1, 2, etc. as the index:
>exercises = result['exercises'][0]['name']
>exercisess = result['exercises'][1]['name']
>print(exercises)
>print(exercisess)
>running
>bicycling
Is there a way I can search the whole thing for keys and get their values without explicitly using 0, 1, 2 as the index?
I'm a noob at this, sorry if I formatted this question wrong.
You can use a list comprehension to add them all to a list and then print that:
data = {'exercises': [{'tag_id': 317, 'user_input': 'run', 'duration_min': 30, 'met': 9.8, 'nf_calories': 842.8, 'photo': {'highres': 'https://d2xdmhkmkbyw75.cloudfront.net/exercise/317_highres.jpg', 'thumb': 'https://d2xdmhkmkbyw75.cloudfront.net/exercise/317_thumb.jpg', 'is_user_uploaded': False}, 'compendium_code': 12050, 'name': 'running', 'description': None, 'benefits': None}, {'tag_id': 814, 'user_input': 'bike', 'duration_min': 1, 'met': 6.8, 'nf_calories': 19.49, 'photo': {'highres': None, 'thumb': None, 'is_user_uploaded': False}, 'compendium_code': 1020, 'name': 'bicycling', 'description': None, 'benefits': None}]}
names = [x['name'] for x in data['exercises']]
print(names)
# output: ['running', 'bicycling']
It depends on what you are after.
You can use a for loop:
for ex in result['exercises']:
    print(ex['name'])
The name ex will refer to each of the elements in turn.
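If you want this as a reusable helper rather than an inline loop, a small function can collect every value for a given key across a list of dicts (the function and variable names here are my own, not from the question):

```python
def collect_values(records, key):
    """Return the value of `key` from each dict in `records`, skipping dicts that lack it."""
    return [rec[key] for rec in records if key in rec]

# Trimmed-down version of the API response from the question
result = {'exercises': [
    {'name': 'running', 'nf_calories': 842.8},
    {'name': 'bicycling', 'nf_calories': 19.49},
]}

print(collect_values(result['exercises'], 'name'))         # ['running', 'bicycling']
print(collect_values(result['exercises'], 'nf_calories'))  # [842.8, 19.49]
```

This avoids hard-coding indexes entirely: it works the same whether there are two exercises or two hundred.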
I have tried to find the answer but I could not find it.
I am looking for a way to save a JSON file from Python to my computer.
I call the API:
configuration = api.Configuration()
configuration.api_key['X-XXXX-Application-ID'] = 'xxxxxxx'
configuration.api_key['X-XXX-Application-Key'] = 'xxxxxxxx1'
## List our parameters as search operators
opts = {
    'title': 'Deutsche Bank',
    'body': 'fraud',
    'language': ['en'],
    'published_at_start': 'NOW-7DAYS',
    'published_at_end': 'NOW',
    'per_page': 1,
    'sort_by': 'relevance'
}

try:
    ## Make a call to the Stories endpoint for stories that meet the criteria of the search operators
    api_response = api_instance.list_stories(**opts)
    ## Print the returned story
    pp(api_response.stories)
except ApiException as e:
    print('Exception when calling DefaultApi->list_stories: %s\n' % e)
I get a response like this:
[{'author': {'avatar_url': None, 'id': 1688440, 'name': 'Pranav Nair'},
'body': 'The law firm will investigate whether the bank or its officials have '
'engaged in securities fraud or unlawful business practices. '
'Industries: Bank Referenced Companies: Deutsche Bank',
'categories': [{'confident': False,
'id': 'IAB11-5',
'level': 2,
'links': {'_self': 'https://,
'parent': 'https://'},
'score': 0.39,
'taxonomy': 'iab-qag'},
{'confident': False,
'id': 'IAB3-12',
'level': 2,
'links': {'_self': 'https://api/v1/classify/taxonomy/iab-qag/IAB3-12',
'score': 0.16,
'taxonomy': 'iab-qag'},
'clusters': [],
'entities': {'body': [{'indices': [[168, 180]],
'links': {'dbpedia': 'http://dbpedia.org/resource/Deutsche_Bank'},
'score': 1.0,
'text': 'Deutsche Bank',
'types': ['Bank',
'Organisation',
'Company',
'Banking',
'Agent']},
{'indices': [[80, 95]],
'links': {'dbpedia': 'http://dbpedia.org/resource/Securities_fraud'},
'score': 1.0,
'text': 'securities fraud',
'types': ['Practice', 'Company']},
'hashtags': ['#DeutscheBank', '#Bank', '#SecuritiesFraud'],
'id': 3004661328,
'keywords': ['Deutsche',
'behalf',
'Bank',
'firm',
'investors',
'Deutsche Bank',
'bank',
'fraud',
'unlawful'],
'language': 'en',
'links': {'canonical': None,
'coverages': '/coverages?story_id=3004661328',
'permalink': 'https://www.snl.com/interactivex/article.aspx?KPLT=7&id=58657069',
'related_stories': '/related_stories?story_id=3004661328'},
'media': [],
'paragraphs_count': 1,
'published_at': datetime.datetime(2020, 5, 19, 16, 8, 5, tzinfo=tzutc()),
'sentences_count': 2,
'sentiment': {'body': {'polarity': 'positive', 'score': 0.599704},
'title': {'polarity': 'neutral', 'score': 0.841333}},
'social_shares_count': {'facebook': [],
'google_plus': [],
'source': {'description': None,
'domain': 'snl.com',
'home_page_url': 'http://www.snl.com/',
'id': 8256,
'links_in_count': None,
'locations': [{'city': 'Charlottesville',
'country': 'US',
'state': 'Virginia'}],
'logo_url': None,
'name': 'SNL Financial',
'scopes': [{'city': None,
'country': 'US',
'level': 'national',
'state': None},
{'city': None,
'country': None,
'level': 'international',
'state': None}],
'title': None},
'summary': {'sentences': ['The law firm will investigate whether the bank or '
'its officials have engaged in securities fraud or '
'unlawful business practices.',
'Industries: Bank Referenced Companies: Deutsche '
'Bank']},
'title': "Law firm to investigate Deutsche Bank's US ops on behalf of "
'investors',
'translations': {'en': None},
'words_count': 26}]
The documentation says "Stories you retrieve from the API are returned as JSON objects by default. These JSON story objects contain 22 top-level fields, whereas a full story object will contain 95 unique data points".
The returned object is a list. When I try to save it as a JSON file I get the error "TypeError: Object of type Story is not JSON serializable".
How can I save a JSON file on my computer?
The response you got is not JSON: JSON uses double quotes, but here the strings use single quotes. Copy and paste your response into the following link to see the issues:
http://json.parser.online.fr/
If you change it like
[{"author": {"avatar_url": null, "id": 1688440, "name": "Pranav Nair"},
"body": "......
it will work. You can use Python's json module to do it:
import json
json.loads(the_string_from_the_response)
Really, though, it should be the API provider's job to return valid JSON; as a workaround, you can convert the response yourself before loading it.
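Another option, since the error says the Story objects themselves aren't serializable, is to give json.dump a fallback that converts unknown objects to dictionaries. This is a sketch under the assumption that Story instances keep their fields as plain attributes; the Story stand-in class below is illustrative, not the library's real class:

```python
import json

class Story:  # stand-in for the API client's Story object
    def __init__(self, title, body):
        self.title = title
        self.body = body

stories = [Story('Law firm to investigate', 'The law firm will investigate...')]

def to_serializable(obj):
    # Fall back to the object's attribute dict; adjust for the real Story class
    return obj.__dict__

# json.dump calls to_serializable for anything it can't encode natively
with open('stories.json', 'w') as f:
    json.dump(stories, f, default=to_serializable, indent=2)

with open('stories.json') as f:
    print(json.load(f)[0]['title'])  # Law firm to investigate
```

If the real Story class exposes a proper conversion method, prefer that inside to_serializable over reading __dict__ directly.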
I have a dict that I am trying to obtain certain data from; an example of the dict is as follows:
{
'totalGames': 1,
'dates': [{
'totalGames': 1,
'totalMatches': 0,
'matches': [],
'totalEvents': 0,
'totalItems': 1,
'games': [{
'status': {
'codedGameState': '7',
'abstractGameState': 'Final',
'startTimeTBD': False,
'detailedState': 'Final',
'statusCode': '7',
},
'season': '20172018',
'gameDate': '2018-05-20T19:00:00Z',
'venue': {'link': '/api/v1/venues/null',
'name': 'Bell MTS Place'},
'gameType': 'P',
'teams': {'home': {'leagueRecord': {'wins': 9,
'losses': 8, 'type': 'league'}, 'score': 1,
'team': {'link': '/api/v1/teams/52',
'id': 52, 'name': 'Winnipeg Jets'}},
'away': {'leagueRecord': {'wins': 12,
'losses': 3, 'type': 'league'}, 'score': 2,
'team': {'link': '/api/v1/teams/54',
'id': 54, 'name': 'Vegas Golden Knights'}}},
'content': {'link': '/api/v1/game/2017030325/content'},
'link': '/api/v1/game/2017030325/feed/live',
'gamePk': 2017030325,
}],
'date': '2018-05-20',
'events': [],
}],
'totalMatches': 0,
'copyright': 'NHL and the NHL Shield are registered trademarks of the National Hockey League. NHL and NHL team marks are the property of the NHL and its teams. \xa9 NHL 2018. All Rights Reserved.',
'totalEvents': 0,
'totalItems': 1,
'wait': 10,
}
I am interested in obtaining the score for a certain team if they played that night. For example, if my team of interest is the Vegas Golden Knights, I would like to create a variable that contains their score (2 in this case). I am completely stuck on this, so any help would be greatly appreciated!
This just turns into ugly parsing, but it is easily doable by following the JSON structure; I would recommend flattening the structure for your purposes. With that said, if you'd like to find the score of a particular team on a particular date, you could do this:
def find_score_by_team(gamedict, team_of_interest, date_of_interest):
    for date in gamedict['dates']:
        for game in date['games']:
            if game['gameDate'].startswith(date_of_interest):
                for advantage in game['teams']:
                    if game['teams'][advantage]['team']['name'] == team_of_interest:
                        return game['teams'][advantage]['score']
    return -1
Example query:
>>> d = {'totalGames':1,'dates':[{'totalGames':1,'totalMatches':0,'matches':[],'totalEvents':0,'totalItems':1,'games':[{'status':{'codedGameState':'7','abstractGameState':'Final','startTimeTBD':False,'detailedState':'Final','statusCode':'7',},'season':'20172018','gameDate':'2018-05-20T19:00:00Z','venue':{'link':'/api/v1/venues/null','name':'BellMTSPlace'},'gameType':'P','teams':{'home':{'leagueRecord':{'wins':9,'losses':8,'type':'league'},'score':1,'team':{'link':'/api/v1/teams/52','id':52,'name':'WinnipegJets'}},'away':{'leagueRecord':{'wins':12,'losses':3,'type':'league'},'score':2,'team':{'link':'/api/v1/teams/54','id':54,'name':'VegasGoldenKnights'}}},'content':{'link':'/api/v1/game/2017030325/content'},'link':'/api/v1/game/2017030325/feed/live','gamePk':2017030325,}],'date':u'2018-05-20','events':[],}],'totalMatches':0,'copyright':'NHLandtheNHLShieldareregisteredtrademarksoftheNationalHockeyLeague.NHLandNHLteammarksarethepropertyoftheNHLanditsteams.\xa9NHL2018.AllRightsReserved.','totalEvents':0,'totalItems':1,'wait':10,}
>>> find_score_by_team(d, 'VegasGoldenKnights', '2018-05-20')
2
This returns -1 if the team didn't play that night, otherwise it returns the team's score.
I am trying to scrape some ticketing inventory info using Stubhub's API, but I cannot seem to figure out how to loop through the get request.
I basically want to loop through multiple events. The eventid_list is a list of eventids. The code I have is below:
inventory_url = 'https://api.stubhub.com/search/inventory/v2'
for eventid in eventid_list:
    data = {'eventid': eventid, 'rows': 500}
    inventory = requests.get(inventory_url, headers=headers, params=data)
    inv = inventory.json()
    print(inv)
listing_df = pd.DataFrame(inv['listing'])
When I run this, the dataframe only returns results for one event, instead of multiple. What am I doing wrong?
EDIT: print(inv) outputs something like this:
{
'eventId': 102994860,
'totalListings': 82,
'totalTickets': 236,
'minQuantity': 1,
'maxQuantity': 6,
'listing': [
{
'listingId': 1297697413,
'currentPrice': {'amount': 108.58, 'currency': 'USD'},
'listingPrice': {'amount': 88.4, 'currency': 'USD'},
'sectionId': 1638686,
'row': 'E',
'quantity': 6,
'sellerSectionName': 'FRONT MEZZANINE RIGHT',
'sectionName': 'Front Mezzanine Sides',
'seatNumbers': '2,4,6,8,10,12',
'zoneId': 240236,
'zoneName': 'Front Mezzanine',
'deliveryTypeList': [5],
'deliveryMethodList': [23, 24, 25],
'isGA': 0,
'dirtyTicketInd': False,
'splitOption': '2',
'ticketSplit': '1',
'splitVector': [1, 2, 3, 4, 6],
'sellerOwnInd': 0,
'score': 0.0
},
...
{
'listingId': 1297697417,
'currentPrice': {'amount': 108.58, 'currency': 'USD'},
'listingPrice': {'amount': 88.4, 'currency': 'USD'},
'sectionId': 1638686,
'row': 'D',
'quantity': 3,
'sellerSectionName': 'FRONT MEZZANINE RIGHT',
'sectionName': 'Front Mezzanine Sides',
'seatNumbers': '2,4,6',
'zoneId': 240236,
'zoneName': 'Front Mezzanine',
'deliveryTypeList': [5],
'deliveryMethodList': [23, 24, 25],
'isGA': 0,
'dirtyTicketInd': False,
'splitOption': '2',
'ticketSplit': '1',
'splitVector': [1, 3],
'sellerOwnInd': 0,
'score': 0.0
},
]
}
I'm guessing inventory.json()['listing'] is the list of listings for one event. If so, you can try this:
import itertools

inventory_url = 'https://api.stubhub.com/search/inventory/v2'

def get_event(eventid):
    """Given an event id, return inventory['listing']."""
    data = {'eventid': eventid, 'rows': 500}
    inventory = requests.get(inventory_url, headers=headers, params=data)
    return inventory.json().get('listing', [])

# Concatenate the listings of all events
events = itertools.chain.from_iterable(get_event(eventid) for eventid in eventid_list)
listing_df = pd.DataFrame(list(events))
This is just a starting point: you will also have to handle the cases where inventory.status_code != 200. The resulting frame probably isn't very useful as-is either, so you may want to flatten some of the listing attributes, like currentPrice and listingPrice.
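For that flattening step, pandas can expand the nested price dicts into dotted column names. Here's a sketch with pd.json_normalize, using made-up listings shaped like the response shown in the question:

```python
import pandas as pd

# Two listings trimmed down from the question's example response
listings = [
    {'listingId': 1297697413,
     'currentPrice': {'amount': 108.58, 'currency': 'USD'},
     'listingPrice': {'amount': 88.4, 'currency': 'USD'},
     'row': 'E'},
    {'listingId': 1297697417,
     'currentPrice': {'amount': 108.58, 'currency': 'USD'},
     'listingPrice': {'amount': 88.4, 'currency': 'USD'},
     'row': 'D'},
]

# Nested dicts become flat columns: 'currentPrice.amount', 'listingPrice.currency', ...
df = pd.json_normalize(listings)
print(df['currentPrice.amount'].tolist())  # [108.58, 108.58]
```

Each nested dict key becomes its own scalar column, which makes the frame much easier to filter and aggregate than columns holding raw dicts.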
I've been trying to parse an XML feed into a Pandas dataframe and can't work out where I'm going wrong.
import pandas as pd
import requests
import lxml.objectify
path = "http://www2.cineworld.co.uk/syndication/listings.xml"
xml = lxml.objectify.parse(path)
root = xml.getroot()
The next bit of code is to parse through the bits I want and create a list of show dictionaries.
shows_list = []
for r in root.cinema:
    rec = {}
    rec['name'] = r.attrib['name']
    rec['info'] = r.attrib["root"] + r.attrib['url']
    listing = r.find("listing")
    for f in listing.film:
        film = rec
        film['title'] = f.attrib['title']
        film['rating'] = f.attrib['rating']
        shows = f.find("shows")
        for s in shows['show']:
            show = rec
            show['time'] = s.attrib['time']
            show['url'] = s.attrib['url']
            #print show
            shows_list.append(rec)

df = pd.DataFrame(shows_list)
When I run the code, the film and time fields are replicated across multiple rows. However, if I put a print statement into the code (it's commented out above), the dictionaries appear as I would expect.
What am I doing wrong? Please feel free to let me know if there's a more Pythonic way of doing the parsing.
EDIT: To clarify:
These are the last five rows of the data if I use a print statement to check what's happening as I loop through.
{'info': 'http://cineworld.co.uk/cinemas/107/information', 'rating': 'TBC', 'name': 'Cineworld Stoke-on-Trent', 'title': "Dad's Army", 'url': '/booking?performance=4729365&seats=STANDARD', 'time': '2016-02-07T20:45:00'}
{'info': 'http://cineworld.co.uk/cinemas/107/information', 'rating': 'TBC', 'name': 'Cineworld Stoke-on-Trent', 'title': "Dad's Army", 'url': '/booking?performance=4729366&seats=STANDARD', 'time': '2016-02-08T20:45:00'}
{'info': 'http://cineworld.co.uk/cinemas/107/information', 'rating': 'TBC', 'name': 'Cineworld Stoke-on-Trent', 'title': "Dad's Army", 'url': '/booking?performance=4729367&seats=STANDARD', 'time': '2016-02-09T20:45:00'}
{'info': 'http://cineworld.co.uk/cinemas/107/information', 'rating': 'TBC', 'name': 'Cineworld Stoke-on-Trent', 'title': "Dad's Army", 'url': '/booking?performance=4729368&seats=STANDARD', 'time': '2016-02-10T20:45:00'}
{'info': 'http://cineworld.co.uk/cinemas/107/information', 'rating': 'TBC', 'name': 'Cineworld Stoke-on-Trent', 'title': "Dad's Army", 'url': '/booking?performance=4729369&seats=STANDARD', 'time': '2016-02-11T20:45:00'}
{'info': 'http://cineworld.co.uk/cinemas/107/information', 'rating': 'PG', 'name': 'Cineworld Stoke-on-Trent', 'title': 'Autism Friendly Screening - Goosebumps', 'url': '/booking?performance=4782937&seats=STANDARD', 'time': '2016-02-07T11:00:00'}
This is the end of the list:
...
{'info': 'http://cineworld.co.uk/cinemas/107/information',
'name': 'Cineworld Stoke-on-Trent',
'rating': 'PG',
'time': '2016-02-07T11:00:00',
'title': 'Autism Friendly Screening - Goosebumps',
'url': '/booking?performance=4782937&seats=STANDARD'},
{'info': 'http://cineworld.co.uk/cinemas/107/information',
'name': 'Cineworld Stoke-on-Trent',
'rating': 'PG',
'time': '2016-02-07T11:00:00',
'title': 'Autism Friendly Screening - Goosebumps',
'url': '/booking?performance=4782937&seats=STANDARD'},
{'info': 'http://cineworld.co.uk/cinemas/107/information',
'name': 'Cineworld Stoke-on-Trent',
'rating': 'PG',
'time': '2016-02-07T11:00:00',
'title': 'Autism Friendly Screening - Goosebumps',
'url': '/booking?performance=4782937&seats=STANDARD'},
{'info': 'http://cineworld.co.uk/cinemas/107/information',
'name': 'Cineworld Stoke-on-Trent',
'rating': 'PG',
'time': '2016-02-07T11:00:00',
'title': 'Autism Friendly Screening - Goosebumps',
'url': '/booking?performance=4782937&seats=STANDARD'}]
Your code only has one object that keeps getting updated: rec. Try this:
from copy import copy

shows_list = []
for r in root.cinema:
    rec = {}
    rec['name'] = r.attrib['name']
    rec['info'] = r.attrib["root"] + r.attrib['url']
    listing = r.find("listing")
    for f in listing.film:
        film = copy(rec)  # New object
        film['title'] = f.attrib['title']
        film['rating'] = f.attrib['rating']
        shows = f.find("shows")
        for s in shows['show']:
            show = copy(film)  # New object, changed reference
            show['time'] = s.attrib['time']
            show['url'] = s.attrib['url']
            #print show
            shows_list.append(show)  # Changed reference

df = pd.DataFrame(shows_list)
With this structure, the data in rec is copied into each film, and the data in each film is copied into each show. Then, at the end, show is added to the shows_list.
You might want to read this article to learn more about what's happening in your line film = rec, i.e. you are giving another name to the original dictionary rather than creating a new dictionary.
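A minimal, self-contained demonstration of the difference between aliasing a dict and copying it (the values are borrowed from the question's data, the variable names are my own):

```python
from copy import copy

rec = {'name': 'Cineworld Stoke-on-Trent'}

alias = rec              # just another name for the SAME dict
alias['title'] = "Dad's Army"
print('title' in rec)    # True -- rec was modified through the alias

fresh = copy(rec)        # a new, independent (shallow) copy
fresh['title'] = 'Goosebumps'
print(rec['title'])      # Dad's Army -- the original is untouched
```

This is exactly why every appended "record" in the original loop ended up showing the values from the last show processed: the list held many references to one dict.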