KeyError with Riot API Matchv5 When Trying To Pull Data - python

I'm trying to pull a list of team and player stats from match IDs. Everything looks fine to me, but when I run my for loops to call the functions that pull the stats I want, it just prints the error from my try/except block. I'm still pretty new to Python and this is my first project, so I've tried everything I can think of over the past few days with no luck. I believe the problem is with my actual pull request, but I'm not sure, since I'm also using a GitHub library I found to help me with the Riot API while I change and update it to get the info I want.
def get_match_json(matchid):
    url_pull_match = "https://{}.api.riotgames.com/lol/match/v5/matches/{}/timeline?api_key={}".format(region, matchid, api_key)
    match_data_all = requests.get(url_pull_match).json()
    # Check to make sure match is long enough
    try:
        length_match = match_data_all['frames'][15]
        return match_data_all
    except IndexError:
        return ['Match is too short. Skipping.']
And then this is a shortened version of the stat function:
def get_player_stats(match_data, player):
    # Get player information at the fifteenth minute of the game.
    player_query = match_data['frames'][15]['participantFrames'][player]
    player_team = player_query['teamId']
    player_total_gold = player_query['totalGold']
    player_level = player_query['level']
There are some other functions in the code as well, but I'm not sure whether they are also faulty or whether they are needed to figure out the error. Here is the for loop that calls the request and defines the variable matchid:
for matchid_batch in all_batches:
    match_data = []
    for match_id in matchid_batch:
        time.sleep(1.5)
        if match_id == 'MatchId':
            pass
        else:
            try:
                match_entry = get_match_row(match_id)
                if match_entry[0] == 'Match is too short. Skipping.':
                    print('Match', match_id, "is too short.")
                else:
                    match_entry = get_match_row(match_id).reshape(1, -1)
                    match_data.append(np.array(match_entry))
            except KeyError:
                print('KeyError.')
    match_data = np.array(match_data)
    match_data.shape = -1, 17
    df = pd.DataFrame(match_data, columns=column_titles)
    df.to_csv('Match_data_Diamond.csv', mode='a')
    print('Done Batch!')
Since this is my first project, any help would be appreciated. I can't find any info on this particular subject, so I really don't know where to look to learn on my own why it's not working.

I believe your issue is that the 'frames' array is nested under the 'info' object.
def get_match_json(matchid):
    url_pull_match = "https://{}.api.riotgames.com/lol/match/v5/matches/{}/timeline?api_key={}".format(region, matchid, api_key)
    match_data_all = requests.get(url_pull_match).json()
    try:
        length_match = match_data_all['info']['frames'][15]
        return match_data_all
    except IndexError:
        return ['Match is too short. Skipping.']

def get_player_stats(match_data, player):  # player has to be an int (1-10)
    # Get player information at the fifteenth minute of the game.
    player_query = match_data['info']['frames'][15]['participantFrames'][str(player)]
    # player_team = player_query['teamId'] - it is not possible to get the teamId from this endpoint
    player_total_gold = player_query['totalGold']
    player_level = player_query['level']
    return player_query
This example worked for me. Unfortunately it is not possible to obtain the teamId through this endpoint alone. Usually players 1-5 are on team 100 (blue side) and players 6-10 on team 200 (red side).
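If that convention is good enough for your purposes, you can derive the team from the participant number yourself. A minimal sketch (the helper name is made up, and the 1-5/6-10 split is only the usual convention described above, not something the endpoint guarantees):
def get_team_id(player):
    # Assumption: participants 1-5 are blue side (teamId 100),
    # participants 6-10 are red side (teamId 200).
    return 100 if int(player) <= 5 else 200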

Unable to append unwatched EPs to list

The idea of the code is to add unwatched EPs to an existing playlist in index order (ep 1 of Show X, ep 1 of Show Z, and so on), regardless of air date:
from plexapi.server import PlexServer

baseurl = 'http://0.0.0.0:0000/'
token = '0000000000000'
plex = PlexServer(baseurl, token)

episode = 0
first_ep_name = []
for x in plex.library.section('Anime').search(unwatched=True):
    try:
        for y in plex.library.section('Anime').get(x.title).episodes()[episode]:
            if plex.library.section('Anime').get(x.title).episodes()[episode].isWatched:
                episode += 1
                first_ep_name.append(y)
            else:
                episode = 0
                first_ep_name.append(y)
    except:
        continue
plex.playlist('Anime Playlist').addItems(first_ep_name)
But when I run it, it always adds watched EPs. When I debug the code in the Thonny IDE it seems to be doing its job, so I am not sure what's wrong with the code.
Any ideas?
I'm thinking the error might be here:
plex.playlist('Anime Playlist').addItems(first_ep_name)
but according to the documentation addItems expects a list, and my list first_ep_name is already appending unwatched episodes in the correct order. In theory addItems should recognize the specific episode and not only the series name, but I am not sure anymore.
In case somebody out there is having the same issue with plexapi: I was able to find a way to get this project working properly:
from plexapi.server import PlexServer

baseurl = 'insert plex url here'
token = 'plex token here'
plex = PlexServer(baseurl, token)

anime_plex = []
scrapped_playlist = []
for x in plex.library.section('Anime').search(unwatched=True):
    anime_plex.append(x)

while len(anime_plex) > 0:
    episode_list = []
    for y in plex.library.section('Anime').get(anime_plex[0].title).episodes():
        episode_list.append(y)
    ep_checker = True
    while ep_checker:
        if episode_list[0].isWatched:
            episode_list.pop(0)
        else:
            scrapped_playlist.append(episode_list[0])
            episode_list.clear()
            ep_checker = False
    anime_plex.pop(0)

# plex.playlist('Anime Playlist').addItems(scrapped_playlist)
plex.playlist('Anime Playlist').delete()
plex.createPlaylist('Anime Playlist', section='Anime', items=scrapped_playlist)
Basically, what I am doing with that code is looping through each anime series I have; if EP #X is watched, it gets popped from the list until a False is found, and that episode is appended to an empty list that I later use for creating/adding to the playlist.
The last lines of the code can be commented in or out depending on the purpose: creating the anime playlist or adding items to it.
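For what it's worth, the same "first unwatched episode per series" logic can be written more compactly. A minimal sketch, assuming the same plexapi objects used above:
# Take the first unwatched episode of each series returned by the search.
scrapped_playlist = []
for show in plex.library.section('Anime').search(unwatched=True):
    first_unwatched = next((ep for ep in show.episodes() if not ep.isWatched), None)
    if first_unwatched is not None:
        scrapped_playlist.append(first_unwatched)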

Analysing YouTube comments using Python -- parameter has disabled comments

I'm trying to get into text analysis using YouTube comments. I've been using the code from the following website to scrape YouTube:
https://www.pingshiuanchua.com/blog/post/using-youtube-api-to-analyse-youtube-comments-on-python
The script starts working, but there is a section of the code that throws an error if comments have been disabled on a video. I can't find a way to check whether comments are disabled (or whether any comments exist at all) so that I can skip such a video and continue on to the next one.
The code chunk in question creating the error is:
# =============================================================================
# Get Comments of Top Videos
# =============================================================================
video_id_pop = []
channel_pop = []
video_title_pop = []
video_desc_pop = []
comments_pop = []
comment_id_pop = []
reply_count_pop = []
like_count_pop = []

from tqdm import tqdm

for i, video in enumerate(tqdm(video_id, ncols = 100)):
    response = service.commentThreads().list(
        part = 'snippet',
        videoId = video,
        maxResults = 100, # Only take top 100 comments...
        order = 'relevance', #... ranked on relevance
        textFormat = 'plainText',
    ).execute()

    comments_temp = []
    comment_id_temp = []
    reply_count_temp = []
    like_count_temp = []
    for item in response['items']:
        comments_temp.append(item['snippet']['topLevelComment']['snippet']['textDisplay'])
        comment_id_temp.append(item['snippet']['topLevelComment']['id'])
        reply_count_temp.append(item['snippet']['totalReplyCount'])
        like_count_temp.append(item['snippet']['topLevelComment']['snippet']['likeCount'])
    comments_pop.extend(comments_temp)
    comment_id_pop.extend(comment_id_temp)
    reply_count_pop.extend(reply_count_temp)
    like_count_pop.extend(like_count_temp)
    video_id_pop.extend([video_id[i]]*len(comments_temp))
    channel_pop.extend([channel[i]]*len(comments_temp))
    video_title_pop.extend([video_title[i]]*len(comments_temp))
    video_desc_pop.extend([video_desc[i]]*len(comments_temp))

query_pop = [query] * len(video_id_pop)
Edited to add:
The person who created the code left a message to fix the error saying:
"You can wrap the query part of the code in a try...except statement, where if the try statement (the query part) failed, you can push an except of blank response or "error" string into the list."
I have no idea how to carry this out, if it makes sense to anyone else...
Note: this is not necessarily "good" coding style, but it's the sort of thing I would do if I ran into this problem when I was writing a script for my own short-term, personal use.
Python (and many other languages) has a way to catch exceptions and handle them without crashing. Used properly, this can be a very nice way to handle bad data.
https://docs.python.org/3.8/tutorial/errors.html is a good overview of exceptions. In general, the format they take is something like
try:
    code_that_can_error()
except ExceptionThatWillBeThrown as ex:
    handle_exception()
    print(ex)  # ex is an object that has information about what went wrong
finally:
    clean_up()
(finally is particularly useful if you have something you need to call close on, like a file: if an exception is thrown you might never reach the close call, but a finally block is guaranteed to run regardless.)
In your case, all we need to do is ignore the error and move on to the next video.
for i, video in enumerate(tqdm(video_id, ncols = 100)):
    try:
        response = service.commentThreads().list(
            part = 'snippet',
            videoId = video,
            maxResults = 100, # Only take top 100 comments...
            order = 'relevance', #... ranked on relevance
            textFormat = 'plainText',
        ).execute()
        comments_temp = []
        [...]
        video_desc_pop.extend([video_desc[i]]*len(comments_temp))
    except:
        # Something threw an error. Skip that video and move on
        print(f"{video} has comments disabled, or something else went wrong")

query_pop = [query] * len(video_id_pop)
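A bare except will swallow anything, including bugs elsewhere in your code. If you want to be a bit stricter, you could catch just the API error. A minimal sketch, assuming the script uses google-api-python-client, whose calls raise googleapiclient.errors.HttpError when comments are disabled:
from googleapiclient.errors import HttpError

for i, video in enumerate(tqdm(video_id, ncols = 100)):
    try:
        response = service.commentThreads().list(
            part = 'snippet',
            videoId = video,
            maxResults = 100,
            order = 'relevance',
            textFormat = 'plainText',
        ).execute()
    except HttpError as ex:
        # Typically a 403 with reason "commentsDisabled" for such videos
        print(f"Skipping {video}: {ex}")
        continue
    # ... process response['items'] as above ...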

Why does this for loop return a different sized list than expected?

I'm doing a data analysis project using the spotipy and numpy libraries. I've figured out how to achieve my expected result, but I don't know exactly why a slight change to my code (using a for loop) causes it to not work. Here is my code:
def get_user_playlist(username, playlist_id, sp):
    offset = 0
    playlist_songs = sp.user_playlist_tracks(username, playlist_id, limit=100, fields=None, offset=offset, market=None)['items']
    return playlist_songs

def create_dataframe(playlist_songs):
    playlist_df_columns = ['artist','track_name','id','explicit','duration','danceability','loudness','tempo']
    #audio_analysis_columns = ['danceability','loudness','tempo']
    playlist_df = pd.DataFrame(columns=playlist_df_columns)
    # song = dict object containing song
    playlist_df['artist'] = np.array([song['track']["album"]["artists"][0]["name"] for song in playlist_songs])
    playlist_df['track_name'] = np.array([song['track']['name'] for song in playlist_songs])
    playlist_df['id'] = np.array([song['track']['id'] for song in playlist_songs])
    playlist_df['explicit'] = np.array([song['track']['explicit'] for song in playlist_songs])

    for song in playlist_songs:
        audio_analysis = sp.audio_features(song['track']['id'])
        # returning audio_analysis for testing purposes.
        return audio_analysis
    #return playlist_df
The important part is the for loop. When I run this code, the length of the audio_analysis list is 1:
for song in playlist_songs:
    audio_analysis = sp.audio_features(song['track']['id'])
However, it works when I remove the for loop and do this instead; the length of the audio_analysis list is 94, as expected:
audio_analysis = sp.audio_features(playlist_df['id'])
For reference, here is the code that prints the length:
playlist = get_user_playlist('username', 'playlist_name', sp)
audio_analysis = create_dataframe(playlist)
print(len(audio_analysis))
My question is: why does the for loop not work as I expect? Is my code not accessing the same information? Why isn't using a for loop to access information the same as using the playlist_df['id'] column directly?
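For reference while comparing the two versions: in general, a return statement inside a for loop exits the function on the first iteration, so a loop written that way can only ever hand back one result. A standalone illustration:
def first_only(items):
    for item in items:
        return item  # the function exits here on the very first iteration

print(first_only([10, 20, 30]))  # prints 10, not all three values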

Python Geolocator Geocode location is None but still works + random string is added (weirdest error I ever saw)

At the moment I have to figure out where "favicon.ico" comes from. It is really super weird.
I use geopy to geocode a location into latitude and longitude, then I use these values to create markers on a Google map (Flask GoogleMaps).
This is the piece of code:
try:
    print findroomcity
    location = geolocator.geocode(findroomcity)
    print location, location.latitude, location.longitude
    if location:
        mymap = Map(
            identifier="view-side",
            lat=location.latitude,
            lng=location.longitude,
            markers=[(location.latitude, location.longitude)],
            zoom=12
        )
    else:
        print "location is none"
In this case findroomcity is "dortmund". It is actually grabbed from a form.
If I submit the form the map is actually created, so it does not use the else block. But it tells me that location is NoneType and has no attribute latitude. Judging from the outputs of the prints, the try block is called three times, and the first time findroomcity is "favicon.ico", as is the last time.
I don't even use "favicon.ico" anywhere in my whole project. I know it must come from somewhere, but I checked every .py file and also searched every print statement. I am really super confused and keep searching, but maybe someone has encountered something similar.
Here is the whole method which creates the map:
# The filter function with Google Maps
@app.route('/<findroomcity>', methods=["GET", "POST"])
def find_room(findroomcity):
    form = FilterZimmerForm()
    if form.validate_on_submit():
        query = Zimmer.query
        filter_list = ["haustiere_erlaubt", "bettwaesche_wird_gestellt", "grill_vorhanden",
                       "safe_vorhanden", "kuehlschrank_vorhanden", "rauchen_erlaubt",
                       "parkplatz_vorhanden", "kochmoeglichkeit_vorhanden",
                       "restaurant_im_haus_vorhanden", "handtuecher_werden_gestellt",
                       "tv_vorhanden", "waschmoeglichkeit_vorhanden", "wlan_vorhanden"]
        for filter_name in filter_list:
            if getattr(form, filter_name).data:
                query = query.filter(getattr(Zimmer, filter_name).is_(True))
        all_rooms_in_city = query.all()
    else:
        all_rooms_in_city = Zimmer.query.order_by(desc("stadt")).all()
    try:
        print findroomcity
        location = geolocator.geocode(findroomcity)
        print location, location.latitude, location.longitude
        if location:
            mymap = Map(
                identifier="view-side",
                lat=location.latitude,
                lng=location.longitude,
                markers=[(location.latitude, location.longitude)],
                zoom=12
            )
        else:
            print "location is none"
    except AttributeError as e:
        flash("Ort nicht gefunden")  # "Location not found"
        print e
        return redirect(url_for('index'))
    except GeocoderTimedOut as e:
        print e
        sleep(1)
    return render_template('zimmer_gefunden.html', mymap=mymap, all_rooms_in_city=all_rooms_in_city, findroomcity=findroomcity, form=form)
EDIT
I really have no idea where it comes from. For now I use:
if location.latitude is not None:
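One likely explanation, offered as a note: browsers automatically request /favicon.ico on every page load, and a catch-all route like /<findroomcity> matches that request too, so the view also runs with findroomcity set to "favicon.ico", for which geocode returns None. A minimal guard, sketched against the route shown above:
@app.route('/<findroomcity>', methods=["GET", "POST"])
def find_room(findroomcity):
    # Ignore the browser's automatic favicon request instead of geocoding it.
    if findroomcity == 'favicon.ico':
        return '', 404
    ...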

Do I need to use transactions in Google App Engine

update 0
My def post() code has changed dramatically: originally it was based on a digital form which included both checkboxes and text entry fields, whereas the current design, to be more paper-like, uses text entry fields only. As a result, however, I have other problems which may be solved by one of the proposed solutions, but I cannot exactly follow that proposed solution, so let me try to explain the new design and the problems.
The smaller problem is the inefficiency of my implementation: in def post() I create a distinct name for each input timeslot, which is a long string <courtname><timeslotstarthour><timeslotstartminute>. In my code this name is read in a nested for loop with the following snippet (very inefficient, I imagine):
tempreservation = courtname + str(time[0]) + str(time[1])
name = self.request.get('tempreservation', None)
The more serious immediate problem is that my def post() code is never reached, and I cannot figure out why (maybe it wasn't being reached before either, but I had not tested that far). I wonder if the problem is that, for now, I want both the post and the get to "finish" the same way. The first line below is for post() and the second is for get():
return webapp2.redirect("/read/%s" % location_id)
self.render_template('read.html', {'courts': courts,'location': location, ... etc ...}
My new post() is as follows. Notice I have left the logging.info calls in the code to see if I ever get there.
class MainPageCourt(BaseHandler):
    def post(self, location_id):
        logging.info("in MainPageCourt post ")
        startTime = self.request.get('startTime')
        endTime = self.request.get('endTime')
        day = self.request.get('day')
        weekday = self.request.get('weekday')
        nowweekday = self.request.get('nowweekday')
        year = self.request.get('year')
        month = self.request.get('month')
        nowmonth = self.request.get('nowmonth')
        courtnames = self.request.get_all('court')
        for c in courtnames:
            logging.info("courtname: %s " % c)
        times = intervals(startTime, endTime)
        for courtname in courtnames:
            for time in times:
                tempreservation = courtname + str(time[0]) + str(time[1])
                name = self.request.get('tempreservation', None)
                if name:
                    iden = courtname
                    court = db.Key.from_path('Locations', location_id, 'Courts', iden)
                    reservation = Reservations(parent=court)
                    reservation.name = name
                    reservation.starttime = time
                    reservation.year = year
                    reservation.nowmonth = int(nowmonth)
                    reservation.day = int(day)
                    reservation.nowweekday = int(nowweekday)
                    reservation.put()
        return webapp2.redirect("/read/%s" % location_id)
Eventually I want to add checking/validating to the above code by comparing the existing Reservations data in the datastore with the implied new reservations, and kick out an alert which tells the user of any potential problems she can address.
I would also appreciate any comments on these two problems.
end of update 0
My app is for a community tennis court. I want to replace the paper sign-up sheet with an online digital sheet that mimics a paper sheet. As unlikely as it seems, there may be "transactional" conflicts where two tennis appointments collide. So how do I give the second appointment maker a heads-up about the conflict, while also giving the successful party the opportunity to alter her appointment like she would on paper (with an eraser)?
Each half hour is a time slot on the form. People normally sign up for multiple half hours at one time before "submitting".
So in my code, within a loop, I do a get_all. If any get succeeds I want to give the user control over whether to accept the put() or not. I am still thinking the put() would be all or nothing, not selective.
So my question is, do I need to make part of the code use an explicit "transaction"?
class MainPageCourt(BaseHandler):
    def post(self, location_id):
        reservations = self.request.get_all('reservations')
        day = self.request.get('day')
        weekday = self.request.get('weekday')
        nowweekday = self.request.get('nowweekday')
        year = self.request.get('year')
        month = self.request.get('month')
        nowmonth = self.request.get('nowmonth')
        if not reservations:
            for r in reservations:
                r = r.split()
                iden = r[0]
                temp = iden + ' ' + r[1] + ' ' + r[2]
                court = db.Key.from_path('Locations', location_id, 'Courts', iden)
                reservation = Reservations(parent=court)
                reservation.starttime = [int(r[1]), int(r[2])]
                reservation.year = int(r[3])
                reservation.nowmonth = int(r[4])
                reservation.day = int(r[5])
                reservation.nowweekday = int(nowweekday)
                reservation.name = self.request.get(temp)
                reservation.put()
            return webapp2.redirect("/read/%s" % location_id)
        else:
            ... this important code is not written, pending ...
            return webapp2.redirect("/adjust/%s" % location_id)
Have a look at optimistic concurrency control:
http://en.wikipedia.org/wiki/Optimistic_concurrency_control
You can check for the availability of the time slots in a given Court and write the corresponding Reservations child entities only if their start_time values don't conflict.
Here is how you would do it for a single reservation using an ancestor query:
@ndb.transactional
def make_reservation(court_id, start_time):
    court = Court(id=court_id)
    existing = Reservation.query(Reservation.start_time == start_time,
                                 ancestor=court.key).fetch(2, keys_only=True)
    if len(existing):
        return False, existing[0]
    return True, Reservation(start_time=start_time, parent=court.key).put()
Alternatively, if you make the slot part of the Reservation id, you can remove the query and construct the Reservation entity keys to check whether they already exist:
@ndb.transactional
def make_reservations(court_id, slots):
    court = Court(id=court_id)
    rs = [Reservation(id=s, parent=court.key) for s in slots]
    existing = ndb.get_multi(r.key for r in rs)
    if any(existing):
        return False, existing
    return True, ndb.put_multi(rs)
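Usage would then look something like this (the court id and slot strings are made-up values; the slot format just has to match whatever you use for the Reservation ids):
ok, result = make_reservations('court-1', ['09:00', '09:30', '10:00'])
if not ok:
    # result is the get_multi output: an entity for each already-taken
    # slot and None for each free one, so you can tell the user which
    # slots conflicted instead of writing anything.
    taken = [r for r in result if r is not None]
    print('Conflicting slots:', [t.key.id() for t in taken])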
I think you should always use transactions, but I don't think your concerns are best addressed by transactions.
I think you should implement a two-stage reservation system, which is what you see on most shopping-cart and ticketing sites:
Posting the form creates a "reservation request", which blocks out the time(s) as "in someone else's shopping bag" for 5-15 minutes.
Users must submit again on an approval screen to confirm the times. You can give them the ability to resolve conflicts on that screen too, and reset the 'reservation lock' on the timeslots for as long as possible.
A cron job - or a faked one, triggered by a request coming in within a certain window - clears out expired reservation locks and returns the times to the pool of available slots.
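A rough sketch of what the "reservation lock" step could look like with ndb, in the same style as the snippets above. The SlotLock model, the 15-minute window, and the id scheme are all illustrative assumptions, not part of the answer's code:
import datetime
from google.appengine.ext import ndb

class SlotLock(ndb.Model):  # hypothetical lock entity, one per timeslot
    user = ndb.StringProperty()
    expires = ndb.DateTimeProperty()

@ndb.transactional
def lock_slot(court_key, slot_id, user):
    now = datetime.datetime.utcnow()
    lock = SlotLock.get_by_id(slot_id, parent=court_key)
    if lock and lock.expires > now and lock.user != user:
        return False  # slot is in someone else's "shopping bag"
    SlotLock(id=slot_id, parent=court_key, user=user,
             expires=now + datetime.timedelta(minutes=15)).put()
    return True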
