Can't add tracks to Spotify playlist - Python

I'm having an issue with adding tracks to a playlist. I can obtain the currently playing song and create a playlist if it doesn't already exist, but once I try to add tracks to the playlist it gives me this error:
An error occurred: http status: 403, code:-1
You cannot add tracks to a playlist you don't own., reason: None
def add_artist_songs_to_playlist():
    sp_oauth = SpotifyOAuth(client_id, client_secret, redirect_uri,
                            scope="app-remote-control user-library-read user-read-playback-state user-read-private user-read-email playlist-read-private playlist-modify-public playlist-modify-private",
                            cache_path=".cache-" + username)
    spotify_api = spotipy.Spotify(auth_manager=sp_oauth)
    artist_name, artist_id = get_artist_info()
    try:
        songs = []
        offset = 0
        # Page through the artist's albums and collect every track ID.
        while True:
            result = spotify_api.artist_albums(artist_id, offset=offset)
            albums = result['items']
            if not albums:
                break
            for album in albums:
                album_tracks = spotify_api.album_tracks(album['id'])
                for track in album_tracks['items']:
                    songs.append(track['id'])
            offset += 20
        spotify_api.playlist_add_items(playlist_id, songs)
        print(f"Successfully added {len(songs)} songs to the playlist!")
    except spotipy.client.SpotifyException as e:
        print(f"An error occurred: {e}")

How do you get playlist_id? Spotify is likely not allowing you to modify a playlist that does not belong to the authenticated user.
You can try fetching the current user's playlists and checking whether the playlist_id you are trying to modify is listed there: https://spotipy.readthedocs.io/en/2.22.1/#spotipy.client.Spotify.current_user_playlists
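A minimal sketch of that check, assuming spotify_api is the authenticated client from the question (user_owns_playlist is a hypothetical helper name; note that current_user_playlists also returns playlists the user merely follows, so you may additionally want to compare the playlist's owner ID):

def user_owns_playlist(spotify_api, playlist_id):
    # Page through the authenticated user's playlists looking for the ID.
    offset = 0
    while True:
        page = spotify_api.current_user_playlists(limit=50, offset=offset)
        for playlist in page['items']:
            if playlist['id'] == playlist_id:
                return True
        if page['next'] is None:
            return False
        offset += 50

If the playlist is not found, create one for the authenticated user (for example with user_playlist_create) and add the tracks to the ID it returns instead.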

Related

How to check for deleted tweets?

I have millions of tweet IDs and I want to check whether each tweet has been deleted or not.
I can look up all the tweet IDs using Tweepy and match the results against my previous data; the difference will be the deleted tweets.
def lookup_tweets(tweet_IDs, api):
    full_tweets = []
    tweet_count = len(tweet_IDs)
    try:
        # statuses_lookup accepts at most 100 IDs per call, so batch them.
        for i in range((tweet_count // 100) + 1):
            end_loc = min((i + 1) * 100, tweet_count)
            ids = tweet_IDs[i * 100:end_loc]
            full_tweets.extend(api.statuses_lookup(ids))
        return full_tweets
    except tweepy.TweepError as e:
        print(e)
        print('Something went wrong, quitting...')
I tried checking the HTTP status, but it didn't work.
How can I find out which tweet IDs have been deleted?
In v1.1 of the API, Twitter returns response code 144 alongside HTTP 404 when a Tweet has been deleted.
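A minimal sketch of the set-difference approach described in the question, assuming Tweepy v3.x and the lookup_tweets function above (find_deleted_tweets is a hypothetical helper name). Since statuses_lookup silently omits tweets that no longer exist, the deleted IDs are the requested IDs minus the returned ones:

def find_deleted_tweets(tweet_IDs, api):
    returned = lookup_tweets(tweet_IDs, api) or []
    # Compare as strings so int and str IDs match up.
    returned_ids = {tweet.id_str for tweet in returned}
    return [tid for tid in tweet_IDs if str(tid) not in returned_ids]

Note that this cannot distinguish deleted tweets from tweets hidden for other reasons, such as suspended or protected accounts.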

Handling googleapiclient errors

I am reading and creating calendar events for a set of emails through the Google Calendar API. If one of the email IDs is wrong, it throws an error:
googleapiclient.errors.HttpError: <HttpError 404 when requesting https://www.googleapis.com/calendar/v3/calendars/xxx%40gmail.com/events?timeMin=2019-12-18T00%3A00%3A00%2B05%3A30&maxResults=240&timeMax=2019-12-18T23%3A59%3A00%2B05%3A30&singleEvents=true&orderBy=startTime&alt=json returned "Not Found">
I understand the email is wrong and that's why I get this error. But I want to handle this exception: if an email is wrong, it should skip that email, continue, and display the proper result for the rest.
What I tried is:
from googleapiclient.errors import HttpError

def my_function():
    try:
        ...  # the calendar calls go here
    except HttpError as err:
        print("The exception is", err)
    finally:
        return "I am returning whatever I get from try"
Is this try/except block correct?
With the above code I still get the same googleapiclient error; it never goes into the except block.
What I expect here is: it should go into the try block, and if one of the email IDs is wrong, it should skip that email ID and return whatever results were fetched in the try block.
In other words, it should omit the failing API call and return the rest of the result.
#for calendar_id in calendar_ids:
eventsResult = service.events().list(calendarId=["a#gmail.com", "b#gmail.com", "c#gmail.com"],
                                     timeMin=start_date, timeMax=end_date,
                                     singleEvents=True, orderBy='startTime').execute()
events = eventsResult.get('items', [])
if not events:
    print('No upcoming events found.')
print(events)
while True:
    for event in events.get('items', []):
        print(event['summary'])
    page_token = events.get('nextPageToken')  # check if any event present in next page of the calendar
    if page_token:
        events = service.events().list(calendarId='primary', pageToken=page_token).execute()
    else:
        break
for calendar_id in calendar_ids:
    count = 0
    print('\n----%s:\n' % calendar_id)
    try:
        eventsResult = service.events().list(
            calendarId=calendar_id,
            timeMin=start_date,
            timeMax=end_date,
            singleEvents=True,
            orderBy='startTime').execute()
        events = eventsResult.get('items', [])
        if not events:
            print('No upcoming events found.')
        for event in events:
            if 'summary' in event:
                if 'PTO' in event['summary']:
                    count += 1
                    start = event['start'].get(
                        'dateTime', event['start'].get('date'))
                    print(start, event['summary'])
    except Exception as err:
        print("I am executing", err)
    finally:
        print('Total days off for %s is %d' % (calendar_id, count))
I have got the answer for this post. I used 'pass' in the except block and it worked well. Thanks!
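For reference, a minimal sketch of that skip-and-continue pattern, assuming service, calendar_ids, start_date, and end_date are defined as in the code above; catching HttpError specifically (rather than passing on every exception) keeps unrelated failures visible:

from googleapiclient.errors import HttpError

for calendar_id in calendar_ids:
    try:
        eventsResult = service.events().list(
            calendarId=calendar_id,
            timeMin=start_date,
            timeMax=end_date,
            singleEvents=True,
            orderBy='startTime').execute()
    except HttpError as err:
        # A 404 here means the calendar/email does not exist; skip it.
        print('Skipping %s: %s' % (calendar_id, err))
        continue
    for event in eventsResult.get('items', []):
        print(event.get('summary', '(no title)'))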

YouTube Data API: breaking a nextPageToken while loop if the quota limit is reached?

I am using Python 3 and the YouTube Data API V3 to fetch comments from a YouTube video. This particular video has around 280,000 comments. I am trying to write a while loop that will get as many comments as possible before hitting the quota limit and then breaking if the quota limit is reached.
My loop appears to be successfully following next page tokens and appending the requested metadata to my list, but when the quota is reached it doesn't end the loop; instead it registers an HttpError, without saving any of the correctly fetched comment data.
Here is my current code:
# Get resources:
def get(resource, **kwargs):
    print(f'Getting {resource} with params {kwargs}')
    kwargs['key'] = API_KEY
    response = requests.get(url=f'{YOUTUBE_BASE_URL}/{resource}',
                            params=remove_empty_kwargs(kwargs))
    print(f'Response: {response.status_code}')
    return response.json()

# Getting ALL comments for a video:
def getComments(video_id):
    comments = []
    res = get('commentThreads', part='id,snippet,replies', maxResults=100, videoId=video_id)
    try:
        nextPageToken = res['nextPageToken']
    except TypeError:
        nextPageToken = None
    while (nextPageToken):
        try:
            res = get('commentThreads', part='id,snippet,replies', maxResults=100, videoId=video_id)
            for i in res['items']:
                comments.append(i)
            nextPageToken = res['nextPageToken']
        except HttpError as error:
            print('An error occurred: %s' % error)
            break
    return comments

test = 'video-id-here'
testComments = getComments(test)
So, what happens is that this correctly seems to loop through all the comments. But after some time, i.e. after it has looped several hundred times, I get the following error:
Getting commentThreads with params {'part': 'id,snippet,replies', 'maxResults': 100, 'videoId': 'real video ID shows here'}
Response: 403
KeyError Traceback (most recent call last)
<ipython-input-39-6582a0d8f122> in <module>
----> 1 testComments = getComments(test)
<ipython-input-29-68952caa30dd> in getComments(video_id)
12 res = get('commentThreads', part='id,snippet,replies', maxResults=100, videoId=video_id)
13
---> 14 for i in res['items']:
15 comments.append(i)
16
KeyError: 'items'
So, first I get the expected 403 response from the API after some time, which indicates the quota limit has been reached. Then it throws the KeyError for 'items': since the failed request returned no more comment threads, there are no 'items' to append.
My expected result is that the loop will just break when the quota limit is reached and save the comment data it managed to fetch before reaching the quota.
I think this is probably related to my 'try' and 'except' handling, but I can't seem to figure it out.
Thanks!
Ultimately fixed it with this code:
def getComments(video_id):
    comments = []
    res = get('commentThreads', part='id,snippet,replies', maxResults=100, videoId=video_id)
    try:
        nextPageToken = res['nextPageToken']
    except (KeyError, TypeError):
        nextPageToken = None
    while (nextPageToken):
        try:
            res = get('commentThreads', part='id,snippet,replies', maxResults=100, videoId=video_id)
            for i in res['items']:
                comments.append(i)
            nextPageToken = res['nextPageToken']
        except KeyError:
            break
    return comments
Proper exception handling for the KeyError was the ultimate solution, since my get() function returns a parsed JSON object, not a response object.
You are catching an HttpError, but it never happens: when your quota runs out, the API simply returns a 403 response.
There is no HttpError to catch, so you try to read a key which isn't there and get a KeyError.
The most robust way is probably to check the status code.
res = get('commentThreads', part='id,snippet,replies', maxResults=100, videoId=video_id)
if res.status_code != 200:
    break
for i in res['items']:
    comments.append(i)
nextPageToken = res['nextPageToken']
The res.status_code is assuming you're using requests.
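Since the get() helper above returns parsed JSON (a dict), the status code is no longer available at that point. A sketch under that assumption checks instead for the 'error' key that Google APIs include in failure responses, and also passes the page token so each iteration actually fetches the next page (this assumes remove_empty_kwargs drops None-valued parameters):

def getComments(video_id):
    comments = []
    nextPageToken = None
    while True:
        res = get('commentThreads', part='id,snippet,replies',
                  maxResults=100, videoId=video_id,
                  pageToken=nextPageToken)
        if 'error' in res or 'items' not in res:
            break  # quota exhausted (HTTP 403) or malformed response
        comments.extend(res['items'])
        nextPageToken = res.get('nextPageToken')
        if not nextPageToken:
            break  # last page reached
    return comments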

Gmail API users().history().list returns 'nextPageToken', but no 'history' field?

I use the Gmail API's history.list to retrieve a list of changed messages. This works fine across several pages of history, but sometimes when a nextPageToken is returned and used to retrieve the next page, that page comes back without a history field. No HttpError is raised.
results = self.service.users().history().list(userId=self.account, startHistoryId=start).execute()
if 'history' in results:
    yield results['history']
while 'nextPageToken' in results:
    pt = results['nextPageToken']
    results = self.service.users().history().list(userId=self.account, startHistoryId=start, pageToken=pt).execute()
    yield results['history']  # this fails with missing 'history' member.
If I understand the question correctly, you need to expect that a history page may contain no results.
while 'nextPageToken' in response:
    page_token = response['nextPageToken']
    response = gcon.users().messages().list(userId='me', pageToken=page_token).execute()
    if response['resultSizeEstimate'] == 0:
        break
    email.extend(response['messages'])
return email
I think this will help:
if response['resultSizeEstimate'] == 0:
    break
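Applied to the question's own users().history().list() loop, a minimal sketch is to guard the 'history' access the same way, since a page may carry only a nextPageToken:

results = self.service.users().history().list(
    userId=self.account, startHistoryId=start).execute()
while True:
    if 'history' in results:
        yield results['history']
    page_token = results.get('nextPageToken')
    if not page_token:
        break
    results = self.service.users().history().list(
        userId=self.account, startHistoryId=start,
        pageToken=page_token).execute()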

How to search YouTube videos in a channel using the YouTube API

def get_videos(search_keyword):
    youtube = build(YOUTUBE_API_SERVICE_NAME,
                    YOUTUBE_API_VERSION,
                    developerKey=DEVELOPER_KEY)
    try:
        search_response = youtube.search().list(
            q=search_keyword,
            part="id,snippet",
            channelId=os.environ.get("CHANNELID", None),
            maxResults=10,  # max = 50, default = 5, min = 0
        ).execute()
        videos = []
        channels = []
        for search_result in search_response.get("items", []):
            if search_result["id"]["kind"] == "youtube#video":
                title = search_result["snippet"]["title"]
                videoId = search_result["id"]["videoId"]
                channelTitle = search_result["snippet"]["channelTitle"]
                cam_thumbnails = search_result["snippet"]["thumbnails"]["medium"]["url"]
                publishedAt = search_result["snippet"]["publishedAt"]
                channelId = search_result["snippet"]["channelId"]
                data = {'title': title,
                        'videoId': videoId,
                        'channelTitle': channelTitle,
                        'cam_thumbnails': cam_thumbnails,
                        'publishedAt': publishedAt}
                videos.append(data)
            elif search_result["id"]["kind"] == "youtube#channel":
                channels.append("%s (%s)" % (search_result["snippet"]["title"],
                                             search_result["id"]["channelId"]))
    except Exception as e:
        print(e)
I'm using the Python YouTube Data API and can get video data matched by a keyword within a specified channel, but I want to get all the videos in the channel, not just those matching a keyword.
How do I get the video data for a specified channel? The data I want must be all of the data in that channel.
I'm not 100% sure I know what you're asking, but I think you're asking how you can get all videos in a channel and not just those related to your keyword? If that's correct, you should just be able to remove:
q=search_keyword,
from your request, and the API should then return all videos in the channel. If you're asking something else, please clarify in your question.
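A minimal sketch of the suggested change, reusing the names from the question's code: the same search().list() call with the q parameter dropped, plus the generated list_next() paging helper so results beyond the first page are fetched too (get_channel_videos is a hypothetical helper name):

def get_channel_videos():
    youtube = build(YOUTUBE_API_SERVICE_NAME,
                    YOUTUBE_API_VERSION,
                    developerKey=DEVELOPER_KEY)
    videos = []
    # No q parameter: the API returns videos from the channel itself.
    request = youtube.search().list(
        part="id,snippet",
        channelId=os.environ.get("CHANNELID", None),
        type="video",  # only videos, not channels or playlists
        maxResults=50,
    )
    while request is not None:
        response = request.execute()
        videos.extend(response.get("items", []))
        request = youtube.search().list_next(request, response)
    return videos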
