I'm working with the OpenAI API and I want to generate a request for each item in a list, but when a request fails the script stops with exit code 1 instead of continuing with the next item. This is the function called from my for loop:
try:
    response = openai.Completion.create(
        engine=config['OPENAI']['engine'].strip(),
        prompt=prompt,
        temperature=float(config['OPENAI']['temperature'].strip()),
        max_tokens=int(config['OPENAI']['max_tokens'].strip()),
        top_p=float(config['OPENAI']['top_p'].strip()),
        frequency_penalty=float(config['OPENAI']['frequency_penalty'].strip()),
        presence_penalty=float(config['OPENAI']['presence_penalty'].strip()),
    )
    with open('./output/openai_log.txt', 'a') as f:
        f.write(prompt + '\n')
        f.write(json.dumps(response, indent=4) + '\n')
        f.write('-_' * 60 + '\n')
    return response
except openai.APIError as e:
    print(f'\nFailed to generate for: {prompt}', e)
    return None
I've tried try/except, but it still fails.
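A likely cause is that the failing request raises an exception class other than openai.APIError (in pre-1.0 versions of the library the exception classes live in openai.error, with openai.error.OpenAIError as the common base). Catching that base class, or Exception, lets the for loop move on to the next prompt. A minimal sketch, with a hypothetical call_api standing in for openai.Completion.create:

```python
def call_api(prompt):
    # stand-in for openai.Completion.create; raises the way the real call might
    if prompt == "bad":
        raise RuntimeError("simulated API error")
    return {"choices": [{"text": "ok: " + prompt}]}

def generate_all(prompts):
    # keep looping even when one prompt fails
    results = []
    for prompt in prompts:
        try:
            results.append(call_api(prompt))
        except Exception as e:  # with openai<1.0, prefer openai.error.OpenAIError here
            print("Failed to generate for: {} ({})".format(prompt, e))
            results.append(None)
    return results
```

The failed prompt is recorded as None and the loop continues instead of letting the exception reach the top level and end the process with exit code 1.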
Related
I have been working on an AI project using Spotipy and the Spotify Web API. I collect a list of preview_urls to analyze, and this has worked for many tracks, but I recently ran into an issue: whenever I call .track(track_id), the script gets stuck on that line and never continues past it. I suspected an API problem, but other commands work fine; only track gives me trouble. I can't figure out the issue because there is no error at all, the call just hangs and never finishes.
Refreshing the client secret no longer helps. This is the code I have so far:
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

cid = '121e03d3acd1440188ae4c0f58b844d4'
secret = '431a5e56bcd544c3aefce8166a9c3703'
client_credentials_manager = SpotifyClientCredentials(client_id=cid, client_secret=secret)
sp = spotipy.Spotify(client_credentials_manager=client_credentials_manager)
number = 2
output_file = open('data\\25k_data_preview\\track_url_preview_' + str(number) + '.txt', 'a')
for l in open('data\\25k_data\\track_url_' + str(number) + '.txt'):
    line = l.replace('\n', '')
    print(line)
    try:
        track = sp.track(line)
        try:
            testing = track['preview_url']
            if testing is not None:
                output_file.write(line + " " + testing + "\n")
        except Exception:
            pass
    except Exception:
        pass
output_file.close()
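Since the call hangs rather than raising, one workaround is to force a timeout so a stuck request fails loudly and the loop can continue. Recent spotipy versions also accept a requests_timeout argument on spotipy.Spotify, which may be the simpler fix; the generic sketch below (call_with_timeout is a hypothetical helper, not part of spotipy) runs any call in a worker thread and gives up after timeout seconds:

```python
import concurrent.futures

def call_with_timeout(fn, *args, timeout=10, **kwargs):
    # run fn in a worker thread; raise TimeoutError if it never returns in time
    ex = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    fut = ex.submit(fn, *args, **kwargs)
    try:
        return fut.result(timeout=timeout)
    finally:
        ex.shutdown(wait=False)  # don't block on a hung worker
```

Used as `track = call_with_timeout(sp.track, line, timeout=10)` inside the try block, a hung request raises concurrent.futures.TimeoutError instead of blocking forever, so the except clause catches it and the loop moves to the next track id.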
I'm new to Python and I want this code to run only once and then stop, not repeat every 30 seconds,
because I want to run several scripts like this with different access tokens, five seconds apart, from the command line.
When I tried this code it never moves on to the second one, because it's a while True loop:
import requests
import time

api_url = "https://graph.facebook.com/v2.9/"
access_token = "access token"
graph_url = "site url"

post_data = {'id': graph_url, 'scrape': True, 'access_token': access_token}

# Beware of rate limiting if trying to increase frequency.
refresh_rate = 30  # refresh rate in seconds

while True:
    try:
        resp = requests.post(api_url, data=post_data)
        if resp.status_code == 200:
            contents = resp.json()
            print(contents['title'])
        else:
            error = "Warning: Status Code {}\n{}\n".format(
                resp.status_code, resp.content)
            print(error)
            raise RuntimeWarning(error)
    except Exception as e:
        f = open("open_graph_refresher.log", "a")
        f.write("{} : {}".format(type(e), e))
        f.close()
        print(e)
    time.sleep(refresh_rate)
From what I understand, you're trying to execute this piece of code for multiple access tokens. To keep things simple, put all your access tokens in a list and use the following code. It assumes you know all your access tokens in advance.
import requests
import time

def scrape_facebook(api_url, access_token, graph_url):
    """Scrapes the given access token."""
    post_data = {'id': graph_url, 'scrape': True, 'access_token': access_token}
    try:
        resp = requests.post(api_url, data=post_data)
        if resp.status_code == 200:
            contents = resp.json()
            print(contents['title'])
        else:
            error = "Warning: Status Code {}\n{}\n".format(
                resp.status_code, resp.content)
            print(error)
            raise RuntimeWarning(error)
    except Exception as e:
        f = open(access_token + "_" + "open_graph_refresher.log", "a")
        f.write("{} : {}".format(type(e), e))
        f.close()
        print(e)

access_token = ['a', 'b', 'c']
graph_url = ['sss', 'xxx', 'ppp']
api_url = "https://graph.facebook.com/v2.9/"

for n in range(len(graph_url)):
    scrape_facebook(api_url, access_token[n], graph_url[n])
    time.sleep(5)
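If the goal is one run per invocation, so an external scheduler (cron, a shell loop) spaces the runs out every five seconds, the token and URL can also come from the command line instead of a hard-coded list; a minimal sketch, where the argument layout is an assumption:

```python
import sys

def parse_args(argv):
    # expect: script.py <access_token> <graph_url>
    if len(argv) != 3:
        raise SystemExit("usage: {} <access_token> <graph_url>".format(argv[0]))
    return argv[1], argv[2]

# in the script itself, instead of a while True loop:
#   token, url = parse_args(sys.argv)
#   scrape_facebook("https://graph.facebook.com/v2.9/", token, url)
# then the process exits after a single run
```

Each invocation does one scrape and terminates, which is exactly the "run once and stop" behaviour the question asks for.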
while var == 1:
    test_url = 'https://testurl.com'
    get_response = requests.get(url=test_url)
    parsed_json = json.loads(get_response.text)
    test = requests.get('https://api.telegram.org/botid/' + 'sendMessage', params=dict(chat_id='0815', text="test"))
    ausgabe = json.loads(test.text)
    print(ausgabe['result']['text'])
    time.sleep(3)
How do I add a try/except routine to this code? About once every two days I get an error on line 4 at json.loads(), and I can't reproduce it. What I'm trying to do is put the while loop inside a try: block with an except block that only triggers when an error occurs inside the loop. Additionally, it would be great if the while loop didn't stop on an error. How could I do this? Thank you very much for your help. (I started programming in Python just a week ago.)
If you just want to catch the error on the fourth line, a try/except wrapped around that line will show what error happened:
while var == 1:
    test_url = 'https://testurl.com'
    get_response = requests.get(url=test_url)
    try:
        parsed_json = json.loads(get_response.text)
    except Exception as e:
        print(str(e))
        print('error data is {}'.format(get_response.text))
    test = requests.get('https://api.telegram.org/botid/' + 'sendMessage', params=dict(chat_id='0815', text="test"))
    ausgabe = json.loads(test.text)
    print(ausgabe['result']['text'])
    time.sleep(3)
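Since the intermittent failure is json.loads() choking on a body that isn't valid JSON (for example an HTML error page from the server), a small helper that parses defensively keeps the loop alive; safe_json is a hypothetical name, not a library function:

```python
import json

def safe_json(text, default=None):
    # parse JSON without ever raising; return default on a bad payload
    try:
        return json.loads(text)
    except ValueError as e:  # json.JSONDecodeError is a subclass of ValueError
        print("bad JSON payload: {}".format(e))
        return default
```

In the loop this becomes `parsed_json = safe_json(get_response.text)`, and the iteration can be skipped with `continue` whenever it returns None.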
You can simply wrap the whole loop body:
while var == 1:
    try:
        test_url = 'https://testurl.com'
        get_response = requests.get(url=test_url)
        parsed_json = json.loads(get_response.text)
        test = requests.get('https://api.telegram.org/botid/' + 'sendMessage', params=dict(chat_id='0815', text="test"))
        ausgabe = json.loads(test.text)
        print(ausgabe['result']['text'])
        time.sleep(3)
    except Exception as e:
        print("an exception {} of type {} occurred".format(e, type(e).__name__))
I wrote a hiscore checker for a game that I play: you enter a list of usernames into the .txt file and it outputs the results in found.txt.
However, if the page responds with a 404 it throws an error instead of returning "0" and continuing with the list.
Example of the script:
#!/usr/bin/python
import urllib2

def get_total(username):
    try:
        req = urllib2.Request('http://services.runescape.com/m=hiscore/index_lite.ws?player=' + username)
        res = urllib2.urlopen(req).read()
        parts = res.split(',')
        return parts[1]
    except urllib2.HTTPError, e:
        if e.code == 404:
            return "0"
    except:
        return "err"
filename = "check.txt"
accs = []
handler = open(filename)
for entry in handler.read().split('\n'):
    if "No Displayname" not in entry:
        accs.append(entry)
handler.close()

for account in accs:
    display_name = account.split(':')[len(account.split(':')) - 1]
    total = get_total(display_name)
    if "err" not in total:
        rStr = account + ' - ' + total
        handler = open('tried.txt', 'a')
        handler.write(rStr + '\n')
        handler.close()
        if total != "0" and total != "49":
            handler = open('found.txt', 'a')
            handler.write(rStr + '\n')
            handler.close()
            print rStr
    else:
        print "Error searching"
        accs.append(account)
print "Done"
The HTTPError exception handler that doesn't seem to be working:
except urllib2.HTTPError, e:
    if e.code == 404:
        return "0"
except:
    return "err"
Error response shown below.
Now, I understand the error shown doesn't seem to be related to a 404 response; however, it only occurs for users whose request returns a 404, and any other request works fine. So I can assume the issue is within the 404 exception handling.
I believe the issue may lie in the fact that the 404 is a custom page that you get redirected to,
so the original page is "example.com/index.php" but the 404 page is "example.com/error.php".
Not sure how to fix it.
For testing purposes, the input format is
ID:USER:DISPLAY
which is placed into check.txt.
It seems that total can end up being None, and in that case you can't check whether "err" is in it. To fix the crash, try changing that line to:
if total is not None and "err" not in total:
To be more specific, get_total is returning None, which means that either
parts[1] is None, or
except urllib2.HTTPError, e: is executed but e.code is not 404.
In the latter case None is returned because the exception is caught, but you're only dealing with the very specific 404 case and ignoring all the others.
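One way to guarantee get_total never returns None is to give every path an explicit return. The sketch below uses a local HTTPError stand-in for urllib2.HTTPError (and a fetch callable in place of the urllib2 request) so the control flow can be shown without a network call:

```python
class HTTPError(Exception):
    # stand-in for urllib2.HTTPError, which carries a .code attribute
    def __init__(self, code):
        self.code = code

def get_total(username, fetch):
    # fetch(username) returns the raw hiscore line or raises
    try:
        parts = fetch(username).split(',')
        return parts[1] if len(parts) > 1 else "err"
    except HTTPError as e:
        # non-404 HTTP errors previously fell through and returned None
        return "0" if e.code == 404 else "err"
    except Exception:
        return "err"
```

With the 404 branch given an else, a 500 (or any other HTTP status) comes back as "err" rather than None, so the caller's "err" not in total check can no longer crash.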
I am using the Tweepy package in Python to collect tweets. I track several users and collect their latest tweets. For some users I get an error like "Failed to parse JSON payload: ", e.g. "Failed to parse JSON payload: Expecting ',' delimiter or '}': line 1 column 694303 (char 694302)". I took note of the user id and tried to reproduce the error and debug the code. The second time I ran the code for that particular user, I got results (i.e. tweets) with no problem. I adjusted my code so that when I get this error I try once more to extract the tweets. So I might get this error once or twice for a user, but on a second or third attempt the code returns the tweets as usual without the error. I get similar behaviour for other user ids too.
My question is: why does this error appear randomly? Nothing else has changed. I searched the internet but couldn't find a similar report. A snippet of my code follows:
# initialize a list to hold all the tweepy Tweets
alltweets = []
ntries = 0
# make initial request for most recent tweets (200 is the maximum allowed count)
while True:
    try:  # if process fails due to connection problems, retry.
        if beforeid:
            new_tweets = api.user_timeline(user_id=user, count=200, since_id=sinceid, max_id=beforeid)
        else:
            new_tweets = api.user_timeline(user_id=user, count=200, since_id=sinceid)
        break
    except tweepy.error.RateLimitError:
        print "Rate limit error:", sys.exc_info()[0]
        print("Timeout, retry in 5 minutes...\n")
        time.sleep(60 * 5)
        continue
    except tweepy.error.TweepError as er:
        print('TweepError: ' + er.message)
        if er.message == 'Not authorized.':
            new_tweets = []
            break
        else:
            print(str(ntries))
            ntries += 1
            pass
    except:
        print "Unexpected error:", sys.exc_info()[0]
        new_tweets = []
        break
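The retry-on-"Failed to parse JSON payload" behaviour described above can be factored into a small bounded-retry helper (a generic sketch, not part of Tweepy), so a transient parse error is retried a few times before the error is allowed to propagate:

```python
import time

def retry(fn, attempts=3, delay=0):
    # call fn up to `attempts` times; re-raise the last error if all attempts fail
    last_error = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as e:
            last_error = e
            if delay:
                time.sleep(delay)
    raise last_error
```

Used as, e.g., `new_tweets = retry(lambda: api.user_timeline(user_id=user, count=200, since_id=sinceid), attempts=3)`, which matches the observation that a second or third attempt usually succeeds.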