Problem: my program gets a 500 Internal Server Error when accessing a website using this code (I work with PyQt).
It happens only on my Windows box (Windows 7) and not on my Linux box (Ubuntu 12.04 LTS); FYI, they are different computers, but on the same LAN.
def sendBearer_req(self):
    request = QNetworkRequest()
    request.setUrl(QUrl("https://api.twitter.com/oauth2/token"))
    request.setRawHeader('Content-Type', 'application/x-www-form-urlencoded;charset=UTF-8')
    request.setRawHeader('Authorization', 'Basic %s' % cons_enc)
    self.network_manager = QNetworkAccessManager()
    if self.network_manager.receivers(SIGNAL("finished")) > 0:
        self.network_manager.finished.disconnect()
    self.network_manager.finished.connect(self._request_finished)
    self.network_manager.post(request, self.urlencode_post({'grant_type': 'client_credentials'}))
def _request_finished(self, reply):
    if reply.error() != QNetworkReply.NoError:
        # request probably failed
        print(reply.error())
        print(reply.errorString())
        print("retrying")
        self.sendBearer_req()
    else:
        self.sendBearer(reply)
Output:
299
Error downloading URL - server replied: Internal Server Error
retrying
where URL is the page URL.
I tried it with many URLs, in case the problem really was in the server itself, but it's not.
cons_enc is valid (a base64-encoded string).
How can I fix it? And do you know why it works on Ubuntu?
Status code 299 is non-standard, and the Twitter API error code reference does not list it, so the cause is likely not in this snippet of code.
QNetworkReply defines code 299 as an "unknown error"; maybe it's a Qt bug.
As a suggestion, try using a proper Twitter API library for Python.
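In the meantime, the same token request can be sketched with the standard library instead of Qt's networking stack (a hypothetical illustration: the key and secret below are placeholders, and the request is only built here, not sent):

```python
import base64
import urllib.request
from urllib.parse import urlencode

# Placeholders: substitute real app credentials
consumer_key, consumer_secret = 'KEY', 'SECRET'
cons_enc = base64.b64encode(
    ('%s:%s' % (consumer_key, consumer_secret)).encode('utf-8')).decode('ascii')

# Same headers and body the PyQt code builds
body = urlencode({'grant_type': 'client_credentials'}).encode('ascii')
req = urllib.request.Request(
    'https://api.twitter.com/oauth2/token',
    data=body,
    headers={
        'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
        'Authorization': 'Basic %s' % cons_enc,
    },
)
# urllib.request.urlopen(req) would perform the actual POST
print(req.get_header('Authorization'))
```

Comparing the response of this standalone request against the PyQt one helps separate server-side problems from Qt-side ones.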
Problem found: apparently I accidentally ran it with Python 3.3 on Windows, while it was built for Python 2.7 (which was used on the Linux box).
Maybe PyQt should check that before starting up!
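Since PyQt doesn't check, the script itself can. A hypothetical startup guard, assuming the bindings were built for Python 2.7 as in the question:

```python
import sys

def version_matches(expected=(2, 7)):
    # Compare the running interpreter's major.minor against the version
    # the bindings were built for.
    return sys.version_info[:2] == expected

if not version_matches():
    # Fail fast with a clear message instead of an obscure runtime error
    print('Built for Python 2.7, running %d.%d' % sys.version_info[:2])
```

Putting this at the top of the entry-point script turns a confusing network-layer failure into an immediate, readable diagnostic.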
Related
From behind my VPN (HTTP proxy: 127.0.0.1:8118), I'm just trying to get the quickstart example of the ia package going, as shown on the website:
https://archive.org/developers/tutorial-find-identifier-item.html
https://github.com/jjjake/internetarchive/tree/master/internetarchive
Standalone, this always works fine (so I believe the issue is in the package and not my computer):
requests.get('http://google.com', proxies={'https': 'http://127.0.0.1:8118'})
# ia uses HTTPS
I've tried setting the environment variable HTTPS_PROXY:
HTTPS_PROXY='http://127.0.0.1:8118'
I'm just trying to get the first working example from the website going (from behind a VPN), very simple:
from internetarchive import upload
md = {'collection': 'test_collection', 'title': 'My New Item', 'mediatype': 'movies'}
r = upload('<identifier>', files=['film.txt', 'film.mov'], metadata=md, access_key='YoUrAcCEssKey', secret_key='youRSECRETKEY')
r[0].status_code
I've been digging around the ia package for places to pass the proxy variable into its requests.get(..., proxies={'https': 'http://127.0.0.1:8118'}) calls, and running tests with no luck.
I'm still having no luck here. Are there any suggestions?
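One thing worth checking (a sketch, not a confirmed fix for this package): export HTTPS_PROXY before the library builds its HTTP session, and verify that environment-based proxy discovery actually sees it. requests, which ia uses internally, honors these environment variables by default:

```python
import os
import urllib.request

# Set the proxy before importing/using the library, so environment-based
# proxy discovery picks it up when the session is created.
os.environ['HTTPS_PROXY'] = 'http://127.0.0.1:8118'

# Stdlib check that the variable is visible to proxy discovery
proxies = urllib.request.getproxies()
print(proxies.get('https'))  # should echo the proxy URL back
```

If this prints the proxy URL but ia still bypasses it, the package is likely constructing its session with environment trust disabled, which would narrow the bug down considerably.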
I am using Python with Jupyter Notebook.
This program downloads many PDFs and works on other machines; however, when running it on another machine with Windows Server 2016 Standard, it shows an error.
The function that is causing the error is:
def download_doc(pasta_pdf, base_links):
    os.chdir(pasta_pdf)
    for link in base_links['Link_Download_Regulamento']:
        if link is not None:
            wget.download(link)
        else:
            continue

download_doc(pasta_origem, df_regulamentos_novos)
The error print (from a screenshot):
HTTPError: HTTP Error 403: Forbidden
Thanks for your assistance.
You got:
HTTPError: HTTP Error 403: Forbidden
If you get an HTTP response status code but don't know what it means, you can consult the developer.mozilla.org docs; in this case, 403 Forbidden:
The client does not have access rights to the content; that is, it is unauthorized, so the server is refusing to give the requested resource. Unlike 401, the client's identity is known to the server.
I am getting:
DownloadError: ApplicationError: 2 (11004, 'getaddrinfo failed')
I'm making an API call to an application deployed on Tomcat on my machine and expecting an XML response, but it doesn't work. Can someone propose a solution or an alternate route?
I am working on an IT-built laptop and suspected that could be the issue, so I tried from a laptop that is not behind the firewall and not on the company network, but that didn't help either.
Code Snippet:
base_url = 'http://localhost:8080/recomendations/'
url = base_url + 'api/1.0/otherusersalsoviewed?' + urlencode(args)
req = urlfetch.fetch(url, None, urlfetch.GET, {}, False, True, 60, False)
if req.status_code == 200:
    root = ElementTree(fromstring(req.content)).getroot()
Any idea what is wrong?
Note that the URL you are specifying points to localhost, which means 'this computer' (see http://en.wikipedia.org/wiki/Localhost). When you deploy your app to GAE, localhost refers to GAE itself, which is meaningless here.
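A hypothetical fix along those lines: make the Tomcat host configurable instead of hard-coding localhost (the hostname and query parameters below are made up for illustration):

```python
from urllib.parse import urlencode

# Assumption: the Tomcat instance is reachable at a real, public hostname,
# not at localhost (which, once deployed on GAE, means GAE itself).
host = 'recommender.example.com'
base_url = 'http://%s:8080/recomendations/' % host
args = {'userId': '42'}  # hypothetical query parameters
url = base_url + 'api/1.0/otherusersalsoviewed?' + urlencode(args)
print(url)
```

The same code then works locally (with host set to localhost) and in production (with the real hostname), which also explains the getaddrinfo failure: the deployed app has no DNS answer for a host it can't reach.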
Just in case: I was getting
ApplicationError: 2
and I fixed it by removing references to users.create_logout_url().
I'm looking to send an SMS with the Twilio API, but I'm getting the following error:
"unknown url type: https"
I've recompiled Python with OpenSSL, so my code runs fine from the Python interpreter, but whenever I try to run it in one of my Django views I get this error. Here is my code from the view:
def send_sms(request):
    recipient = '1234567890'
    account = twilio.Account(settings.TWILIO_ID, settings.TWILIO_TOKEN)
    params = {
        'From': settings.TWILIO_NUM,
        'To': recipient,
        'Body': 'This is a test message.',
    }
    account.request('/%s/Accounts/%s/SMS/Messages'
                    % (settings.TWILIO_API_VERSION, settings.TWILIO_ID),
                    'POST', params)
Edit: more info (thanks for bringing that up, Stefan).
The project is hosted on DreamHost via Passenger WSGI. Django is using the same Python install location and interpreter.
I appreciate any insight anyone may have, thanks!
Looks like it was just user error. My WSGI file was using a different interpreter, but the paths were so similar I was overlooking it. Once I fixed that, Django was using the Python version that I compiled with OpenSSL, and everything worked fine.
Always check if the TV is plugged in before you take it apart. Thanks stefanw!
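A quick diagnostic for this kind of mismatch (a sketch): log which interpreter, version, and SSL support the running process actually has, e.g. from the WSGI entry point, and compare it against the shell's `which python`:

```python
import sys

def interpreter_info():
    # Report the interpreter path, version, and whether the ssl module
    # was compiled in; a WSGI file pointing at a different interpreter
    # shows up immediately in the first field.
    try:
        import ssl  # noqa: F401
        has_ssl = True
    except ImportError:
        has_ssl = False
    return sys.executable, '%d.%d' % sys.version_info[:2], has_ssl

print(interpreter_info())
```

If has_ssl is False in the web process but True in the shell, the two are running different Python builds, which is exactly the "unknown url type: https" scenario above.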
Does urllib2 in Python 2.6.1 support proxy via https?
I've found the following at http://www.voidspace.org.uk/python/articles/urllib2.shtml:
NOTE: Currently urllib2 does not support fetching of https locations through a proxy. This can be a problem.
I'm trying to automate logging in to a web site and downloading a document; I have a valid username/password.
proxy_info = {
'host':"axxx", # commented out the real data
'port':"1234" # commented out the real data
}
proxy_handler = urllib2.ProxyHandler(
{"http" : "http://%(host)s:%(port)s" % proxy_info})
opener = urllib2.build_opener(proxy_handler,
urllib2.HTTPHandler(debuglevel=1),urllib2.HTTPCookieProcessor())
urllib2.install_opener(opener)
fullurl = 'https://correct.url.to.login.page.com/user=a&pswd=b' # example
req1 = urllib2.Request(url=fullurl, headers=headers)
response = urllib2.urlopen(req1)
I've had this working for similar pages, but not over HTTPS, and I suspect it does not get through the proxy; it just gets stuck the same way as when I did not specify a proxy. I need to go out through the proxy.
I need to authenticate, but not using basic authentication. Will urllib2 figure out authentication when going to an https site (I supply the username/password to the site via the URL)?
EDIT:
Nope, I tested with
proxies = {
"http" : "http://%(host)s:%(port)s" % proxy_info,
"https" : "https://%(host)s:%(port)s" % proxy_info
}
proxy_handler = urllib2.ProxyHandler(proxies)
And I get the error:
urllib2.URLError: urlopen error [Errno 8] _ssl.c:480: EOF occurred in violation of protocol
Fixed in Python 2.6.3 and several other branches:
http://bugs.python.org/issue1424152
http://www.python.org/download/releases/2.6.3/NEWS.txt
Issue #1424152: Fix for httplib, urllib2 to support SSL while working through
proxy. Original patch by Christopher Li, changes made by Senthil Kumaran.
I'm not sure Michael Foord's article, which you quote, is up to date for Python 2.6.1, so why not give it a try? Instead of telling ProxyHandler that the proxy is only good for http, as you're doing now, register it for https too (of course you should format the proxy URL into a variable just once before you call ProxyHandler and use that variable for both keys in the dict). That may or may not work, but as it stands you're not even trying, and that is sure not to work!
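That suggestion looks like this in code (a sketch with the placeholder host/port from the question, written against urllib.request, urllib2's Python 3 successor; note the proxy URL itself usually keeps the http:// scheme even under the 'https' key, since the client reaches the proxy over plain HTTP):

```python
import urllib.request

proxy_info = {'host': 'axxx', 'port': '1234'}  # placeholders, as in the question
proxy_url = 'http://%(host)s:%(port)s' % proxy_info  # format once...
proxy_handler = urllib.request.ProxyHandler({
    'http': proxy_url,   # ...and reuse for both schemes
    'https': proxy_url,
})
opener = urllib.request.build_opener(proxy_handler)
print(proxy_handler.proxies['https'])
```

With both schemes registered, opener.open('https://...') routes the request through the proxy instead of silently bypassing it.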
In case anyone else has this issue in the future, I'd like to point out that urllib2 does support https proxying now; make sure the proxy supports it too, or you risk running into a bug that puts the Python library into an infinite loop (this happened to me).
See the unit test in the Python source that exercises https proxying support for further information:
http://svn.python.org/view/python/branches/release26-maint/Lib/test/test_urllib2.py?r1=74203&r2=74202&pathrev=74203