WordPress with Python on a proxy server

This is code for posting to a blog. It is my first try, and I don't know what the error in it is. I am using a proxy server, and the error I'm getting is that the connection to the server failed.
Can anyone help me out, please?
import wordpresslib

# dummy data to be on the safe side
data = "Post content, just ensuring data is not empty"
url = 'http://agneesa.wordpress.com/wordpress/xmlrpc.php'

# insert correct username and password
wp = wordpresslib.WordPressClient(url, 'agnsa', 'pan#13579')
wp.selectBlog(0)

post = wordpresslib.WordPressPost()
post.title = 'try'
post.description = data
post.categories = (wp.getCategoryIdFromName('Python'),)  # the call that fails in the traceback below
idPost = wp.newPost(post, True)
Here is the traceback:
Traceback (most recent call last):
File "C:\Python27\Lib\example.py", line 34, in <module>
post.categories = (wp.getCategoryIdFromName('Python'),)
File "C:\Python27\Lib\wordpresslib.py", line 332, in getCategoryIdFromName
for c in self.getCategoryList():
File "C:\Python27\Lib\wordpresslib.py", line 321, in getCategoryList
self.user, self.password)
File "C:\Python27\Lib\xmlrpclib.py", line 1224, in __call__
return self.__send(self.__name, args)
File "C:\Python27\Lib\xmlrpclib.py", line 1578, in __request
verbose=self.__verbose
File "C:\Python27\Lib\xmlrpclib.py", line 1264, in request
return self.single_request(host, handler, request_body, verbose)
File "C:\Python27\Lib\xmlrpclib.py", line 1292, in single_request
self.send_content(h, request_body)
File "C:\Python27\Lib\xmlrpclib.py", line 1439, in send_content
connection.endheaders(request_body)
File "C:\Python27\Lib\httplib.py", line 954, in endheaders
self._send_output(message_body)
File "C:\Python27\Lib\httplib.py", line 814, in _send_output
self.send(msg)
File "C:\Python27\Lib\httplib.py", line 776, in send
self.connect()
File "C:\Python27\Lib\httplib.py", line 757, in connect
self.timeout, self.source_address)
File "socket.py", line 571, in create_connection
raise err
error: [Errno 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond

From the looks of your site, the URL you posted returns a 404 (it's not actually there). However, this one does seem ready to receive POST requests: http://agneesa.wordpress.com/xmlrpc.php
I suggest you check that URL for accuracy.
This is what I get when I try your code with your original URL:
xmlrpclib.ProtocolError: <ProtocolError for \
agneesa.wordpress.com/wordpress/xmlrpc.php: 404 Not Found>
This is what I get when I try it with the modified URL:
wordpresslib.WordPressException: \
<WordPressException 403: 'Bad login/pass combination.'>
... obviously because that's not your real account info. In a nutshell, it's possible your proxy could also be contributing to the problem if it's not set up to properly forward the request, but without knowing the specifics of your proxy config, there is no way to know for sure.
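If you want to rule the proxy in or out, here is a minimal sketch of a test (the proxy host and port are hypothetical placeholders; substitute your own). A plain GET against a WordPress xmlrpc.php endpoint normally returns a short "XML-RPC server accepts POST requests only." notice, so anything else suggests the proxy or the URL is the problem:

import urllib2

# Hypothetical proxy address -- replace with your real proxy host:port.
proxy = urllib2.ProxyHandler({'http': 'http://proxy.example.com:8080'})
opener = urllib2.build_opener(proxy)

# If the proxy forwards correctly, this prints WordPress's
# "XML-RPC server accepts POST requests only." notice.
print opener.open('http://agneesa.wordpress.com/xmlrpc.php').read()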

Related

elasticsearch.exceptions.SSLError: ConnectionError hostname doesn't match

I've been using the Elasticsearch Python API to do some basic operations on a cluster (like creating an index or listing them). Everything worked fine, but I decided to activate SSL authentication on the cluster and my scripts aren't working anymore.
I have the following errors:
Certificate did not match expected hostname: X.X.X.X. Certificate: {'subject': ((('commonName', 'X.X.X.X'),),), 'subjectAltName': [('DNS', 'X.X.X.X')]}
GET https://X.X.X.X:9201/ [status:N/A request:0.009s]
Traceback (most recent call last):
File "/home/esadm/env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 672, in urlopen
chunked=chunked,
File "/home/esadm/env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 376, in _make_request
self._validate_conn(conn)
File "/home/esadm/env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 994, in _validate_conn
conn.connect()
File "/home/esadm/env/lib/python3.7/site-packages/urllib3/connection.py", line 386, in connect
_match_hostname(cert, self.assert_hostname or server_hostname)
File "/home/esadm/env/lib/python3.7/site-packages/urllib3/connection.py", line 396, in _match_hostname
match_hostname(cert, asserted_hostname)
File "/home/esadm/env/lib/python3.7/ssl.py", line 338, in match_hostname
% (hostname, dnsnames[0]))
ssl.SSLCertVerificationError: ("hostname 'X.X.X.X' doesn't match 'X.X.X.X'",)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/esadm/env/lib/python3.7/site-packages/elasticsearch/connection/http_urllib3.py", line 233, in perform_request
method, url, body, retries=Retry(False), headers=request_headers, **kw
File "/home/esadm/env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 720, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/home/esadm/env/lib/python3.7/site-packages/urllib3/util/retry.py", line 376, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/home/esadm/env/lib/python3.7/site-packages/urllib3/packages/six.py", line 734, in reraise
raise value.with_traceback(tb)
File "/home/esadm/env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 672, in urlopen
chunked=chunked,
File "/home/esadm/env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 376, in _make_request
self._validate_conn(conn)
File "/home/esadm/env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 994, in _validate_conn
conn.connect()
File "/home/esadm/env/lib/python3.7/site-packages/urllib3/connection.py", line 386, in connect
_match_hostname(cert, self.assert_hostname or server_hostname)
File "/home/esadm/env/lib/python3.7/site-packages/urllib3/connection.py", line 396, in _match_hostname
match_hostname(cert, asserted_hostname)
File "/home/esadm/env/lib/python3.7/ssl.py", line 338, in match_hostname
% (hostname, dnsnames[0]))
urllib3.exceptions.SSLError: ("hostname 'X.X.X.X' doesn't match 'X.X.X.X'",)
The thing I don't understand is that this message doesn't make any sense:
"hostname 'X.X.X.X' doesn't match 'X.X.X.X'"
The two addresses match; they are exactly the same!
I've followed the docs, and my Elasticsearch client configuration looks like this:
Elasticsearch([get_ip_address()],
http_auth=('elastic', 'pass'),
use_ssl=True,
verify_certs=True,
port=get_instance_port(),
ca_certs='ca.crt',
client_cert='pvln0047.crt',
client_key='pvln0047.key'
)
Thanks for your help
Problem solved! The issue was in the constructor shown above.
Instead of passing the IP address, I needed to pass the DNS name (the certificate was issued for the DNS name, so hostname verification against the raw IP fails). I also changed the arguments to use a context object, to follow the original docs.
from ssl import create_default_context, CERT_REQUIRED
from elasticsearch import Elasticsearch

context = create_default_context(cafile="ca.crt")
context.load_cert_chain(certfile="pvln0047.crt", keyfile="pvln0047.key")
context.verify_mode = CERT_REQUIRED

Elasticsearch(['dns_name'],
              http_auth=('elastic', 'pass'),
              scheme="https",
              port=get_instance_port(),
              ssl_context=context
              )
This is how I generated the certificates:
bin/elasticsearch-certutil cert ca --pem --in /tmp/instance.yml --out /home/user/certs.zip
And this is my instance.yml file:
instances:
- name: 'dns_name'
dns: [ 'dns_name' ]
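One extra check that can save debugging time: connect with the same CA bundle and print which names the certificate actually carries; hostname verification only succeeds against one of these. A minimal sketch, assuming the ca.crt and client cert files from above and port 9201 as seen in the traceback:

import socket
import ssl

ctx = ssl.create_default_context(cafile='ca.crt')
ctx.load_cert_chain(certfile='pvln0047.crt', keyfile='pvln0047.key')
# 'dns_name' is the same placeholder used above; 9201 is the port from the traceback.
with socket.create_connection(('dns_name', 9201)) as sock:
    with ctx.wrap_socket(sock, server_hostname='dns_name') as ssock:
        print(ssock.getpeercert()['subjectAltName'])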
Hope it will help someone!

HTTPS request with Python standard library

UPDATE: I managed to do the request with urllib2, but I'm still wondering what is happening here.
I would like to make an HTTPS request with Python.
This works fine with the requests module, but I don't want to use external dependencies, so I'd like to use the standard library.
httplib
When I follow this example I don't get a response; I get a timeout instead. I'm out of ideas as to what could cause this.
Code:
import requests
print requests.get('https://python.org')
from httplib import HTTPSConnection
conn = HTTPSConnection('www.python.org')
conn.request('GET', '/index.html')
print conn.getresponse()
Output:
<Response [200]>
Traceback (most recent call last):
File "test.py", line 6, in <module>
conn.request('GET', '/index.html')
File "C:\Python27\lib\httplib.py", line 1069, in request
self._send_request(method, url, body, headers)
File "C:\Python27\lib\httplib.py", line 1109, in _send_request
self.endheaders(body)
File "C:\Python27\lib\httplib.py", line 1065, in endheaders
self._send_output(message_body)
File "C:\Python27\lib\httplib.py", line 892, in _send_output
self.send(msg)
File "C:\Python27\lib\httplib.py", line 854, in send
self.connect()
File "C:\Python27\lib\httplib.py", line 1282, in connect
HTTPConnection.connect(self)
File "C:\Python27\lib\httplib.py", line 831, in connect
self.timeout, self.source_address)
File "C:\Python27\lib\socket.py", line 575, in create_connection
raise err
socket.error: [Errno 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
urllib
This fails for a different (but possibly related) reason. Code:
import urllib
print urllib.urlopen("https://python.org")
Output:
Traceback (most recent call last):
File "test.py", line 10, in <module>
print urllib.urlopen("https://python.org")
File "C:\Python27\lib\urllib.py", line 87, in urlopen
return opener.open(url)
File "C:\Python27\lib\urllib.py", line 215, in open
return getattr(self, name)(url)
File "C:\Python27\lib\urllib.py", line 445, in open_https
h.endheaders(data)
File "C:\Python27\lib\httplib.py", line 1065, in endheaders
self._send_output(message_body)
File "C:\Python27\lib\httplib.py", line 892, in _send_output
self.send(msg)
File "C:\Python27\lib\httplib.py", line 854, in send
self.connect()
File "C:\Python27\lib\httplib.py", line 1290, in connect
server_hostname=server_hostname)
File "C:\Python27\lib\ssl.py", line 369, in wrap_socket
_context=self)
File "C:\Python27\lib\ssl.py", line 599, in __init__
self.do_handshake()
File "C:\Python27\lib\ssl.py", line 828, in do_handshake
self._sslobj.do_handshake()
IOError: [Errno socket error] [SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:727)
What is requests doing that makes it succeed where both of these libraries fail?
requests.get without a timeout parameter means no timeout at all.
httplib.HTTPSConnection accepts a timeout parameter in Python 2.6 and newer, according to the httplib docs. If your problem was caused by a timeout, setting a high enough timeout should help. Try replacing:
conn = HTTPSConnection('www.python.org')
with:
conn = HTTPSConnection('www.python.org', timeout=300)
which allows 300 seconds (5 minutes) for processing.
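For reference, the full snippet with the timeout applied looks like this (same endpoint as above; 300 seconds is an arbitrary, generous value):

from httplib import HTTPSConnection

conn = HTTPSConnection('www.python.org', timeout=300)  # generous 5-minute timeout
conn.request('GET', '/index.html')
resp = conn.getresponse()
print resp.status, resp.reason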

using dhcp but IP is xx.xx.xx.0

I am a newbie in Fedora, and I ran into this awful problem after erasing my /etc/sysconfig/ifcfg-enp0s25 by mistake.
My IP then became an external IP, xxx.xx.6.168.
To get back to an internal IP, I tried to rebuild the file using # vi ifcfg-enp0s25 and inserting:
DEVICE=enp0s25
NAME=enp0s25
BOOTPROTO=dhcp
ONBOOT=yes
TYPE=Ethernet
then ran # service network restart,
and my IP became 10.xx.xx.0.
This IP can reach the internal network successfully, but creating my own socket fails:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "ldtp/__init__.py", line 593, in <module>
_populateNamespace(globals())
File "ldtp/__init__.py", line 247, in _populateNamespace
for method in client._client.system.listMethods():
File "/usr/lib64/python2.7/xmlrpclib.py", line 1243, in __call__
return self.__send(self.__name, args)
File "/usr/lib64/python2.7/xmlrpclib.py", line 1602, in __request
verbose=self.__verbose
File "ldtp/client.py", line 146, in request
self.send_content(h, request_body)
File "/usr/lib64/python2.7/xmlrpclib.py", line 1459, in send_content
connection.endheaders(request_body)
File "/usr/lib64/python2.7/httplib.py", line 1053, in endheaders
self._send_output(message_body)
File "/usr/lib64/python2.7/httplib.py", line 897, in _send_output
self.send(msg)
File "/usr/lib64/python2.7/httplib.py", line 859, in send
self.connect()
File "/usr/lib64/python2.7/httplib.py", line 836, in connect
self.timeout, self.source_address)
File "/usr/lib64/python2.7/socket.py", line 575, in create_connection
raise err
socket.error: [Errno 113] No route to host
So how can I get back to my previous IP?
Any advice is helpful!
I think there should be quite a bit more information in that file; check this page for further details. Hope this helps.
Edit: since you can't visit the page, I copied the example of that file for you; maybe here you will find some of the options you are missing. Maybe you had a static IP:
HWADDR=REPLACE:WITH:YOUR:MAC:ADDRESS:HERE
TYPE=Ethernet
BOOTPROTO=none
IPADDR0=REPLACE.YOUR.IP.ADDRESS
PREFIX0=23
GATEWAY0=REPLACE.YOUR.GATEWAY.IP
DNS0=REPLACE.YOUR.DNS.IP
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=no
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_PRIVACY=no
NAME=enp0s25
ONBOOT=yes
DOMAIN="REPLACE.YOUR.DOMAIN"
NM_CONTROLLED="no"

python ssl eof occurred in violation of protocol, wantwriteerror, zeroreturnerror

I'm running many celery tasks (20,000) using gevent for the pool (also monkey patching all). Each of these tasks hits 3rd-party services like AdWords to pull data.
I keep having tasks fail because of underlying SSL errors. Below are the stack traces from a few of the exceptions (in no particular order; these are failures from separate tasks). I also get WantWriteError and ZeroReturnError occasionally, but the EOF error seems to come up the most.
These errors happen while using different client libraries like googleads (the suds library for SOAP communication) as well as requests and elasticsearch. I'm guessing some of these libraries use urllib3 while others use urllib2, etc.
There has been a lot of info on the EOF issue and on forcing TLSv1, but I can't seem to find a resolution that works.
I'm not sure if I'm running too many requests at once, if something's blocking, or what; any help would be greatly appreciated. I'm pulling my hair out over this one.
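For reference, the commonly suggested TLSv1-forcing recipe looks like the sketch below (the adapter class name is mine; it pins ssl_version on the underlying urllib3 pool). As noted above, no variant of this has produced a working resolution. The stack traces follow.

import ssl

import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.poolmanager import PoolManager


class TLSv1Adapter(HTTPAdapter):
    """Transport adapter that forces TLSv1 for HTTPS connections."""
    def init_poolmanager(self, connections, maxsize, block=False):
        self.poolmanager = PoolManager(num_pools=connections,
                                       maxsize=maxsize,
                                       block=block,
                                       ssl_version=ssl.PROTOCOL_TLSv1)


session = requests.Session()
session.mount('https://', TLSv1Adapter())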
Traceback (most recent call last):
...
File "/srv/reporting/src/reporting/stats/adwords/client.py", line 58, in _awql_report
downloader = self._get_client(client_id).GetReportDownloader(version=self.REPORT_DOWNLOADER_VERSION)
File "/usr/local/lib/python2.7/dist-packages/googleads/adwords.py", line 283, in GetReportDownloader
return ReportDownloader(self, version, server)
File "/usr/local/lib/python2.7/dist-packages/googleads/adwords.py", line 400, in __init__
proxy=proxy_option, cache=self._adwords_client.cache).wsdl.schema
File "/usr/local/lib/python2.7/dist-packages/suds/client.py", line 115, in __init__
self.wsdl = reader.open(url)
File "/usr/local/lib/python2.7/dist-packages/suds/reader.py", line 150, in open
d = self.fn(url, self.options)
File "/usr/local/lib/python2.7/dist-packages/suds/wsdl.py", line 136, in __init__
d = reader.open(url)
File "/usr/local/lib/python2.7/dist-packages/suds/reader.py", line 74, in open
d = self.download(url)
File "/usr/local/lib/python2.7/dist-packages/suds/reader.py", line 92, in download
fp = self.options.transport.open(Request(url))
File "/usr/local/lib/python2.7/dist-packages/suds/transport/https.py", line 62, in open
return HttpTransport.open(self, request)
File "/usr/local/lib/python2.7/dist-packages/suds/transport/http.py", line 67, in open
return self.u2open(u2request)
File "/usr/local/lib/python2.7/dist-packages/suds/transport/http.py", line 132, in u2open
return url.open(u2request, timeout=tm)
File "/usr/lib/python2.7/urllib2.py", line 400, in open
response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 418, in _open
'_open', req)
File "/usr/lib/python2.7/urllib2.py", line 378, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1216, in https_open
return self.do_open(httplib.HTTPSConnection, req)
File "/usr/lib/python2.7/urllib2.py", line 1178, in do_open
raise URLError(err)
URLError: <urlopen error [Errno 8] _ssl.c:504: EOF occurred in violation of protocol>
Traceback (most recent call last):
...
File "/srv/reporting/src/reporting/stats/analytics/client.py", line 57, in get_access_token
response = requests.post('https://accounts.google.com/o/oauth2/token', data)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 88, in post
return request('post', url, data=data, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 456, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 559, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 382, in send
raise SSLError(e, request=request)
SSLError: [Errno bad handshake] (-1, 'Unexpected EOF')
Traceback (most recent call last):
...
self.es.index(index=self.INDICE, doc_type=self.ROOT_CLASS.__name__, body=self.export(obj), id=obj.id)
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/utils.py", line 68, in _wrapped
return func(*args, params=params, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/__init__.py", line 213, in index
_make_path(index, doc_type, id), params=params, body=body)
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/transport.py", line 284, in perform_request
status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/connection/http_requests.py", line 44, in perform_request
response = self.session.request(method, url, data=body, timeout=timeout or self.timeout)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 456, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 559, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 327, in send
timeout=timeout
File "/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py", line 493, in urlopen
body=body, headers=headers)
File "/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py", line 319, in _make_request
httplib_response = conn.getresponse(buffering=True)
File "/usr/lib/python2.7/httplib.py", line 1030, in getresponse
response.begin()
File "/usr/lib/python2.7/httplib.py", line 407, in begin
version, status, reason = self._read_status()
File "/usr/lib/python2.7/httplib.py", line 365, in _read_status
line = self.fp.readline()
File "/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/contrib/pyopenssl.py", line 273, in readline
data = self._sock.recv(self._rbufsize)
File "/usr/local/lib/python2.7/dist-packages/OpenSSL/SSL.py", line 995, in recv
self._raise_ssl_error(self._ssl, result)
File "/usr/local/lib/python2.7/dist-packages/OpenSSL/SSL.py", line 851, in _raise_ssl_error
raise ZeroReturnError()
ZeroReturnError
So let's break this down by each traceback block. The first ends with:
File "/usr/lib/python2.7/urllib2.py", line 1178, in do_open
raise URLError(err)
URLError: <urlopen error [Errno 8] _ssl.c:504: EOF occurred in violation of protocol>
This is coming from urllib2. The fact that this receives an EOF makes me think that the server closed the connection while you were waiting for that "thread" to read from the socket again. You might want to insert more time.sleep(0) calls to yield to gevent.
The second traceback comes from requests:
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 382, in send
raise SSLError(e, request=request)
SSLError: [Errno bad handshake] (-1, 'Unexpected EOF')
The [Errno bad handshake] makes me think this is a problem establishing the connection, which could be caused by an unexpected EOF. Is that caused by using gevent? I'm uncertain.
The final traceback is definitely from requests as well, but it is coming out of PyOpenSSL and isn't being caught by urllib3 or requests.
File "/usr/local/lib/python2.7/dist-packages/OpenSSL/SSL.py", line 851, in _raise_ssl_error
raise ZeroReturnError()
ZeroReturnError
I did some searching and found that, according to the pyOpenSSL docs, ZeroReturnError means that the SSL connection has been closed cleanly. This says to me that the server again closed the connection because you took too long to read anything from the socket.
In short, I think you need to explicitly yield more often just to ensure that these socket problems don't arise. That's just a guess though, so take it with a grain of salt.
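A minimal sketch of that suggestion (the function names here are hypothetical; with gevent's monkey patching active, time.sleep(0) yields to the hub so other greenlets can service their sockets):

import time

def fetch_report(account):
    # Stand-in for any network-bound call a task makes (AdWords, requests, etc.)
    pass

def pull_all_reports(accounts):
    for account in accounts:
        fetch_report(account)
        time.sleep(0)  # cooperative yield between requests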

urllib error of Google App Engine & python.[Errno 11003] getaddrinfo failed

Thanks for your help in advance!
I want to get the contents of a website, so I use urllib.urlopen(url),
with url = 'http://localhost:8080' (a Tomcat page).
If I use Google App Engine Launcher, run the application, and browse http://localhost:8082, it works well.
But if I specify the address and port for the application:
python "D:\Program Files\Google\google_appengine\dev_appserver.py" -p 8082 -a 10.96.72.213 D:\pagedemon\videoareademo
something goes wrong:
Traceback (most recent call last):
File "D:\Program Files\Google\google_appengine\google\appengine\ext\webapp\_webapp25.py", line 701, in __call__
handler.get(*groups)
File "D:\pagedemon\videoareademo\home.py", line 76, in get
wp = urllib.urlopen(url)
File "C:\Python27\lib\urllib.py", line 84, in urlopen
return opener.open(url)
File "C:\Python27\lib\urllib.py", line 205, in open
return getattr(self, name)(url)
File "C:\Python27\lib\urllib.py", line 343, in open_http
errcode, errmsg, headers = h.getreply()
File "D:\Program Files\Google\google_appengine\google\appengine\dist\httplib.py", line 334, in getreply
response = self._conn.getresponse()
File "D:\Program Files\Google\google_appengine\google\appengine\dist\httplib.py", line 222, in getresponse
deadline=self.timeout)
File "D:\Program Files\Google\google_appengine\google\appengine\api\urlfetch.py", line 263, in fetch
return rpc.get_result()
File "D:\Program Files\Google\google_appengine\google\appengine\api\apiproxy_stub_map.py", line 592, in get_result
return self.__get_result_hook(self)
File "D:\Program Files\Google\google_appengine\google\appengine\api\urlfetch.py", line 365, in _get_fetch_result
raise DownloadError(str(err))
DownloadError: ApplicationError: 2 [Errno 11003] getaddrinfo failed
The strangest thing is that when I change the URL from "http://localhost:8080" to "http://127.0.0.1:8080", it works well!
I googled a lot, but I didn't find any good solutions. Hoping for some help!
Also, I didn't configure any proxy. IE works well.
Your system doesn't necessarily know that localhost should resolve to 127.0.0.1. You might need to put an entry into your hosts file. On Windows, it's located at C:\Windows\System32\drivers\etc\hosts
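For reference, the standard entry is a single line mapping the name to the loopback address:

127.0.0.1       localhost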
