I wrote this Python code:
import socks
import socket
socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, "64.83.219.7", 58279)
socket.socket = socks.socksocket
socket.setdefaulttimeout(19)
import urllib2
print urllib2.urlopen('http://www.google.com').read()
but when I execute it, I get this error:
urllib2.URLError: <urlopen error timed out>
What am I doing wrong?
Something in your script timed out, most likely the connection to Google due to an incorrect proxy setup. I take it your goal is to fetch the contents of http://www.google.com through a proxy?
I'm not familiar with this way of setting a proxy via the socket/socks modules. You may want to take a look at the following sections of the Python documentation:
http://docs.python.org/library/urllib2.html?highlight=urllib2#examples (code snippet 5 and the text above)
http://docs.python.org/library/urllib2.html?highlight=urllib2#urllib2.Request.set_proxy
http://docs.python.org/library/urllib2.html?highlight=urllib2#proxyhandler-objects
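For reference, the ProxyHandler approach those sections describe looks roughly like this. It is sketched with Python 3's urllib.request (where urllib2's classes now live); the proxy address is a placeholder, and note that ProxyHandler talks to HTTP proxies, not SOCKS, so it is not a drop-in replacement for the socks-module approach above:

```python
import urllib.request

# Build an opener that routes HTTP requests through an HTTP proxy.
# The address below is a placeholder for your own proxy.
proxy = urllib.request.ProxyHandler({'http': 'http://127.0.0.1:8080'})
opener = urllib.request.build_opener(proxy)

# install_opener makes this the default for plain urlopen() calls too.
urllib.request.install_opener(opener)
```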
Related
I am trying to connect to Gmail's SMTP server using sockets in Python 3. With this code (omitting the response-receiving parts):
import ssl
import base64
from socket import *
cs = socket(AF_INET, SOCK_STREAM)
cs.connect(("smtp.gmail.com", 587))
cs.send(b'EHLO smtp.google.com\r\n')
cs.send(b'STARTTLS\r\n')
ws = ssl.wrap_socket(cs, ssl_version=ssl.PROTOCOL_TLSv1, ciphers="ADH-AES256-SHA")
But I'm getting the following error from do_handshake on the last line:
ssl.SSLError: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:645)
I have also tried the following values in the last line:
ssl_version=ssl.PROTOCOL_SSLv23
ssl.PROTOCOL_TLSv1
ssl.OP_NO_SSLv3
ssl.OP_NO_TLSv1
ssl.PROTOCOL_SSLv2
ssl.PROTOCOL_SSLv23
ssl.PROTOCOL_SSLv3
ssl.PROTOCOL_TLSv1
Am I doing something wrong? Thanks.
The problem is that you never receive on your socket. You may be able to get away with that on some servers (I have my doubts), but Google's servers don't like it. I really don't think it would work with any server: you need a clean receive buffer so the TLS negotiation can take place without prior junk still in the pipeline.
Your code works for me with the following changes:
import ssl
import base64
from socket import *
cs = socket(AF_INET, SOCK_STREAM)
cs.connect(("smtp.gmail.com", 587))
print(cs.recv(4096))
cs.send(b'EHLO smtp.google.com\r\n')
print(cs.recv(4096))
cs.send(b'STARTTLS\r\n')
print(cs.recv(4096))
ws = ssl.wrap_socket(cs, ssl_version=ssl.PROTOCOL_TLSv1)
ws.send(b'MAIL FROM: abc@def.com\r\n')
print(ws.recv(4096))
But as has already been mentioned in the comments, unless this is just for fun and/or a learning experience, you should be using python's smtplib.
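For completeness, a minimal smtplib sketch of the same STARTTLS flow; the host, port, credentials, and addresses are all placeholders here, and smtplib reads the server's responses for you, which is exactly what the hand-rolled version forgot to do:

```python
import smtplib

def send_via_starttls(host, port, user, password, from_addr, to_addr, body):
    server = smtplib.SMTP(host, port)  # reads the 220 greeting for you
    server.ehlo()
    server.starttls()                  # issues STARTTLS and wraps the socket in TLS
    server.ehlo()                      # re-identify over the encrypted channel
    server.login(user, password)
    server.sendmail(from_addr, to_addr, body)
    server.quit()
```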
I wrote the following Python script, which uses urllib2:
import urllib2

def http_response(url):
    try:
        connection = urllib2.urlopen(url)
        code = connection.getcode()
        connection.close()
        return code
    except urllib2.HTTPError, e:
        return e.getcode()
It works well, but if I want to run it through Tor using proxychains, I get a Connection Refused error. My question is: is there a way to do the same thing so that it also works with proxychains, without making the Python script connect to the SOCKS proxy itself?
I am writing a crawler in Python that will run through Tor. I have Tor working and used code from this YouTube tutorial on how to route my Python requests to go through the Tor SOCKS proxy at 127.0.0.1:9050.
What I can't figure out is how to toggle this on/off within my script. Some requests I want to go through Tor and some I don't. Basically, I can't figure out the correct "close" or "shutdown" method in the socket objects I am using because I don't understand them.
Here's what happens now:
import socket
import socks
import requests

def connect_to_socks():
    socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, '127.0.0.1', 9050, True)
    socket.socket = socks.socksocket

r = requests.get('http://wtfismyip.com/text')
print r.text  # prints my ordinary IP address

connect_to_socks()
r = requests.get('http://wtfismyip.com/text')
print r.text  # prints my Tor IP address
How do I turn off the socket routing to the SOCKS proxy so that it goes through my ordinary internet connection?
I'm hoping to use requests instead of urllib2 as it seems a lot easier but if I have to get into the guts of urllib2 or even httplib I will. But would prefer not to.
Figured it out from this good YouTube tutorial.
Just need to call socks.setdefaultproxy() (with no arguments) and it brings me back.
For Python 3 you can restore the default socket by using this:
socks.setdefaultproxy(None)
socket.socket = socks.socksocket
I am writing some code that uses poplib and imaplib to collect emails through a proxy server.
I use the following to set up a proxy connection:-
import socks
import socket
socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS4,proxy_ip,port,True)
socket.socket = socks.socksocket
Which I got from the stackoverflow post:-
http://stackoverflow.com/questions/3386724/python-how-can-i-fetch-emails-via-pop-or-imap-through-a-proxy
Then I make my connection with the email server:-
server = poplib.POP3(self.host, self.port)
server.user(self.username)
server.pass_(self.password)
I am testing my code in a unittest and have encountered a problem that I believe relates to my connection with the proxy not closing down properly.
An example is:-
I have set up the proxy connection and am trying to establish a connection with the email server. As part of the unittest I intentionally use an incorrect email server password.
The poplib library throws an exception that it can't connect. I catch the exception in the unittest, then move on to the next unittest, trusting that the poplib library will properly close my previous connection.
My understanding is that this is not a good thing and that I should be ensuring the email and proxy server connections are properly closed.
I know how to close the pop3 connection:-
server.quit()
But do not know how to close the connection with the proxy server or if I have to do so.
Could someone please help me with this question or with my understanding if that's where the problem lies :)
No special action is required. When you close the POP connection, the proxy connection will close automatically, since it's only needed while you are connected to something through the proxy.
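If you want to be defensive about it anyway (for instance in unit tests), a try/finally guarantees the POP3 connection, and with it the proxy leg, is torn down even when authentication fails. The host and credentials here are placeholders:

```python
import poplib

def check_mailbox(host, port, username, password):
    server = poplib.POP3(host, port)
    try:
        server.user(username)
        server.pass_(password)   # raises poplib.error_proto on a bad password
        return server.stat()     # (message count, mailbox size in bytes)
    finally:
        server.quit()            # closes the TCP socket, and so the proxy connection
```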
I'm using Python 2.7 and I'd like to get the contents of a webpage that requires SSLv3. Currently, when I try to access the page I get the error SSL23_GET_SERVER_HELLO, and some searching on the web led me to the following solution, which fixes things in Python 3:
urllib.request.install_opener(urllib.request.build_opener(urllib.request.HTTPSHandler(context=ssl.SSLContext(ssl.PROTOCOL_TLSv1))))
How can I get the same effect in Python 2.7? I can't seem to find the equivalent of the context argument for the HTTPSHandler class.
I realize this response is a few years too late, but I also ran into the same problem, and didn't want to depend on libcurl being installed on a machine where I ran this. Hopefully, this will be useful to those who find this post in the future.
The problem is that httplib.HTTPSConnection.connect doesn't have a way to specify SSL context or version. You can overwrite this function before you hit the meat of your script for a quick solution.
An important consideration is that this workaround, as discussed above, will not verify the validity of the server's certificate.
import httplib
import socket
import ssl
import urllib2

def connect(self):
    "Connect to a host on a given (SSL) port."
    sock = socket.create_connection((self.host, self.port),
                                    self.timeout, self.source_address)
    if self._tunnel_host:
        self.sock = sock
        self._tunnel()
    self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file,
                                ssl_version=ssl.PROTOCOL_TLSv1)

httplib.HTTPSConnection.connect = connect

opener = urllib2.build_opener()
f = opener.open('https://www.google.com/')
Note: this alternate connect() function was copied from httplib.py and modified only to specify the ssl_version in the wrap_socket() call.
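As a footnote: from Python 2.7.9 onward (and in Python 3), HTTPSHandler accepts a context argument directly, so the monkey-patch is no longer needed there. A Python 3 sketch that, like the workaround above, skips certificate validation:

```python
import ssl
import urllib.request

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False       # mirror the workaround: no hostname check...
ctx.verify_mode = ssl.CERT_NONE  # ...and no certificate validation
opener = urllib.request.build_opener(urllib.request.HTTPSHandler(context=ctx))
# opener.open('https://www.google.com/') would now use this context
```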
SSL should be handled automatically, as long as you have the SSL libraries installed on your server (i.e. you shouldn't have to specifically add it as a handler)
http://docs.python.org/library/urllib2.html#urllib2.build_opener
If the Python installation has SSL support (i.e., if the ssl module can be imported), HTTPSHandler will also be added.
Also, note that urllib and urllib2 have been merged in Python 3, so the approach there is a little different
Since I was unable to do this using urllib2, I eventually gave in and moved to using the libcurl bindings, as @Bruno had suggested in the comments to pastylegs' answer.