Proxychains and urllib2 - Python

I wrote the following Python script which uses urllib2:
import urllib2

def http_response(url):
    try:
        connection = urllib2.urlopen(url)
        code = connection.getcode()
        connection.close()
        return code
    except urllib2.HTTPError, e:
        return e.getcode()
It works well, but when I run it through Tor using Proxychains, I get a Connection Refused error. My question is: is there a way to do the same thing so that it also works with Proxychains, without making the Python script connect to the SOCKS proxy itself?

Related

Is there any way to test the weblogic admin connecting URL (t3/t3s) before connecting to it

I'm using the following command to connect to WebLogic using WLST:
java weblogic.wlst core.py
Inside core.py I'm calling the following command to connect to the WebLogic admin server, but sometimes the service URL becomes unresponsive and my script hangs occasionally because of this. Is there any way to give a timeout to this connect() method, or any other way to implement timeout functionality? Appreciate it if someone can shed some light on this. Thanks.
connect(username,password,t3://:)
In earlier WebLogic versions they provided the following functionality (to ping), but it was removed after 12.2.*:
java weblogic.Admin -url t3://localhost:7001 -username weblogic -password weblogic ping 3 100
This is a very common situation; you can use Python's socket module to check whether the admin port is open, with a function like the following.
import socket

def is_port_open(host, port, timeout=5):
    # connect_ex returns 0 on success instead of raising an exception
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    result = sock.connect_ex((host, port))
    sock.close()
    return result == 0

AdminIP = '192.168.33.10'
if is_port_open(AdminIP, 7001):
    print "AdminPort is open you can connect"
else:
    print "Admin Port is not yet open"
Add your logic accordingly, HTH!
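For example, core.py could poll the port before calling connect(); a hedged sketch that reuses the is_port_open() helper above (the retry count and sleep interval are arbitrary choices):

import time

# Poll the admin port a few times before attempting connect();
# 5 tries at 10-second intervals are arbitrary values, tune to taste.
for attempt in range(5):
    if is_port_open(AdminIP, 7001):
        break
    time.sleep(10)
else:
    raise RuntimeError('Admin port never opened')
# safe to call connect() here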

Close MySQL connection upon Python exception within Scrapy Framework?

I am using a Scrapy 2.4.x pipeline.py to write data sets to a remote MySQL 5.7.32 server. In some cases errors happen and the script throws an exception, which is OK.
for selector in selectors:
    unit = unitItem()
    try:
        unit['test'] = selector.xpath('form/text()').extract_first()
        if not unit['test']:
            self.logger.error('Extraction failed on %s', response.url)
            continue
        else:
            unit['test'] = str(unit['test'].strip())
    except Exception as e:
        self.logger.error('Exception: %s', e)
        continue
    # more code
    yield unit
There are two problems:
RAM usage climbs constantly. Do I somehow need to destroy the item?
There are many MySQL aborted-connection errors. I believe this is because the MySQL connection is not closed properly.
MySQL error log:
Aborted connection 63182018 to db: 'mydb' user: 'test' host: 'myhost' (Got an error reading communication packets)
The connection gets opened at the very beginning of process_item and closed at the very end of the method.
Would it help to close the connection upon an exception? If so, is there a recommended routine?
I believe it would be more effective to open the SQL connection in spider_opened() and close it in spider_closed(), as in the sketch below.
The only thing to keep in mind is that the spider_closed() signal is only fired when the spider is closed gracefully.
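As a minimal sketch of that idea, assuming the MySQLdb driver, placeholder credentials, and a hypothetical units table (Scrapy pipelines can implement open_spider()/close_spider() instead of hooking the signals directly):

import MySQLdb

class MySQLPipeline(object):
    def open_spider(self, spider):
        # Called once when the spider starts: open a single connection.
        self.conn = MySQLdb.connect(host='myhost', user='test',
                                    passwd='secret', db='mydb')

    def close_spider(self, spider):
        # Called once when the spider finishes: close the connection.
        self.conn.close()

    def process_item(self, item, spider):
        cursor = self.conn.cursor()
        try:
            cursor.execute("INSERT INTO units (test) VALUES (%s)",
                           (item['test'],))
            self.conn.commit()
        except MySQLdb.Error:
            self.conn.rollback()
            raise
        finally:
            cursor.close()
        return item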

Ping server in Python without root permissions

I have a Python script that only works if my server is available, so before the script begins I want to ping my server, or rather check the availability of the server.
There are already some related SO questions, e.g. the pyping module:
import pyping

response = pyping.ping('Your IP')
if response.ret_code == 0:
    print("reachable")
else:
    print("unreachable")
or a ping process in Python:
import os

response = os.system("ping -c 1 " + hostname)
These answers work well, but only as the ROOT user! When I use these solutions as a common user, I get the following error message:
ping: Lacking privilege for raw socket.
I need a solution that works as a common user, because I run this script in a Jenkins job and do not have the option to run it as root.
Would trying to perform an HTTP HEAD request, assuming the machine has an HTTP server running, suffice?
from http.client import HTTPConnection  # python3

try:
    conn = HTTPConnection(host, port, timeout)
    conn.request("HEAD", "/")
    conn.close()
    # server must be up
except Exception:
    # server is not up, do other stuff
    pass
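If no HTTP server is guaranteed to be running, a plain TCP connection attempt also works without root privileges, since it uses no raw socket; a minimal sketch (host and port are placeholders for your server):

import socket

def reachable(host, port, timeout=5):
    # An ordinary TCP connect needs no raw socket, hence no root.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        return sock.connect_ex((host, port)) == 0
    finally:
        sock.close()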

Close lingering connection

I'm using python-requests for a client tool. It makes repeated requests to servers at an interval. However, if the server disconnects, the client fails with a socket error on its next request. It appears the client is keeping the connection open from its side rather than reconnecting. These requests can be hours apart, so the server is likely to have dropped the connection by then.
Is there a way to override keep alive and force it to close? Is there something similar to:
with requests.get(url) as r:
    doStuff(r)
# r is cleaned up, the socket is closed.
that would force the connection to clean up after I'm done?
As written, that doesn't work, because requests.Response doesn't have an __exit__ method.
How about this? I haven't tested it; it's based only on the API doc:
s = requests.Session()
r = s.get(url)
doStuff(r)
s.close()
Or, to make sure that the close is always called, even if there's an exception, here's how to emulate the with-statement using a try/finally:
s = requests.Session()
try:
    r = s.get(url)
    doStuff(r)
finally:
    s.close()
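Recent versions of requests also make Session itself a context manager, so, assuming a new enough version, the try/finally collapses to a with block:

import requests

with requests.Session() as s:
    r = s.get(url)
    doStuff(r)
# the session and its pooled connections are closed here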

python socket problem

I wrote this Python code:
import socks
import socket

# Route every new socket through the SOCKS5 proxy, then import urllib2
# afterwards so it picks up the patched socket class.
socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, "64.83.219.7", 58279)
socket.socket = socks.socksocket
socket.setdefaulttimeout(19)

import urllib2
print urllib2.urlopen('http://www.google.com').read()
but when I execute it, I get this error:
urllib2.URLError: <urlopen error timed out>
What am I doing wrong?
Something timed out in your script; I guess the connection to Google, because of a wrong proxy setup. I take it your goal is to fetch the contents of http://www.google.com through a proxy?
I don't know about this method of setting the proxy via the socket/socks modules. Maybe you want to take a look at the following chapters in the Python documentation:
http://docs.python.org/library/urllib2.html?highlight=urllib2#examples (code snippet 5 and the text above)
http://docs.python.org/library/urllib2.html?highlight=urllib2#urllib2.Request.set_proxy
http://docs.python.org/library/urllib2.html?highlight=urllib2#proxyhandler-objects
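For reference, the documented urllib2 route is a ProxyHandler; a minimal sketch (note that urllib2's built-in proxy support speaks HTTP proxies, not SOCKS, and the proxy address below is a placeholder):

import urllib2

# Build an opener that routes requests through an HTTP proxy.
proxy = urllib2.ProxyHandler({'http': 'http://127.0.0.1:8118'})
opener = urllib2.build_opener(proxy)
print opener.open('http://www.google.com').read()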
