I'd like to write a simple command-line proxy in Python to sit between a Telnet/SSH connection and a local serial interface. The application should simply bridge I/O between the two, but filter out certain disallowed strings (matched by regular expressions). (This is for a router/switch lab in which the user is given remote serial access to the boxes.)
Basically, a client establishes a Telnet or SSH connection to the daemon. The daemon passes the client's input out, for example, /dev/ttyS0, and passes input from ttyS0 back out to the client. However, I want to be able to blacklist certain strings coming from the client. For instance, the command 'delete foo' should not be allowed.
I'm not sure how best to approach this. Communication must be asynchronous; I can't simply wait for a carriage return to allow the buffer to be fed out the serial interface. Matching regular expressions against the stream seems tricky too, as all of the following must be intercepted:
delete foo(enter)
del foo(enter)
el foo(ctrl+a)d(enter)
dl(left)e(right) foo(enter)
...and so forth. The only solid delimiter is the CR/LF.
I'm hoping someone can point me in the right direction. I've been looking through Python modules but so far haven't come up with anything.
Python is not my primary language, so I'll leave that part of the answer for others. I do a lot of security work, though, and I would urge a "white list" approach, not a "black list" approach. In other words, pick a set of safe commands and forbid all others. This is much, much easier than trying to think of all the malicious possibilities and guarding against them all.
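As a rough illustration of what such a white list might look like (the command patterns here are invented for the example):

    import re

    # Hypothetical whitelist: only these command shapes are allowed through.
    ALLOWED = [
        re.compile(r"^show\s+\S+$"),
        re.compile(r"^ping\s+\S+$"),
    ]

    def is_allowed(line):
        """Return True only if the completed command matches a whitelisted pattern."""
        cmd = line.strip()
        return any(pattern.match(cmd) for pattern in ALLOWED)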
As all the examples you show finish with (enter), why is it that...:
Communication must be asynchronous; I can't simply wait for a carriage return to allow the buffer to be fed out the serial interface
If you can collect incoming data until the "enter", and apply the "edit" requests (such as the ctrl-a, left, right in your examples) to the data you're collecting, then you're left with the "completed command about to be sent" in memory, where it can be matched and either rejected or sent on.
If you must do it character by character, .read(1) on the (unbuffered) input will allow you to, but the vetting becomes potentially more problematic; again you can keep an in-memory image of the edited command that you've sent so far (as you apply the edit requests even while sending them), but what happens when the "enter" arrives and your vetting shows you that the command thus composed must NOT be allowed -- can you e.g. send a number of "delete"s to the device to wipe away said command? Or is there a single "toss the complete line" edit request that would serve?
If you must send every character as you receive it (not allowed to accumulate them until decision point) AND there is no way to delete/erase characters already sent, then the task appears to be impossible (though I don't understand the "can't wait for the enter" condition AT ALL, so maybe there's hope).
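Here's a minimal sketch of that accumulate-and-vet idea (the class name and blacklist pattern are invented for illustration), assuming you can hold the line back until Enter arrives; real arrow keys come in as multi-byte escape sequences and are omitted here:

    import re

    BLACKLIST = [re.compile(r"^\s*del\w*\s+foo\b")]   # illustrative pattern only

    class LineVetter(object):
        """Accumulate the client's keystrokes into an edited line, vetted on Enter."""

        def __init__(self):
            self.buf = []
            self.cursor = 0

        def feed(self, ch):
            """Apply one keystroke; return the completed line on CR/LF, else None."""
            if ch in ("\r", "\n"):
                line = "".join(self.buf)
                self.buf, self.cursor = [], 0
                return line
            if ch == "\x7f":                 # backspace/delete
                if self.cursor:
                    self.cursor -= 1
                    del self.buf[self.cursor]
            elif ch == "\x01":               # ctrl-a: jump to start of line
                self.cursor = 0
            else:                            # ordinary character
                self.buf.insert(self.cursor, ch)
                self.cursor += 1
            return None

    def allowed(line):
        return not any(p.search(line) for p in BLACKLIST)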
After thinking about this for a while, it doesn't seem like there's any practical, reliable method to filter on client input. I'm going to attempt this from another angle: if I can identify persistent patterns in warning messages coming from the serial devices (e.g. confirmation prompts) I may be able to abort reliably. Thanks anyway for the input!
Fabric does something similar.
For an SSH API you should check out paramiko.
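In case it helps, a minimal client-side sketch just to show the flavor of the paramiko API (host and credentials are placeholders; for the lab daemon itself you'd be looking at paramiko's server-side classes instead):

    import paramiko

    # Placeholders for illustration only.
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("192.0.2.10", username="lab", password="secret")

    stdin, stdout, stderr = client.exec_command("show version")
    print(stdout.read())
    client.close()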
I am trying to simulate a communication protocol where I am following a pattern, so I constantly loop through looking for the same set of characters in order to reply with information. I'm using an RS-232 adapter, and the protocol I am simulating is asynchronous and half-duplex; the RX/TX lines are tied together by design, which causes a sort of echo when reading after writing.
That said, I need to be able to clear the input buffer after every write I send out in order to avoid reading what I just wrote. But whenever I use reset_input_buffer(), it does not clear the last message I sent out. I have tried to fix this using a couple of methods, such as using reset_output_buffer() together with reset_input_buffer(), calling reset_input_buffer() twice, and using flush(). None of these makes any difference; the only other method that works to clear the buffer is closing and immediately reopening the port, but this causes a delay that throws off the timing, which is critical at certain points.
I'm open to any suggestions, please help!
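For reference, a rough sketch of the pattern described above (port name, baud rate, and frame are placeholders); the race to watch for is that if the echoed bytes only arrive after reset_input_buffer() runs, they survive in the input buffer:

    import serial

    # Placeholders: adjust port/baud to the real adapter. RX/TX are tied together,
    # so every write echoes back into the input buffer.
    ser = serial.Serial("COM3", 9600, timeout=0.1)

    frame = b"\x02REQ\x03"           # example frame for illustration only
    ser.write(frame)
    ser.flush()                      # wait for the bytes to leave the OS buffer
    ser.reset_input_buffer()         # meant to drop the echoed copy of `frame`

    reply = ser.read(64)             # if the echo arrived after the reset, it shows up here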
I was wondering if there is a way I can tell Python to wait until it gets a response from a server before continuing to run.
I am writing a turn-based game. I make the first move and it sends the move to the server, and then the server sends it to the other computer. The problem comes here. As it is no longer my turn, I want my game to wait until it gets a response from the server (wait until the other player makes a move). But my line:
data=self.sock.recv(1024)
hangs because (I think) it's not getting anything immediately. So I want to know how I can make it wait for something to happen and then keep going.
Thanks in advance.
The socket programming howto is relevant to this question, specifically this part:
Now we come to the major stumbling block of sockets - send and recv operate on the
network buffers. They do not necessarily handle all the bytes you hand them (or expect
from them), because their major focus is handling the network buffers. In general, they
return when the associated network buffers have been filled (send) or emptied (recv).
They then tell you how many bytes they handled. It is your responsibility to call them
again until your message has been completely dealt with.
...
One complication to be aware of: if your conversational protocol allows multiple
messages to be sent back to back (without some kind of reply), and you pass recv an
arbitrary chunk size, you may end up reading the start of a following message. You’ll
need to put that aside and hold onto it, until it's needed.
Prefixing the message with its length (say, as 5 numeric characters) gets more complex,
because (believe it or not), you may not get all 5 characters in one recv. In playing
around, you’ll get away with it; but in high network loads, your code will very quickly
break unless you use two recv loops - the first to determine the length, the second to
get the data part of the message. Nasty. This is also when you’ll discover that send
does not always manage to get rid of everything in one pass. And despite having read
this, you will eventually get bit by it!
The main takeaways from this are:
you'll need to establish either a FIXED message size, OR you'll need to send the size of the message at the beginning of the message
when calling socket.recv, pass the number of bytes you actually want (and I'm guessing you don't actually want 1024 bytes). Then use LOOPS, because you are not guaranteed to get all you want in a single call.
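A rough sketch of that length-prefix idea (the function names are mine; the 5-character header is the example size the HOWTO uses):

    def recv_exact(sock, count):
        """Loop on recv until exactly `count` bytes have arrived."""
        chunks = []
        while count:
            chunk = sock.recv(count)
            if not chunk:
                raise EOFError("socket closed before the full message arrived")
            chunks.append(chunk)
            count -= len(chunk)
        return b"".join(chunks)

    def recv_message(sock):
        """Read the 5-character length header first, then the message body."""
        length = int(recv_exact(sock, 5).decode())
        return recv_exact(sock, length)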
That line, sock.recv(1024), blocks until at least some data is available (returning up to 1024 bytes), the connection is closed, or the OS detects a socket error; it does not wait for exactly 1024 bytes. You still need some way to know the full message size -- this is why HTTP messages include a Content-Length header.
You can set a timeout with socket.settimeout to give up on reading if no data arrives before the timeout expires.
You can also explore Python's non-blocking sockets using setblocking(0).
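For example, a small sketch of the settimeout approach (the 30-second value is arbitrary):

    import socket

    def wait_for_move(sock, timeout_seconds=30.0):
        """Block for up to timeout_seconds waiting for the opponent's move."""
        sock.settimeout(timeout_seconds)
        try:
            return sock.recv(1024)
        except socket.timeout:
            return None          # nothing arrived in time; the caller decides what to do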
I have a script which can be run by any user who is connected to a server. This script writes to a single log file, but there is no restriction on how many people can use it at one time. So multiple people could attempt to write to the log and data might be lost. Is there a way for one instance of the code to know if other instances of that code are running? Moreover, is it possible to gather this information dynamically? (i.e., not allow data saving for the second user until the first user has completed his/her task)
I know I could do this with a text file: I could write the user name to the file when they start, then delete it when they finish, but this could lead to errors if either step is missed, such as on an unexpected script termination. So what other reliable ways are there?
Some information on the system: Python 2.7 is installed on a Windows 7 64-bit server via Anaconda. All connected machines are also Windows 7 64-bit. Thanks in advance
Here is an implementation:
http://www.evanfosmark.com/2009/01/cross-platform-file-locking-support-in-python/
If you are using a lock, be aware that stale locks (left behind by hung or crashed processes) can be a pain. Have a process that periodically searches for locks that were created longer than X minutes ago and frees them.
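A rough sketch of a lock with that stale-lock cleanup built in (the lock path and the 10-minute threshold are placeholders):

    import os
    import time

    LOCK_PATH = r"\\server\share\logs\mylog.lock"     # hypothetical path
    STALE_AFTER = 10 * 60                              # seconds; tune to your jobs

    def acquire_lock():
        """Try to create the lock file atomically; clear it first if it looks stale."""
        try:
            if time.time() - os.path.getmtime(LOCK_PATH) > STALE_AFTER:
                os.remove(LOCK_PATH)                   # owner probably crashed
        except OSError:
            pass                                       # no lock file yet, or already gone
        try:
            fd = os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.write(fd, os.environ.get("USERNAME", "unknown").encode())
            os.close(fd)
            return True
        except OSError:
            return False                               # someone else holds the lock

    def release_lock():
        try:
            os.remove(LOCK_PATH)
        except OSError:
            pass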
It just isn't clean to allow multiple users to write to a single log and hope things go OK.
Why don't you write a daemon that handles logs? Other processes connect to a "logging port", and in the simplest case they only succeed if no one else has connected.
You can just modify the echo server example given here (keep a timeout in the server for all connections):
http://docs.python.org/release/2.5.2/lib/socket-example.html
If you want to know exactly who logged what, and make sure no one unauthorized gets in, you can use Unix sockets to restrict it to only certain UIDs/GIDs, etc.
Here is a very good example.
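A bare-bones sketch of such a daemon, adapted loosely from the echo server example linked above (host, port, and log path are placeholders; it serves one writer at a time):

    import socket

    HOST, PORT = "0.0.0.0", 50007
    LOG_PATH = "activity.log"

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((HOST, PORT))
    server.listen(1)                        # only one writer is served at a time

    while True:
        conn, addr = server.accept()        # the next writer waits here until we re-accept
        conn.settimeout(60)                 # drop clients that hang
        with open(LOG_PATH, "a") as log:
            try:
                while True:
                    data = conn.recv(1024)
                    if not data:
                        break
                    log.write(data.decode("utf-8", "replace"))
            except socket.timeout:
                pass
        conn.close()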
NTEventLogHandler is probably the easiest way for logging to a given Windows machine/server, but it might make more sense to use SyslogHandler if you have a syslog sink on a Unix server.
The catch I can think of with SyslogHandler is that you'll likely need to poke holes through the Windows firewall in order to send packets over the syslog protocol, i.e., 514/TCP ("reliable syslog") and 514/UDP (traditional or "unreliable syslog").
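For reference, a minimal SysLogHandler setup might look something like this (the syslog host is a placeholder; 514/UDP is the handler's default transport):

    import logging
    import logging.handlers

    # Hypothetical syslog sink; the address must match your server and firewall rules.
    handler = logging.handlers.SysLogHandler(address=("syslog.example.com", 514))
    logger = logging.getLogger("labscript")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)

    logger.info("user %s started a run", "alice")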
I have a basic command-line chat client and server in Python, but this would be applicable to probably any language. I ran into a very obvious problem, and I'm not sure if there would be any way around it (aside from using a GUI, which would quickly solve the problem). When the server sends a message to the client, causing the client to print() the message, it's inserted in the exact same place where the person would be typing their own message, causing their input to be split by the incoming message. For example (written as # comments to avoid weird syntax highlighting):
# Client1: Knock-knock!
# Client2: Who's there?
# Client1: Interrupting cow!
# Client2: Inter
# Client1: MOOOOOOO
# Client2: rupting cow who?
Here, Client2 hasn't hit Enter since typing "Who's there?".
So obviously, there are all sorts of workarounds like panels in a GUI, but I'm curious to know if there's any way to implement this strictly in the native terminal/command prompt. I couldn't find anything remotely like this while searching the internet for a solution! Thanks!
I'd use something like https://pypi.python.org/pypi/blessings/ which lets you set up a terminal with a cursor.
You can move the cursor "up" when you want to print output from the other connection and then move it back down when you want to get input.
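A tiny sketch of that idea with blessings (untested; it assumes the input prompt lives on the bottom line of the terminal):

    from blessings import Terminal

    term = Terminal()

    def show_incoming(message):
        """Print a peer's message one line above where the user is typing."""
        with term.location(0, term.height - 2):   # move up, print, then snap back
            print(term.clear_eol + message)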
If you want to get crazy, you can do all that magic by yourself with terminal control commands (on Windows you'll need colorama), which will let you do things like:
print("\033[6;3HHello")
which moves the cursor to row 6, column 3. This requires an ANSI terminal.
I want to pull connection tables from a firewall. In some cases it can be more than 200k lines of
"TCP outside 46.33.77.20:53415 inside 10.16.25.63:80, idle 0:00:04, bytes 3230, flags UIOB"
and the like.
I've tried implementations with both pexpect and telnetlib in order to grab these tables. Unfortunately, both time out and/or die with anything greater than 40k.
pexpect implementation:
connect.send("sho conn\n")
connect.expect("<--- More --->", timeout=360)
tmp_txt = connect.before
telnetlib implementation:
telnet.write("sho conn\n")
tmp_text = telnet.read_until("<--- More --->")
Is there a more robust method of grabbing this information? I control the number of lines given at a time with a pager value (prior to running this). Also - I'm monitoring the cpu on the firewall, so I know it's displaying the connections. Either there are too many or it's too fast for pexpect or telnetlib to keep up.
Thanks.
Your approach looks fine to me. I would also page the output (to keep the firewall CPU low) and then capture the output a screenful at a time.
If you are running into timeout errors, why not modify your expect call into a loop that expects each line or specific lines of output (I presume it has a regular format) and only sends a space when it gets the "more" line for the next screen? I've used this pattern a lot to deal with long streams of output that may pause at different places.
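A rough sketch of that expect loop (the spawn command and prompt regex are guesses; adjust them to your firewall's actual prompt):

    import pexpect

    # Placeholders: adjust the spawn command and prompt pattern to your device.
    connect = pexpect.spawn("telnet firewall.example.com")
    connect.send("sho conn\n")

    pages = []
    while True:
        index = connect.expect(["<--- More --->", r"\S+#\s*$"], timeout=360)
        pages.append(connect.before)
        if index == 0:
            connect.send(" ")      # ask the pager for the next screenful
        else:
            break                  # back at the device prompt: the table is complete

    table = b"".join(pages)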
You mention that the Python process dies; we can't help you there unless you give more detail about what exception is being raised.