I want to pull connection tables from a firewall. In some cases it can be more than 200k lines of
"TCP outside 46.33.77.20:53415 inside 10.16.25.63:80, idle 0:00:04, bytes 3230, flags UIOB"
and the like.
I've tried to implement both pexpect and telnetlib in order to grab these tables. Unfortunately, both time out and/or die with anything greater than 40k lines.
pexpect implementation:
connect.send("sho conn\n")
connect.expect("<--- More --->", timeout=360)
tmp_txt = connect.before
telnetlib implementation:
telnet.write("sho conn\n")
tmp_text = telnet.read_until("<--- More --->")
Is there a more robust method of grabbing this information? I control the number of lines given at a time with a pager value (prior to running this). Also - I'm monitoring the cpu on the firewall, so I know it's displaying the connections. Either there are too many or it's too fast for pexpect or telnetlib to keep up.
Thanks.
Your approach looks fine to me. I would also page the output (to keep the firewall CPU low) and then capture it a screenful at a time.
If you are running into timeout errors, why not make your expect call a loop that expects each line (or specific lines) of output (I presume it has a regular format) and only sends a space when it gets the "More" line, to request the next screen? I've used this pattern a lot to deal with long streams of output that may pause at different places.
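For example, a rough sketch of that loop with pexpect (the prompt regex here is a guess, so adjust it for your firewall; connect is the already-logged-in spawn from the question):

import pexpect

# Page through "sho conn", sending a space at every pager prompt and
# collecting the text seen so far; stop once the CLI prompt comes back.
connect.send("sho conn\n")
chunks = []
while True:
    i = connect.expect(["<--- More --->", r"\S+# ?$", pexpect.TIMEOUT], timeout=60)
    chunks.append(connect.before)
    if i == 0:
        connect.send(" ")        # ask the pager for the next screen
    else:
        break                    # back at the prompt, or genuinely stalled
output = "".join(chunks)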
You mention that the Python process dies; we can't help with that unless you give more detail about which exception is being raised.
I am trying to simulate a communication protocol where I am following a pattern, so I constantly loop through, looking for the same set of characters and replying with information. I'm using an RS-232 adapter, and the protocol I am simulating is asynchronous and half-duplex; the rx/tx lines are tied together by design, which causes a sort of echo when reading after writing.
That said, I need to be able to clear the input buffer after every write I send out, in order to avoid reading what I just wrote. But whenever I use reset_input_buffer() it does not clear the last message I sent out. I have tried to fix this in a couple of ways, such as using reset_output_buffer() together with reset_input_buffer(), using reset_input_buffer() twice, and using flush(). None of these makes any difference; the only other approach that does clear the buffer is closing and immediately reopening the port, but this causes a delay that throws off the timing, which is critical at certain points.
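For reference, a minimal reproduction of the pattern described above (device path, baud rate and frame bytes are placeholders):

import serial

# Half-duplex link with rx/tx tied together: everything written comes
# straight back as an echo before the real reply arrives.
port = serial.Serial("/dev/ttyUSB0", 9600, timeout=0.5)

port.write(b"\x02REQ\x03")       # outgoing frame (placeholder bytes)
port.reset_input_buffer()        # meant to drop the echo of that frame...
reply = port.read(32)            # ...but the echoed bytes still show up here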
I'm open to any suggestions, please help!
Assume you are trying to interact with a shell via a serial port. What happens when you hit the end of the line is that a <space><carriage return> gets inserted. This is already uncomfortable when using screen or minicom, because output usually just continues on the same line (the linefeed is missing), but it results in buggy code when you need to parse the output stream. I am wondering how I can configure my serial connection to simply do nothing at the end of the line.
Example:
$ python -i -c "import serial; s=serial.Serial('/dev/ttyUSB3', 115200, timeout=1.5)"
>>> s.write("echo \"123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123213\""); print s.readall().encode('string-escape')
211
echo "12312312312312312312312312312312312312312312312312312 \r31231231231231231231231231231231231231231231231231231231231231231231231231231231 \r23123123123123123123123123123123123123123123123123123123123123123123213"
Solving this by changing the parser is not an option, because in my case the parsing is done by a third-party library. Just setting the line length to a very high number might also work, but it's not what I'd like to do; I'd rather have control over the port's behaviour when the line is full.
It might be that the problem has nothing to do with pyserial. If you send data with the write command to a serial port, it just sends the data to the serial port regardless of what you send (even binary data). The same applies to read.
So the extra space and newline are inserted by the shell at the other end of the connection. There is probably no reasonable way around this problem without configuring the other end. For example, on Linux you might want to try setterm -linewrap off, or simply make the terminal width large enough with stty.
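If the other end is a Linux shell, that reconfiguration can even be sent over the same pyserial connection; a rough sketch reusing the port settings from the question (the column count is arbitrary):

import serial

# Widen the remote terminal (or disable wrapping) so the shell stops
# inserting "<space><CR>" when a long command line wraps.
s = serial.Serial('/dev/ttyUSB3', 115200, timeout=1.5)
s.write(b"stty columns 512\n")   # or: s.write(b"setterm -linewrap off\n")
s.readall()                      # discard the echoed command before parsing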
I think this question will receive more complete answers in https://unix.stackexchange.com/ if your remote terminal is a Unix/Linux.
I have a script which can be run by any user who is connected to a server. This script writes to a single log file, but there is no restriction on who can use it at any one time, so multiple people could attempt to write to the log and data might be lost. Is there a way for one instance of the code to know if other instances of that code are running? Moreover, is it possible to gather this information dynamically? (i.e. not allow data saving for the second user until the first user has completed his/her task)
I know I could do this with a text file: I could write the user name to the file when they start, then delete it when they finish. But this could lead to errors if either step is missed, such as on an unexpected script termination. So what other, more reliable ways are there?
Some information on the system: Python 2.7 is installed on a Windows 7 64-bit server via Anaconda. All connected machines are also Windows 7 64-bit. Thanks in advance
Here is an implementation:
http://www.evanfosmark.com/2009/01/cross-platform-file-locking-support-in-python/
If you are using a lock, be aware that stale locks (left behind by hung or crashed processes) can be a bitch. Have a process that periodically searches for locks that were created longer than X minutes ago and frees them.
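A minimal sketch of that pattern (the lock-file name and the staleness window are made up here):

import errno
import os
import time

LOCK_PATH = "script.log.lock"    # hypothetical lock file next to the log
STALE_AFTER = 10 * 60            # treat locks older than 10 minutes as stale

def acquire_lock():
    """Create the lock file atomically; break it if it looks stale."""
    while True:
        try:
            fd = os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.write(fd, str(os.getpid()).encode())
            os.close(fd)
            return
        except OSError as exc:
            if exc.errno != errno.EEXIST:
                raise
            # Lock already exists: free it if its owner looks long dead.
            if time.time() - os.path.getmtime(LOCK_PATH) > STALE_AFTER:
                os.remove(LOCK_PATH)
            else:
                time.sleep(0.5)

def release_lock():
    os.remove(LOCK_PATH)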
It just isn't clean to allow multiple users to write to a single log and hope things go OK.
Why don't you write a daemon that handles logs? Other processes connect to a "logging port", and in the simplest case they only succeed if no one else has connected.
You can just modify the echo server example given here (keep a timeout in the server for all connections):
http://docs.python.org/release/2.5.2/lib/socket-example.html
If you want to know exactly who logged what, and make sure no one unauthorized gets in, you can use Unix sockets to restrict it to certain uids/gids, etc.
Here is a very good example.
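A minimal sketch of such a single-writer logging daemon (port number and log path are placeholders; clients connect and send lines to be appended):

import socket

# One process owns the log file and listens on a local port; only one
# client is served at a time, so writes can never interleave.
HOST, PORT = "127.0.0.1", 50007
LOG_PATH = "shared.log"

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((HOST, PORT))
server.listen(1)

with open(LOG_PATH, "a") as log:
    while True:
        conn, addr = server.accept()
        conn.settimeout(30)              # drop clients that go quiet
        try:
            while True:
                data = conn.recv(4096)
                if not data:
                    break
                log.write(data.decode("utf-8", "replace"))
                log.flush()
        except socket.timeout:
            pass
        finally:
            conn.close()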
NTEventLogHandler is probably the easiest way to log to a given Windows machine/server, but it might make more sense to use SysLogHandler if you have a syslog sink on a Unix server.
The catch I can think of with SysLogHandler is that you'll likely need to poke holes through the Windows firewall in order to send packets over the syslog protocol, i.e., 514/TCP ("reliable syslog") and 514/UDP (traditional or "unreliable" syslog).
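For the SysLogHandler route, a minimal sketch (the hostname is a placeholder; the address tuple with port 514 uses the traditional UDP transport mentioned above):

import logging
import logging.handlers

# Send log records to a remote syslog sink instead of a shared file,
# letting the syslog daemon serialize the writes.
logger = logging.getLogger("shared_script")
logger.setLevel(logging.INFO)
logger.addHandler(
    logging.handlers.SysLogHandler(address=("syslog.example.com", 514)))
logger.info("user %s started a run", "alice")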
I have a long-running Python process running headless on a Raspberry Pi (controlling a garden), like so:
from time import sleep

def run_garden():
    while 1:
        # do work
        sleep(60)

if __name__ == "__main__":
    run_garden()
The 60-second sleep period is plenty of time for any changes happening in my garden (humidity, air temp, turning the pump on, turning the fan off, etc.), BUT what if I want to manually override these things?
Currently, in my "do work" loop, I first call out to another server where I keep config variables, and I can update those config variables via a web console. But it lacks any sort of real-time feel, because it relies on the 60-second loop (e.g. you might update the web console and then wait 45 seconds for the change to take effect).
The Raspberry Pi running run_garden() is dedicated to the garden, and this script is basically the only thing taking up resources, so I know I have room to do something; I just don't know what.
Once the loop picks up the fact that a config var has been updated, it could do exponential backoff to keep checking for interaction rather than waiting 60 seconds, but it just doesn't feel like that is a whole lot better.
Is there a better way to basically jump into this long running process?
Listen on a socket in your main loop. Use a timeout (e.g. of 60 seconds, the time until the next garden update should be performed) on your socket read calls so you get back to your normal functionality at least every minute when there are no commands coming in.
If you need garden-tending updates to happen no faster than every minute you need to check the time since the last update, since read calls will complete significantly faster when there are commands coming in.
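A rough sketch of that structure (the port number and both helper functions are stand-ins for the existing logic):

import socket
import time

def do_garden_work():
    pass                             # stands in for the existing sensor/pump code

def handle_override(command):
    pass                             # e.g. parse the command and flip a relay

INTERVAL = 60
last_update = 0.0

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("", 9000))
server.listen(1)

while True:
    # Run the normal garden pass whenever it is due.
    if time.time() - last_update >= INTERVAL:
        do_garden_work()
        last_update = time.time()
    # Then wait for a command, but only until the next pass is due.
    server.settimeout(max(1, INTERVAL - (time.time() - last_update)))
    try:
        conn, _ = server.accept()
    except socket.timeout:
        continue                     # no command arrived before the deadline
    conn.settimeout(5)               # don't let a silent client stall the loop
    try:
        handle_override(conn.recv(1024))
    except socket.timeout:
        pass
    finally:
        conn.close()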
Python's select module sounds like it might be helpful.
If you've ever used the unix analog (for example in socket programming maybe?), then it'll be familiar.
If not, here is the select section of a C sockets reference I often recommend. And here is what looks like a nice writeup of the module.
Warning: the first reference is specifically about C, not Python, but the concept of the select system call is the same, so the discussion might be helpful.
Basically, it allows you to tell it what events you're interested in (for example, socket data arrival, keyboard event), and it'll block either forever, or until a timeout you specify elapses.
If you're using sockets, then adding the socket and stdin to the list of events you're interested in is easy. If you're just looking for a way to "conditionally sleep" for 60 seconds unless/until a keypress is detected, this would work just as well.
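For example, a minimal sketch of that conditional sleep, using stdin as the extra event source:

import select
import sys

# Wait up to 60 seconds, but wake early if anything arrives on stdin
# (a socket could be added to the same list). Unix-only, which fits the Pi.
readable, _, _ = select.select([sys.stdin], [], [], 60)
if readable:
    command = sys.stdin.readline().strip()
    # handle the manual override right away
else:
    pass                             # timed out: do the regular garden pass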
EDIT:
Another way to solve this would be to have your raspberry-pi "register" with the server running the web console. This could involve a little bit extra work, but it would give you the realtime effect you're looking for.
Basically, the raspberry-pi "registers" itself, by alerting the server about itself, and the server stores the address of the device. If using TCP, you could keep a connection open (which might be important if you have firewalls to deal with). If using UDP you could bind the port on the device before registering, allowing the server to respond to the source address of the "announcement".
Once announced, when config. options change on the server, one of two things usually happen:
A) You send a tiny "ping" (in the general sense, not the ICMP host detection protocol) to the device alerting it that config options have changed. At this point the host would immediately request the full config. set, acquiring the update with it.
B) You send the updated config. option (or maybe the entire config. set) back to the device. This decreases the number of messages between the device and server, but would probably take more work as it seems like more a deviation from your current setup.
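A rough sketch of the UDP flavour of this (addresses, ports and payloads are all made up):

import socket

CONSOLE_ADDR = ("console.example.com", 8765)   # the web-console server

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 8766))                # fixed local port so pings can find us
sock.sendto(b"register garden-pi", CONSOLE_ADDR)

sock.settimeout(60)                  # fall back to the normal loop period
try:
    data, addr = sock.recvfrom(1024)
    if data == b"config-changed":
        pass                         # fetch the full config set right away
except socket.timeout:
    pass                             # no ping this minute; carry on as usual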
Why not use an event-based loop instead of sleeping for a fixed amount of time?
That way your loop will only run when a change is detected, and it will always run when a change is detected (which is the point of your question?).
You can do such a thing by using:
python event objects
Just wait for one or all of your event objects to be triggered, then run the loop. You can also wait for X events to be done, etc., depending on whether you expect one variable to be updated a lot.
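A minimal sketch of the event-object option (the thread that listens for web-console updates is assumed to exist elsewhere and to call set()):

import threading

config_changed = threading.Event()

def run_garden():
    while True:
        # do work
        if config_changed.wait(timeout=60):   # returns early, and True, if set()
            config_changed.clear()            # handle the override, then reset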
Or even a system like:
broadcasting events
I'd like to write a simple command line proxy in Python to sit between a Telnet/SSH connection and a local serial interface. The application should simply bridge I/O between the two, but filter out certain unallowed strings (matched by regular expressions). (This for a router/switch lab in which the user is given remote serial access to the boxes.)
Basically, a client establishes a Telnet or SSH connection to the daemon. The daemon passes the client's input out to (for example) /dev/ttyS0, and passes input from ttyS0 back to the client. However, I want to be able to blacklist certain strings coming from the client. For instance, the command 'delete foo' should not be allowed.
I'm not sure how best to approach this. Communication must be asynchronous; I can't simply wait for a carriage return to allow the buffer to be fed out the serial interface. Matching regular expressions against the stream seems tricky too, as all of the following must be intercepted:
delete foo(enter)
del foo(enter)
el foo(ctrl+a)d(enter)
dl(left)e(right) foo(enter)
...and so forth. The only solid delimiter is the CR/LF.
I'm hoping someone can point me in the right direction. I've been looking through Python modules but so far haven't come up with anything.
Python is not my primary language, so I'll leave that part of the answer for others. I do a lot of security work, though, and I would urge a "white list" approach, not a "black list" approach. In other words, pick a set of safe commands and forbid all others. This is much, much easier than trying to think of all the malicious possibilities and guarding against all of them.
As all the examples you show finish with (enter), why is it that...:
"Communication must be asynchronous; I can't simply wait for a carriage return to allow the buffer to be fed out the serial interface"
if you can collect incoming data until the "enter", and apply the "edit" requests (such as the ctrl-a, left, right in your examples) to the data you're collecting, then you're left with the "completed command about to be sent" in memory where it can be matched and rejected or sent on.
If you must do it character by character, .read(1) on the (unbuffered) input will allow you to, but the vetting becomes potentially more problematic; again you can keep an in-memory image of the edited command that you've sent so far (as you apply the edit requests even while sending them), but what happens when the "enter" arrives and your vetting shows you that the command thus composed must NOT be allowed -- can you e.g. send a number of "delete"s to the device to wipe away said command? Or is there a single "toss the complete line" edit request that would serve?
If you must send every character as you receive it (not allowed to accumulate them until decision point) AND there is no way to delete/erase characters already sent, then the task appears to be impossible (though I don't understand the "can't wait for the enter" condition AT ALL, so maybe there's hope).
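A rough sketch of that in-memory approach, assuming single-byte edit keys (multi-byte arrow-key escape sequences are not handled here) and the kind of blacklist implied by the question:

import re

BLACKLIST = [re.compile(r"\bdel(ete)?\s+\S+")]

def allowed(line):
    return not any(rx.search(line) for rx in BLACKLIST)

class LineVetter(object):
    """Mirror the line the client is editing; hand it back on enter."""

    def __init__(self):
        self.buf, self.cursor = [], 0

    def feed(self, ch):
        """Feed one client character; return the finished line on CR, else None."""
        if ch in ("\r", "\n"):
            line = "".join(self.buf)
            self.buf, self.cursor = [], 0
            return line                      # caller checks allowed(line)
        elif ch == "\x01":                   # ctrl-a: jump to start of line
            self.cursor = 0
        elif ch in ("\x7f", "\x08"):         # delete / backspace
            if self.cursor > 0:
                self.cursor -= 1
                self.buf.pop(self.cursor)
        else:
            self.buf.insert(self.cursor, ch)
            self.cursor += 1
        return None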
After thinking about this for a while, it doesn't seem like there's any practical, reliable method to filter on client input. I'm going to attempt this from another angle: if I can identify persistent patterns in warning messages coming from the serial devices (e.g. confirmation prompts) I may be able to abort reliably. Thanks anyway for the input!
Fabric is doing a similar thing.
For an SSH API you should check out paramiko.