I want to talk to a piece of proxy software; here is its official socket example.
The software sends traffic statistics to every connected client every 10 seconds. I also need to send commands and receive the command results over the same connection.
It took me some time to figure out that I can't create two socket clients and connect them to the same server to get the two kinds of results separately.
My problem: how do I manage user config while recording traffic statistics?
Test:
When I add a port, the ok only arrives after several cli.recv(1506) calls (stats are sent every 10 seconds, so if I don't read constantly, the ok ends up queued behind many stat messages):
>>> cli.send(b'add: {"server_port":8003, "password":"123123"}')
46
>>> print(cli.recv(1506))
stat: {"8002":164}
>>> print(cli.recv(1506))
stat: {"8002":336}
>>> print(cli.recv(1506))
ok
>>> print(cli.recv(1506))
stat: {"8002":31}
>>> print(cli.recv(1506))
# hangs here, waiting for the next result
So if I send many commands, I can't tell which result maps to which command.
The solution I came up with is:
# client main code
while True:
    # need to open another socket to receive commands
    c = self.on_new_command()
    if c:
        self.cli.send(c)
        # does the Python socket server ensure responses are FIFO when setblocking(False)?
        self.queue.append(c)
    ret = self.cli.recv(1524)
    if 'stat' in ret:
        self.record(ret)
    else:
        self.log(self.queue.pop(0), ret)  # pop(0) so the oldest pending command is matched first
I would need to open another socket port in this client to receive commands, and then write yet another client that sends commands to it... it just doesn't feel right.
Since this is my first time doing socket programming: is my solution the best one for this situation, or is there a better way to achieve my goal?
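One way to remove the ambiguity (not part of this software's actual protocol, just an illustration) is to tag every command with an ID and have the server echo that ID back in its reply, so replies can be matched to commands no matter how many stat messages are interleaved. A minimal sketch, assuming newline-delimited JSON messages and an "id" field the server echoes back:

import itertools
import json

_ids = itertools.count()
pending = {}  # id -> original command text

def send_command(sock, cmd):
    # Tag the command with a unique id before sending it.
    cmd_id = next(_ids)
    pending[cmd_id] = cmd
    sock.send((json.dumps({"id": cmd_id, "cmd": cmd}) + "\n").encode())

def handle_line(line):
    # Dispatch one received line: either a stat report or a command reply.
    msg = json.loads(line)
    if "stat" in msg:
        print("stat:", msg["stat"])       # record traffic statistics here
    else:
        cmd = pending.pop(msg["id"])      # the echoed id identifies the command
        print("result for", cmd, ":", msg.get("result"))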
I'm new to socket programming in Python. Here is an example of opening a TCP socket in a Mininet host and sending a photo from one host to another. I adapted the code I had previously used to send a simple message from one host to another (writing the received data to a text file) to meet my requirements. When I run this revised code there is no error and the file seems to transfer correctly, but I am not sure whether this is the correct way to do the transmission. Since I'm running both hosts on the same machine, I thought that might influence the result. Could you check whether this is a correct way to transfer a file, or whether I should add or remove something?
mininetSocketTest.py
#!/usr/bin/python
from mininet.topo import Topo, SingleSwitchTopo
from mininet.net import Mininet
from mininet.log import lg, info
from mininet.cli import CLI

def main():
    lg.setLogLevel('info')
    net = Mininet(SingleSwitchTopo(k=2))
    net.start()
    h1 = net.get('h1')
    p1 = h1.popen('python myClient2.py')
    h2 = net.get('h2')
    h2.cmd('python myServer2.py')
    CLI(net)
    #p1.terminate()
    net.stop()

if __name__ == '__main__':
    main()
myServer2.py
import socket
import sys

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(('10.0.0.1', 12345))
buf = 1024
f = open("2.jpg", 'wb')
s.listen(1)
conn, addr = s.accept()
while 1:
    data = conn.recv(buf)
    print(data[:10])
    #print "PACKAGE RECEIVED..."
    f.write(data)
    if not data:
        break
    #conn.send(data)
conn.close()
s.close()
myClient2.py:
import socket
import sys

f = open("1.jpg", "rb")
print(sys.getsizeof(f))
buf = 1024
data = f.read(buf)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('10.0.0.1', 12345))
while data:
    if s.send(data):
        #print "sending ..."
        data = f.read(buf)
        print(f.tell(), data[:10])
    else:
        s.close()
s.close()
This loop in client2 is wrong:
while (data):
    if(s.send(data)):
        print "sending ..."
        data = f.read(buf)
As the send docs say:
Returns the number of bytes sent. Applications are responsible for checking that all data has been sent; if only some of the data was transmitted, the application needs to attempt delivery of the remaining data. For further information on this topic, consult the Socket Programming HOWTO.
You're not even attempting to do this. So, while it probably works on localhost, on a lightly-loaded machine, with smallish files, it's going to break as soon as you try to use it for real.
As the help says, you need to do something to deliver the rest of the buffer. Since there's probably no good reason you can't just block until it's all sent, the simplest thing to do is to call sendall:
Unlike send(), this method continues to send data from bytes until either all data has been sent or an error occurs. None is returned on success. On error, an exception is raised…
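For illustration, this is roughly the loop that sendall saves you from writing by hand (a sketch; send_remaining is a hypothetical helper, not part of the socket module):

def send_remaining(sock, data):
    # Keep calling send() until every byte has been delivered.
    total = 0
    while total < len(data):
        sent = sock.send(data[total:])
        if sent == 0:
            raise RuntimeError("socket connection broken")
        total += sent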
And this brings up the next problem: You're not doing any exception handling anywhere. Maybe that's OK, but usually it isn't. For example, if one of your sockets goes down, but the other one is still up, do you want to abort the whole program and hard-drop your connection, or do you maybe want to finish sending whatever you have first?
You should probably at least use a with clause or a finally, to make sure you close your sockets cleanly, so the other side will get a nice EOF instead of an exception.
Also, your server code just serves a single client and then quits. Is that actually what you wanted? Usually, even if you don't need concurrent clients, you at least want to loop around accepting and servicing them one by one.
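Putting those last two points together, the server's accept logic could look something like this (a sketch; handle_client is a hypothetical function standing in for your receive-and-write-to-file loop):

import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(('10.0.0.1', 12345))
    s.listen(1)
    while True:
        conn, addr = s.accept()
        try:
            handle_client(conn)   # receive the file from this client
        finally:
            conn.close()          # the client sees a clean EOF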
Finally, a server almost always wants to do this:
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
Without this, if you try to run the server again within a few seconds after it finished (a platform-specific number of seconds, which may even depend whether it finished with an exception instead of a clean shutdown), the bind will fail, in the same way as if you tried to bind a socket that's actually in use by another program.
First of all, you should use TCP and not UDP. TCP will ensure that your client/server has received the whole photo properly. UDP is more suited to content streaming, which is absolutely not your use case.
Ok, so it's possible that the answer to this question is simply "stop using parallel-ssh and write your own code using netmiko/paramiko. Also, upgrade to python 3 already."
But here's my issue: I'm using parallel-ssh to try to hit as many as 80 devices at a time. These devices are notoriously unreliable, and they occasionally freeze up after giving one or two lines of output. Then, the parallel-ssh code hangs for hours, leaving the script running, well, until I kill it. I've jumped onto the VM running the scripts after a weekend and seen a job that's been stuck for 52 hours.
The relevant pieces of my first code, the one that hangs:
from pssh.pssh2_client import ParallelSSHClient

def remote_ssh(ip_list, ssh_user, ssh_pass, cmd):
    client = ParallelSSHClient(ip_list, user=ssh_user, password=ssh_pass, timeout=180, retry_delay=60, pool_size=100, allow_agent=False)
    result = client.run_command(cmd, stop_on_errors=False)
    return result
The next thing I tried was the channel_timeout option, because if it takes more than 4 minutes to get the command output, then I know that the device froze, and I need to move on and cycle it later in the script:
from pssh.pssh_client import ParallelSSHClient

def remote_ssh(ip_list, ssh_user, ssh_pass, cmd):
    client = ParallelSSHClient(ip_list, user=ssh_user, password=ssh_pass, channel_timeout=180, retry_delay=60, pool_size=100, allow_agent=False)
    result = client.run_command(cmd, stop_on_errors=False)
    return result
This version never actually connects to anything. Any advice? I haven't been able to find anything other than channel_timeout to attempt to kill an ssh session after a certain amount of time.
The code is creating a client object inside a function and then returning only the output of run_command, which includes remote channels to the SSH server.
Since the client object is never returned by the function, it goes out of scope and gets garbage collected by Python, which closes the connection.
Trying to use remote channels on a closed connection will never work. If you capture a stack trace of the stuck script, it is most probably hanging on a remote channel or connection.
Change your code to keep the client alive. The client should ideally also be reused.
from pssh.pssh2_client import ParallelSSHClient

def remote_ssh(ip_list, ssh_user, ssh_pass, cmd):
    client = ParallelSSHClient(ip_list, user=ssh_user, password=ssh_pass, timeout=180, retry_delay=60, pool_size=100, allow_agent=False)
    result = client.run_command(cmd, stop_on_errors=False)
    return client, result
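The caller then keeps the client referenced while it reads output, roughly like this (a sketch assuming the parallel-ssh 1.x API, where run_command returns a dict of host to output; verify against your installed version):

client, output = remote_ssh(ip_list, ssh_user, ssh_pass, 'show version')
for host, host_output in output.items():
    for line in host_output.stdout:   # reading stdout uses the live channel
        print(host, line)
client.join(output)                   # wait for all commands to complete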
Make sure you understand where the code is going wrong before jumping to conclusions that will not solve the issue, i.e. capture a stack trace of where it is hanging. The same code doing the same thing will break the same way.
I'm having an issue with Python's socket module that I haven't been able to find anywhere else.
I'm building a simple TCP chat client, and while it successfully connects to the server initially, the script hangs endlessly on sock.recv() despite the fact that I explicitly set a timeout length.
I've tried using different timeout values and including setblocking(False) but no matter what I do it keeps acting like the socket is in blocking mode.
Here are the relevant parts of my code:
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(1)

def listen_to_server():
    global connected
    while connected:
        ready_to_read, ready_to_write, in_error = select.select([sock], [], [])
        if ready_to_read:
            try:
                data = sock.recv(1024)
            except socket.timeout:
                print('TIMEOUT')
            if not data:
                append_to_log('Disconnected from server.\n')
                connected = False
            else:
                append_to_log(str(data))
Any suggestions would be helpful, I'm at a total loss here.
You've mixed up two things: the socket timeout and select.
When you set a socket timeout you are telling that socket: if I try some operation (e.g. recv()) and it doesn't finish within my limit, raise a timeout exception.
select takes file descriptors (on Windows, only sockets) and starts checking whether the rlist (the first parameter) contains any socket ready to read (i.e. some data has arrived). If any data has arrived, the program continues.
Now your code does this:
Set a timeout for socket operations
Call select, which starts waiting for data (if the server never sends any, it never arrives)
and that's it: you are stuck in the select.
You should just call recv() without select; then your timeout will be applied.
If you need to manage multiple sockets at once, then you have to use select and set its fourth parameter, the timeout.
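For the single-socket case in the question, either fix could look like this (a sketch reusing sock from the code above):

# Option 1: drop select and let the socket timeout apply to recv().
try:
    data = sock.recv(1024)
except socket.timeout:
    print('TIMEOUT')

# Option 2: keep select, but pass its timeout (fourth parameter, in seconds).
ready_to_read, _, _ = select.select([sock], [], [], 1.0)
if ready_to_read:
    data = sock.recv(1024)
else:
    print('TIMEOUT')  # select returned empty lists after 1 second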
I have a very simple idea in mind that I want to try out. Say I have a browser, Chrome for instance, and I want to look up the IP of a domain name, say www.google.com. I use Windows 7 and I have set the DNS lookup properties manually to the address 127.0.0.1, where my server (written in Python) is running. I started my server and I could see the DNS query, but it is very weird, in that it shows up like this:
WAITING FOR CONNECTION.........
.........received from : ('127.0.0.1', 59339)
╟╝☺ ☺ ♥www♠google♥com ☺ ☺
The "WAITING FOR CONNECTION" and "received from" lines are from my server. How do I get a human-readable DNS query?
This is my server code (quite elementary, but still):
from time import sleep
import socket

host = ''
port = 53
addr_list = (host, port)
buf_siz = 1024
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.bind(addr_list)
while True:
    print('WAITING FOR CONNECTION.........')
    data, addr = udp.recvfrom(buf_siz)
    print('.........received from : ', addr)
    sleep(3)
    print(data)
DNS uses a compression algorithm and represents each part of the domain name as a length byte followed by that many characters (as far as I remember), e.g. [3]www[6]google[3]com.
Have a look at the DNS RFCs, e.g. http://www.zoneedit.com/doc/rfc/rfc1035.txt
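To get a human-readable name you can walk those length-prefixed labels yourself. A minimal sketch (it assumes the standard 12-byte DNS header and no compression pointers, which holds for a simple query like the one above):

def decode_qname(packet, offset=12):
    # Decode the QNAME labels that follow the 12-byte DNS header.
    labels = []
    while True:
        length = packet[offset]  # length byte (use ord(...) on Python 2)
        if length == 0:          # a zero byte terminates the name
            break
        labels.append(packet[offset + 1:offset + 1 + length].decode('ascii'))
        offset += 1 + length
    return '.'.join(labels)

# decode_qname(data) -> 'www.google.com'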
I'm totally new to Python (as of half an hour ago) and trying to write a simple script to enumerate users on an SMTP server.
The users file is a simple list (one per line) of usernames.
The script runs fine, but with each iteration of the loop it slows down until, around loop 14, it seems to hang completely. There is no error; I have to ^C.
Can anyone shed some light on the problem please?
TIA,
Tom
#!/usr/bin/python
import socket
import sys

if len(sys.argv) != 2:
    print("Usage: vrfy.py <username file>")
    sys.exit(0)

# Open the user file (one username per line)
with open(sys.argv[1], 'r') as f:
    users = [x.strip() for x in f.readlines()]

# Just for debugging
print(users)

# Create a socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Connect to the server
s.connect(('192.168.13.222', 25))

for x in users:
    # VRFY a user
    s.send(('VRFY ' + x + '\r\n').encode())
    result = s.recv(1024)
    print(result)

# Close the socket
s.close()
Most likely your SMTP server is tarpitting your client connection. This is a defense against runaway clients, or clients which submit large volumes of "junk" commands. From the manpage for Postfix smtpd:
smtpd_junk_command_limit (normal: 100, stress: 1)
    The number of junk commands (NOOP, VRFY, ETRN or RSET) that a
    remote SMTP client can send before the Postfix SMTP server
    starts to increment the error counter with each junk command.
The smtpd daemon will insert a 1-second delay before replying after a certain amount of junk is seen. If you have root access to the smtp server in question, try an strace to see if nanosleep syscalls are being issued by the server.
Here is a trace from running your script against my local server. After 100 VRFY commands it starts sleeping between commands. Your server may have a lower limit of ~15 junk commands:
nanosleep({1, 0}, 0x7fffda9a67a0) = 0
poll([{fd=9, events=POLLOUT}], 1, 300000) = 1 ([{fd=9, revents=POLLOUT}])
write(9, "252 2.0.0 pat\r\n", 15) = 15
poll([{fd=9, events=POLLIN}], 1, 300000) = 1 ([{fd=9, revents=POLLIN}])
read(9, "VRFY pat\r\n", 4096) = 10
s.recv blocks, so if there is no more data on the socket it will block forever.
You have to keep track of how much data you are receiving. The client and the server need to agree on the size ahead of time.
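A common way for both sides to agree on the size is a fixed-width length prefix on every message. A minimal sketch (the helper names are my own, not a standard API):

import struct

def send_msg(sock, payload):
    # Prefix each message with its length as a 4-byte big-endian integer.
    sock.sendall(struct.pack('!I', len(payload)) + payload)

def recv_exact(sock, n):
    # Loop until exactly n bytes have arrived; recv may return fewer.
    data = b''
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError('socket closed mid-message')
        data += chunk
    return data

def recv_msg(sock):
    (length,) = struct.unpack('!I', recv_exact(sock, 4))
    return recv_exact(sock, length)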
Solving the exact same problem, I ran into the same issue.
I'm almost sure @samplebias is right. I found I could work around the "tarpitting" by abusing the poor system even more, tearing down and rebuilding every connection:
#[ ...Snip... ]
import smtplib
#[ ...Snip... ]

for USER in open(opts.USERS, 'r'):
    smtpserver = smtplib.SMTP(HOST, PORT)
    smtpserver.ehlo()
    verifyuser = smtpserver.verify(USER)
    print("%s %s: %s" % (HOST.rstrip(), USER.rstrip(), verifyuser))
    smtpserver.quit()
I'm curious whether this particular type of hammering would work in a live environment, but I'm all too certain it would make some people very unhappy.
PS, python: batteries included.
At a glance, your code has no bugs. However, you should note that TCP isn't a message-oriented protocol, so you can't call socket.send in a loop and assume that one message is actually sent through the medium at each call. If some calls start to get buffered in the output buffer and you just call socket.recv afterwards, your program will get stuck in a deadlock.
What you need is threaded or asynchronous code. The Twisted framework may help you.
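A minimal sketch of the threaded approach (the host and port are illustrative): a background thread drains replies continuously, so send can never deadlock against a full receive buffer.

import socket
import threading

def reader(sock):
    # Drain replies continuously so neither side's buffer fills up.
    while True:
        data = sock.recv(1024)
        if not data:
            break
        print(data)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('192.168.13.222', 25))
t = threading.Thread(target=reader, args=(s,), daemon=True)
t.start()
# The main thread is now free to send without waiting on each reply.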