I'm developing an automation framework that requires little manual intervention.
There is one server and 3 client machines.
What the server does is send a command to each client, one by one, get the output of that command, and store it in a file.
But to establish the connection I have to start each client manually from the command line on its machine. Is there a way, in Python, for the server itself to send a signal (or something similar) to start a client, send the command, store the output, and then move on to the next client, and so on?
Edit:
Following the suggestion below, I used the spur module:
import spur

ss = spur.SshShell(
    hostname="172.16.6.58",
    username="username",
    password="some_password",
    shell_type=spur.ssh.ShellTypes.minimal,
    missing_host_key=spur.ssh.MissingHostKey.accept,
)
res = ss.run(['python', 'clientsock.py'])
I'm trying to start the clientsock.py file on one of the client machines (the server is already running on the current machine), but it just hangs there and nothing happens. What am I missing here?
I have an application that streams video on the web using Flask, something like this example. But sometimes when a user closes the connection, Flask does not recognize the disconnection and keeps the socket open. Each socket in Linux is a file descriptor, and the maximum number of open file descriptors in Linux is 1024 by default. After a while (e.g. 24 hours), new users cannot see the video stream because Flask cannot create a new socket (which is a file descriptor).
The same happens when I use flask sockets (from flask_sockets import Sockets). I don't know what happens to these sockets. Most of the time, when the user refreshes the browser or closes it normally, the socket gets closed on the server.
As a test, I removed the network cable from my laptop (acting as the client) and saw that in this case the sockets stay open and Flask does not recognize this kind of disconnection.
Any help will be appreciated.
Update 1:
I put this code at the top of the main script, and it recognizes some of the disconnections, but the main problem is still there.
import socket
socket.setdefaulttimeout(1)
Update 2:
This seems to be a duplicate of this post, but that one has no solution. I checked the sockets' status (with sudo lsof -p your_process_id | egrep 'CLOSE_WAIT') and all of them are in CLOSE_WAIT.
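Two separate issues may be involved here. CLOSE_WAIT means the peer has already closed its side and the application never called close() on the server's side, so those sockets can only be reclaimed by closing them in the streaming code. For the pulled-cable case, where the peer vanishes silently, TCP keepalive lets the kernel detect the dead connection instead of leaving it open forever. A sketch, assuming Linux (the TCP_KEEP* options below are Linux-specific, and the timing values are illustrative):

```python
import socket

def enable_keepalive(sock, idle=60, interval=10, count=5):
    # Ask the kernel to probe the connection after `idle` seconds of
    # silence, re-probing every `interval` seconds, and drop it after
    # `count` unanswered probes. A peer that disappeared silently
    # (e.g. unplugged cable) is then detected and the socket is closed.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)
    return sock
```

With these settings a dead peer is detected after roughly idle + interval * count seconds; keepalive will not, however, clear CLOSE_WAIT sockets, because those require the application itself to call close().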
The following is the skeleton code for a script that addresses servers that are down on the network. The script does the job, but I would like it to operate faster/better.
The script does the following:
Determines if machine is reachable via ssh.
If it is not reachable, install a recovery image.
If it is reachable, send a script that takes the server name as a command-line argument and does a quick diagnosis to determine why the server went down.
Problems:
Some servers that are reachable over the network get stuck when is_reachable() is called. diagnosis_script.py uses Linux commands to find hardware issues and logging errors. The script can hang for up to 30 minutes until the ssh connection is terminated. It then continues to the next reachable server in the for loop, but this is a huge time sink.
Is there a way to put a timer on this, so the script exits the ssh connection and continues to the next server if the current one takes too long?
I believe a queue-based multiprocessing approach could also speed up this script. Does anyone have experience with, or an example of, implementing something like this?
Example Skeleton Code:
import os

server_list = [machine1, machine2, machine3, machine4, machine5, machine6, ..., machine100]
reachable = []
unreachable = []

def is_sshable(server_list):
    for server in server_list:
        ssh_tester = 'ssh -o ConnectTimeout=3 -T root@{}'.format(server)
        ssh = os.popen(ssh_tester).read()
        if "0" not in ssh:
            unreachable.append(server)
        else:
            reachable.append(server)

def is_unreachable():
    # "recover" is an internal Linux command
    for server in unreachable:
        os.system('recover server {}'.format(server))

def is_reachable():
    for server in reachable:
        os.system('python3 diagnosis_script.py {}'.format(server))
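Both problems can be addressed with the standard library: `subprocess.run` takes a `timeout` argument that kills a stuck ssh session, and `multiprocessing.Pool` checks many servers in parallel. A sketch along the lines of the skeleton above; the `recover` command and diagnosis_script.py are the internal tools from the question, and the timeout values are illustrative:

```python
import subprocess
from multiprocessing import Pool

def check_server(server):
    # Probe reachability. ConnectTimeout bounds the connection attempt;
    # the subprocess timeout bounds the entire ssh session, so a host
    # that accepts the connection but then hangs cannot stall the run.
    try:
        probe = subprocess.run(
            ["ssh", "-o", "ConnectTimeout=3", "-o", "BatchMode=yes",
             "root@{}".format(server), "echo", "0"],
            capture_output=True, timeout=15)
        is_up = probe.returncode == 0
    except subprocess.TimeoutExpired:
        is_up = False
    if not is_up:
        subprocess.run(["recover", "server", server])
        return server, "recovery started"
    try:
        # Kill the diagnosis if it hangs, instead of waiting ~30 minutes.
        subprocess.run(["python3", "diagnosis_script.py", server],
                       timeout=300)
        return server, "diagnosed"
    except subprocess.TimeoutExpired:
        return server, "diagnosis timed out"

# Usage sketch (placeholder host names):
# if __name__ == "__main__":
#     with Pool(processes=10) as pool:
#         for server, status in pool.imap_unordered(check_server, server_list):
#             print(server, status)
```

A pool of worker processes turns 100 sequential probes into roughly 10 parallel batches, and the per-call timeouts cap the worst case per server.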
I have a client written in Python communicating with a server written in Go over a TCP socket. Currently, I create a new socket object and then connect to the Go server every time. More specifically, suppose my Go server listens on localhost:4040; whenever I connect from the Python side, the connection has a different source address (localhost:6379, 6378, ...). I wonder if there is a way to reuse old connections (like a connection pool) rather than creating a new one every time. If so, how do I determine that a connection has finished and become idle; do I need an extra ACK message for that? Thanks.
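One way to sketch this: keep a single long-lived socket to the Go server and reconnect lazily when it has been closed. Note that TCP itself has no notion of a request being "finished"; a pool needs application-level framing (a length prefix or a delimiter) to know when a response is complete, but no extra ACK is needed if the protocol is request/response. The newline framing below is an assumption for illustration, not part of the question's actual protocol:

```python
import socket

class PersistentConnection:
    """Reuse one TCP connection instead of opening a socket per request.

    Assumes newline-terminated request/response framing (an assumption,
    not the question's protocol). Reconnects once if the server has
    closed the idle connection in the meantime.
    """

    def __init__(self, host="localhost", port=4040):
        self.host, self.port = host, port
        self.sock = None

    def _connect(self):
        self.sock = socket.create_connection((self.host, self.port))

    def send(self, payload: bytes) -> bytes:
        if self.sock is None:
            self._connect()
        try:
            self.sock.sendall(payload + b"\n")
            return self._read_line()
        except (BrokenPipeError, ConnectionResetError):
            # The server dropped the idle connection: rebuild and retry once.
            self._connect()
            self.sock.sendall(payload + b"\n")
            return self._read_line()

    def _read_line(self) -> bytes:
        # Read until the newline delimiter marks the end of the response.
        chunks = []
        while True:
            b = self.sock.recv(1)
            if not b or b == b"\n":
                return b"".join(chunks)
            chunks.append(b)
```

The byte-at-a-time read keeps the sketch short; a real client would buffer reads or use `sock.makefile()` instead.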
I have a client (on windows) that runs in the background; the client connects to the server (on linux) and waits for instructions.
The problem is that the client only receives data within a 10-minute time frame; after that, data no longer reaches the client.
On the server side it looks like the client is still connected, but the messages don't reach it.
I don't know why, and I can't find the cause.
Any suggestions?
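A hard cutoff around 10 minutes on an otherwise healthy connection is the classic signature of a NAT device or firewall between the Windows client and the Linux server dropping idle connection state (many default to idle timers in that range). The server still sees an established socket, but the path for its packets is gone. The usual fix is to keep the connection non-idle with a periodic application-level heartbeat. A sketch, assuming the protocol can reserve a small ping message (b"PING\n" here is an assumption):

```python
import socket
import threading
import time

def start_heartbeat(sock, interval=60):
    # Send a tiny message every `interval` seconds so the NAT/firewall
    # state between client and server never idles out. The b"PING\n"
    # message is a placeholder the real protocol must reserve.
    def loop():
        while True:
            time.sleep(interval)
            try:
                sock.sendall(b"PING\n")
            except OSError:
                break  # connection is gone; stop the heartbeat thread
    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t
```

TCP keepalive (SO_KEEPALIVE) can serve a similar purpose at the kernel level, but its default idle time is typically two hours, far longer than a 10-minute NAT timeout, so it needs tuning to help here.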
I am running a Graphite server to monitor instruments at remote locations. I have a "perpetual" ssh tunnel to the machines from my server (loving autossh) to map their local ports to my server's local ports. This works well; data comes through with no hassles. However, we use a flaky satellite connection to the sites, which goes down rather regularly. I am running a "data crawler" on the instrument, written in Python, that uses socket to send packets to the Graphite server. The problem is, if the link goes down temporarily (or the server gets rebooted, mostly for testing), I cannot re-establish the connection to the server. I trap the error, call socket.close(), and then re-open, but I just can't re-establish the connection. If I quit the Python program and restart it, the connection comes up just fine. Any ideas how I can "refresh" my socket connection?
It's hard to answer this correctly without a code sample. However, it sounds like you might be trying to reuse a closed socket, which is not possible.
If the socket has been closed (or has experienced an error), you must re-create a new connection using a new socket object. For this to work, the remote server must be able to handle multiple client connections in its accept() loop.
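Concretely, the recovery path has to build a brand-new socket object rather than re-opening the closed one; that is why only restarting the whole program appears to fix it. A minimal sketch of a reconnect loop for the crawler; the host, port, and retry delay are placeholders supplied by the caller:

```python
import socket
import time

def connect_with_retry(host, port, delay=5):
    # A close()d socket can never be reconnected, so every attempt
    # must create a fresh socket object. Retry until the flaky link
    # comes back.
    while True:
        try:
            return socket.create_connection((host, port))
        except OSError:
            time.sleep(delay)  # link still down: wait and try again

# On a send failure, discard the old socket and rebuild (sketch;
# metric_line is a hypothetical payload):
# try:
#     sock.sendall(metric_line)
# except OSError:
#     sock.close()
#     sock = connect_with_retry(host, port)
```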