I have a client written in Python that communicates with a server written in Go over a TCP socket. Currently I create a new socket object and connect to the Go server every time. More specifically, suppose my Go server listens on localhost:4040; whenever I connect from the Python client, the connection has a different source address (localhost:6379, 6378, ...). I wonder if there is a way to reuse old connections (like a connection pool) rather than creating a new one every time. If so, how do I determine that a connection has finished and become idle? Do I need an extra ACK message for that? Thanks.
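For illustration only, a minimal sketch of what connection reuse could look like on the Python side, keeping one socket open instead of creating a new one per request (the host, port, payloads, and the retry-once policy are assumptions, not part of the original setup; whether the old connection is still usable has to be inferred from send/recv errors rather than from an extra application-level ACK):

import socket

class PersistentConnection:
    # Keep a single TCP connection to the Go server and reuse it for every
    # message, replacing the socket only when a send actually fails.
    def __init__(self, host="127.0.0.1", port=4040):
        self.addr = (host, port)
        self.sock = None

    def _connect(self):
        self.sock = socket.create_connection(self.addr)

    def send(self, payload):
        if self.sock is None:
            self._connect()
        try:
            self.sock.sendall(payload)
        except OSError:
            # The old connection is dead; replace it and retry once.
            # Note that a broken connection may only surface on a later send.
            self.sock.close()
            self._connect()
            self.sock.sendall(payload)

conn = PersistentConnection()
conn.send(b"hello\n")
conn.send(b"hello again\n")   # reuses the same connection and source port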
I have hardware that can receive data/commands via Ethernet or serial.
I am doing socket programming in Python to send commands to the hardware. Everything works fine, but once I close the socket (it closes successfully) and then try to re-initialize and create the socket in a different program, it throws CONNECTION REFUSED.
The only workaround for now is to unplug the Ethernet cable from the network switch and plug it back in; then it works again, until the socket is closed and reopened, at which point the Connection refused error pops up again.
Since the server code is running on proprietary hardware, I don't have access to it. I can only configure the port and IP address of the hardware.
Here is a snapshot of the program with the error message, and also the Wireshark capture.
When I removed the Ethernet cable and reconnected it, the client could connect properly (see this snapshot), so I am not sure where it is going wrong.
Please let me know if you have any questions
This happens because the server is not running on that IP and/or port.
This error is common. Check the following:
Ensure that there are no duplicate addresses on the network. This is important.
Make sure the server is running before starting the client.
Make sure the client can reach the server and that the server can accept connections.
Make sure the maximum-connections setting is high enough for the number of connections you expect; otherwise additional connections are refused.
Also, since the only way to get it working again is to reconnect your Ethernet cable, the problem is probably that the connection has been closed. Add a retry loop so the connection can be kept open (or re-established when it drops), as in the sketch below.
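As a rough illustration of that last point, a sketch of a client that keeps retrying the connection instead of giving up after one failed connect (the address, delay, and command format are assumptions):

import socket
import time

HOST, PORT = "192.168.1.50", 5000     # hypothetical hardware address

def connect_with_retry(delay=2.0):
    # Keep trying until the hardware accepts the connection again.
    while True:
        try:
            return socket.create_connection((HOST, PORT), timeout=5)
        except OSError as exc:
            print("connect failed (%s), retrying in %ss" % (exc, delay))
            time.sleep(delay)

sock = connect_with_retry()
sock.sendall(b"COMMAND\r\n")          # hypothetical command format
sock.close()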
I'm trying to write a Python function that scans a range of addresses. I create a socket and pass it as an argument to the function that connects with it:
def scan(socket, address, port):
    c = socket.connect_ex((address, port))
    print(c)
Then I call scan for each address, each in its own thread. I'm getting Error 114: Operation already in progress.
Do I need to start a new socket for each connection? I'm trying to read about socket reusage, and I found that there are flags like SO_ADDREUSE or something like that. I tried setting it, but it didn't work.
I'm trying to think about how a socket works. I think that the moment I create one, it chooses a TCP source port, and then when I create a connection, it sends to a destination port. I think I can't reuse the same socket because the source port would be the same for all destination ports, so the clients would answer to the same port and cause confusion.
So do I need to create a new socket for each connection?
You cannot connect a stream socket multiple times.
One of the possible connect() errors is EISCONN:
The socket is already connected.
This applies to stream sockets.
man bind also has this:
[EINVAL] The socket is already bound to an address, and the protocol does not support binding to a new address; or the socket has been shut down.
Again, this goes for stream sockets.
From the connect man page:
Generally, stream sockets may successfully connect() only once; datagram sockets may use connect() multiple times to change their association.
The emphasis is on the first clause: stream sockets may successfully connect() only once.
Stream sockets cannot be connected multiple times; datagram sockets can. Generally speaking, BSD sockets support multiple protocols, types, and domains. You should read the documentation for your particular case.
P.S. Familiarize yourself with the readings suggested in the comments on your question. That will explain enough to work with the socket family of functions.
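To illustrate the quoted line, a small sketch (peer addresses are placeholders and network access is assumed): a datagram socket may call connect() repeatedly to change its association, which is exactly what a stream socket cannot do.

import socket

# Datagram sockets: connect() only records a default peer, so it can be
# called again to change the association.
u = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
u.connect(("127.0.0.1", 9001))       # hypothetical peer
u.connect(("127.0.0.1", 9002))       # fine: the association is simply changed
u.close()

# Stream sockets: once connect() has been attempted, a second connect()
# fails (typically EISCONN); a new socket object is needed instead.
t = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    t.connect(("example.com", 80))   # placeholder host, network access assumed
    t.connect(("example.com", 443))  # raises OSError (EISCONN)
except OSError as exc:
    print("second connect failed:", exc)
finally:
    t.close()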
Do I need to start a new socket for each connection?
Yes.
I'm trying to read about socket reusage
There is no such thing as 'socket reusage'. There is port reuse. Not the same thing. You cannot reconnect an existing socket once you've tried to connect it, even if the connect attempt failed.
I found that there are flags like SO_ADDREUSE or something like that
SO_REUSEADDR means to reuse the port. Not the socket.
I'm trying to think about how a socket works. I think that the moment I create one, it chooses a TCP source port,
Between creating a socket using the socket() system call and using it to create an outgoing connection with the connect() system call, there is an opportunity to optionally use the bind() system call to set source IP address and/or port if you want to. If you don't use bind(), the operating system will automatically bind the socket to the first available port in the appropriate range when you use the connect() system call. In this case, the source IP address is normally selected to match the network interface that provides the shortest route to the specified destination according to the routing table.
At least, that's how it works at the system call level. Some programming languages or libraries may choose to combine some of these operations into one.
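For example, at the Python level the same sequence looks roughly like this (the explicit bind() is optional, and the port and destination are placeholder assumptions):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Optional: pin the source port (and/or source IP) yourself with bind().
# "0.0.0.0" means "any local interface"; 50000 is an arbitrary example port.
s.bind(("0.0.0.0", 50000))

# connect() then completes the association; if bind() had been skipped,
# the OS would have picked an ephemeral source port here instead.
s.connect(("example.com", 80))       # placeholder destination, network access assumed

print(s.getsockname())               # the (source_ip, source_port) actually in use
s.close()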
To your actual question, man 7 ip says:
A TCP local socket address that has been bound is unavailable for some time after closing, unless the SO_REUSEADDR flag has been set. Care should be taken when using this flag as it makes TCP less reliable.
The idea is to delay the reuse of a port until any retransmitted packets belonging to the closed connection have definitely expired on the network.
According to the bind() man page, trying to re-bind a socket that is already bound to an address will result in an EINVAL error. So "recycling" a socket using bind(socket, INADDR_ANY, 0) (after ending a connection that used SO_REUSEADDR) does not seem to be possible.
And even if that were possible, when you're using multiple threads on a modern multi-core system, you will (very probably) end up doing multiple things in parallel. A socket can be used for only one outgoing connection at a time, so each of your scan threads will need its own socket.
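Putting that together for the original scan function, a sketch in which each thread creates its own socket (and the parameter no longer shadows the socket module); the address range and timeout are assumptions:

import socket
import threading

def scan(address, port):
    # One fresh socket per connection attempt.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        result = s.connect_ex((address, port))   # 0 means the port answered
        print(address, port, result)

threads = [threading.Thread(target=scan, args=("192.168.1.%d" % i, 80))
           for i in range(1, 255)]
for t in threads:
    t.start()
for t in threads:
    t.join()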
I am using a library (ShareDB) for operational transformation, and the server and client side use websocket-json-stream to communicate. However, ShareDB is run on Node.js as a service (I'm using zerorpc to control my Node processes), since my main web framework is Tornado (Python). I understand from this thread that with a stateful protocol such as TCP, connections are differentiated by the client port (so only one server port is required). And according to this response about how websockets handle multiple incoming requests, there is no difference in the underlying transport channel between TCP and websockets.
So my question is: if I create a websocket from the client to the Python server, and then also from the client to my Node.js code (the ShareDB service), how can the server tell which socket goes with which? Is it the server's responsibility to have only a single socket 'listening' for a connection at a given time (i.e. to first establish communication with the Python server and then start listening for the second websocket)?
The simplest way to run two server processes on the same physical server box is to have each of them listen on a different port and then the client connects to the appropriate port on that server to indicate which server it is trying to connect to.
If you can only have one incoming port due to your server environment, then you can use something like a proxy. You still have your two servers listening on different ports, but neither one is listening on the port that is open to the outside world. The proxy listens on the one incoming port that is open to the outside world and then based on some characteristics of the incoming connection, the proxy directs that incoming connection to the appropriate server process.
The proxy can be configured to identify which process you are trying to connect to either via the URL or the DNS hostname.
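As a bare-bones illustration of the first option, using plain sockets rather than the actual Tornado/ShareDB setup (ports and labels are assumptions), two listeners can run side by side on different ports and the client picks the service simply by picking the port:

import socket
import threading
import time

def serve(port, label):
    # A trivial listener that just identifies itself and closes.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        conn.sendall(label.encode())
        conn.close()

# Hypothetical ports: 8000 for the Python app, 8001 for the node service.
threading.Thread(target=serve, args=(8000, "python-app"), daemon=True).start()
threading.Thread(target=serve, args=(8001, "sharedb-service"), daemon=True).start()
time.sleep(0.5)                       # give the listeners a moment to start

# The client chooses the backend simply by choosing the port.
for port in (8000, 8001):
    with socket.create_connection(("127.0.0.1", port)) as c:
        print(port, c.recv(64))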
I am developing a group chat application to learn how to use sockets, threads (maybe), and the asyncore module (maybe).
My thought was to have a client-server architecture so that when a client connects to the server, the server sends the client a list of other connected clients (user name, IP address); a person can then connect to one or more people at a time, and the server would set up a P2P connection between the clients. I have the socket part working, but the server can only handle one client connection at a time.
What would be the best, most common, and practical way to handle multiple connections?
Do I create a new process/thread whenever a new connection comes into the server and then connect the different client connections together, or do I use the asyncore module, which, from what I understand, lets the server send the same data to multiple sockets (connections) while I just regulate where the data goes?
Any help/thoughts/advice would be appreciated.
For a group chat application, the general approach will be:
Server side (accept process):
Create the socket, bind it to a well-known port (on the appropriate interface), and listen
While (app_running)
Client_socket = accept (using serverSocket)
Spawn a new thread and pass this socket to the thread. That thread handles the client that just connected.
Continue, so that the server can keep accepting more connections.
Server-side client mgmt Thread:
while app_running:
read the incoming message and store it in a queue or similar.
continue
Server side (group chat processing):
For all connected clients:
check their queues. If any message is present, send it to ALL the connected clients (including the client that sent it; this serves as a sort of ACK)
Client side:
create a socket
connect to the server via IP address and port
do send/receive.
There is plenty of room for improvement in the above. For instance, the server could poll the sockets or use the "select" operation on a group of sockets. That would be more efficient, since having a separate thread for each connected client is overkill when there are many clients (think ~1MB of stack per thread).
PS: I haven't really used the asyncore module, but I am guessing you would notice a performance improvement when you have lots of connected clients and relatively little processing per client.
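A condensed sketch of the accept-and-spawn pattern outlined above (the port, message handling, and locking details are simplified assumptions; a real group chat would also need proper message framing):

import socket
import threading

clients = []                  # sockets of currently connected clients
clients_lock = threading.Lock()

def handle_client(conn):
    # Per-client thread: read messages and broadcast them to everyone.
    try:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            with clients_lock:
                for c in clients:
                    try:
                        c.sendall(data)   # echoed to the sender too, as a crude ACK
                    except OSError:
                        pass              # that client is gone; its own thread cleans up
    except OSError:
        pass
    finally:
        with clients_lock:
            if conn in clients:
                clients.remove(conn)
        conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 5050))            # hypothetical well-known port
server.listen()

while True:
    conn, addr = server.accept()
    with clients_lock:
        clients.append(conn)
    threading.Thread(target=handle_client, args=(conn,), daemon=True).start()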
I am running a Graphite server to monitor instruments at remote locations. I have a "perpetual" SSH tunnel to the machines from my server (loving autossh) to map their local ports to my server's local ports. This works well; data comes through with no hassles. However, we use a flaky satellite connection to the sites, which goes down rather regularly. I am running a "data crawler" on the instrument that uses Python and the socket module to send packets to the Graphite server. The problem is, if the link goes down temporarily (or the server gets rebooted, mostly for testing), I cannot re-establish the connection to the server. I trap the error, run socket.close(), and then re-open, but I just can't re-establish the connection. If I quit the Python program and restart it, the connection comes up just fine. Any ideas how I can "refresh" my socket connection?
It's hard to answer this correctly without a code sample. However, it sounds like you might be trying to reuse a closed socket, which is not possible.
If the socket has been closed (or has experienced an error), you must create a new connection using a new socket object. For this to work, the remote server must be able to handle multiple client connections in its accept() loop.
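In other words, the "refresh" has to replace the socket object itself. A sketch of that pattern (the Graphite address, metric name, and send interval are assumptions):

import socket
import time

GRAPHITE_ADDR = ("127.0.0.1", 2003)   # hypothetical tunnel endpoint on the server

def send_line(sock, line):
    # Try the existing connection first; on failure, throw the old socket
    # away and return a brand-new connected one.
    try:
        sock.sendall(line.encode())
        return sock
    except OSError:
        sock.close()                   # the old object cannot be reused
        new_sock = socket.create_connection(GRAPHITE_ADDR)
        new_sock.sendall(line.encode())
        return new_sock

sock = socket.create_connection(GRAPHITE_ADDR)
while True:
    # Graphite's plaintext format: "<metric> <value> <timestamp>\n"
    sock = send_line(sock, "site.instrument.value 1 %d\n" % int(time.time()))
    time.sleep(10)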