Sending data across two programs in Python

This is my code:
socketcheck.py
import time
import subprocess
subprocess.Popen(["python", "server.py"])
for i in range(10):
    time.sleep(2)
    print i

def print_from_server(data):
    print data
server.py
import socket
from socketcheck import print_from_server
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('localhost',3005))
client_connected = 1
while 1:
    s.listen(1)
    conn, addr = s.accept()
    data = conn.recv(1024)
    if data:
        client_connected = 0
    else:
        break
    if client_connected == 0:
        print 'data received'
        print_from_server(data)
        client_connected = 1
        conn.sendall(data)
client.py
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('localhost',3005))
s.sendall('Hello, world')
data = s.recv(1024)
#s.close()
print 'Received', repr(data)
What I am trying to do here is run socketcheck.py, which starts server.py in the background and listens for a client connection. Whatever data the client sends, I want to pass on to socketcheck.py. Is this valid? If so, how do I achieve it?
Right now, when I try to run socketcheck.py, the for loop runs indefinitely.
Thanks :)
EDIT:
I initially tried this as a single program, but until a client connects the rest of the program doesn't execute (it blocks). With setblocking(0) the program flow no longer stops, but when a client connects to the server it doesn't print (or do) anything. The server code looked something like this:
import socket
from socketcheck import print_from_server
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('localhost',3005))
s.setblocking(0)
while 1:
    try:
        s.listen(1)
        conn, addr = s.accept()
        conn.setblocking(0)
        data = conn.recv(1024)
        if not data: break
        print 'data received'
        conn.sendall(data)
    except:
        print 'non blocking'
    print 'the lengthy program continues from here'

The reason why your program crashes your computer is simple:
You have a while loop which calls print_from_server(data), and each time it calls it, a new subprocess gets created via subprocess.Popen(["python", "server.py"]).
The reason a new Popen gets created each time is a bit more subtle: socketcheck.py starts a new server.py process. That new server.py process imports print_from_server from socketcheck, and since this is a fresh process, the module-level code in socketcheck.py (including the Popen call) runs again; module-level statements are executed once per import in each process.
The number of running processes therefore explodes quickly and your computer crashes.
One additional remark: you cannot print to the console with a print command in such a subprocess, since there is no console attached to it; you can only print to a file. If you do that, you'll see that the output explodes quickly across all the processes.
Put socketcheck.py and server.py into one program and everything works fine, or explain why you need two programs.

The functionality can be easily achieved with multithreading :)
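For example, here is a minimal sketch of that combined, threaded approach (Python 3 syntax; the names mirror the question's code, but this is an illustration, not the original program). The server runs in a daemon thread inside socketcheck.py itself and hands received data straight to print_from_server, so there is no second process and no circular import:

import socket
import threading
import time

def print_from_server(data):
    print(data)

def serve():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(('localhost', 3005))
    s.listen(1)
    while True:
        conn, addr = s.accept()
        data = conn.recv(1024)
        if data:
            print('data received')
            print_from_server(data)
            conn.sendall(data)
        conn.close()

# start the server in the background instead of spawning a second process
threading.Thread(target=serve, daemon=True).start()

# the "lengthy program" keeps running while the server waits for clients
for i in range(10):
    time.sleep(2)
    print(i)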

Related

Python code to stream file contents through socket produces incomplete data

I've written a simple Python script that reads a file and streams the contents over a socket. The application I've connected to currently just reads the data from the socket and writes it to a file. However the data on the receiving end is incomplete. The first ~150 lines of the file do not get received, nor does the last line. I don't see anything glaringly wrong with my Python code, but if someone can point out what I've done wrong I would appreciate it. If there's an alternative method that can accomplish this task that may be helpful as well. Thanks.
EDIT: I'm pretty sure it's an issue with this code and not the receiving side because I have a C++ version of this Python code that the receiving end works fine with. However, I don't know what could be wrong here.
import socket
import sys
import time
__all__=['Stream_File_to_Socket']
def Stream_File_to_Socket(port, input_file):
    # create socket
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_address = ('localhost', port)
    sock.bind(server_address)
    sock.listen(1)
    # open file and send data
    f = open(input_file, "r")
    while True:
        #print('waiting for a connection')
        connection, client_address = sock.accept()
        if connection.fileno() != -1:
            break
        #print("no connection!")
        return(-1)
    time.sleep(0.5)
    buffer_size = 1024
    while True:
        data = f.readline()
        if not data:
            connection.send(data)
            break
        connection.send(data)
        time.sleep(0.01)
    f.close()
    return(0)

Telnet reads double characters when running python server script

So I am new to Python and I'm trying to learn some socket programming. When the following script is run and I connect to the server via telnet, it returns something like "hheelllloo wwoorrlldd" instead of letting me write "hello world" and then send the data. I've looked online and I've already tried to change the localecho setting in telnet, and that didn't work either.
The server's script is:
import socket
import sys
import threading
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(('127.0.0.1', 10000))
sock.listen(1)
connections = []
def handler(c, a):
    global connections
    while True:
        data = c.recv(1024)
        for connection in connections:
            connection.send(bytes(data))
        if not data:
            connections.remove(c)
            c.close()
            break

while True:
    c, a = sock.accept()
    conn_thread = threading.Thread(target=handler, args=(c, a))
    conn_thread.daemon = True
    conn_thread.start()
    connections.append(c)
When run, the code should echo back to the sender the text they sent. I think mine does it character by character, without waiting for Enter to be pressed to send, and I don't know why. I might be wrong though.
Also, I'm running Windows 10, if this matters.

Pause thread and wake it up from another script

I want a thread to wait for a message from another script.
I don't want to use time.sleep(), as it creates time gaps: if my thread needs to wake up and continue running, it might be delayed too much, and I'm aiming for the fastest performance. I also don't want to use while(NOT_BEING_CALLED_BY_THE_OTHER_THREAD), because it will eat up my CPU, and I'm aiming to keep CPU usage as low as possible (there will be more threads doing the same thing at the same time).
In Pseudo-code it should look like this:
do_stuff()
wait_for_being_called() #Rise immediately after being called (or as soon as possible)
do_more_stuff()
The purpose of this is to use data that wasn't available before being called: there is one script that checks for data availability (a single running thread) and many scripts that wait for the data they need to become available (the single script checks it and should call them once the data is available). It's kind of like std::condition_variable in C++, only I want my other, external script to be able to wake the awaiting script.
How can I achieve something like this? What should check_for_events.py contain?
#check_for_events.py
for data_node in data_list:
    """
    What do I do here, assuming I have the thread id?
    """
If you have two different scripts, probably the best thing to use is select. Here's an example of what I mean:
from __future__ import print_function

import select
import socket
import sys
import time
from random import randint

def serve():
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    port = randint(10000, 50000)
    with open('.test_port', 'w') as f:
        f.write('{}'.format(port))
    sock.bind(('127.0.0.1', port))
    sock.listen(1)
    not_finished = True
    while not_finished:
        try:
            print('*'*40)
            print('Waiting for connection...')
            conn, addr = sock.accept()
            print('Waiting forever for data')
            select.select([conn], [], [])
            data = conn.recv(2048)
            print('got some data, so now I can go to work!')
            print('-'*40)
            print('Doing some work, doo da doo...')
            print('Counting to 20!')
            for x in range(20):
                print(x, end='\r')
                time.sleep(0.5)
            print('** Done with work! **')
            print('-'*40)
            conn.close()
        except KeyboardInterrupt:
            print('^C caught, quitting!')
            not_finished = False

def call():
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print('Connecting')
    with open('.test_port') as f:
        port = int(f.read())
    sock.connect(('127.0.0.1', port))
    sock.sendall(b'This is a message')
    sock.close()
    print('Done')

if __name__ == '__main__':
    if 'serve' in sys.argv:
        serve()
    elif 'call' in sys.argv:
        call()
This allows the caller to actually communicate information with the runner. You could also set it up to listen for multiple incoming connections and toss them in the pool to select from, if that's something that you need.
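If listening for multiple incoming connections and tossing them into the select pool is what you need, a rough sketch of that variant could look like this (the port and other details are illustrative, not part of the answer above):

import select
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(('127.0.0.1', 12345))
listener.listen(5)

sockets = [listener]
while True:
    readable, _, _ = select.select(sockets, [], [])
    for s in readable:
        if s is listener:
            conn, addr = s.accept()       # a new caller: add it to the pool
            sockets.append(conn)
        else:
            data = s.recv(2048)
            if data:
                print('woken up by a caller:', data)
            else:
                sockets.remove(s)         # caller hung up
                s.close()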
But if you really just want to block until another program calls you, then you could make this even simpler by removing the parts between conn, addr = sock.accept() and conn.close() (other than your own work, of course).

Socket programming for multiple clients

I'm trying to write code for a chat server using sockets for multiple clients. But it is working for only a single client. Why is it not working for multiple clients?
I have to perform this program using Beaglebone Black. My server program will be running on beaglebone and normal clients on gcc or terminal. So I can't use multithreading.
#SERVER
import socket
import sys
s=socket.socket()
s.bind(("127.0.0.1",9998))
s.listen(10)
while True:
    sc, address = s.accept()
    print address
    while True:
        msg = sc.recv(1024)
        if not msg: break
        print "Client says:", msg
        reply = raw_input("enter the msg::")
        sc.send(reply)
    sc.close()
s.close()
#CLIENT
import socket
import sys
s= socket.socket()
s.connect(("127.0.0.1",9998))
while (1):
    msg = raw_input("enter the msg")
    s.send(msg)
    reply = s.recv(1024)
    print "Server says::", reply
s.close()
Use an event loop.
One option is the event loop integrated into Python, asyncio: see its Echo server example. Or use an external library that provides the event loop, such as libuv: see its Echo server example.
Note: your code does not work for multiple clients simultaneously because you block in the receive operation and never handle new accept operations.
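A minimal sketch in the spirit of the asyncio echo server example (assuming Python 3.7+; the interactive raw_input reply is replaced by a plain echo for brevity, and each client is handled by its own coroutine, so accepting new clients is never blocked by an existing one):

import asyncio

async def handle_client(reader, writer):
    while True:
        data = await reader.read(1024)
        if not data:
            break
        print("Client says:", data.decode())
        writer.write(data)          # echo the message back
        await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle_client, "127.0.0.1", 9998)
    async with server:
        await server.serve_forever()

asyncio.run(main())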

How do I abort a socket.recv() from another thread in Python

I have a main thread that waits for connection. It spawns client threads that will echo the response from the client (telnet in this case). But say that I want to close down all sockets and all threads after some time, like after 1 connection.
How would I do it? If I do clientSocket.close() from the main thread, it won't stop doing the recv. It will only stop if I first send something through telnet, then it will fail doing further sends and recvs.
My code looks like this:
# Echo server program
import socket
from threading import Thread
import time
class ClientThread(Thread):
    def __init__(self, clientSocket):
        Thread.__init__(self)
        self.clientSocket = clientSocket

    def run(self):
        while 1:
            try:
                # It will hang here, even if I do close on the socket
                data = self.clientSocket.recv(1024)
                print "Got data: ", data
                self.clientSocket.send(data)
            except:
                break
        self.clientSocket.close()

HOST = ''
PORT = 6000
serverSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
serverSocket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
serverSocket.bind((HOST, PORT))
serverSocket.listen(1)
clientSocket, addr = serverSocket.accept()
print 'Got a new connection from: ', addr
clientThread = ClientThread(clientSocket)
clientThread.start()
time.sleep(1)
# This won't make the recv in the clientThread stop immediately,
# nor will it generate an exception
clientSocket.close()
I know this is an old thread and that Samuel probably fixed his issue a long time ago. However, I had the same problem and came across this post while googling. I found a solution and think it is worthwhile to add.
You can use the shutdown method on the socket class. It can prevent further sends, receives or both.
socket.shutdown(socket.SHUT_WR)
The above prevents future sends, as an example.
See Python docs for more info.
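Applied to the code in the question, a hedged sketch could look like this: shut the socket down from the main thread, and have the thread's loop treat an empty recv() as end-of-stream so it exits instead of spinning:

# in ClientThread.run(), treat an empty read as "connection finished":
data = self.clientSocket.recv(1024)
if not data:
    break

# in the main thread, instead of (or before) clientSocket.close():
clientSocket.shutdown(socket.SHUT_RDWR)   # wakes the blocked recv() immediately
clientThread.join()
clientSocket.close()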
I don't know if it's possible to do what you're asking, but it shouldn't be necessary. Just don't read from the socket if there is nothing to read; use select.select to check the socket for data.
change:
data = self.clientSocket.recv(1024)
print "Got data: ", data
self.clientSocket.send(data)
to something more like this:
r, _, _ = select.select([self.clientSocket], [], [])
if r:
    data = self.clientSocket.recv(1024)
    print "Got data: ", data
    self.clientSocket.send(data)
EDIT: If you want to guard against the possibility that the socket has been closed, catch socket.error.
do_read = False
try:
    r, _, _ = select.select([self.clientSocket], [], [])
    do_read = bool(r)
except socket.error:
    pass
if do_read:
    data = self.clientSocket.recv(1024)
    print "Got data: ", data
    self.clientSocket.send(data)
I found a solution using timeouts. That will interrupt the recv (actually before the timeout has expired, which is nice):
# Echo server program
import socket
from threading import Thread
import time
class ClientThread(Thread):
    def __init__(self, clientSocket):
        Thread.__init__(self)
        self.clientSocket = clientSocket

    def run(self):
        while 1:
            try:
                data = self.clientSocket.recv(1024)
                print "Got data: ", data
                self.clientSocket.send(data)
            except socket.timeout:
                # If it was a timeout, we want to continue with recv
                continue
            except:
                break
        self.clientSocket.close()

HOST = ''
PORT = 6000
serverSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
serverSocket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
serverSocket.bind((HOST, PORT))
serverSocket.listen(1)
clientSocket, addr = serverSocket.accept()
clientSocket.settimeout(1)
print 'Got a new connection from: ', addr
clientThread = ClientThread(clientSocket)
clientThread.start()
# Close it down immediately
clientSocket.close()
I must apologize for the comments below. The earlier comment by #Matt Anderson works. I had made a mistake when trying it out which led to my post below.
Using timeout is not a very good solution. It may seem that waking up for an instant and then going back to sleep is no big deal, but I have seen it greatly affect the performance of an application. You have an operation that for the most part wants to block until data is available and thus sleep forever. However, if you want to abort for some reason, like shutting down your application, then the trick is how to get out. For sockets, you can use select and listen on two sockets. Your primary one, and a special shutdown one. Creating the shutdown one though is a bit of a pain. You have to create it. You have to get the listening socket to accept it. You have to keep track of both ends of this pipe. I have the same issue with the Synchronized Queue class. There however, you can at least insert a dummy object into the queue to wake up the get(). This requires that the dummy object not look like your normal data though. I sometimes wish Python had something like the Windows API WaitForMultipleObjects.
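A rough sketch of that "special shutdown socket" trick, using socket.socketpair() as the wake-up pipe (names are illustrative; socketpair is available on Unix, and on Windows only in newer Python versions):

import select
import socket

# one end stays with the worker thread, the other with whoever wants to wake it
wake_recv, wake_send = socket.socketpair()

def worker(data_sock):
    while True:
        ready, _, _ = select.select([data_sock, wake_recv], [], [])
        if wake_recv in ready:
            wake_recv.recv(1)            # drain the wake-up byte
            break                        # another thread asked us to stop
        data = data_sock.recv(1024)
        if not data:
            break
        data_sock.sendall(data)          # the normal work: echo data back

# later, from the main thread, to abort the blocking select/recv:
#     wake_send.send(b'x')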
