Python OSC, Query / Close active thread

I'm using a relatively simple Python script, based on the OSC module, to send data from one application to another.
import OSC
import threading

#------OSC Server-------------------------------------#
receive_address = '127.0.0.1', 9002

# OSC Server. There are three different types of server.
s = OSC.ThreadingOSCServer(receive_address)

# Define a message-handler function for the server to call.
def printing_handler(addr, tags, stuff, source):
    if addr == '/coordinates':
        print "Test", stuff

s.addMsgHandler("/coordinates", printing_handler)

def main():
    # Start OSCServer
    print "Starting OSCServer"
    st = threading.Thread(target=s.serve_forever)
    st.start()

main()
Run once, this works just fine and listens on port 9002.
But run twice, it returns this error:
socket.error: [Errno 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted
My goal is to:
Be able to query which port an active thread is listening on
Close it
I've tried the following...
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
result = s.connect_ex(('127.0.0.1', 9002))
print 'RESULT: ', result
s.close()
But this gives me an unsuccessful result. (It returns 10061 whether the port's thread is active or inactive.)

From the Python documentation for socketserver.BaseServer:
shutdown()
Tell the serve_forever() loop to stop and wait until it does.
server_close()
Clean up the server. May be overridden.
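A minimal sketch of how those two calls could be wired into the script above, assuming OSC.ThreadingOSCServer exposes the standard socketserver methods quoted here (this is an illustration, not tested against the OSC module):

# Hypothetical helper, assuming the standard socketserver API is available.
def stop_server(server):
    server.shutdown()       # ask the serve_forever() loop to stop and wait until it does
    server.server_close()   # release the socket so port 9002 can be bound again

# stop_server(s)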

Related

conn.send('Hi'.encode()) BrokenPipeError: [Errno 32] Broken pipe (SOCKET)

Hi, I made a model server/client which works fine, and I also created a separate GUI which needs two inputs, server IP and port; it only checks whether the server is up or not. But when I run the server, then run my GUI and enter the server IP and port, the GUI displays "connected", while the server side throws this error. The server and client work fine on their own, but integrating the GUI with the server throws the error below on the server side.
conn.send('Hi'.encode()) # send only takes string BrokenPipeError: [Errno 32] Broken pipe
This is server Code:
from socket import *
# Importing threading for per-client threads
import threading

# Defining server address and port
host = 'localhost'
port = 52000
data = " "

# Creating socket object
sock = socket()
# Binding socket to an address. bind() takes a tuple of host and port.
sock.bind((host, port))
# Listening at the address
sock.listen(5)  # 5 denotes the number of clients that can queue

def clientthread(conn):
    # Infinite loop so that the function does not terminate and the thread does not end.
    while True:
        # Sending message to connected client
        conn.send('Hi'.encode('utf-8'))  # send only takes bytes
        data = conn.recv(1024)
        print(data.decode())

while True:
    # Accepting incoming connections
    conn, addr = sock.accept()
    # Creating a new thread. Calling clientthread for this connection and passing conn as argument.
    thread = threading.Thread(target=clientthread, args=(conn,))
    thread.start()

conn.close()
sock.close()
This is the part of the GUI code which causes the problem:
def isOpen(self, ip, port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.connect((ip, int(port)))
        data = s.recv(1024)
        if data == b'Hi':
            print("connected")
            return True
    except:
        print("not connected")
        return False

def check_password(self):
    self.isOpen('localhost', 52000)
Your problem is simple.
Your client connects to the server
The server is creating a new thread with an infinite loop
The server sends a simple message
The client receives the message
The client closes the connection by default (!!!), since you returned from its method (no more references)
The server tries to receive the next message and then proceeds (the error lies here)
Since the connection has been closed by the client, the server can neither send nor receive the next message inside its infinite loop. That is the cause of the error. There is also no error handling for a closed connection, and no protocol for closing it cleanly on each side.
If you need a function that checks whether the server is online, create one that works like a ping (although I'm fairly sure a simple connect is enough). Example:
Client function:
def isOpen(self, ip, port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.connect((str(ip), int(port)))
        s.send("ping".encode('utf-8'))
        return s.recv(1024).decode('utf-8') == "pong"  # return whether the response matches or not
    except:
        return False  # can't connect
Server function:
def clientthread(conn):
    while True:
        msg = conn.recv(1024).decode('utf-8')  # receiving a message
        if msg == "ping":
            conn.send("pong".encode('utf-8'))  # sending the response
            conn.close()  # closing the connection on both sides
            break  # since we only need to check whether the server is online, we break
From your previous questions I can tell you have some problems understanding how TCP socket communication works. Please take a moment and read a few articles about how to communicate through sockets. If you don't need live communication (a continuous data stream, like video, a game server, etc.), only login forms for example, please stick with well-known protocols like HTTP. Creating your own reliable protocol might be a little complicated if you have just gotten into socket programming.
You could use Flask for an HTTP back-end, for instance along the lines sketched below.
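A minimal sketch of such a health-check endpoint (my own illustration; the /ping route name and the port are placeholders, not from the original answer):

# Hypothetical Flask health-check endpoint; route name and port are examples only.
from flask import Flask

app = Flask(__name__)

@app.route("/ping")
def ping():
    return "pong"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=52000)

The GUI could then simply issue an HTTP GET (for example with the requests library) and treat any successful response as "server is up".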

Python sockets: Server waits for nothing when asked to 'recv' and then 'sendall'

I am experimenting with python sockets to try to understand the whole concept better, but I have run into a problem. I have a simple server and a client, where the client sends a list to the server, and then waits for the server to send a string signaling the process is complete.
This is the client file:
import socket
import json

host = '192.168.1.102'
port = 14314

def request():
    print 'Connecting'
    clientsocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    clientsocket.connect((host, port))
    print 'Sending request'
    clientsocket.sendall(json.dumps([1, 2, 3, 4, 5, 6, 7, 8, 9]))
    print 'Receiving data'
    data = clientsocket.recv(512)
    print 'Received: {}'.format(data)

request()
and here is the server file:
import socket
import json

host = '192.168.1.102'
port = 14314

def run():
    print 'Binding socket'
    serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    serversocket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    serversocket.bind((host, port))
    print 'Waiting for client'
    serversocket.listen(1)
    clientsocket, addr = serversocket.accept()
    print 'Receiving data'
    raw_data = ''
    while True:
        tmp = clientsocket.recv(1024)
        if not tmp:
            break
        raw_data += tmp
    data = json.loads(raw_data)
    print 'Received: {}'.format(data)
    print 'Sending data'
    clientsocket.sendall('done')

run()
The problem is that although the client is done sending the list, the server is stuck in the recv loop, waiting for data that will never arrive. The whole payload was received in the first iteration, and in the second iteration there is nothing left to receive because the client has moved on to the receiving part.
The weird part is that if I comment out the receive part from the client and the send part from the server, the process completes successfully. So, what am I doing wrong? Why is this not working?
Thanks.
The docs for socket.recv mention additional flags that can be passed to the recv function, described in the Unix documentation. Turning to that documentation, I found the following message:
If no messages are available at the socket, the receive calls wait for
a message to arrive, unless the socket is nonblocking (see fcntl(2)),
in which case the value -1 is returned
So once again, we're directed to another page. The documentation for fcntl says
Performs one of the operations described below on the open file
descriptor
So, normally the socket.recv function is blocking (it will wait indefinitely for new data), unless we use a file descriptor. How do we do that? Well, there is a socket.makefile function that gives us a file object attached to the socket. Cool. This SO question gives us an example of how we can read and write to a socket using a file descriptor.
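A minimal sketch of that makefile() approach on the server's accepted socket, assuming the peer terminates each message with a newline (that framing is my assumption, not part of the question's code):

# Hypothetical: wrap the accepted socket in a file-like object and read one
# newline-terminated message; readline() blocks until the full line arrives.
f = clientsocket.makefile('rb')
line = f.readline()
f.close()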
Well, what if we don't want to use a file descriptor? Reading further into the Unix documentation for the recv function, I see that I can use the MSG_DONTWAIT flag. This doesn't work on Windows, but I did find out that we can use socket.setblocking(False) to permanently change the socket to non-blocking mode. You would then need to ignore any "A non-blocking socket operation could not be completed immediately" errors. Those are normal and non-fatal (error #10035 of this page mentions it is non-fatal).
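A minimal sketch of that non-blocking approach (variable names are illustrative and not from the question's code):

import errno
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('192.168.1.102', 14314))
sock.setblocking(False)  # recv() now raises instead of waiting for data

try:
    chunk = sock.recv(1024)
except socket.error as e:
    # EWOULDBLOCK/EAGAIN on Unix, 10035 (WSAEWOULDBLOCK) on Windows:
    # no data is available yet, which is normal and non-fatal.
    if e.errno in (errno.EWOULDBLOCK, errno.EAGAIN, 10035):
        chunk = ''  # nothing yet; try again later
    else:
        raise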
Another possible implementation would be to multi-thread your program: you can implement a receiving and a sending thread for your socket. This might give you the best performance, but it would be a lot of work to set up.
Python is awesome. I just found that Python also has libraries for asynchronous sockets: asyncore and asynchat, which have both been deprecated in favor of asyncio, if that is available in the version of Python you are using.
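As a rough illustration (my own sketch for Python 3.7+, not part of the original answer), the client side of this question could look like this with asyncio streams:

import asyncio
import json

async def request():
    reader, writer = await asyncio.open_connection('192.168.1.102', 14314)
    writer.write(json.dumps([1, 2, 3, 4, 5, 6, 7, 8, 9]).encode())
    await writer.drain()
    writer.write_eof()          # close the write half so the server's read loop sees end-of-stream
    data = await reader.read(512)
    print('Received:', data.decode())
    writer.close()

asyncio.run(request())

Note that write_eof() also happens to address the original hang: once the write half is closed, the server's recv loop sees an empty read and stops waiting.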
Sorry for throwing so much out there. I don't know a whole lot about sockets. I used them once with the Paramiko library, and that was it. But it looks like there are a lot of ways of implementing them.

Port Scanner python script

I'm a beginner to Python and I'm learning about socket objects. I found a script on the internet:
import socket

s = socket.socket()
socket.setdefaulttimeout(2)
try:
    s = s.connect(("IP_ADD", PORT_NUM))
    print "[+] connection successful"
except Exception, e:
    print "[+] Port closed"
I just wanted to ask whether this script can work as a port scanner? Thanks a lot!
With a small change, your code can be used as a TCP port scanner for localhost:
import socket

def scan_port(port_num, host):
    s = socket.socket()
    socket.setdefaulttimeout(2)
    try:
        s = s.connect((host, port_num))
        print port_num, "[+] connection successful"
    except Exception, e:
        print port_num, "[+] Port closed"

host = 'localhost'
for i in xrange(1024):
    scan_port(i, host)
But it is just a toy; you cannot use it for anything real. If you want to scan the ports of someone else's computer,
try nmap.
Here is my version of your port scanner. I tried to explain how everything works in the comments.
#-*-coding:utf8;-*-
#qpy:3
#qpy:console
import socket
import os

# This is used to set a default timeout on socket objects.
DEFAULT_TIMEOUT = 0.5

# This is used for checking if a call to socket.connect_ex was successful.
SUCCESS = 0

def check_port(*host_port, timeout=DEFAULT_TIMEOUT):
    '''Try to connect to a specified host on a specified port.
    If the connection takes longer than the TIMEOUT we set, we assume
    the host is down. If the connection is a success, we can safely assume
    the host is up and listening on port x. If the connection fails for any
    other reason, we assume the host is down and the port is closed.'''
    # Create and configure the socket.
    sock = socket.socket()
    sock.settimeout(timeout)
    # The SO_REUSEADDR flag tells the kernel to reuse a local
    # socket in TIME_WAIT state, without waiting for its natural
    # timeout to expire.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # Like connect(address), but return an error indicator instead
    # of raising an exception for errors returned by the C-level connect()
    # call (other problems, such as "host not found", can still raise exceptions).
    # The error indicator is 0 if the operation succeeded, otherwise the value of
    # the errno variable. This is useful to support, for example, asynchronous connects.
    connected = sock.connect_ex(host_port) == SUCCESS
    # Mark the socket closed.
    # The underlying system resource (e.g. a file descriptor)
    # is also closed when all file objects from makefile() are closed.
    # Once that happens, all future operations on the socket object will fail.
    # The remote end will receive no more data (after queued data is flushed).
    sock.close()
    # Return True if the port is open, False if it is closed.
    return connected

con = check_port('www.google.com', 83)
print(con)
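As a usage illustration (my addition, not part of the original answer), the same helper can sweep a range of ports:

# Hypothetical usage: scan the first 1024 TCP ports on localhost
# with the check_port helper defined above.
for port in range(1, 1025):
    if check_port('localhost', port, timeout=0.2):
        print('Port {} is open'.format(port))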

Python sctp module - server side

I have been trying to test SCTP for a network deployment.
I do not have an SCTP server or client and was hoping to be able use pysctp.
I am fairly certain that I have the client side code working.
def sctp_client():
    print("SCTP client")
    sock = sctp.sctpsocket_tcp(socket.AF_INET)
    #sock.connect(('10.10.10.70', int(20003)))
    sock.connect(('10.10.10.41', int(21000)))
    print("Sending message")
    sock.sctp_send(msg='allowed')
    sock.shutdown(0)
    sock.close()
Has anybody had luck with using the python sctp module for the server side?
Thank you in Advance!
I know that this topic's a bit dated, but I figured I would respond to it anyway to help out the community.
In a nutshell:
you are using pysctp together with the standard socket module to create either a client or a server;
you can therefore create your server connection as you normally would with a regular TCP connection.
Here's some code to get you started, it's a bit verbose, but it illustrates a full connection, sending, receiving, and closing the connection.
You can run it on your dev computer and then use a tool like ncat (nmap's implementation of netcat) to connect, i.e.: ncat --sctp localhost 80.
Without further ado, here's the code... HTH:
# Here are the packages that we need for our SCTP server
import socket
import sctp
from sctp import *
import threading

# Let's create a socket:
my_tcp_socket = sctpsocket_tcp(socket.AF_INET)
my_tcp_port = 80

# Here are a couple of parameters for the server
server_ip = "0.0.0.0"
backlog_conns = 3

# Let's set up a connection:
my_tcp_socket.events.clear()
my_tcp_socket.bind((server_ip, my_tcp_port))
my_tcp_socket.listen(backlog_conns)

# Here's a method for handling a connection:
def handle_client(client_socket):
    client_socket.send("Howdy! What's your name?\n")
    name = client_socket.recv(1024)  # This might be a problem for someone with a reaaallly long name.
    name = name.strip()
    print "His name was Robert Paulson. Er, scratch that. It was {0}.".format(name)
    client_socket.send("Thanks for calling, {0}. Bye, now.".format(name))
    client_socket.close()

# Now, let's handle an actual connection:
while True:
    client, addr = my_tcp_socket.accept()
    print "Call from {0}:{1}".format(addr[0], addr[1])
    client_handler = threading.Thread(target=handle_client,
                                      args=(client,))
    client_handler.start()
Unless you need the special sctp_ functions, you don't need an sctp module at all.
Just use protocol number 132 as IPPROTO_SCTP (it is defined in my Python 3 socket module but not in my Python 2 one), and you can use socket, bind, listen, connect, send, recv, sendto, recvfrom, and close from the standard socket module.
I'm doing some SCTP C development and I used python to better understand SCTP behavior without the SCTP module.
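A minimal sketch of that standard-library-only approach (my own illustration, untested; it assumes the kernel has SCTP support, and the port matches the client shown in the question):

import socket

# Fall back to the raw protocol number 132 if IPPROTO_SCTP is not defined.
IPPROTO_SCTP = getattr(socket, 'IPPROTO_SCTP', 132)

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_SCTP)
srv.bind(('0.0.0.0', 21000))
srv.listen(3)

conn, addr = srv.accept()
print('Connection from', addr)
print(conn.recv(1024))
conn.close()
srv.close()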

Why does my socket connection between two python scripts break if one of them is launched with Popen?

So I have two very simple python scripts that communicate over a socket. Right now they are both running on the same windows PC.
Here's controller.py:
import socket
import time
import sys
from subprocess import Popen, CREATE_NEW_CONSOLE

HOST = '192.168.1.107'    # The remote host
PORT = 50557              # The same port as used by the server

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.connect((HOST, PORT))
except:
    Popen([sys.executable, 'driver.py'], creationflags=CREATE_NEW_CONSOLE)
    time.sleep(0.2)
    s.connect((HOST, PORT))
s.send(sys.argv[1])
data = s.recv(1024)
s.close()
print 'Received', repr(data)
And here's driver.py:
import socket

HOST = ''        # Symbolic name meaning the local host
PORT = 50557     # Arbitrary non-privileged port

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((HOST, PORT))
s.listen(1)
while 1:
    print 'Waiting for controller connection.'
    conn, addr = s.accept()
    print 'Connected to ', addr
    while 1:
        data = conn.recv(1024)
        print 'Recieved ', data
        if not data: break
        if data == 'Q': quit()
        print 'A.'
        conn.send(data[::-1])
        print 'B.'
    print 'C.'
    conn.close()
If I open two cmd windows and run python <filename>.py <arg> in each, everything works fine. I can leave driver.py running and run controller.py over and over again, until I kill the driver by sending a "Q".
The try/except statement opens a new window and runs driver.py if the connection can't be made. Theoretically this just makes sure that the receiver is running before anything is sent. This ALMOST works, but driver.py hangs inside the second while loop for reasons I cannot explain. Here's the output from sequential controller.py calls:
Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.
>python controller.py FirstMsg
Received 'gsMtsriF'
>python controller.py SecondMsg
Received 'gsMdnoceS'
>python controller.py Q
Received ''
>python controller.py ThirdMsg
Received 'gsMdrihT'
>python controller.py FourthMsg
(After FourthMsg controller.py hangs forever)
Here's the output from driver.py
Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.
>python driver.py
Waiting for controller connection.
Connected to ('192.168.1.107', 49915)
Recieved FirstMsg
A.
B.
Recieved
C.
Waiting for controller connection.
Connected to ('192.168.1.107', 49916)
Recieved SecondMsg
A.
B.
Recieved
C.
Waiting for controller connection.
Connected to ('192.168.1.107', 49917)
Recieved Q
(Whereupon it returns to the regular command prompt.) The new window created with Popen contains this:
Waiting for controller connection.
Connected to ('192.168.1.107', 49918)
Recieved ThirdMsg
A.
B.
So in the weird new window (which I just noticed does not look or act like a regular cmd window) the program seems to hang on data = conn.recv(1024).
Why does driver.py act differently in the new window, and how can I fix it?
You asked about sockets and Popen; my answer will ignore that and instead propose a solution for the use case you have shown: remote communication over a network.
With raw sockets it is very easy to run into never-ending problems. If you have both endpoints under your control, zeromq-based communication will make the programming fun again.
You may use zeromq directly (if you check the examples for pyzmq, you will find they are really short and easy while serving very well in many difficult scenarios), but today I will show the zerorpc library, which makes remote calls even simpler.
Install zerorpc
$ pip install zerorpc
Write your worker.py
def reverse(text):
    """ return reversed argument """
    return text[::-1]

def makebig(text):
    """ turn text to uppercase letters """
    return text.upper()
Call worker methods from command line
Start server serving worker.py functions
$ zerorpc --server --bind tcp://*:5555 worker
binding to "tcp://*:5555"
serving "worker"
Consume the services (from command line)
Simple call, reporting available functions to call
$ zerorpc tcp://localhost:5555
connecting to "tcp://localhost:5555"
makebig turn text to uppercase letters
reverse return reversed argument
Then call these functions:
$ zerorpc tcp://localhost:5555 reverse "alfabeta"
connecting to "tcp://localhost:5555"
'atebafla'
$ zerorpc tcp://localhost:5555 makebig "alfabeta"
connecting to "tcp://localhost:5555"
'ALFABETA'
Note: so far we have written 7 lines of code and are already able to call it remotely.
Using from Python code
The command-line utility zerorpc is just a handy tool; you are free to integrate without it, using pure Python.
Consuming from python code client.py
import zerorpc
client = zerorpc.Client()
url = "tcp://localhost:5555"
client.connect(url)
print client.reverse("alfabeta")
print client.makebig("alfabeta")
and run it
$ python client.py
atebafla
ALFABETA
Run remote server from python code server.py
import zerorpc
import worker
url = "tcp://*:5555"
srv = zerorpc.Server(worker)
srv.bind(url)
srv.run()
Stop the older server started by the zerorpc command, if it is still running.
Then start our Python version of the server:
$ python server.py
(it does not print anything, just waits to serve).
Finally, consume it from your Python code
$ python client.py
atebafla
ALFABETA
Conclusions
Using zeromq or zerorpc simplifies the situation a lot
Play with starting client.py and server.py in various orders
try to run multiple clients
try to consume services from another computer (over network)
zerorpc provides more patterns (streamer, publish/subscribe); see the zerorpc test suite and the sketch after this list
zerorpc solves the integration task and let you focus on coding real functions
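For instance, a streaming response can be exposed with the zerorpc.stream decorator. This is my own minimal sketch (the class and method names are made up, not from the original answer):

import zerorpc

class StreamingWorker(object):
    @zerorpc.stream
    def letters(self, text):
        """ yield the characters of text one by one to the client """
        for ch in text:
            yield ch

srv = zerorpc.Server(StreamingWorker())
srv.bind("tcp://*:5555")
srv.run()

# On the client side, iterating over client.letters("alfabeta") then
# receives the characters one at a time as they are streamed.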
