I have the following code from Wesley Chun's Core Python Applications Programming book:
#!/usr/bin/env python
"""
tcp server
"""
from socket import AF_INET, SOCK_STREAM, socket
from time import ctime

HOST = ''
PORT = 21567
BUFSIZE = 1024
ADDR = (HOST, PORT)

tcp_server_socket = socket(AF_INET, SOCK_STREAM)
tcp_server_socket.bind(ADDR)
# maximum number of queued incoming connections before connections are refused
tcp_server_socket.listen(5)

while True:
    print "waiting for connection"
    tcpCliSock, addr = tcp_server_socket.accept()
    print "... connected from:", addr
    while True:
        DATA = tcpCliSock.recv(BUFSIZE)
        if not DATA:
            break
        tcpCliSock.send('[%s] %s' % (ctime(), DATA))
    tcpCliSock.close()
tcp_server_socket.close()
I made some modifications to the original code, but I am still confused about how best to modify it to be more compliant.
Here are all the messages I am getting:
C: 14,0: Invalid name "tcp_server_socket" (should match (([A-Z_][A-Z0-9_]*)|(__.*__))$)
C: 21,4: Invalid name "tcpCliSock" (should match (([A-Z_][A-Z0-9_]*)|(__.*__))$)
C: 21,16: Invalid name "addr" (should match (([A-Z_][A-Z0-9_]*)|(__.*__))$)
E: 25,15: Instance of '_socketobject' has no 'recv' member
E: 28,8: Instance of '_socketobject' has no 'send' member
I suppose the first three just want me to use all-caps variable names. Is that the standard practice for these types of scripts? I don't see the code becoming more readable by using this convention; on the contrary, it will look less readable. What is the motivation behind such a rule in pylint, and how can I make the code more compliant? I hardly think a writer of such stature would write code like this without reason, be it readability, beginner-friendliness or anything else.
The two errors about _socketobject you are seeing are a quirk of how the socket module works. This issue has come up on StackOverflow once before, and the linked question provides a couple of answers to help you get rid of those errors.
The first three messages you are getting are convention warnings. They are complaining that the names tcp_server_socket, tcpCliSock and addr do not match the regular expression for constant members. Because your code is at 'top-level' (i.e. outside of any functions or classes), members are expected to be constant, and names of constants should match the regular expression given.
Suppose your Python script was saved in a file tcp_server.py. If you then write import tcp_server either from the Python interpreter or from another Python script, your TCP server will start. This isn't typically what you would want to happen. If you import a module, it can define functions, classes and constants, but it shouldn't run any code.
I'd recommend moving all of the code from the line tcp_server_socket = socket(....) downwards into a separate function (let's call it start_server), and then adding the following lines to the bottom of your script:
if __name__ == "__main__":
    start_server()
These two lines will then start your server if you run the script directly, but not if you import tcp_server from somewhere else.
Once you've done that, the warnings about the variable names will go away, but you will get some further convention warnings: two of them will complain about DATA and tcpCliSock not matching the naming convention for variable names, and another will nag you that your start_server function doesn't have a docstring.
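To make this concrete, here is a rough sketch of what the refactored script could look like, still in the book's Python 2 style. The name start_server, the lowercase data/client_sock names, the docstrings and the try/finally are illustrative choices on my part, not the author's code:

#!/usr/bin/env python
"""TCP timestamp server, refactored so importing the module has no side effects."""

from socket import AF_INET, SOCK_STREAM, socket
from time import ctime

HOST = ''
PORT = 21567
BUFSIZE = 1024
ADDR = (HOST, PORT)

def start_server():
    """Accept connections forever, echoing each message back with a timestamp."""
    server_sock = socket(AF_INET, SOCK_STREAM)
    server_sock.bind(ADDR)
    server_sock.listen(5)
    try:
        while True:
            print "waiting for connection"
            client_sock, addr = server_sock.accept()
            print "... connected from:", addr
            while True:
                data = client_sock.recv(BUFSIZE)
                if not data:
                    break
                client_sock.send('[%s] %s' % (ctime(), data))
            client_sock.close()
    finally:
        server_sock.close()

if __name__ == "__main__":
    start_server()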
Related
I'm new to socket programming in Python. Here is an example of opening a TCP socket in a Mininet host and sending a photo from one host to another. In fact, I adapted code I had used to send a simple message to another host (writing the received data to a text file) to meet my requirements. When I run this revised code there is no error and the transfer appears to work, but I am not sure whether this is a correct way to do the transmission. Since I'm running both hosts on the same machine, I thought that might influence the result. Could you check whether this is a correct way to transfer, or whether I should add or remove something?
mininetSocketTest.py
#!/usr/bin/python

from mininet.topo import Topo, SingleSwitchTopo
from mininet.net import Mininet
from mininet.log import lg, info
from mininet.cli import CLI

def main():
    lg.setLogLevel('info')
    net = Mininet(SingleSwitchTopo(k=2))
    net.start()
    h1 = net.get('h1')
    p1 = h1.popen('python myClient2.py')
    h2 = net.get('h2')
    h2.cmd('python myServer2.py')
    CLI( net )
    #p1.terminate()
    net.stop()

if __name__ == '__main__':
    main()
myServer2.py
import socket
import sys

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(('10.0.0.1', 12345))
buf = 1024
f = open("2.jpg", 'wb')
s.listen(1)
conn, addr = s.accept()
while 1:
    data = conn.recv(buf)
    print(data[:10])
    #print "PACKAGE RECEIVED..."
    f.write(data)
    if not data: break
    #conn.send(data)
conn.close()
s.close()
myClient2.py:
import socket
import sys

f = open("1.jpg", "rb")
print sys.getsizeof(f)
buf = 1024
data = f.read(buf)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('10.0.0.1', 12345))
while (data):
    if(s.sendall(data)):
        #print "sending ..."
        data = f.read(buf)
        print(f.tell(), data[:10])
    else:
        s.close()
s.close()
This loop in client2 is wrong:
while (data):
    if(s.send(data)):
        print "sending ..."
        data = f.read(buf)
As the send docs say:
Returns the number of bytes sent. Applications are responsible for checking that all data has been sent; if only some of the data was transmitted, the application needs to attempt delivery of the remaining data. For further information on this topic, consult the Socket Programming HOWTO.
You're not even attempting to do this. So, while it probably works on localhost, on a lightly-loaded machine, with smallish files, it's going to break as soon as you try to use it for real.
As the help says, you need to do something to deliver the rest of the buffer. Since there's probably no good reason you can't just block until it's all sent, the simplest thing to do is to call sendall:
Unlike send(), this method continues to send data from bytes until either all data has been sent or an error occurs. None is returned on success. On error, an exception is raised…
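To make the difference concrete, here is a hedged sketch of what "attempting delivery of the remaining data" looks like if you insist on plain send(), versus just letting sendall() do the retrying. The helper name send_all_manually is made up for illustration:

def send_all_manually(sock, data):
    # Keep calling send() until every byte has been handed to the kernel.
    total_sent = 0
    while total_sent < len(data):
        sent = sock.send(data[total_sent:])
        if sent == 0:
            raise RuntimeError("socket connection broken")
        total_sent += sent

# ...or simply let the library do it for you:
# sock.sendall(data)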
And this brings up the next problem: You're not doing any exception handling anywhere. Maybe that's OK, but usually it isn't. For example, if one of your sockets goes down, but the other one is still up, do you want to abort the whole program and hard-drop your connection, or do you maybe want to finish sending whatever you have first?
You should probably at least use a with clause or a finally, to make sure you close your sockets cleanly, so the other side will get a nice EOF instead of an exception.
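For example, a minimal sketch of the client wrapped so the socket always gets closed, reusing the buf value and file name from the code above (the structure is mine, not the original code):

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.connect(('10.0.0.1', 12345))
    with open("1.jpg", "rb") as f:
        data = f.read(buf)
        while data:
            s.sendall(data)
            data = f.read(buf)
finally:
    # Runs whether the transfer succeeded or an exception was raised,
    # so the server always sees a proper EOF.
    s.close()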
Also, your server code just serves a single client and then quits. Is that actually what you wanted? Usually, even if you don't need concurrent clients, you at least want to loop around accepting and servicing them one by one.
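If you do want the server to keep running, one rough way to wrap the existing receive loop in an accept loop (still serving one client at a time; the per-client filename is just a placeholder) would be:

s.listen(1)
while True:
    conn, addr = s.accept()
    f = open("received_from_%s.jpg" % addr[0], 'wb')   # placeholder naming scheme
    try:
        while True:
            data = conn.recv(buf)
            if not data:
                break          # client closed its end: transfer finished
            f.write(data)
    finally:
        f.close()
        conn.close()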
Finally, a server almost always wants to do this:
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
Without this, if you try to run the server again within a few seconds of it finishing (a platform-specific number of seconds, which may even depend on whether it finished with an exception instead of a clean shutdown), the bind will fail, in the same way as if you tried to bind a socket that's actually in use by another program.
First of all, you should use TCP and not UDP. TCP will ensure that your client/server receives the whole photo properly. UDP is used more for content streaming, which is absolutely not your use case.
def mp_worker(row):
    ip = row[0]
    ip_address = ip
    tcp_port = 2112
    buffer_size = 1024

    # Read the reset message sent from the sign when a new connection is established
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        print('Connecting to terminal: {0}'.format(ip_address))
        s.connect((ip_address, tcp_port))
        # Putting a breakpoint on this call in debug makes the script work
        s.send(":08a8RV;")
        #data = recv_timeout(s)
        data = s.recv(buffer_size)
        strip = data.split("$", 1)[-1].rstrip()
        strip = strip[:-1]
        print(strip)
        termStat = [ip_address, strip]
        terminals.append(termStat)
    except Exception as exc:
        print("Exception connecting to: " + ip_address)
        print(exc)
The above code is the section of the script that is causing the problem. It's a pretty simple function that connects to a socket based on a passed in IP from a DB query and receives a response that indicates the hardware's firmware version.
Now, the issue is that when I run it in debug with a breakpoint on the socket call, I get the entire expected response from the hardware, but if I don't have a breakpoint in there, or if I simply run the script, it only returns part of the expected message. I tried putting a time.sleep() in after the send to see if it would get the entire response, and I also tried the commented-out recv_timeout() method, which uses a non-blocking socket and a timeout to try to get an entire response, both with exactly the same results.
As another note, this works in a script with everything in one main code block, but I need this part separated into a function so I can use it with the multiprocessing library. I've tried running it on both my local Windows 7 machine and on a Unix server with the same results.
I'll expand on and reiterate what I put into a comment a moment ago. I am still not entirely sure what is behind the different behavior in the two scenarios (apart from a timing guess, apparently disproved by your attempt to add a sleep).
However, it's somewhat immaterial, as stream sockets do not guarantee that you get all the requested data at once, in the chunks it was sent in. That is up to the application to deal with. If the server closes the socket after the full response has been sent, you could replace:
data = s.recv(buffer_size)
with a loop that calls recv() until zero bytes are received, which is the equivalent of getting 0 (EOF) from the underlying syscall:
data = ''
while True:
    received = s.recv(buffer_size)
    if len(received) == 0:
        break
    data += received
If that is not the case, you would have to rely on a fixed or known message size (sent at the beginning) to decide what belongs together, or deal with it at the protocol level (look for characters or sequences used to signal message boundaries).
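As an illustration of the "known size sent at the beginning" idea, here is a sketch that assumes the peer prepends a 4-byte length header to every message; whether your hardware's protocol allows anything like this is an open question:

import struct

def recv_exact(sock, nbytes):
    # Keep calling recv() until exactly nbytes have arrived.
    chunks = []
    remaining = nbytes
    while remaining > 0:
        chunk = sock.recv(min(remaining, 4096))
        if not chunk:
            raise EOFError("connection closed before %d bytes arrived" % nbytes)
        chunks.append(chunk)
        remaining -= len(chunk)
    return ''.join(chunks)

def recv_message(sock):
    header = recv_exact(sock, 4)             # assumed 4-byte big-endian length prefix
    (length,) = struct.unpack('!I', header)
    return recv_exact(sock, length)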
I just recently found a solution, and thought I'd post it in case anyone else has this issue. I decided to try calling socket.recv() before calling socket.send(), and then calling socket.recv() again afterwards, and it seems to have fixed the issue; I couldn't really tell you why it works, though.
data = s.recv(buffer_size)
s.send(":08a8RV;")
data = s.recv(buffer_size)
The problem statement is as follows:
I am working with Abaqus, a program for analyzing mechanical problems. It is basically a standalone Python interpreter with its own objects etc. Within this program, I run a python script to set up my analysis (so this script can be modified). It also contains a method which has to be executed when an external signal is received. These signals come from the main script that I am running in my own Python engine.
For now, I have the following workflow:
The main script sets a boolean to True when the Abaqus script has to execute a specific function, and pickles this boolean into a file. The Abaqus script regularly checks this file to see whether the boolean has been set to true. If so, it does an analysis and pickles the output, so that the main script can read this output and act on it.
I am looking for a more efficient way to signal the other process to start the analysis, since there is a lot of unnecessary checking going on right now. Data exchange via pickle is not an issue for me, but a more efficient solution is certainly welcome.
Search results always give me solutions based on subprocess or the like, which is for two processes started within the same interpreter. I have also looked at ZeroMQ, since it is supposed to achieve things like this, but I think it is overkill and would prefer a plain Python solution. Both interpreters are running Python 2.7 (although different versions).
Edit:
Like @MattP, I'll add this statement of my understanding:
Background
I believe that you are running a product called abaqus. The abaqus product includes a linked-in python interpreter that you can access somehow (possibly by running abaqus python foo.py on the command line).
You also have a separate python installation, on the same machine. You are developing code, possibly including numpy/scipy, to run on that python installation.
These two installations are different: they have different binary interpreters, different libraries, different install paths, etc. But they live on the same physical host.
Your objective is to enable the "plain python" programs, written by you, to communicate with one or more scripts running in the "Abaqus python" environment, so that those scripts can perform work inside the Abaqus system, and return results.
Solution
Here is a socket based solution. There are two parts, abqlistener.py and abqclient.py. This approach has the advantage that it uses a well-defined mechanism for "waiting for work." No polling of files, etc. And it is a "hard" API. You can connect to a listener process from a process on the same machine, running the same version of python, or from a different machine, or from a different version of python, or from ruby or C or perl or even COBOL. It allows you to put a real "air gap" into your system, so you can develop the two parts with minimal coupling.
The server part is abqlistener. The intent is that you would copy some of this code into your Abaqus script. The abq process would then become a server, listening for connections on a specific port number, and doing work in response. Sending back a reply, or not. Et cetera.
I am not sure if you need to do setup work for each job. If so, that would have to be part of the connection. This would just start ABQ, listen on a port (forever), and deal with requests. Any job-specific setup would have to be part of the work process. (Maybe send in a parameter string, or the name of a config file, or whatever.)
The client part is abqclient. This could be moved into a module, or just copy/pasted into your existing non-ABQ program code. Basically, you open a connection to the right host:port combination, and you're talking to the server. Send in some data, get some data back, etc.
This stuff is mostly scraped from example code on-line. So it should look real familiar if you start digging into anything.
Here's abqlistener.py:
# The below usage example is completely bogus. I don't have abaqus, so
# I'm just running python2.7 abqlistener.py [options]

usage = """
abacus python abqlistener.py [--host 127.0.0.1 | --host mypc.example.com ] \\
                             [ --port 2525 ]

Sets up a socket listener on the host interface specified (default: all
interfaces), on the given port number (default: 2525). When a connection
is made to the socket, begins processing data.
"""

import argparse

parser = argparse.ArgumentParser(description='Abacus listener',
                                 add_help=True,
                                 usage=usage)
parser.add_argument('-H', '--host', metavar='INTERFACE', default='',
                    help='Interface IP address or name, or (default: empty string)')
parser.add_argument('-P', '--port', metavar='PORTNUM', type=int, default=2525,
                    help='port number of listener (default: 2525)')

args = parser.parse_args()

import SocketServer
import json

class AbqRequestHandler(SocketServer.BaseRequestHandler):
    """Request handler for our socket server.

    This class is instantiated whenever a new connection is made, and
    must override `handle(self)` in order to handle communicating with
    the client.
    """

    def do_work(self, data):
        "Do some work here. Call abaqus, whatever."
        print "DO_WORK: Doing work with data!"
        print data
        return { 'desc': 'low-precision natural constants', 'pi': 3, 'e': 3 }

    def handle(self):
        # Allow the client to send a 1kb message (file path?)
        self.data = self.request.recv(1024).strip()
        print "SERVER: {} wrote:".format(self.client_address[0])
        print self.data
        result = self.do_work(self.data)
        self.response = json.dumps(result)
        print "SERVER: response to {}:".format(self.client_address[0])
        print self.response
        self.request.sendall(self.response)

if __name__ == '__main__':
    print args
    server = SocketServer.TCPServer((args.host, args.port), AbqRequestHandler)
    print "Server starting. Press Ctrl+C to interrupt..."
    server.serve_forever()
And here's abqclient.py:
usage = """
python2.7 abqclient.py [--host HOST] [--port PORT]

Connect to abqlistener on HOST:PORT, send a message, wait for reply.
"""

import argparse

parser = argparse.ArgumentParser(description='Abacus listener',
                                 add_help=True,
                                 usage=usage)
parser.add_argument('-H', '--host', metavar='INTERFACE', default='',
                    help='Interface IP address or name, or (default: empty string)')
parser.add_argument('-P', '--port', metavar='PORTNUM', type=int, default=2525,
                    help='port number of listener (default: 2525)')

args = parser.parse_args()

import json
import socket

message = "I get all the best code from stackoverflow!"

print "CLIENT: Creating socket..."
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

print "CLIENT: Connecting to {}:{}.".format(args.host, args.port)
s.connect((args.host, args.port))

print "CLIENT: Sending message:", message
s.send(message)

print "CLIENT: Waiting for reply..."
data = s.recv(1024)

print "CLIENT: Got response:"
print json.loads(data)

print "CLIENT: Closing socket..."
s.close()
And here's what they print when I run them together:
$ python2.7 abqlistener.py --port 3434 &
[2] 44088
$ Namespace(host='', port=3434)
Server starting. Press Ctrl+C to interrupt...
$ python2.7 abqclient.py --port 3434
CLIENT: Creating socket...
CLIENT: Connecting to :3434.
CLIENT: Sending message: I get all the best code from stackoverflow!
CLIENT: Waiting for reply...
SERVER: 127.0.0.1 wrote:
I get all the best code from stackoverflow!
DO_WORK: Doing work with data!
I get all the best code from stackoverflow!
SERVER: response to 127.0.0.1:
{"pi": 3, "e": 3, "desc": "low-precision natural constants"}
CLIENT: Got response:
{u'pi': 3, u'e': 3, u'desc': u'low-precision natural constants'}
CLIENT: Closing socket...
References:
argparse, SocketServer, json, socket are all "standard" Python libraries.
To be clear, my understanding is that you are running Abaqus/CAE via a Python script as an independent process (let's call it abq.py), which checks for, opens, and reads a trigger file to determine if it should run an analysis. The trigger file is created by a second Python process (let's call it main.py). Finally, main.py waits to read the output file created by abq.py. You want a more efficient way to signal abq.py to run an analysis, and you're open to different techniques to exchange data.
As you mentioned, subprocess or multiprocessing might be an option. However, I think a simpler solution is to combine your two scripts, and optionally use a callback function to monitor the solution and process your output. I'll assume there is no need to have abq.py constantly running as a separate process, and that all analyses can be started from main.py whenever it is appropriate.
Let main.py have access to the Abaqus Mdb. If it's already built, you open it with:
mdb = openMdb(FileName)
A trigger file is not needed if main.py starts all analyses. For example:
if SomeCondition:
    j = mdb.Job(name=MyJobName, model=MyModelName)
    j.submit()
    j.waitForCompletion()
Once complete, main.py can read the output file and continue. This is straightforward if the data file was generated by the analysis itself (e.g. .dat or .odb files). On the other hand, if the output file is generated by some code in your current abq.py, then you can probably just include that code in main.py instead.
If that doesn't provide enough control, instead of the waitForCompletion method you can add a callback function to the monitorManager object (which is automatically created when you import the abaqus module: from abaqus import *). This allows you to monitor and respond to various messages from the solver, such as COMPLETED, ITERATION, etc. The callback function is defined like:
def onMessage(jobName, messageType, data, userData):
    if messageType == COMPLETED:
        pass  # do stuff
    else:
        pass  # other stuff
This callback is then added to the monitorManager before the job is submitted:
monitorManager.addMessageCallback(jobName=MyJobName,
                                  messageType=ANY_MESSAGE_TYPE,
                                  callback=onMessage,
                                  userData=MyDataObj)
j = mdb.Job(name=MyJobName, model=MyModelName)
j.submit()
One of the benefits to this approach is that you can pass in a Python object as the userData argument. This could potentially be your output file, or some other data container. You could probably figure out how to process the output data within the callback function - for example, access the Odb and get the data, then do any manipulations as needed without needing the external file at all.
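As a rough sketch of that idea (the step name and the 'U' displacement field below are placeholders that depend entirely on your model), the callback could open the job's .odb and stash results in the userData object:

from odbAccess import openOdb

def onMessage(jobName, messageType, data, userData):
    # COMPLETED is available after `from abaqus import *` / abaqusConstants.
    if messageType == COMPLETED:
        odb = openOdb(path=jobName + '.odb')
        try:
            # 'Step-1' and 'U' are hypothetical; use whatever exists in your analysis.
            last_frame = odb.steps['Step-1'].frames[-1]
            userData['U'] = last_frame.fieldOutputs['U']
        finally:
            odb.close()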
I agree with the answer, except for some minor syntax problems.
Defining instance variables inside the handler is a no-no, not to mention that they are not being defined in any sort of __init__() method. Subclass TCPServer and define your instance variables in TCPServer.__init__(). Everything else will work the same.
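A minimal sketch of that suggestion, with shared state living on the server object rather than on per-connection handlers (work_dir is a made-up example attribute):

import SocketServer

class AbqServer(SocketServer.TCPServer):
    def __init__(self, server_address, handler_class, work_dir=None):
        SocketServer.TCPServer.__init__(self, server_address, handler_class)
        # Instance variables are defined once, here, not inside handle().
        self.work_dir = work_dir

class AbqRequestHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024).strip()
        # Handlers reach shared state through self.server.
        print "handling %r with work_dir=%r" % (data, self.server.work_dir)
        self.request.sendall(data)

if __name__ == '__main__':
    server = AbqServer(('', 2525), AbqRequestHandler, work_dir='/tmp/abq')
    server.serve_forever()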
I'm having a bizarre issue. Basically, the problem I have right now involves two different LineReceiver servers that are connected to each other. Essentially, if I input something into server A, then I want some output to appear in server B, and vice versa. I am running two servers from two different source files (also running them in different processes via a shell script), ServerA.py and ServerB.py, on ports 12650 and 12651 respectively. I am also connecting to each server using telnet.
from twisted.internet import protocol, reactor
from twisted.protocols.basic import LineReceiver

class ServerA(LineReceiver):
    def connectionMade(self):
        self.transport.write("Is Server A\n")

    def dataReceived(self, data):
        self.sendLine(data)

    def lineReceived(self, line):
        self.transport.write(line)

def main():
    client = protocol.ClientFactory()
    client.protocol = ServerA
    reactor.connectTCP("localhost", 12650, client)

    server = protocol.ServerFactory()
    server.protocol = ServerA
    reactor.listenTCP(12651, server)

    reactor.run()

if __name__ == '__main__':
    main()
My issue is with the use of sendLine. When I try to do a sendLine call from serverA with some arbitrary string, serverA ends up spitting out the exact string instead of sending it down the connection that was made in main(). Why exactly is this happening? I've been looking around and tried each solution I came across, but I can't seem to get it to work properly. The bizarre thing is that my friend is essentially doing the same thing and he gets working results, and this is the simplest program I could think of to try to figure out the cause of this strange phenomenon.
In any case, the gist is, I'm expecting to get the input I put into serverA to appear in serverB.
Note: Server A and Server B have the exact same source code save for the class names and ports.
You have overridden dataReceived. That means that lineReceived will never be called, because it is LineReceiver's dataReceived implementation that eventually calls lineReceived, and you're never calling up to it.
You should only need to override lineReceived and then things should work as you expect.
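A minimal sketch of the protocol with only lineReceived overridden (I also switched connectionMade to sendLine so the greeting gets a line delimiter; adapt the class name and ports as in your two files):

from twisted.internet import protocol, reactor
from twisted.protocols.basic import LineReceiver

class ServerA(LineReceiver):
    def connectionMade(self):
        self.sendLine("Is Server A")

    def lineReceived(self, line):
        # LineReceiver's own dataReceived buffers incoming bytes, splits them
        # into lines, and calls this method once per complete line.
        self.sendLine(line)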
I am trying to run the following Python server under Windows:
"""
An echo server that uses select to handle multiple clients at a time.
Entering any line of input at the terminal will exit the server.
"""
import select
import socket
import sys
host = ''
port = 50000
backlog = 5
size = 1024
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((host,port))
server.listen(backlog)
input = [server,sys.stdin]
running = 1
while running:
inputready,outputready,exceptready = select.select(input,[],[])
for s in inputready:
if s == server:
# handle the server socket
client, address = server.accept()
input.append(client)
elif s == sys.stdin:
# handle standard input
junk = sys.stdin.readline()
running = 0
else:
# handle all other sockets
data = s.recv(size)
if data:
s.send(data)
else:
s.close()
input.remove(s)
server.close()
I get the error message (10038, 'An operation was attempted on something that is not a socket'). This probably relates back to the remark in the Python documentation that "File objects on Windows are not acceptable, but sockets are. On Windows, the underlying select() function is provided by the WinSock library, and does not handle file descriptors that don't originate from WinSock." There are quite a few posts on this topic on the internet, but they are either too technical for me or simply not clear. So my question is: is there any way the select() statement in Python can be used under Windows? Please add a little example or modify my code above. Thanks!
Looks like it does not like sys.stdin.
If you change input to this:
input = [server]
the exception will go away.
This is from the docs:
Note:
File objects on Windows are not acceptable, but sockets are. On Windows, the
underlying select() function is provided by the WinSock library, and does not
handle file descriptors that don’t originate from WinSock.
I don't know if your code has other problems, but the error you're getting is because of passing input to select.select(), the problem is that it contains sys.stdin which is not a socket. Under Windows, select only works with sockets.
As a side note, input is a built-in Python function, so it's not a good idea to use it as a variable name.
Of course, the answers given are right... you just have to remove sys.stdin from input but still use it in the iteration:
for s in inputready+[sys.stdin]: