I feel like this should be an easy solution but it's the end of the day and I'm brain-dead.
I am currently spawning a couple of processes: one process receives data and stores it to a file, another parses the data, and a third waits for user input to know when to stop storing data.
What I need to know is how to break out of my while loop. I'd rather not use global variables set by the parent process, but if that is required I can do that.
Right now my code looks something like this:
while packetReceived < totalToReceive:
    data, addr = sock.recvfrom(packetSize)
My thoughts were something like this:
breakout = 0
while packetReceived < totalToReceive and breakout == 0:
    data, addr = sock.recvfrom(packetSize)
but then I need to set breakout somehow. Any help would be greatly appreciated.
You can't share state just by having a global variable in the parent process. This may appear to work, but it only works sometimes; it's neither reliable nor predictable. Except that on Windows, it reliably and predictably never works; each child will always have its own independent copy of the flag, and therefore you will never quit.
If you really want to do this by sharing a variable, see Sharing state between processes in the docs, but the short version is: You create a multiprocessing.Value. And then you use a multiprocessing.Condition to protect that value against races, because otherwise, there's no guarantee that the child processes will ever see a change from the parent.
Of course you can fake this by, e.g. creating an mmap of minimum size and just using m[0] as a flag and m.flush() instead of the condition, but that's not really any simpler.
The alternative way to do this is to use a multiprocessing.Pipe or similar to pass a "shut down now" message. The child processes can each spawn a thread to block on the pipe, or you can toss the pipe and your socket into a select together, or all the other usual tricks.
There may be another, simpler option in this case: don't use multiprocessing in the first place. Clearly your background task is not CPU-bound, since it's just looping around reading from a socket, so why not just threading?
Also, it strikes me that you might be able to simplify your design in other ways, which could remove this problem entirely. Do you need a file between the reading and processing jobs instead of, say, a queue, or even just a direct sequential pipeline? Can you toss the user input and the socket into the same event loop (plain old select if user input is stdin and you don't care about Windows; use a QSocket instead of a socket.socket if user input is a Qt GUI; twisted if you're willing to learn twisted; etc.). Or, is there real user input, or just "quit now" (or "shut down the socket and process the remaining messages now"), which you could handle with ^C?
Instead of using a multiprocessing variable, consider checking for the presence of a "poison pill" to break out of your loop.
For example, change:
data, addr = sock.recvfrom(packetSize)
to something like:
received = sock.recvfrom(packetSize)
if received is None:
    break
data, addr = received
You can signal the process to break out of its loop by sending it a None value. I'm not sure if your sock can send/receive None, but the general idea is the same.
My script has to run for over a day, and its core cycle runs 2-3 times per minute. I used multiprocessing to issue commands simultaneously, and each process should be terminated/joined within one cycle.
But in reality I found the software ends up out of swap memory, or the computer freezes, and I guess this is caused by accumulated processes. Watching another session while the program runs, I can see the number of python PIDs abnormally increasing over time, so I assume this must be a process issue. What I don't understand is how this happens, since I made sure each cycle's process finishes within that cycle before proceeding to the next one.
So I am guessing that the actual computation needs more time than the terminate()/join() step allows, and that I should not "reuse" the same object name. Is this a proper guess, or is there another possibility?
def function(a, b):
    try:
        pass  # do stuff: audio / serial things
    except:
        return

flag_for_2nd_cycle = 0
for i in range(1500):  # main loop, runs for a long time
    # do something
    if flag_for_2nd_cycle == 1:
        while my_process.is_alive():
            if (timecondition) < 30:  # kill the process if it is still alive
                my_process.terminate()
                my_process.join()
    flag_for_2nd_cycle = 1
    my_process = multiprocessing.Process(target=function, args=[c, d])
    my_process.start()
    # do something; other process jobs going on, for example:
    my_process2 = multiprocessing.Process()  # *stuff
    my_process2.terminate()
    my_process2.join()
Based on your comment, you are controlling three projectors over serial ports.
The simplest way to do that would be to open three serial connections (using pySerial). Then run a loop where you check for available data on each of the connections and, if there is any, read and process it. Then you send commands to each of the projectors in turn.
Depending on the speed of the serial link you might not need more than this.
I am unable to work out, in terms of general programming concepts, how to handle the following scenario:
Note: All Data transmission in this scenario is done via UDP packets using socket module of Python3
I have a server which sends a certain amount of data, say 300 packets, over a WiFi channel.
At the other end, I have a receiver which runs a certain decoding process on the data. This decoding process is a kind of infinite loop which returns a Boolean value, True or False, on every iteration, depending on certain aspects which can be neglected for now.
A rough code snippet (Python 3) is as follows:
incomingPacket = next(bringNextFromBuffer)
if decoder.consume_data(incomingPacket):
    # this if-condition is inside an infinite loop;
    # keep consuming data forever until it becomes True
    print("Data has been received")
Everything works at the moment, since the server and client are in proximity and the data can be decoded. But in practical scenarios I want to check on the loop mentioned above. For instance, if after a certain amount of time the above loop is still in its forever (infinite) state, I would like to send something back to the server to restart the data transmission.
I am not very clear on multithreading concepts, but can I use a thread here in this scenario?
For Example:
Run a thread for a certain amount of time that keeps checking the decoder.consume_data() function; if the time expires and the output is still False, send feedback to the server using struct.pack() over sockets.
Of course the networking logic need not be addressed for now. But is Python capable of monitoring this infinite loop via a parallel thread, or some other programming concept?
Caveats
Unfortunately the receiver in question is a dumb receiver, i.e. no user control is available. The only thing the receiver can do is decode the data and perhaps send feedback to the server stating whether the data was received or not, and that is possible only once the above-mentioned loop completes.
What is a possible solution here?
(Would be happy to share more information on request)
Yes you can do this. Roughly it'll look like this:
from threading import Thread
from time import sleep

state = 'running'

def monitor():
    while True:
        if state == 'running':
            tell_client()
        sleep(1)  # to prevent too much happening here

Thread(target=monitor).start()

while state == 'running':
    receive_data()
I have a small software where I have a separate thread which is waiting for ZeroMQ messages. I am using the PUB/SUB communication protocol of ZeroMQ.
Currently I am aborting that thread by setting a variable "cont_loop" to False.
But I discovered that when no messages arrive at the ZeroMQ subscriber, I cannot exit the thread (without taking down the whole program).
def __init__(self):
    Thread.__init__(self)
    self.cont_loop = True

def abort(self):
    self.cont_loop = False

def run(self):
    zmq_context = zmq.Context()
    zmq_socket = zmq_context.socket(zmq.SUB)
    zmq_socket.bind("tcp://*:%s" % 5556)
    zmq_socket.setsockopt(zmq.SUBSCRIBE, "")
    while self.cont_loop:
        data = zmq_socket.recv()
        print "Message: " + data
    zmq_socket.close()
    zmq_context.term()
    print "exit"
I tried to move socket.close() and context.term() to abort-method. So that it shuts down the subscriber but this killed the whole program.
What is the correct way to shut down the above program?
Q: What is the correct way to ... ?
A: There are many ways to achieve the set goal. Let me pick just one, as a mock-up example on how to handle distributed process-to-process messaging.
First. Assume there are multiple priorities in a typical software design task. Some higher, some lower, some so low that one can defer the execution of those low-priority sub-tasks, leaving more time in the scheduler for the sub-tasks that cannot handle waiting.
This said, let's view your code. The SUB-side instruction .recv(), as it was being used, causes two things. One is visible: it performs a RECEIVE operation on a ZeroMQ socket with SUB behaviour. The second, less visible one is that it remains hanging until it gets something "compatible" with the current state of the SUB behaviour (more on setting this later).
This means it also BLOCKS, from the moment of such a .recv() call UNTIL some unknown, locally uncontrollable coincidence of states/events makes it deliver a ZeroMQ message, with its content being "compatible" with the locally pre-set state of this (still blocking) SUB-behaviour instance.
That may take ages.
This is exactly why .recv() is being rather used inside a control-loop, where external handling gets both the chance & the responsibility to do what you want ( including abort-related operations & a fair / graceful termination with proper resources' release(s) ).
The receive step becomes .recv( flags = zmq.NOBLOCK ), wrapped in a try: except: episode. That way your local process does not lose control over its stream-of-events (including the NOP being one such).
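A minimal sketch of that pattern, assuming pyzmq; the endpoint and the fixed deadline are illustrative, and in real code the loop condition would be your abort flag rather than a clock:

```python
import time
import zmq

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5556")   # hypothetical publisher endpoint
sub.setsockopt_string(zmq.SUBSCRIBE, "")

received = []
deadline = time.time() + 0.3          # stand-in for "while self.cont_loop"
while time.time() < deadline:
    try:
        received.append(sub.recv(flags=zmq.NOBLOCK))
    except zmq.Again:                 # nothing there yet: we keep control
        time.sleep(0.05)              # defer; do other work; check flags

sub.close()
ctx.term()
```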
The best next step?
Take your time and work through a great book of gems, "Code Connected, Volume 1", which Pieter HINTJENS, co-father of ZeroMQ, has published (also as a PDF).
The many thoughts and errors-to-avoid that he has shared with us are well worth your time.
Enjoy the powers of ZeroMQ. It's very powerful & worth getting mastered top-down.
So here's the problem: I have a small server script in Python that is supposed to accept multiple clients and, based on the message they send, return a certain command to them. It's a simple concept and it works like I want it to, with one really big problem: I put each connection in a separate thread, and when a connected user sends EXIT I close the connection... which works, except the thread is kept alive, there is no way to kill it, and that really bothers me.
sock = socket()
sock.bind((host, port))
sock.listen(50)

def clientthread(conn):
    while True:
        data = conn.recv(1024).strip()
        if data == "HELO":
            conn.send("HELO")
        elif data == "EXIT":
            conn.close()
            break
    return

while True:
    conn, addr = sock.accept()
    start_new_thread(clientthread, (conn,))

conn.close()
sock.close()
I searched for a way to terminate a thread but just couldn't find one: .join() doesn't work here since it detects the thread as a "dummy", it does not recognize __stop(), and after a couple of Google searches on this topic I'm really out of options. Any ideas? I'd be really grateful, thanks.
AFAIK, you can't kill a thread from another thread -- you have to arrange for the thread-to-be-killed to notice that some flag has changed, and terminate itself.
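One common way to arrange that is a shared flag the loop checks on every iteration, e.g. a threading.Event (a sketch with illustrative names; the sleep stands in for a recv with a timeout set):

```python
import threading
import time

stop = threading.Event()

def clientthread():
    # The thread notices the flag and terminates itself.
    while not stop.is_set():
        time.sleep(0.01)   # stand-in for conn.recv(...) with a timeout

t = threading.Thread(target=clientthread)
t.start()
stop.set()                 # ask the thread to exit
t.join(timeout=5)
print(t.is_alive())        # -> False
```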
BTW, your socket code looks a little off -- you need a loop around your sends and recvs unless you use something like twisted or bufsock. IMO, bufsock is much easier and less error-prone than twisted, but I may be biased because I wrote bufsock: http://stromberg.dnsalias.org/~strombrg/bufsock.html
The problem with what I'm seeing is that TCP reserves the right to split or aggregate transmission units. Usually it won't, but under high load, or with a changing path MTU, or even just Nagle, it probably will.
Assuming you're using Python v2.4+, you should be using the newer threading module rather than the old thread module. Check out a tutorial on it here -- it explains the module you're using now, and how and why you should switch to threading.
I made an IRC bot which uses a while True loop to receive whatever is said.
To receive I use recv(500), but that stops the loop if there isn't anything to receive, and I need the loop to continue even when there's nothing to receive.
I need a makeshift timer to keep running.
Example code:
# /A lot of stuff/
timer = 0
while 1:
    timer = timer + 1
    line = s.recv(500)  # if there is nothing to receive, the loop (and thus the timer) stops
# /A lot of stuff/
So either I need a way to stop it stopping the loop, or I need a better timer.
You can call settimeout on the socket so that the call returns promptly (with a suitable exception, so you'll need a try/except around it) if nothing's there -- a timeout of 0.1 seconds actually works better than non-blocking sockets in most conditions.
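A small sketch of that approach, using a UDP socket that nobody sends to, so every recv times out (names are illustrative):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("127.0.0.1", 0))    # ephemeral port; no one will send to it
s.settimeout(0.1)           # recv now returns within ~0.1 s

timer = 0
for _ in range(3):
    try:
        line = s.recv(500)
        # ... handle the received line ...
    except socket.timeout:  # nothing arrived: the loop (and timer) go on
        timer += 1
s.close()
print(timer)                # -> 3
```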
This is going to prove a bad way to design a network application. I recommend looking into twisted, a networking library with an excellent implementation of the IRC protocol for making a client (like your bot) in twisted.words.protocols.irc.
http://www.habnabit.org/twistedex.html is an example of a very basic IRC bot written using twisted. With very little code, you are able to access a whole, correct, efficient, reconnecting implementation of IRC.
If you are intent on writing this from a socket level yourself, I still recommend studying a networking library like twisted to learn about how to effectively implement network apps. Your current technique will prove less effective than desired.
I usually use irclib which takes care of this sort of detail for you.
If you want to do this with low-level Python, consider using select.select: ready_sockets = select.select([s.fileno()], [], [], 0.1) tests the socket s for readability. If your socket's file number is not in ready_sockets, then there is no data to read.
Be careful not to use a timeout of 0 if you are going to call select repeatedly in a loop that does not otherwise yield the CPU -- that would consume 100% of the CPU while the loop executes. I gave a 0.1-second timeout as an example; in this case, your timer variable would be counting tenths of a second.
Here's an example:
timer = 0
sockets_to_check = [s.fileno()]
while 1:
    ready_sockets = select.select(sockets_to_check, [], sockets_to_check, 0.1)
    if len(ready_sockets[2]) > 0:
        # Handle a socket error or closed connection here -- our socket
        # appeared in the 'exceptional sockets' return value, so something
        # has happened to it.
        pass
    elif len(ready_sockets[0]) > 0:
        line = s.recv(500)
    else:
        # Note that timer is not incremented if the select did not incur the
        # full 0.1-second delay, although we may have just waited for 0.09999
        # seconds without accounting for that. If your timer must be perfect,
        # you will need to implement it differently; if it is used only for
        # time-out testing, this is fine.
        timer = timer + 1
Note that the above code takes advantage of the fact that your input lists contain only one socket. If you were to use this approach with multiple sockets, which select.select does support, the len(ready_sockets[x]) > 0 test would not reveal which socket is ready for reading or has an exception.