I have a Python script that is a test for some automotive hardware.
The hardware under test has 2 processors, and it produces a lot of logs while it works (output on its own console, which I can connect to via minicom on Linux).
Initially, I needed to record how much time passes from my trigger activity to each processor's response message. Let's call them processor A and processor B. The messages can arrive in either order, so to avoid blocking on one signal while the other arrives first (and thus making the second signal's measured time incorrectly high), I used threads: thread 1 waits for message 1 on processor A and thread 2 waits for message 2 on processor B. That worked fine, regardless of the order in which messages 1 and 2 arrived. I put the threads into an array, started them in a 'for' loop, and then joined them in another 'for' loop.
But now I need something like this: on processor B I am still waiting for only one line of output, and that works fine; the thread for that operation reads the log line and calculates the correct time from the trigger to that log. But on processor A I now have to check the times for 7 new logs. So I wrote 7 new functions that read these 7 new messages and calculate their times, assigned them to 7 new threads, put them into the array as before, then started them in a for loop and joined them in a for loop.
What is the result? The single message from processor B is read and its time is recorded correctly, but no message from processor A is detected (although they definitely arrived).
One important detail: there is a function that reads console log messages from the device line by line, but it works like a stack, in that a line, once read, is 'popped out' of the buffer. So my 8 threads keep popping out messages they aren't interested in, and there is almost no chance that a message is detected by the proper thread before another thread pops it out.
So my idea for solving the problem is to make the 8 threads work in strict order, 1-2-3-4-5-6-7-8-1-2-3..., over every next log line. However, I don't know how to do that. Please share some ideas :)
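One way to get the effect you are after without strict turn-taking is to let a single reader thread own the console buffer and broadcast every popped line to one queue per matcher thread, so no thread can consume a line another thread is waiting for. A rough sketch (read_next_line() and record_time() are hypothetical stand-ins for your console-reading and time-measuring functions):

import queue
import threading

NUM_MATCHERS = 8
line_queues = [queue.Queue() for _ in range(NUM_MATCHERS)]

def reader():
    # Only this thread ever pops lines from the device buffer.
    while True:
        line = read_next_line()      # hypothetical: pops one log line
        for q in line_queues:
            q.put(line)              # every matcher gets its own copy

def matcher(my_queue, wanted_text):
    # Each matcher sees every line without stealing it from the others.
    while True:
        line = my_queue.get()
        if wanted_text in line:
            record_time(line)        # hypothetical: compute trigger-to-log time
            return

threading.Thread(target=reader, daemon=True).start()

With this layout the matcher threads can still be started and joined in for loops exactly as before, and the order in which the messages arrive no longer matters.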
I am running a multi-process (and multi-threaded) Python script on Debian Linux. One of the processes repeatedly crashes after 5 or 6 days, and it is always the process with the same, unique workload that crashes. There are no entries in syslog about the crash; the process simply disappears silently. Until then it behaves completely normally and produces normal results, then suddenly stops.
How can I instrument the rogue process? Increasing the log level will produce large amounts of logs, so that's not my preferred option.
I used good old log analysis to determine what happens when the process fails:
1. increased the log level of the rogue process to INFO after 4 days
2. monitored the application for the rogue process failing
3. pin-pointed the point in time of the failure in syslog
4. analysed syslog at that time
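For the first step, raising a Python logger's threshold is usually a one-line change; a generic sketch (the actual application may configure logging quite differently):

import logging

logging.getLogger().setLevel(logging.INFO)  # raise the root logger from the default WARNING to INFO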
I found the following error at that time; the first line is the last entry made by the rogue process (just before it fails), and the second line is the one pointing to the underlying error. In this case there is a problem with the pyzmq bindings or the zeromq library; I'll open a ticket with them.
Aug 10 08:30:13 rpi6 python[16293]: 2021-08-10T08:30:13.045 WARNING w1m::pid 16325, tid 16415, taking reading from sensors with map {'000005ccbe8a': ['t-top'], '000005cc8eba': ['t-mid'], '00000676e5c3': ['t
Aug 10 08:30:14 rpi6 python[16293]: Too many open files (bundled/zeromq/src/ipc_listener.cpp:327)
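Since the underlying error turned out to be a file-descriptor leak ('Too many open files'), a low-overhead way to instrument a process like this, without raising the log level, is to periodically record its open-descriptor count. A minimal sketch, assuming psutil is installed (watch_fds and the log file name are my own placeholders):

import time
import psutil

def watch_fds(pid, interval=60, logfile="fd_count.log"):
    # Append a timestamped open-file-descriptor count for `pid` every `interval` seconds.
    proc = psutil.Process(pid)
    with open(logfile, "a") as out:
        while proc.is_running():
            out.write("%s %d\n" % (time.strftime("%Y-%m-%d %H:%M:%S"), proc.num_fds()))
            out.flush()
            time.sleep(interval)

A count that grows slowly but steadily points to the leak long before the hard limit is reached.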
Hope this helps someone in the future.
My script has to run for over a day, and its core cycle runs 2-3 times per minute. I used multiprocessing to issue commands simultaneously, and each process is supposed to be terminated/joined within one cycle.
But in reality I found that the software ends up out of swap memory, or the computer freezes, which I guess is caused by accumulating processes. In another session, while the program is running, I can see the number of python PIDs growing abnormally over time, so I assume this must be a process issue. What I don't understand is how this happens, since I made sure each cycle's process has to finish in that cycle before proceeding to the next one.
So I am guessing that the actual computation needs more time than the terminate()/join() step allows, and that I should not "reuse" the same object name. Is this a proper guess, or is there another possibility?
import multiprocessing

def function(a, b):
    try:
        pass  # do stuff: audio / serial things
    except:
        return

flag_for_2nd_cycle = 0
for i in range(1500):  # main loop, for running a long time
    # do something
    if flag_for_2nd_cycle == 1:
        while my_process.is_alive():
            if (timecondition) < 30:  # kill process if it is still alive
                my_process.terminate()
                my_process.join()
    flag_for_2nd_cycle = 1
    my_process = multiprocessing.Process(target=function, args=[c, d])
    my_process.start()
    # do something; other process jobs going on, for example:
    my_process2 = multiprocessing.Process()  # *stuff
    my_process2.terminate()
    my_process2.join()
Based on your comment, you are controlling three projectors over serial ports.
The simplest way to do that would be to open three serial connections (using pySerial), then run a loop in which you check each of the connections for available data and, if there is any, read and process it. Then you send commands to each of the projectors in turn.
Depending on the speed of the serial link you might not need more than this.
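A rough sketch of that loop, assuming pySerial; the port names, baud rate, and the handle_reply()/next_command_for() helpers are placeholders, not part of the answer above:

import time
import serial  # pySerial

PORTS = ["/dev/ttyUSB0", "/dev/ttyUSB1", "/dev/ttyUSB2"]  # assumed device names
connections = [serial.Serial(port, 9600, timeout=0) for port in PORTS]

while True:
    # Check each projector for incoming data and process whatever has arrived.
    for conn in connections:
        if conn.in_waiting:
            data = conn.read(conn.in_waiting)
            handle_reply(conn.port, data)        # placeholder for your processing
    # Then send the next command to each projector in turn.
    for conn in connections:
        command = next_command_for(conn.port)    # placeholder for your command source
        if command:
            conn.write(command)
    time.sleep(0.05)  # avoid spinning flat out when nothing is pending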
I have a program that runs constantly; if it receives an input, it does a task and then goes right back to awaiting input. I'm attempting to add a feature that will ping a gaming server every 5 minutes and, if the results ever change, notify me. The problem is that if I attempt to implement this, the program halts at this function and never gets to the part where I can enter input. I believe I need multithreading/multiprocessing, but I have no experience with that, and after almost 2 hours of researching and wrestling with it, I haven't been able to figure it out.
I have tried to use the recursive program I found here, but I haven't been able to adapt it properly; I feel this is where I was closest. I believe I could run this as two separate scripts, but then I would have to pipe the data around and it would become messier. It would be best for the rest of the program to keep everything in one script.
import time

def regular_ping(IP):
    last_status = None
    while True:
        present_status = ping_status(IP)   # ping_status(IP) being another
                                           # program that will return info I need
        if present_status != last_status:
            notify_output(present_status)  # notify_output(msg) being a
                                           # program that will notify me of a change
        last_status = present_status
        time.sleep(300)
I would like this bit of code to run on its own, notifying me of a change (if there is one) every 5 minutes, while the rest of my program also runs and accepts inputs. Instead, the program stops at this function and won't run past it. Any help would be much appreciated, thanks!
You can use a thread or a process for this, but since this is not a CPU-bound operation, the overhead of dedicating a process to it is not worth it; a thread is enough. You can implement it as follows:
import threading
thread = threading.Thread(target=regular_ping, args=(ip,))
thread.start()
# Rest of the program
thread.join()
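Since regular_ping() loops forever, the join() above will block indefinitely. If the main program should be able to exit on its own, one variation (my addition, not part of the answer above) is to mark the thread as a daemon and drop the join:

thread = threading.Thread(target=regular_ping, args=(ip,), daemon=True)
thread.start()
# Rest of the program; the daemon thread is stopped automatically when the program exits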
I'm trying to implement an asynchronous, distributed computation engine for Python which is compatible with Jupyter Notebook. The system is supposed to be based on a 'push notification' approach, which makes it (almost, I hope) impossible to let the user wait for a specific computation result (i.e. block execution of a given notebook cell until the message with the expected result is delivered). To be precise, I'm trying to:
1. Add a new task to the Jupyter Notebook event loop (the task periodically checks in a while loop whether a specific msg has arrived, and breaks when it has)
2. Block the current cell while waiting for the task to complete
3. Still be able to process incoming messages (using RabbitMQ, Pika, and slightly modified code from http://pika.readthedocs.io/en/0.10.0/examples/asynchronous_consumer_example.html)
I have prepared notebooks presenting my problem: https://github.com/SLEEP-MAN/RabbitMQ_jupyterNotebook_asyncio
Any ideas? Is it possible (maybe some IPython/IpyKernel magic ;>?), or do I have to change my approach by 180 degrees?
Your issue is that you mixed two different event loops into one; that is why it didn't work. You need to make a few changes.
Use AsyncioConnection instead of TornadoConnection
return adapters.AsyncioConnection(pika.URLParameters(self._url),
                                  self.on_connection_open)
Next, you need to remove the line below:
self._connection.ioloop.start() #throws exception but not a problem...
This is because your loop is already started in connect. Then you need to use the code below for waiting:
loop = asyncio.get_event_loop()
loop.run_until_complete(wait_for_eval())
And now it works.
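For completeness, wait_for_eval() in the snippet above could be a simple polling coroutine along these lines; result_arrived() and get_result() are hypothetical placeholders for however the notebook tracks delivered messages:

import asyncio

async def wait_for_eval(poll_interval=0.1):
    # Poll until the expected message has been delivered, yielding control
    # so the Pika consumer keeps running on the same event loop.
    while not result_arrived():
        await asyncio.sleep(poll_interval)
    return get_result()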
Please forgive me, as I'm new to using the multiprocessing library in Python and new to testing multiprocess/multi-threaded projects.
In some legacy code, someone created a pool of processes to execute multiple processes in parallel. I'm trying to debug the code by making the pool only have 1 process but the output looks like it's still using multiple processes.
Below is some sanitized example code. Hopefully I included all the important elements to demo what I'm experiencing.
import multiprocessing
import sys

def myTestFunc():
    pool = multiprocessing.Pool(1)  # should only use 1 process
    for i in someListOfNames:
        pool.apply_async(method1, args=(listA,))

def method1(listA):
    for i in listA:
        print "this is the value of i: " + i
        sys.stdout.flush()
Since I expect there to be only 1 process in the pool, I shouldn't have any output collisions. But what I sometimes see in the log msgs is this:
this is the value of i: Alpha
this is the value of i: Bravo
this is the this is the value of i: Mike # seems like 2 things trying to write at the same time
The two things writing at the same time seem to appear closer to the bottom of my debug log rather than the top, which means the longer I run, the more likely these msgs are to overwrite each other. I haven't tested with a shorter list yet, though.
I realize testing multi-process/multi-threaded programs is difficult, but in this case I think I've restricted it so that it should be a lot easier than normal to test. I'm confused why this is happening because:
1. I set the pool to have only 1 process
2. (I think) I force the process to flush its write buffer, so it should be writing without waiting/queuing and getting into this situation
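One way to double-check this, for example, is to tag each printed line with the PID of the worker that produced it; os.getpid() is the only addition to the original method1():

import os
import sys

def method1(listA):
    for i in listA:
        # Including the PID shows which worker process produced each line.
        print("pid %d: this is the value of i: %s" % (os.getpid(), i))
        sys.stdout.flush()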
Thanks in advance for any help you can give me.