The Code:
#!/usr/bin/env python
# coding: utf-8
from threading import Thread
from time import *

print strftime("%Y.%m.%d - %H:%M:%S;\n", localtime())

def pruefeStatusAlarmUndBetrieb():
    sleep(3)
    print "Alarm und Betrieb"
    print strftime("%Y.%m.%d - %H:%M:%S;\n", localtime())

def pruefeStatusEingaenge():
    sleep(4)
    print "Eingänge"
    print strftime("%Y.%m.%d - %H:%M:%S;\n", localtime())

def pruefeStatusAusgaenge():
    sleep(6)
    print "Ausgänge"
    print strftime("%Y.%m.%d - %H:%M:%S;\n", localtime())

def pruefe():
    s = Thread(target=pruefeStatusAlarmUndBetrieb())
    s.start()
    t = Thread(target=pruefeStatusEingaenge())
    t.start()
    u = Thread(target=pruefeStatusAusgaenge())
    u.start()

pruefe()
What Python does:
/System/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7 /Users/????/Documents/smartsys/test/test_threading.py
2016.12.09 - 19:27:23;
Alarm und Betrieb
2016.12.09 - 19:27:26;
Eingänge
2016.12.09 - 19:27:30;
Ausgänge
2016.12.09 - 19:27:36;
Process finished with exit code 0
Sorry for all the German terms in it; hopefully they don't keep anyone from understanding the code.
What I'm trying to solve:
I expect that code to run in parallel, but it runs one thread after another, so it's a serial operation. If I add an infinite loop with a sleep command, execution stays inside that loop and the other threads are still ignored.
What am I doing wrong, or have I misunderstood something?
In the end, all 3 threads should run in parallel and do whatever I tell them to do, after having slept for a short period of time.
You must not call the target: pass the function itself, as in s = Thread(target=pruefeStatusAlarmUndBetrieb). Remove the ().
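For reference, here is the corrected pruefe() — just a minimal sketch of the fix, passing the function objects instead of their return values:

def pruefe():
    # Pass the functions themselves; Thread() calls them, so all three run in parallel.
    s = Thread(target=pruefeStatusAlarmUndBetrieb)
    s.start()
    t = Thread(target=pruefeStatusEingaenge)
    t.start()
    u = Thread(target=pruefeStatusAusgaenge)
    u.start()

With the parentheses in place, each function is executed in the main thread while the Thread object is being constructed, and its return value (None) becomes the target, which is why the timestamps in the output appear 3, 4 and 6 seconds apart instead of overlapping.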
Related
I use Python multiprocessing to compute some sort of score on DNA sequences from a large file.
For that I wrote and use the script below.
I use a Linux machine with 48 CPUs in a Python 3.8 environment.
The code works fine, finishes the work correctly, and prints the processing time at the end.
Problem: when I use the htop command, I find that all 48 processes are still alive.
I don't know why, and I don't know what to add to my script to avoid this.
import csv
import sys
import datetime  # needed for datetime.timedelta below
import concurrent.futures
from itertools import combinations
import psutil
import time

nb_cpu = psutil.cpu_count(logical=False)

def fun_job(seq_1, seq_2):  # seq_i : (id, string)
    start = time.time()
    score_dist = compute_score_dist(seq_1[1], seq_2[1])
    end = time.time()
    return seq_1[0], seq_2[0], score_dist, end - start  # id seq1, id seq2, score, time

def help_fun_job(nested_pair):
    return fun_job(nested_pair[0], nested_pair[1])

def compute_using_multi_processing(list_comb_ids, dict_ids_seqs):
    start = time.perf_counter()
    with concurrent.futures.ProcessPoolExecutor(max_workers=nb_cpu) as executor:
        results = executor.map(help_fun_job,
                               [((pair_ids[0], dict_ids_seqs[pair_ids[0]]), (pair_ids[1], dict_ids_seqs[pair_ids[1]]))
                                for pair_ids in list_comb_ids])
        save_results_to_csv(results)
    finish = time.perf_counter()
    proccessing_time = str(datetime.timedelta(seconds=round(finish - start, 2)))
    print(f' Processing time Finished in {proccessing_time} hh:mm:ss')

def main():
    print("nb_cpu in this machine : ", nb_cpu)
    file_path = sys.argv[1]
    dict_ids_seqs = get_dict_ids_seqs(file_path)
    list_ids = list(dict_ids_seqs)  # This will convert the dict_keys to a list
    list_combined_ids = list(combinations(list_ids, 2))
    compute_using_multi_processing(list_combined_ids, dict_ids_seqs)

if __name__ == '__main__':
    main()
Thank you for your help.
Edit: adding the complete code for fun_job (after @Booboo's answer)
from Bio import Align

def fun_job(seq_1, seq_2):  # seq_i : (id, string)
    start = time.time()
    aligner = Align.PairwiseAligner()
    aligner.mode = 'global'
    score_dist = aligner.score(seq_1[1], seq_2[1])
    end = time.time()
    return seq_1[0], seq_2[0], score_dist, end - start  # id seq1, id seq2, score, time
When the with ... as executor: block exits, there is an implicit call to executor.shutdown(wait=True). This will wait for all pending futures to be done executing "and the resources associated with the executor have been freed", which presumably includes terminating the processes in the pool (if possible?). Why your program terminates (or does it?), or at least why you say all the futures have completed executing while the processes have not terminated, is a bit of a mystery. But you haven't provided the code for fun_job, so who can say why this is so?
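For reference, the context-manager form is roughly equivalent to the following explicit calls (a sketch of the equivalence, not code from the question; work_items stands for the list comprehension built in compute_using_multi_processing):

executor = concurrent.futures.ProcessPoolExecutor(max_workers=nb_cpu)
try:
    results = executor.map(help_fun_job, work_items)  # work_items: placeholder for the pair list
    save_results_to_csv(results)
finally:
    # Leaving the `with` block does this implicitly: wait for pending futures
    # to finish and free the executor's resources, including its worker processes.
    executor.shutdown(wait=True)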
One thing you might try is to switch to using the multiprocessing.pool.Pool class from the multiprocessing module. It supports a terminate method, which is implicitly called when its context manager with block exits, that explicitly attempts to terminate all processes in the pool:
#import concurrent.futures
import multiprocessing
... # etc.

def compute_using_multi_processing(list_comb_ids, dict_ids_seqs):
    start = time.perf_counter()
    with multiprocessing.Pool(processes=nb_cpu) as executor:
        results = executor.map(help_fun_job,
                               [((pair_ids[0], dict_ids_seqs[pair_ids[0]]), (pair_ids[1], dict_ids_seqs[pair_ids[1]]))
                                for pair_ids in list_comb_ids])
        save_results_to_csv(results)
    finish = time.perf_counter()
    proccessing_time = str(datetime.timedelta(seconds=round(finish - start, 2)))
    print(f' Processing time Finished in {proccessing_time} hh:mm:ss')
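As a quick sanity check (my suggestion, not part of the original answer), multiprocessing.active_children() can be called right after the with block to see from inside the script whether any worker processes are still alive:

leftover = multiprocessing.active_children()  # also reaps children that have already exited
print(f'worker processes still alive: {len(leftover)}')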
I've been trying to wrap my head around multiprocessing using an old python bitcoin mining program. Although relatively useless for mining, I figured this would be a great way to explore multiprocessing. However, I've hit a wall when it comes to stopping the processes when one of them achieves the goal they are all working towards.
I want to kill all multiprocessing pools when one of them finds the solution. Then allow the program to continue. I have tried terminate() and join(). I've attempted to include an Event(). I've tried using Process instead of Pool with the direction of a similar issue here: Killing a multiprocessing process when condition is met. However, same problem. How can I stop all processes after a condition is met without exiting the program with something like sys.exit() that would kill the entire program?
I also tried apply_async with the direction from this post: Python Multiprocess Pool. How to exit the script when one of the worker process determines no more work needs to be done? However, it did not solve the problem of needing to continue executing the final functions of the program. In fact, it actually slowed the program significantly.
For clarity, I've included the code I tried based on the above mentioned link here:
from multiprocessing import Pool
from hashlib import sha256
import time

def SHA256(text):
    return sha256(text.encode("ascii")).hexdigest()

def solution_helper(args):
    solution, nonce = do_job(args)
    if solution:
        print(f"\nNonce Found: {nonce}\n")
        return True
    else:
        return False

class Mining():
    def __init__(self, workers, initargs):
        self.pool = Pool(processes=workers, initargs=initargs)

    def callback(self, result):
        if result:
            print('Solution Found...Terminating Processes...')
            self.pool.terminate()

    def do_job(self):
        for args in values:
            start_nonce = args[0]
            end_nonce = args[1]
            prefix_str = '0'*difficulty
            self.pool.apply_async(solution_helper, args=args, callback=self.callback)
            start = time.time()
            for nonce in range(start_nonce, end_nonce):
                text = str(block_number) + transactions + previous_hash + str(nonce)
                new_hash = SHA256(text)
                if new_hash.startswith(prefix_str):
                    print(f"Hashing: {text}")
                    print(f"\nSuccessfully mined bitcoin with nonce value: {nonce}\n")
                    print(f"New hash: {new_hash}")
                    total_time = str((time.time()-start))
                    print(f"\nEnd mining... Mining took {total_time} seconds\n")
                    return new_hash, nonce
        self.pool.close()
        self.pool.join()
        print('.Goodbye.')

block_number = 5
transactions = """
bill->steve->20,
jan->phillis->45
"""
previous_hash = '0000000b7c7723e4d3a8654c975fe4dd23d4d37f22d0ea7e5abde2225d1567dc6'
values = [(20000, 100000), (100000, 1000000), (1000000, 10000000), (10000000, 100000000)]
difficulty = 4

m = Mining(5, values)
m.do_job()
Here's the basic concept. It works great to start the processes, but I cannot figure out how to stop them:
from multiprocessing import Pool
from hashlib import sha256
import functools

MAX_NONCE = 1000000000

def SHA256(text):
    return sha256(text.encode("ascii")).hexdigest()

def nonce(block_number, transactions, previous_hash, prefix_str):
    import time
    start = time.time()
    for nonce in range(MAX_NONCE):
        text = str(block_number) + transactions + previous_hash + str(nonce)
        new_hash = SHA256(text)
        if new_hash.startswith(prefix_str):
            print(f"\nYay! Successfully mined bitcoins with nonce value:{nonce}")
            total_time = str((time.time()-start))
            print(f"\nend mining. Mining took: {total_time} seconds\n")
            print(new_hash + "\n")

def mine(block_number, transactions, previous_hash, prefix_zeros):
    from multiprocessing import Pool
    with Pool(4) as p:
        prefix_str = '0'*prefix_zeros
        p.map(nonce(block_number, transactions, previous_hash, prefix_str), [20000, 40000, 60000, 80000, 100000])

if __name__=='__main__':
    transactions="""
bill->steve->20,
jan->phillis->45
"""
    difficulty=7
    print("\nstart mining\n")
    new_hash = mine(5, transactions, '0000000b7c7723e4d3a8654c975fe4dd23d4d37f22d0ea7e5abde2225d1567dc6', difficulty)
    # Do some other things... Here is where I'd like to get to after the multiproccesses are killed
    print(f"\nMission Complete...{new_hash}\n")  # <--- This never gets a chance to happen
I am new to Raspberry Pi and Python and am having a bit of trouble with some code.
I wish to push a button and have the GPIO pin that corresponds to that button trigger a relay to turn on for a given amount of time and then turn off. I have it working with the code below, but using 'time.sleep(my_variable)' holds up the Raspberry Pi for the duration and I am unable to do anything else.
What I am after is the ability to push one button and have the relay act for, say, 10 seconds, and within those 10 seconds be able to press another button to fire another relay that does the same thing, without tying up the Pi.
My code below first checks whether input_state_LHS is equal to False, then clears the LCD display, writes text to the LCD on one line, writes the value of my variable (LHS_feedtime) on the next line, and then fires the relay. The time.sleep on the next line is the bit I wish to be rid of, but I am unable to figure out the code to do it.
if input_state_LHS == False:
    ## calls the LCD_Clear function which has to be in the same folder as this file
    mylcd.lcd_clear()
    mylcd.lcd_display_string("LHS Feedtime",1,2)
    mylcd.lcd_display_string(str(round(LHS_feedtime, 2)) + " sec" , 2,5)
    GPIO.output(27, GPIO.input(12) )
    time.sleep(LHS_feedtime)
    mylcd.lcd_clear()
    mylcd.lcd_display_string("Flatson Feeding", 1)
    mylcd.lcd_display_string("Systems", 2,4)
    GPIO.output(27, GPIO.input(12) )
    menuitem = 0
Thanks for the help.
The functionality you need is in the Python standard library class threading.Timer. When you start a timer it launches another thread, which consists of a time delay followed by a call to a function that you specify. In contrast to time.sleep() which stops your main thread at that point, with a Timer your main thread will keep going.
Here is roughly what you want:
from threading import Timer

def turn_off_lcd():
    mylcd.lcd_clear()
    mylcd.lcd_display_string("Flatson Feeding", 1)
    mylcd.lcd_display_string("Systems", 2,4)
    GPIO.output(27, GPIO.input(12) )

if input_state_LHS == False:
    ## calls the LCD_Clear function which has to be in the same folder as this file
    mylcd.lcd_clear()
    mylcd.lcd_display_string("LHS Feedtime",1,2)
    mylcd.lcd_display_string(str(round(LHS_feedtime, 2)) + " sec" , 2,5)
    GPIO.output(27, GPIO.input(12) )
    t = Timer(LHS_feedtime, turn_off_lcd)
    t.start()
    menuitem = 0
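Because each Timer runs on its own thread, a second button can start its own timer while the first one is still pending, which is exactly the overlap asked for. A rough sketch with hypothetical names (input_state_RHS, RHS_feedtime and pin 22 are placeholders, not from the original post):

def turn_off_rhs():
    GPIO.output(22, GPIO.input(12))  # hypothetical second relay pin

if input_state_RHS == False:
    GPIO.output(22, GPIO.input(12))
    Timer(RHS_feedtime, turn_off_rhs).start()  # runs alongside the LHS timer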
Here you go; this will constantly loop the code until the 10 seconds have passed, but it will constantly print "10 seconds havent passed" (you can remove this line). As you will notice, this code does not use time.sleep() and so will not hold the script up.
import time

# Get the initial system time
timebuttonpressed = time.strftime("%H%M%S")
elapsedtime = time.strftime("%H%M%S")

while True:
    elapsedtime = time.strftime("%H%M%S")
    if input_state_LHS == False:
        # Get the new system time, but only set timesample2
        timebuttonpressed = time.strftime("%H%M%S")
        ## calls the LCD_Clear function which has to be in the same folder as this file
        mylcd.lcd_clear()
        mylcd.lcd_display_string("LHS Feedtime",1,2)
        mylcd.lcd_display_string(str(round(LHS_feedtime, 2)) + " sec" , 2,5)
        GPIO.output(27, GPIO.input(12) )
    # Check if 10 seconds have passed
    if((int(elapsedtime) - int(timebuttonpressed)) == 10):
        timebuttonpressed = time.strftime("%H%M%S")
        mylcd.lcd_clear()
        mylcd.lcd_display_string("Flatson Feeding", 1)
        mylcd.lcd_display_string("Systems", 2,4)
        GPIO.output(27, GPIO.input(12) )
        menuitem = 0
    print("10 seconds havent passed...")
So currently I have these two methods, where one reads the RF data from another device constantly and the other sends that data every so often.
How could I do this? I need the incoming RF data to be constantly updated and received, while the sendData() method just grabs the data from the global variable whenever it can.
Here's the code so far, but it's not working...
import httplib, urllib
import time, sys
import serial
from multiprocessing import Process

key = 'MY API KEY'
rfWaterLevelVal = 0
ser = serial.Serial('/dev/ttyUSB0',9600)

def rfWaterLevel():
    global rfWaterLevelVal
    rfDataArray = ser.readline().strip().split()
    print 'incoming: %s' %rfDataArray
    if len(rfDataArray) == 5:
        rfWaterLevelVal = float(rfDataArray[4])
        print 'RFWater Level1: %.3f cm' % (rfWaterLevelVal)
        #rfWaterLevel = 0

def sendData():
    global rfWaterLevelVal
    params = urllib.urlencode({'field1':rfWaterLevelVal, 'key':key})
    headers = {"Content-type" : "application/x-www-form-urlencoded","Accept": "text/plain"}
    conn = httplib.HTTPConnection("api.thingspeak.com:80", timeout = 5)
    conn.request("POST", "/update", params, headers)
    #print 'RFWater Level2: %.3f cm' % (rfWaterLevelVal)
    response = conn.getresponse()
    print response.status, response.reason
    data = response.read()
    conn.close()

while True:
    try:
        rfWaterLevel()
        p = Process(target=sendData(), args=())
        p.start()
        p.join()
        #Also tried threading...did not work..
        #t1 = threading.Thread(target=rfWaterLevel())
        #t2 = threading.Thread(target=sendData())
        #t1.start()
        #t1.join()
        #t2.join()
    except KeyboardInterrupt:
        print "caught keyboard interrupt"
        sys.exit()
Please help!
Just to clarify, I need the rfWaterLevel() method to run constantly, as the RF data comes in constantly, and I need sendData() to be called as soon as it's ready to send again (roughly every 5 seconds or so). But it seems that if there is any sort of delay in the incoming RF data, the RF data stops updating itself (on the receiving end), and thus the data being sent does not match what the RF transmitter is sending.
Thanks in advance!
I can't give you a full solution, but I can guide you in the right direction.
Your code has three problems:
1. Process starts (as the name suggests) a new process and not a new thread. A new process cannot share data with the old process, so you should use multithreading instead. Have a look at threading as explained here.
2. You are calling rfWaterLevel() inside the main thread. You need to start the second thread before entering the while loop.
3. You are creating the second thread again and again inside the while loop. Create it only once and put the while loop inside the function.
Your basic program structure should be like this:
import time
from threading import Thread  # needed for Thread below

def thread_function_1():
    while True:
        rfWaterLevel()

def thread_function_2():
    while True:
        sendData()
        time.sleep(5)

# start thread 1
thread1 = Thread(target = thread_function_1)
thread1.start()

# start thread 2
thread2 = Thread(target = thread_function_2)
thread2.start()

# wait for both threads to finish
thread1.join()
thread2.join()
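If the shared value is read and written from different threads, wrapping the accesses in a threading.Lock keeps them consistent during the slow HTTP call. This is only a sketch layered on the structure above, with read_water_level() and send_value() as placeholders for the serial-read and POST logic from the question:

import time
from threading import Thread, Lock

rf_lock = Lock()

def thread_function_1():
    global rfWaterLevelVal
    while True:
        value = read_water_level()   # placeholder for the ser.readline() parsing in rfWaterLevel()
        with rf_lock:
            rfWaterLevelVal = value

def thread_function_2():
    while True:
        with rf_lock:
            value = rfWaterLevelVal  # take a consistent snapshot before the slow HTTP POST
        send_value(value)            # placeholder for the conn.request(...) call in sendData()
        time.sleep(5)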
I've been dealing with this for a while. I don't think I can find an easy solution. What I want to know is how many hours I play games.
Why does printing result_1 return 0 sometimes and 1 other times? Why doesn't it return "No tasks are running..." or show me the running task?
import subprocess
from time import *
from datetime import *

command_1 = "TASKLIST /FI \"IMAGENAME eq Application-steam*\"" #Code for finding the "Banished" game
no_task_respond_0 = "INFO: No tasks are running which match the specified criteria."

while True:
    the_time = datetime.now()
    result_1 = subprocess.call(command_1, shell=True) # running this code prints the result whether i want it or not
    open_file = open("watcher.txt","w")
    current_task_file = open_file.write(str(result_1))
    print(current_task_file,"Inside of Current task FILE!!")
    if str(current_task_file) == str(no_task_respond_0): #If no games are running then continue
        continue
    else: #If the game started then note the time
        date_file = open("datefile.txt","a")
        date_watcher = date_file.write(str(the_time)+"\n")
    sleep(4)
When I close the game, this is the result that it returns:
**INFO: No tasks are running which match the specified criteria.**
**1 Inside of Current Task FILE!!**
This is the output while the game is running:
Image Name PID Session Name Session# Mem Usage
========================= ======== ================ =========== ============
Application-steam-x64.exe 14684 Console 1 298.588 K
**1 Inside of Current task FILE!!**
These 1s are from printing result_1.