Comparing old CPU usage with new one using Python not working - python

I am trying to check the CPU usage and print it to the user every 20 seconds.
When the code first runs it should print the CPU usage, but after that it should compare the old CPU usage with the new one, and
when the CPU usage has increased by more than 9%, print every process using over 1% of the CPU with the code below:
for process in psutil.process_iter():
    p = process.as_dict()
    # NOTE: "memory_percent" is the process's share of RAM, not CPU
    if p["memory_percent"] > 1:
        print(p["name"] + " is using " + str(p["memory_percent"]) + " of the CPU")
Here is what I have tried:
import psutil
import time

status = 'start'
while (status != 'stop'):
    old_cpu_usage = 0
    cpu_usage_percent = psutil.cpu_percent(1, False)
    print()
    # on the first reading, print the current CPU usage; otherwise compare
    # the new reading against the previous one
    if (old_cpu_usage == 0):
        print(f"The CPU usage is {str(cpu_usage_percent)}%")
        old_cpu_usage = cpu_usage_percent
    elif ((cpu_usage_percent - old_cpu_usage) / cpu_usage_percent * 100 > 9):
        for process in psutil.process_iter():
            p = process.as_dict()
            if p["memory_percent"] > 1:
                print(p["name"] + " is using " + str(p["memory_percent"]) + " of the CPU")
        old_cpu_usage = cpu_usage_percent
    else:
        print(f"The CPU usage did not increase significantly, the usage is {str(cpu_usage_percent)}%")
        old_cpu_usage = cpu_usage_percent
    time.sleep(2)
Upon trying the solution above, it doesn't work. Instead it keeps printing the first if statement and never makes any comparison.
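The cause is visible in the code: old_cpu_usage = 0 sits inside the while loop, so it is reset to zero on every iteration and the first branch always matches. A minimal sketch of a fix (my suggestion, not from the original post) moves the initialisation above the loop, and also switches to psutil's per-process cpu_percent, since memory_percent reports RAM share rather than CPU:

import psutil
import time

old_cpu_usage = 0  # initialise once, outside the loop, so it survives iterations
while True:
    cpu_usage_percent = psutil.cpu_percent(1, False)
    if old_cpu_usage == 0:
        print(f"The CPU usage is {cpu_usage_percent}%")
    elif (cpu_usage_percent - old_cpu_usage) / cpu_usage_percent * 100 > 9:
        # "cpu_percent" reports each process's CPU share (relative to one core);
        # the first reading per process is 0.0 until psutil has a time window
        for process in psutil.process_iter(["name", "cpu_percent"]):
            p = process.info
            if p["cpu_percent"] > 1:
                print(p["name"] + " is using " + str(p["cpu_percent"]) + "% of the CPU")
    else:
        print(f"The CPU usage did not increase significantly, the usage is {cpu_usage_percent}%")
    old_cpu_usage = cpu_usage_percent
    time.sleep(2)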

Related

cpu_usage() from the Python psutil library showing 100% CPU usage, but HWMonitor only 10-11%?

As in the title: why does the cpu_percent() call show 100% CPU, while in HWMonitor or the Windows Task Manager I can see only 10-11% usage? I have some code that measures CPU and RAM usage. Memory usage seems to work like a charm, but the CPU figure is somehow 10x greater than in Task Manager. Why?
import random
import threading
import psutil

def display_cpu():
    global running
    running = True
    currentProcess = psutil.Process()
    # start loop
    while running:
        print("CPU: ", currentProcess.cpu_percent(interval=1), "%", "| Memory: ", currentProcess.memory_info().rss/(1024*1024), "MB")

def start():
    global t
    # create thread and start it
    t = threading.Thread(target=display_cpu)
    t.start()

def stop():
    global running
    global t
    # use `running` to stop loop in thread so thread will end
    running = False
    # wait for thread's end
    t.join()

# ---

def insertion_sort():
    nums = []
    for i in range(30000):
        nums.append(random.randint(1, 10000))
    for i in range(1, len(nums)):
        item_to_insert = nums[i]
        j = i - 1
        while j >= 0 and nums[j] > item_to_insert:
            nums[j + 1] = nums[j]
            j -= 1
        nums[j + 1] = item_to_insert

# ---

for i in range(1):
    start()
    try:
        result = insertion_sort()
    finally:
        stop()
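No answer is quoted here, but a likely explanation (my note, an assumption): psutil's per-process cpu_percent() is expressed relative to a single logical core, so a busy single-threaded process reports close to 100% even though Task Manager averages over all cores and shows roughly 100/N percent on an N-core machine. Dividing by the logical core count reproduces the Task Manager figure:

import psutil

proc = psutil.Process()
ncores = psutil.cpu_count()  # number of logical cores

# cpu_percent(interval=1) is measured against one core; dividing by the
# logical core count gives the whole-machine percentage that Task Manager
# and HWMonitor display
print("CPU (single-core basis):", proc.cpu_percent(interval=1), "%")
print("CPU (whole machine):    ", proc.cpu_percent(interval=1) / ncores, "%")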

How can I play random audio files based on current time in Python 2.7?

Background: I'm using a Raspberry Pi rev 2 B to run a nature-sound white-noise generator of sorts that randomly plays audio tracks of varying length based on the time of night/morning. Some tracks are only a minute long, some are several hours. I'm looking for a way to check the time and change which type of sounds play based on the time.
Current issue: I can start the appropriate audio for the time when the program first executes, but the timeloop execution stops polling once omxplayer starts up.
I have tried to call OMXPlayer without interrupting the time checker that determines what kind of audio to play, but once audio playback starts I have been unable to keep checking the time. Even if the play_audio() function weren't recursive, I would still like a way for the time checker to keep executing while the audio plays.
#!/usr/bin/env python
import datetime, time, os, subprocess, random
from timeloop import Timeloop
from datetime import timedelta
from time import sleep
from omxplayer.player import OMXPlayer
from pathlib import Path

tl = Timeloop()
running_cycle = "off" # default value for the time cycle currently running

# function to check current time cycle
def check_time():
    dt_now = datetime.datetime.now()
    t_now = dt_now.time()
    t_night = datetime.time(hour=2, minute=0)
    t_twilight = datetime.time(hour=4, minute=45)
    t_morning = datetime.time(hour=7, minute=0)
    t_end = datetime.time(hour=10, minute=0)
    if t_night <= t_now < t_twilight:
        return "night"
    elif t_twilight <= t_now < t_morning:
        return "twilight"
    elif t_morning <= t_now < t_end:
        return "morning"
    else:
        return "off"

# function that plays the audio
def play_audio(time_cycle):
    subprocess.call("killall omxplayer", shell=True)
    randomfile = random.choice(os.listdir("/home/pi/music/nature-sounds/" + time_cycle))
    file = '/home/pi/music/nature-sounds/' + time_cycle + '/' + randomfile
    path = Path(file)
    player = OMXPlayer(path)
    play_audio(time_cycle)

# function that determines whether to maintain current audio cycle or play another
def stay_or_change():
    global running_cycle
    current_cycle = check_time()
    if running_cycle != current_cycle:
        if current_cycle == "off":
            player.quit()
        else:
            running_cycle = current_cycle
            print "Now playing: " + running_cycle + " #{}".format(time.ctime())
            play_audio(running_cycle)
# starts timeloop checker to play audio - works until stay_or_change() calls play_audio
@tl.job(interval=timedelta(seconds=10))
def job_10s():
    print "10s job - running cycle: " + running_cycle + " - current time: {}".format(time.ctime())
    stay_or_change()

# starts the timeloop
if __name__ == "__main__":
    tl.start(block=True)
I have also tried running OMXPlayer with subprocess.run(), but it still seems to hang after the player starts. I'm completely open to any recommendations for background-threading media players, process daemons, or time-based execution methods.
I'm new to Python.
I had the recursion all wrong, so it got caught in an infinite loop, and the timeloop function wasn't really viable for this solution. Instead I wrote a function that plays the sound and then calls the function that checks the time and plays from the appropriate sub-directory (or plays nothing and waits).
Here's what I managed to come up with:
#!/usr/bin/env python
import datetime, time, os, subprocess, random
from datetime import timedelta
from time import sleep
from omxplayer.player import OMXPlayer

def check_time():
    dt_now = datetime.datetime.now()
    t_now = dt_now.time()
    t_night = datetime.time(hour=0, minute=0)
    t_twilight = datetime.time(hour=5, minute=45)
    t_morning = datetime.time(hour=7, minute=45)
    t_end = datetime.time(hour=10, minute=0)
    if t_night <= t_now < t_twilight:
        return "night"
    elif t_twilight <= t_now < t_morning:
        return "twilight"
    elif t_morning <= t_now < t_end:
        return "morning"
    else:
        return "off"

def play_audio(time_cycle):
    randomfile = random.choice(os.listdir("/home/pi/music/nature-sounds/" + time_cycle))
    file = '/home/pi/music/nature-sounds/' + time_cycle + '/' + randomfile
    print "playing track: " + randomfile
    cmd = 'omxplayer --vol -200 ' + file
    subprocess.call(cmd, shell=True)
    what_to_play()

def what_to_play():
    current_cycle = check_time()
    if current_cycle == "off":
        print "sounds currently off - #{}".format(time.ctime())
        time.sleep(30)
        what_to_play()
    else:
        print "Now playing from " + current_cycle + " #{}".format(time.ctime())
        play_audio(current_cycle)

what_to_play()
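One caveat worth noting (my comment, not part of the original answer): play_audio() and what_to_play() call each other, so every finished track adds a stack frame and a long overnight run can eventually hit Python's recursion limit. The same behaviour can be written as a flat loop, assuming play_audio() drops its trailing what_to_play() call:

while True:
    current_cycle = check_time()
    if current_cycle == "off":
        print "sounds currently off - #{}".format(time.ctime())
        time.sleep(30)
    else:
        print "Now playing from " + current_cycle + " #{}".format(time.ctime())
        play_audio(current_cycle)  # assumes play_audio() no longer calls what_to_play()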

Python Multiprocessing - Too Slow

I have built a multiprocessing password cracker (using a wordlist) for a specific function; it halved the time needed compared to using a single process.
The original problem was that it would show you the cracked password and terminate the worker, but the remaining workers carried on until they ran out of words to hash! Not ideal.
My new step forward is to use Manager.Event() to terminate the remaining workers. This works as I had hoped (after some trial and error), but the application now takes far longer than it would as a single process. I'm sure this must be due to the if check inside pwd_find(), but I thought I would seek some advice.
#!/usr/bin/env python
import hashlib, os, time, math
from hashlib import md5
from multiprocessing import Pool, cpu_count, Manager

def screen_clear(): # Small function for clearing the screen on Unix or Windows
    if os.name == 'nt':
        return os.system('cls')
    else:
        return os.system('clear')

cores = cpu_count() # Var containing number of cores (threads)

screen_clear()
print ""
print "Welcome to the Technicolor md5 cracker"
print ""
user = raw_input("Username: ")
print ""
nonce = raw_input("Nonce: ")
print ""
hash = raw_input("Hash: ")
print ""
file = raw_input("Wordlist: ")
screen_clear()
print "Cracking the password for \"" + user + "\" using "
time1 = time.time() # Begins the 'clock' for timing

realm = "Technicolor Gateway" # These 3 variables don't appear to change
qop = "auth"
uri = "/login.lp"
HA2 = md5("GET" + ":" + uri).hexdigest() # This hash contains no changing variables, so it doesn't need recalculating

file = open(file, 'r') # Opens the wordlist file
wordlist = file.readlines() # This enables us to use len()
length = len(wordlist)
screen_clear()
print "Cracking the password for \"" + user + "\" using " + str(length) + " words"

break_points = [] # List that will hold start and stopping points
for i in range(cores): # Creates start and stopping points based on length of word list
    break_points.append({"start": int(math.ceil((length+0.0)/cores * i)), "stop": int(math.ceil((length+0.0)/cores * (i + 1)))})

def pwd_find(start, stop, event):
    for number in range(start, stop):
        if not event.is_set():
            word = wordlist[number]
            pwd = word.replace("\n", "") # Removes newline character
            HA1 = md5(user + ":" + realm + ":" + pwd).hexdigest()
            hidepw = md5(HA1 + ":" + nonce + ":" + "00000001" + ":" + "xyz" + ":" + qop + ":" + HA2).hexdigest()
            if hidepw == hash:
                screen_clear()
                time2 = time.time() # Stops the 'clock'
                timetotal = math.ceil(time2 - time1) # Calculates the time taken
                print "\"" + pwd + "\"" + " = " + hidepw + " (in " + str(timetotal) + " seconds)"
                print ""
                event.set()
                p.terminate # NOTE: missing "()", so these two lines are attribute lookups that do nothing
                p.join
        else:
            p.terminate # NOTE: same no-ops here
            p.join

if __name__ == '__main__': # Added because the multiprocessing module sometimes acts funny without it.
    p = Pool(cores) # Number of processes to create.
    m = Manager()
    event = m.Event()
    for i in break_points: # Cycles through the break_points list created above.
        i['event'] = event
        a = p.apply_async(pwd_find, kwds=i, args=tuple()) # This will start the separate processes.
    p.close() # Prevents any more processes being started
    p.join() # Waits for worker processes to end
    if event.is_set():
        end = raw_input("hit enter to exit")
        file.close() # Closes the wordlist file
        screen_clear()
        exit()
    else:
        screen_clear()
        time2 = time.time() # Stops the 'clock'
        totaltime = math.ceil(time2 - time1) # Calculates the time taken
        print "Sorry, your password was not found (in " + str(totaltime) + " seconds) out of " + str(length) + " words"
        print ""
        end = raw_input("hit enter to exit")
        file.close() # Closes the wordlist file
        screen_clear()
        exit()
Edit (for @noxdafox):

def finisher(answer):
    if answer:
        p.terminate()
        p.join()
        end = raw_input("hit enter to exit")
        file.close() # Closes the wordlist file
        screen_clear()
        exit()

def pwd_find(start, stop):
    for number in range(start, stop):
        word = wordlist[number]
        pwd = word.replace("\n", "") # Removes newline character
        HA1 = md5(user + ":" + realm + ":" + pwd).hexdigest()
        hidepw = md5(HA1 + ":" + nonce + ":" + "00000001" + ":" + "xyz" + ":" + qop + ":" + HA2).hexdigest()
        if hidepw == hash:
            screen_clear()
            time2 = time.time() # Stops the 'clock'
            timetotal = math.ceil(time2 - time1) # Calculates the time taken
            print "\"" + pwd + "\"" + " = " + hidepw + " (in " + str(timetotal) + " seconds)"
            print ""
            return True
        elif hidepw != hash:
            return False # NOTE: this returns on the first miss, so each worker only ever tests one word

if __name__ == '__main__': # Added because the multiprocessing module sometimes acts funny without it.
    p = Pool(cores) # Number of processes to create.
    for i in break_points: # Cycles through the break_points list created above.
        a = p.apply_async(pwd_find, kwds=i, args=tuple(), callback=finisher) # This will start the separate processes.
    p.close() # Prevents any more processes being started
    p.join() # Waits for worker processes to end
You can use the Pool primitives to solve your problem. You don't need to share an Event object, whose access is synchronised and slow.
Here is an example of how to terminate a Pool given the desired result from a worker.
You can simply signal the Pool by returning a specific value from the worker and terminating the pool within a callback.
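The answer's example code isn't reproduced in this snapshot, so here is a minimal sketch of the pattern it describes (my reconstruction, using only standard multiprocessing.Pool calls; worker and on_result are made-up names):

from multiprocessing import Pool

def worker(start, stop):
    # ... hash wordlist[start:stop] as in pwd_find ...
    # return the password on success, None otherwise
    return None

def on_result(result):
    if result is not None:       # a non-None return value signals success
        print "Found:", result
        pool.terminate()         # kill the remaining workers immediately

if __name__ == '__main__':
    pool = Pool()
    for chunk in [(0, 100), (100, 200)]:
        pool.apply_async(worker, args=chunk, callback=on_result)
    pool.close()
    pool.join()                  # returns early if the callback terminated the pool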
I think your hunch is correct. You are checking a synchronization primitive inside a fast loop. I would only check whether the event is set every so often. You can experiment to find the sweet spot where you check often enough to avoid wasted work, but not so often that you slow the program down.
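A sketch of that suggestion (my illustration; the 1000-word interval is an arbitrary starting point to tune):

def pwd_find(start, stop, event):
    for number in range(start, stop):
        # only touch the synchronised Manager event once per 1000 words
        if number % 1000 == 0 and event.is_set():
            return
        pwd = wordlist[number].replace("\n", "")
        HA1 = md5(user + ":" + realm + ":" + pwd).hexdigest()
        hidepw = md5(HA1 + ":" + nonce + ":" + "00000001" + ":" + "xyz" + ":" + qop + ":" + HA2).hexdigest()
        if hidepw == hash:
            event.set()
            return pwd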

Python 2.7 datetime.datetime.now() and datetime.timedelta()

I have the following code and am stuck in the while loop.
I know there is a problem with the while datetime.datetime.now() < (datetime.datetime.now() + datetime.timedelta(minutes=wait_time)): line.
Can anyone help please ?
nodes_with_scanner = []
wait_time = 60
while datetime.datetime.now() < (datetime.datetime.now() + datetime.timedelta(minutes=wait_time)):
    nodes_with_scanner = get_nodes_with_scanner_in_dps(self.node_names, scanner_id, username=self.users[0].username)
    log.logger.debug("Number of pre-defined {0} scanners detected in DPS: {1}/{2}".format(scanner_type, len(nodes_with_scanner), len(self.node_names)))
    if state == "create":
        if len(self.node_names) == len(nodes_with_scanner):
            log.logger.debug("All {0} pre-defined scanners with id '{1}' have been successfully created in DPS for nodes '{2}'".format(scanner_type, scanner_id, ", ".join(self.node_names)))
            return
    elif state == "delete":
        if len(nodes_with_scanner) < 1:
            log.logger.debug("All {0} pre-defined scanners with id '{1}' have been successfully deleted in DPS for nodes '{2}'".format(scanner_type, scanner_id, ", ".join(self.node_names)))
            return
    log.logger.debug("Still waiting on some {0} pre-defined scanners to '{1}' in DPS; sleeping for 1 minute before next check".format(scanner_type, state))
    time.sleep(60)
You are asking if the current time is smaller than the current time plus a delta. Of course that's going to be true each and every time; the future is always further away into the future.
Record a starting time once:
start = datetime.datetime.now()
while datetime.datetime.now() < start + datetime.timedelta(minutes=wait_time):
If wait_time doesn't vary in the loop, store the end time (current time plus delta) instead:
end = datetime.datetime.now() + datetime.timedelta(minutes=wait_time)
while datetime.datetime.now() < end:
It may be easier to just use time.time() here:
end = time.time() + 60 * wait_time
while time.time() < end:
You use datetime.datetime.now() in your while loop, which means that on each iteration you check whether the time now is lower than the time now plus a delta.
That is logically wrong, because it will be True forever: the time now will always be lower than the time now plus a delta.
You should change it to this:
time_to_start = datetime.datetime.now()
while datetime.datetime.now() < (time_to_start + datetime.timedelta(minutes=wait_time)):
    print "do something"

Is there any way to mitigate the cost of multiprocessing.Process.start()?

So I've been tooling around with threads and processes in Python, and along the way I cooked up a pattern that allows the same class to be pitched back and forth between threads and/or processes without losing state data, by using by-name RPC calls and Pipes.
Everything works fine, but it takes an absurd amount of time to start a Process compared to loading the state from a pickled file, while Thread.start() returns immediately, so there's only the minor cost of the constructor. So: what's the best way to start a Process with a large initial state without an absurd startup time? Snips and debug output below; the size of "counter" is just over 34,000K pickled to file with protocol 2.
...
elif command == "load":
    # RPC call - loads state from file "pickle_name":
    timestart = time.time()
    print do_remote("take_pickled_state", pickle_name)
    print "Load cost: " + str(time.time() - timestart)
elif command == "asproc":
    if type(_async) is multiprocessing.Process:
        print "Already running as a Process you fool!."
    else:
        do_remote("stop")
        _async.join()
        p_pipe.close()
        p_pipe, c_pipe = multiprocessing.Pipe()
        timestart = time.time()
        _async = multiprocessing.Process(target=counter, args=(c_pipe,))
        # Why is this so expensive!!?!?!?! AAARRG!!?!
        _async.start()
        print "Start cost: " + str(time.time() - timestart)
elif command == "asthread":
    if type(_async) is threading.Thread:
        print "Already running as a Thread you fool!."
    else:
        # Returns the state of counter on stop:
        timestart = time.time()
        counter = do_remote("stop")
        print "Proc stop time: " + str(time.time() - timestart)
        _async.join()
        p_pipe.close()
        p_pipe, c_pipe = multiprocessing.Pipe()
        timestart = time.time()
        _async = threading.Thread(target=counter, args=(c_pipe,))
        _async.start()
        print "Start cost: " + str(time.time() - timestart)
...
Corresponding debug statements:
Waiting for command...
>>> load
Load complete.
Load cost: 2.18700003624
Waiting for command...
>>> asproc
Start cost: 23.3910000324
Waiting for command...
>>> asthread
Proc stop time: 0.921999931335
Start cost: 0.0629999637604
Edit 1:
OS: Win XP 64.
Python version: 2.7.x
Processor: Xeon quad core.
Edit 2:
The thing I really don't get is that it takes ~1 second for the process stop to return the entire state, but 20x longer to receive the state and start. (Debug output added above.)
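No answer is quoted in this snapshot, but the usual explanation on Windows (my note): Windows has no fork(), so Process.start() launches a brand-new interpreter, re-imports the parent module, and pickles the target and its arguments over to the child. With target=counter carrying ~34 MB of state, start() pays the interpreter spawn plus a full pickle/unpickle of that state. One way to keep start() itself cheap is to spawn with a tiny bootstrap target and push the heavy state through the Pipe afterwards; a sketch under those assumptions (bootstrap is my name, Counter is a stand-in for the poster's stateful callable):

import multiprocessing

class Counter(object):
    # stand-in for the poster's large stateful callable
    def __init__(self):
        self.state = list(range(1000000))  # pretend this is the ~34 MB of state
    def __call__(self, pipe):
        pipe.send(len(self.state))         # minimal RPC-ish reply

def bootstrap(c_pipe):
    counter = c_pipe.recv()   # receive the heavy state *after* start()
    counter(c_pipe)           # hand control to the stateful callable

if __name__ == "__main__":
    counter = Counter()
    p_pipe, c_pipe = multiprocessing.Pipe()
    proc = multiprocessing.Process(target=bootstrap, args=(c_pipe,))
    proc.start()              # fast: only "bootstrap" and a pipe handle get pickled
    p_pipe.send(counter)      # the large transfer now happens after startup
    print p_pipe.recv()
    proc.join()

This doesn't remove the transfer cost, but it moves it out of Process.start() and lets the parent keep working while the child unpickles.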
