How would I add a timer above other text? - python

I would like to display a timer above an input prompt, then stop the timer once the player inputs anything.
So far I've tried using threading with sys.stdout.flush(), and a second thread to terminate the timer thread, but I wasn't able to input anything since the timer output just followed the cursor.
My Code:
def DisplayTime():
    import sys
    while True:
        sys.stdout.write('\r' + str(format_time))
        sys.stdout.flush()

displayTime = threading.Thread(name='DisplayTime', target=DisplayTime)

Somewhere else:

displayTime.start()
What happened:
>>> Text Here
>>> Timer: 0:00:00(Wait for input)
I was expecting something like this:
>>> Timer: 0:00:00
>>> Text Here
>>> (Wait for input)
>>> Timer: 0:00:01
>>> Text Here
>>> (Wait for input)

The following code prints a timer, followed by a line of text, followed by an empty line on which the cursor is displayed. start_time lets us calculate the elapsed time, and last_printed keeps track of the last printed time so we don't have to print on every iteration. Note that msvcrt is Windows-only. The important parts are taken from other Stack Overflow answers:
Move cursor up one line
Non-blocking console input
import sys
import time
import msvcrt
import datetime

start_time = time.time()
last_printed = -1
print('\n')
while True:
    elapsed_time = time.time() - start_time
    int_elapsed = int(elapsed_time)
    if int_elapsed > last_printed:
        elapsed_td = datetime.timedelta(seconds=int_elapsed)
        # Go up one line: https://stackoverflow.com/a/11474509/20103413
        sys.stdout.write('\033[F'*2 + str(elapsed_td) + '\nText here\n')
        sys.stdout.flush()
        last_printed = int_elapsed
    # Non-blocking input: https://stackoverflow.com/a/2409034/20103413
    if msvcrt.kbhit():
        print(msvcrt.getch().decode())
        break
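As a side note, the str() call on the datetime.timedelta above is what produces the H:MM:SS display used by the timer. A quick sketch of the formatting it yields:

```python
import datetime

# str() on a timedelta gives the H:MM:SS form shown by the timer
print(str(datetime.timedelta(seconds=0)))    # 0:00:00
print(str(datetime.timedelta(seconds=61)))   # 0:01:01
print(str(datetime.timedelta(seconds=3661))) # 1:01:01
```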


While loop won't run until tray icon is closed

import pystray
import PIL.Image
from datetime import datetime
from text_to_speech import speak
from time import time, sleep
import os
from gtts import gTTS
import vlc

image = PIL.Image.open('hourglass.jpg')

def on_clicked(icon, item):
    icon.stop()

icon = pystray.Icon('Hourglass', image, menu=pystray.Menu(
    pystray.MenuItem('Exit', on_clicked)))
icon.run()

stop = False  ## To loop forever
while stop == False:
    print('test')
    now = datetime.now()
    second = now.second
    minute = now.minute
    if second == 0:
        myText = 'It is now ' + (now.strftime("%I %p"))
        print(myText)
        output = gTTS(text=myText, lang='en', slow=False)
        output.save("Time.mp3")
        p = vlc.MediaPlayer("Time.mp3")
        p.play()
        sleep(10)
        os.remove("Time.mp3")
This is my code. For some reason I can't figure out, until I press the icon and choose Exit, the rest of the code won't run. I was trying to make a tray icon for when I run this in the background.
icon.run() internally runs a loop, so until that loop breaks (by closing the icon) the code below it will not be executed. If you want the icon and the code below it to run independently, you can use threads.
import threading

def run_icon():
    icon = pystray.Icon('Hourglass', image, menu=pystray.Menu(
        pystray.MenuItem('Exit', on_clicked)))
    icon.run()

def run_second():
    stop = False  ## To loop forever
    while stop == False:
        print('test')
        now = datetime.now()
        second = now.second
        minute = now.minute
        if second == 0:
            myText = 'It is now ' + (now.strftime("%I %p"))
            print(myText)
            output = gTTS(text=myText, lang='en', slow=False)
            output.save("Time.mp3")
            p = vlc.MediaPlayer("Time.mp3")
            p.play()
            sleep(10)
            os.remove("Time.mp3")

Thread1 = threading.Thread(target=run_icon)
Thread2 = threading.Thread(target=run_second)
Thread1.start()  # a thread must be started before it can be joined
Thread2.start()
Thread1.join()  # wait for thread to stop
Thread2.join()  # wait for thread to stop
You can use icon.run_detached(). Then just run your main code underneath.
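Independent of pystray, the start-then-join pattern above can be sketched with plain threads (a minimal, self-contained example; the worker function and names here are made up for illustration):

```python
import threading
import time

results = []

def worker(name, delay):
    # Simulate some background work, then record a result
    time.sleep(delay)
    results.append(name)

t1 = threading.Thread(target=worker, args=('icon-loop', 0.05))
t2 = threading.Thread(target=worker, args=('clock-loop', 0.01))
t1.start()   # both threads must be started...
t2.start()
t1.join()    # ...before the main thread waits for them
t2.join()
print(sorted(results))  # ['clock-loop', 'icon-loop']
```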

How do I get the computational time of my code using this Python code?

I found the following code on a different question. But, I'm not sure where to place my code. I've tried placing my code after the entire code, and then tried placing it between the start=time() line and the ones after it. But, none of these are printing an elapsed time. Anyone know where I would place my lines of code to get an elapsed time printed?
# python3
import atexit
from time import time, strftime, localtime
from datetime import timedelta

def secondsToStr(elapsed=None):
    if elapsed is None:
        return strftime("%Y-%m-%d %H:%M:%S", localtime())
    else:
        return str(timedelta(seconds=elapsed))

def log(s, elapsed=None):
    line = "="*40
    print(line)
    print(secondsToStr(), '-', s)
    if elapsed:
        print("Elapsed time:", elapsed)
    print(line)
    print()

def endlog():
    end = time()
    elapsed = end-start
    log("End Program", secondsToStr(elapsed))

start = time()
atexit.register(endlog)
log("Start Program")
Update: I ended up using this code instead:
import pandas as pd
start = pd.Timestamp.now()
# code
print(pd.Timestamp.now()-start)
That's more elaborate than it needs to be. As long as you can identify the exact code you want to time, it's just:
import time
...
start = time.time()
execute_my_code_1()
elapsed1 = time.time() - start
start = time.time()
execute_my_code_2()
elapsed2 = time.time() - start
print(f"One took {elapsed1}s, two took {elapsed2}s.")
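For interval timing, time.perf_counter() is generally a better choice than time.time(), since it uses a monotonic, high-resolution clock that is unaffected by system clock adjustments. A small sketch along the same lines (the summing loop just stands in for the code being timed):

```python
import time

start = time.perf_counter()
total = sum(range(1_000_000))   # the code being timed
elapsed = time.perf_counter() - start
print(f"Summing took {elapsed:.4f}s, result {total}")
```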

How to get Thread execution time in Python

Hello, I have a script that does a GET request, and I need to measure the thread that runs that function. This is the code I have written, but it doesn't show the correct time: it shows 0, and sometimes 0.001 or something like that.
import requests
import threading
import time

def functie():
    URL = "http://10.250.100.170:9082/SPVWS2/rest/listaMesaje"
    r = requests.get(url=URL)
    data = r.json()

threads = []
for i in range(5):
    start = time.clock_gettime_ns(time.CLOCK_MONOTONIC)
    t = threading.Thread(target=functie)
    threads.append(t)
    t.start()
    end = time.clock_gettime_ns(time.CLOCK_MONOTONIC)
    print(end - start)
I need an example on how to get in my code the exact thread execution time. Thanks
The code in this script runs on the main thread, so you are only measuring how long it takes to create and start thread t, not how long the thread runs. Instead, you can record the start and end times inside the thread's function and tell the main thread to wait until thread t has finished, like this:
import requests
import threading
import time

threads = []
start = []
end = []

def functie():
    start.append(time.clock_gettime_ns(time.CLOCK_MONOTONIC))
    URL = "http://10.250.100.170:9082/SPVWS2/rest/listaMesaje"
    r = requests.get(url=URL)
    data = r.json()
    end.append(time.clock_gettime_ns(time.CLOCK_MONOTONIC))

for i in range(5):
    t = threading.Thread(target=functie)
    threads.append(t)
    t.start()

for (i, t) in enumerate(threads):
    t.join()
    print(end[i] - start[i])
The other answer can produce incorrect results. The start and end times are appended in whatever order the threads happen to run and finish, which may be any order, so end[i] - start[i] may pair one thread's start with a different thread's end.
A better way would be to wrap the target functions of the threads with code that does this:
import threading
import time

def thread_time(target):
    def wrapper(*args, **kwargs):
        st = time.time()
        try:
            return target(*args, **kwargs)
        finally:
            et = time.time()
            print(et - st)
            threading.current_thread().duration = et - st
    return wrapper

def functie():
    print("starting")
    time.sleep(1)
    print("ending")

t = threading.Thread(target=thread_time(functie))
t.start()
t.join()
print(t.duration)
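A similar per-task timing can also be sketched with concurrent.futures, which avoids attaching attributes to thread objects (a minimal example; the sleeping worker and the timed() helper are made up for illustration):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def timed(target, *args):
    # Run target and return (result, elapsed_seconds)
    st = time.perf_counter()
    result = target(*args)
    return result, time.perf_counter() - st

def worker(delay):
    time.sleep(delay)  # stands in for the real request
    return delay

with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(timed, worker, d) for d in (0.05, 0.01)]
    durations = [f.result()[1] for f in futures]

print(all(d > 0 for d in durations))  # True
```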

IndentationError: unexpected indent...FRUSTRATING

I'm very new at Python scripting and am working on a script to turn on a fan when my Raspberry Pi 3 reaches a specific temperature. I've been trying to debug my code all day and can't figure out what's wrong. Here is my code:
import os
import sys
import signal
import subprocess
import atexit
import time
from time import sleep
import RPi.GPIO as GPIO

pin = 18
maxTMP = 60

def setup():
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(pin, GPIO.OUT)
    GPIO.setwarnings(False)
    return()

def setPin(mode):
    GPIO.output(pin, mode)
    return()

def exit_handler():
    GPIO.cleanup()

def FanON():
    SetPin(True)
    return()

def FanOFF():
    SetPin(False)
    return()

try:
    setup()
    while True:
        process = subprocess.Popen('/opt/vc/bin/vcgencmd measure_temp',
                                   stdout=subprocess.PIPE, shell=True)
        temp,err = process.communicate()
        temp = str(temp).replace("temp=","")
        temp = str(temp).replace("\'C\n","")
        temp = float(temp)
        if temp>maxTMP:
            FanON()
        else:
            FanOFF()
        sleep(5)
finally:
    exit_handler()
Here is my error:
  File "/home/pi/Scripts/run-fan.py", line 36
    while True:
    ^
IndentationError: unexpected indent
I've tried to indent every way possible. I need help.
Thanks!
I want to preface this with, you should use four spaces for your indentation. If you do, it will be way, way easier to see problems like the one you have here. If you use an IDE like Spyder or PyCharm, there are settings that automatically highlight indentation problems for you (regardless of how many spaces you want to use).
That said, with your current indentation scheme of one-space-per-indent, you want to replace your bottom block with this:
try:
 setup()
 while True:
  process = subprocess.Popen('/opt/vc/bin/vcgencmd measure_temp',
                             stdout=subprocess.PIPE, shell=True)
  temp,err = process.communicate()
  temp = str(temp).replace("temp=","")
  temp = str(temp).replace("\'C\n","")
  temp = float(temp)
  if temp>maxTMP:
   FanON()
  else:
   FanOFF()
  sleep(5)
If you used four spaces instead of one on your original code, it would have looked like this:
try:
    setup()
    while True:
        process = subprocess.Popen('/opt/vc/bin/vcgencmd measure_temp',
                                   stdout=subprocess.PIPE, shell=True)
        temp,err = process.communicate()
        temp = str(temp).replace("temp=","")
        temp = str(temp).replace("\'C\n","")
        temp = float(temp)
        if temp>maxTMP:
            FanON()
        else:
            FanOFF()
        sleep(5)
There's another problem here, which is that your while True block will currently never exit (maybe you want a break statement somewhere).
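As an aside, the string-replace parsing of the vcgencmd output can be made more robust with a regular expression (a sketch; the sample string mimics the "temp=42.8'C" format that vcgencmd prints):

```python
import re

def parse_temp(output):
    # vcgencmd measure_temp prints something like "temp=42.8'C"
    match = re.search(r"temp=([\d.]+)", output)
    if match is None:
        raise ValueError("unexpected output: %r" % output)
    return float(match.group(1))

print(parse_temp("temp=42.8'C"))  # 42.8
```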

How many network ports does Linux allow python to use?

So I have been trying to multi-thread some internet connections in Python. I have been using the multiprocessing module so I can get around the Global Interpreter Lock. But it seems that the system only gives one open connection port to Python, or at least it only allows one connection to happen at once. Here is an example of what I am saying.
*Note that this is running on a Linux server
from multiprocessing import Process, Queue
import urllib
import random

# Generate 10,000 random urls to test and put them in the queue
queue = Queue()
for each in range(10000):
    rand_num = random.randint(1000,10000)
    url = ('http://www.' + str(rand_num) + '.com')
    queue.put(url)

# Main function for checking to see if generated url is active
def check(q):
    while True:
        try:
            url = q.get(False)
            try:
                request = urllib.urlopen(url)
                del request
                print url + ' is an active url!'
            except:
                print url + ' is not an active url!'
        except:
            if q.empty():
                break

# Then start all the threads (50)
for thread in range(50):
    task = Process(target=check, args=(queue,))
    task.start()
So if you run this you will notice that it starts 50 instances on the function but only runs one at a time. You may think that the 'Global Interpreter Lock' is doing this but it isn't. Try changing the function to a mathematical function instead of a network request and you will see that all fifty threads run simultaneously.
So will I have to work with sockets? Or is there something I can do that will give python access to more ports? Or is there something I am not seeing? Let me know what you think! Thanks!
*Edit
So I wrote this script to test things better with the requests library. It seems as though I had not tested it very well with this before. (I had mainly used urllib and urllib2)
from multiprocessing import Process, Queue
from threading import Thread
from Queue import Queue as Q
import requests
import time

# A main timestamp
main_time = time.time()

# Generate 100 urls to test and put them in the queue
queue = Queue()
for each in range(100):
    url = ('http://www.' + str(each) + '.com')
    queue.put(url)

# Timer queue
time_queue = Queue()

# Main function for checking to see if generated url is active
def check(q, t_q):  # args are queue and time_queue
    while True:
        try:
            url = q.get(False)
            # Make a timestamp
            t = time.time()
            try:
                request = requests.head(url, timeout=5)
                t = time.time() - t
                t_q.put(t)
                del request
            except:
                t = time.time() - t
                t_q.put(t)
        except:
            break

# Then start all the threads (20)
thread_list = []
for thread in range(20):
    task = Process(target=check, args=(queue, time_queue))
    task.start()
    thread_list.append(task)

# Join all the threads so the main process doesn't quit
for each in thread_list:
    each.join()

main_time_end = time.time()

# Put the timerQueue into a list to get the average
time_queue_list = []
while True:
    try:
        time_queue_list.append(time_queue.get(False))
    except:
        break

# Results of the time
average_response = sum(time_queue_list) / float(len(time_queue_list))
total_time = main_time_end - main_time
line = "Multiprocessing: Average response time: %s sec. -- Total time: %s sec." % (average_response, total_time)
print line

# A main timestamp
main_time = time.time()

# Generate 100 urls to test and put them in the queue
queue = Q()
for each in range(100):
    url = ('http://www.' + str(each) + '.com')
    queue.put(url)

# Timer queue
time_queue = Queue()

# Main function for checking to see if generated url is active
def check(q, t_q):  # args are queue and time_queue
    while True:
        try:
            url = q.get(False)
            # Make a timestamp
            t = time.time()
            try:
                request = requests.head(url, timeout=5)
                t = time.time() - t
                t_q.put(t)
                del request
            except:
                t = time.time() - t
                t_q.put(t)
        except:
            break

# Then start all the threads (20)
thread_list = []
for thread in range(20):
    task = Thread(target=check, args=(queue, time_queue))
    task.start()
    thread_list.append(task)

# Join all the threads so the main process doesn't quit
for each in thread_list:
    each.join()

main_time_end = time.time()

# Put the timerQueue into a list to get the average
time_queue_list = []
while True:
    try:
        time_queue_list.append(time_queue.get(False))
    except:
        break

# Results of the time
average_response = sum(time_queue_list) / float(len(time_queue_list))
total_time = main_time_end - main_time
line = "Standard Threading: Average response time: %s sec. -- Total time: %s sec." % (average_response, total_time)
print line

# Do the same thing all over again but this time do each url at a time
# A main timestamp
main_time = time.time()

# Generate 100 urls and test them
timer_list = []
for each in range(100):
    url = ('http://www.' + str(each) + '.com')
    t = time.time()
    try:
        request = requests.head(url, timeout=5)
        timer_list.append(time.time() - t)
    except:
        timer_list.append(time.time() - t)

main_time_end = time.time()

# Results of the time
average_response = sum(timer_list) / float(len(timer_list))
total_time = main_time_end - main_time
line = "Not using threads: Average response time: %s sec. -- Total time: %s sec." % (average_response, total_time)
print line
As you can see, it is multithreading very well. In fact, most of my tests show that the threading module is faster than the multiprocessing module. (I don't understand why!) Here are some of my results.
Multiprocessing: Average response time: 2.40511314869 sec. -- Total time: 25.6876308918 sec.
Standard Threading: Average response time: 2.2179402256 sec. -- Total time: 24.2941861153 sec.
Not using threads: Average response time: 2.1740363431 sec. -- Total time: 217.404567957 sec.
This was done on my home network, the response time on my server is much faster. I think my question has been answered indirectly, since I was having my problems on a much more complex script. All of the suggestions helped me optimize it very well. Thanks to everyone!
it starts 50 instances on the function but only runs one at a time
You have misinterpreted the results of htop. Only a few, if any, copies of python will be runnable at any given instant. Most of them will be blocked waiting for network I/O.
The processes are, in fact, running parallel.
Try changing the function to a mathematical function instead of a network request and you will see that all fifty threads run simultaneously.
Changing the task to a mathematical function merely illustrates the difference between CPU-bound (e.g. math) and IO-bound (e.g. urlopen) processes. The former is always runnable, the latter is rarely runnable.
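This CPU-bound vs. I/O-bound distinction can be sketched with time.sleep standing in for a blocking network call (a self-contained Python 3 illustration, not from the original answers):

```python
import threading
import time

def fake_request():
    time.sleep(0.2)  # stands in for a blocking urlopen call

start = time.perf_counter()
threads = [threading.Thread(target=fake_request) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# Five blocked sleeps overlap, so this takes ~0.2s, not ~1.0s
print(elapsed < 0.6)  # True
```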
it only prints one at a time. If it was actually running multiple processes it would print many out at once.
It prints one at a time because you are writing lines to a terminal. Because the lines are indistinguishable, you wouldn't be able to tell if they are written all by one thread, or each by a separate thread in turn.
First of all, using multiprocessing to parallelize network I/O is overkill. Using the built-in threading module or a lightweight greenlet library like gevent is a much better option with less overhead. The GIL has nothing to do with blocking I/O calls, so you don't have to worry about that at all.
Secondly, an easy way to see if your subprocesses/threads/greenlets are running in parallel if you are monitoring stdout is to print out something at the very beginning of the function, right after the subprocesses/threads/greenlets are spawned. For example, modify your check() function like so
def check(q):
    print 'Start checking urls!'
    while True:
        ...
If your code is correct, you should see many Start checking urls! lines printed out before any of the url + ' is [not] an active url!' printed out. It works on my machine, so it looks like your code is correct.
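For reference, the same fan-out pattern in modern Python 3 can be sketched with concurrent.futures (the check function here is a stub standing in for the real URL check, so this runs without any network access):

```python
from concurrent.futures import ThreadPoolExecutor

def check(url):
    # Stub standing in for the real network check
    return url, url.endswith('.com')

urls = ['http://www.%d.com' % n for n in range(100)]

# The pool spawns up to 50 worker threads and distributes the urls
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(check, urls))

print(len(results), results[0])  # 100 ('http://www.0.com', True)
```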
It appears that your issue is actually with the serial behavior of gethostbyname(3). This is discussed in this SO thread.
Try this code that uses the Twisted asynchronous I/O library:
import random
import sys
from twisted.internet import reactor
from twisted.internet import defer
from twisted.internet.task import cooperate
from twisted.web import client
SIMULTANEOUS_CONNECTIONS = 25
# Generate 10,000 random urls to test and put them in the queue
pages = []
for each in range(10000):
rand_num = random.randint(1000,10000)
url = ('http://www.' + str(rand_num) + '.com')
pages.append(url)
# Main function for checking to see if generated url is active
def check(page):
def successback(data, page):
print "{} is an active URL!".format(page)
def errback(err, page):
print "{} is not an active URL!; errmsg:{}".format(page, err.value)
d = client.getPage(page, timeout=3) # timeout in seconds
d.addCallback(successback, page)
d.addErrback(errback, page)
return d
def generate_checks(pages):
for i in xrange(0, len(pages)):
page = pages[i]
#print "Page no. {}".format(i)
yield check(page)
def work(pages):
print "started work(): {}".format(len(pages))
batch_size = len(pages) / SIMULTANEOUS_CONNECTIONS
for i in xrange(0, len(pages), batch_size):
task = cooperate(generate_checks(pages[i:i+batch_size]))
print "starting..."
reactor.callWhenRunning(work, pages)
reactor.run()
