I'm trying to turn displays on using Python. I found this code very useful, but the problem is that it only turns the display on for a brief moment; after about a second all displays are 'entering power-save mode' again. How can I make this 'power on' permanent?
Thanks for the hint, martineau! Simply sending a message like that will not work:
win32gui.SendMessage(win32con.HWND_BROADCAST, win32con.WM_SYSCOMMAND, SC_MONITORPOWER, -1)
I also tried:
ctypes.windll.user32.SetCursorPos(100, 20)
and
pygame.mouse.set_pos((random.choice(range(600)), random.choice(range(600))))
to move the mouse, but neither did anything compared to:
win32api.mouse_event(win32con.MOUSEEVENTF_ABSOLUTE|win32con.MOUSEEVENTF_MOVE,100,20)
mouse_event did the trick, as explained here.
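For reference, here is a minimal keep-awake sketch built around that same call (the 60-second interval is an arbitrary choice of mine, not part of the original answer):
import time
import win32api
import win32con

# Periodically send the same mouse event so the displays never
# drop back into power-save mode. The interval is arbitrary.
while True:
    win32api.mouse_event(win32con.MOUSEEVENTF_ABSOLUTE | win32con.MOUSEEVENTF_MOVE, 100, 20)
    time.sleep(60)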
I am using PyObjC bindings to try to get a spoken sound file from phonemes.
I figured out that I can turn speech into sound as follows:
import AppKit
ss = AppKit.NSSpeechSynthesizer.alloc().init()
ss.setVoice_('com.apple.speech.synthesis.voice.Alex')
ss.startSpeakingString_toURL_("Hello", AppKit.NSURL.fileURLWithPath_("hello.aiff"))
# then wait until ss.isSpeaking() returns False
Next, for greater control, I'd like to first turn the text into phonemes and then speak those.
phonemes = ss.phonemesFromText_("Hello")
But now I'm stuck, because I know from the docs that to get startSpeakingString to accept phonemes as input, you first need to set NSSpeechSynthesizer.SpeechPropertyKey.Mode to "phoneme". And I think I'm supposed to use setObject_forProperty_error_ to set that.
There are two things I don't understand:
Where is NSSpeechSynthesizer.SpeechPropertyKey.Mode in PyObjC? I grepped the entire PyObjC directory and SpeechPropertyKey is not mentioned anywhere.
How do I use setObject_forProperty_error_ to set it? I think based on the docs that the first argument is the value to set (although it's called just "an object", so True in this case?), and the second is the key (would be phoneme in this case?), and finally there is an error callback. But I'm not sure how I'd pass those arguments in Python.
Where is NSSpeechSynthesizer.SpeechPropertyKey.Mode in PyObjC?
Nowhere.
How do I use setObject_forProperty_error_ to set it?
ss.setObject_forProperty_error_("PHON", "inpt", None)
"PHON" is the same as NSSpeechSynthesizer.SpeechPropertyKey.Mode.phoneme
"inpt" is the same as NSSpeechSynthesizer.SpeechPropertyKey.inputMode
It seems these are not defined anywhere in PyObjC, but I found them by firing up Xcode and writing a short Swift snippet:
import Foundation
import AppKit
let synth = NSSpeechSynthesizer()
let x = NSSpeechSynthesizer.SpeechPropertyKey.Mode.phoneme
let y = NSSpeechSynthesizer.SpeechPropertyKey.inputMode
Looking at x and y in the debugger shows that they are the strings mentioned above.
As for how to call setObject_forProperty_error_, I simply tried passing in those strings and None for the error argument, and that worked.
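Putting the pieces together, here is a minimal sketch of the whole phoneme round-trip (the voice and output path are simply the ones from the question):
import time
import AppKit

ss = AppKit.NSSpeechSynthesizer.alloc().init()
ss.setVoice_('com.apple.speech.synthesis.voice.Alex')

# Convert the text to phonemes while the synthesizer is still in text mode.
phonemes = ss.phonemesFromText_("Hello")

# Switch the input mode to phonemes ("PHON" for the "inpt" property, as found above).
ss.setObject_forProperty_error_("PHON", "inpt", None)

# Speak the phoneme string to a file and wait until it finishes.
ss.startSpeakingString_toURL_(phonemes, AppKit.NSURL.fileURLWithPath_("hello.aiff"))
while ss.isSpeaking():
    time.sleep(0.1)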
Basing my code on this allegedly Python-specific documentation example, I have:
def on_output(spin):
    adj = spin.get_adjustment()
    val = int(adj.get_value())
    s = "%02d" % val
    print "on_output: %s" % s
    spin.set_text(s)
which I connect to my SpinButton's "output" signal. It seems to work when the control is first displayed (it shows "00"), but when I click the SpinButton's increment button, the formatted value from on_output is overwritten, so e.g. my "01" is shown as a plain "1". It looks like another signal or event is causing the control to reformat itself after on_output, but I'm struggling to diagnose it. Any experts on GTK3 with Python, please suggest how I can debug this.
Platform is Xubuntu 18.10, Python 2.7, GTK3 3.22.
Wow! If ever there was a case for responding with 'RTFM!', this was it. As very politely pointed out by Alexander Dmitriev, the 'return' statement is missing. In my first attempt I returned True, but it failed (for some presumably unrelated reason), so I tried False, which made no difference. Somehow the return value got lost after that, and when I re-read the docs 'carefully' I missed seeing it -- we see what we expect to see! I must be getting too old for this hacking game; maybe it's time to hang up my keyboard. :)
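For anyone else landing here, the handler only needs the missing return added; this is the code from the question with one extra line (still Python 2 style to match the platform above):
def on_output(spin):
    adj = spin.get_adjustment()
    val = int(adj.get_value())
    s = "%02d" % val
    print "on_output: %s" % s
    spin.set_text(s)
    return True  # tell GTK the output is handled, so the default formatting won't overwrite it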
Morning folks,
I'm trying to get a few unit tests going in Python to confirm my code is working, but I'm having a really hard time getting any kind of Mock to fit into my test cases. I'm new to Python unit testing, so this has been a trying week thus far.
The summary of the program is that I'm attempting serial control of a commercial monitor I got my hands on, and I thought I'd use it as a chance to finally use Python for something rather than just falling back on one of the other languages I know. I've got pyserial going, but before I start shoving a ton of commands out to the TV I'd like to learn the unittest part so I can write tests for my expected outputs and inputs.
I've tried using a library called dummyserial, but it didn't seem to recognise the output I was sending. I thought I'd give mock_open a try, since I've seen it works like standard IO as well, but it just isn't picking up on the calls either. Samples of the code involved:
def testSendCmd(self):
    powerCheck = '{0}{1:>4}\r'.format(SharpCodes['POWER'], SharpCodes['CHECK']).encode('utf-8')
    read_text = 'Stuff\r'
    mo = mock_open(read_data=read_text)
    mo.in_waiting = len(read_text)
    with patch('__main__.open', mo):
        with open('./serial', 'a+b') as com:
            tv = SharpTV(com=com, TVID=999, tvInput='DVI')
            tv.sendCmd(SharpCodes['POWER'], SharpCodes['CHECK'])
            com.write(b'some junk')
    print(mo.mock_calls)
    mo().write.assert_called_with('{0}{1:>4}\r'.format(SharpCodes['POWER'], SharpCodes['CHECK']).encode('utf-8'))
And in the SharpTV class, the function in question:
def sendCmd(self, type, msg):
    sent = self.com.write('{0}{1:>4}\r'.format(type, msg).encode('utf-8'))
    print('{0}{1:>4}\r'.format(type, msg).encode('utf-8'))
Obviously, I'm attempting to control a Sharp TV. I know the commands are correct; that isn't the issue. The issue is just the testing. According to the documentation on the mock_open page, calling mo.mock_calls should show that a call was made, but I'm getting just an empty list ([]) even in spite of the blatantly wrong com.write(b'some junk'), and mo().write.assert_called_with(...) fails with an assertion error because it isn't detecting the write from within sendCmd. What's really bothering me is that I can run the examples from the mock_open section in interactive mode and they work as expected.
I'm missing something, I just don't know what. I'd like help getting either dummyserial working, or mock_open.
To answer one part of my question, I figured out the functionality of dummyserial. The following works now:
def testSendCmd(self):
    powerCheck = '{0}{1:>4}\r'.format(SharpCodes['POWER'], SharpCodes['CHECK'])
    com = dummyserial.Serial(
        port='COM1',
        baudrate=9600,
        ds_responses={powerCheck: powerCheck}
    )
    tv = SharpTV(com=com, TVID=999, tvInput='DVI')
    tv.sendCmd(SharpCodes['POWER'], SharpCodes['CHECK'])
    self.assertEqual(tv.recv(), powerCheck)
Previously I was encoding the dictionary values as UTF-8. The dummyserial library decodes whatever you write(...) to it, so it's a straight string-to-string comparison. It also encodes whatever you read() as latin-1 on the way back out.
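For completeness, the plain unittest.mock route also works if the mock stands in for the serial port itself rather than for open(). This is just a sketch along those lines, reusing the SharpTV and SharpCodes names from above; it is not part of my original tests:
from unittest.mock import MagicMock

def testSendCmdWithMagicMock(self):
    powerCheck = '{0}{1:>4}\r'.format(SharpCodes['POWER'], SharpCodes['CHECK']).encode('utf-8')

    # Stand-in for the pyserial port; only the attributes sendCmd uses need to exist.
    com = MagicMock()
    com.write.return_value = len(powerCheck)

    tv = SharpTV(com=com, TVID=999, tvInput='DVI')
    tv.sendCmd(SharpCodes['POWER'], SharpCodes['CHECK'])

    # The write call made inside sendCmd is now visible on the mock.
    com.write.assert_called_once_with(powerCheck)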
I am trying to run my own version of the baselines reinforcement-learning code from GitHub (https://github.com/openai/baselines/tree/master/baselines/ppo2).
Whatever I do, I keep getting the same display, which looks like this:
Where can I edit it? I know I should edit the "learn" method, but I don't know how.
Those prints are the result of the following block of code, which can be found at this link (at least in the latest revision at the time of writing):
if update % log_interval == 0 or update == 1:
    ev = explained_variance(values, returns)
    logger.logkv("serial_timesteps", update*nsteps)
    logger.logkv("nupdates", update)
    logger.logkv("total_timesteps", update*nbatch)
    logger.logkv("fps", fps)
    logger.logkv("explained_variance", float(ev))
    logger.logkv('eprewmean', safemean([epinfo['r'] for epinfo in epinfobuf]))
    logger.logkv('eplenmean', safemean([epinfo['l'] for epinfo in epinfobuf]))
    logger.logkv('time_elapsed', tnow - tfirststart)
    for (lossval, lossname) in zip(lossvals, model.loss_names):
        logger.logkv(lossname, lossval)
    logger.dumpkvs()
If your goal is to still print some things here, but different things (or the same things in a different format), your only real option is to modify this source file (or copy the code you need into a new file and apply your changes there, if the code's license allows it).
If your goal is just to suppress these messages, the easiest way is probably to run the following code before calling the learn() function:
from baselines import logger
logger.set_level(logger.DISABLED)
That uses this function to disable the baselines logger. It might also disable other baselines-related output, though.
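For example, a rough sketch of how this would sit around your training call (the learn() call itself is whatever you are already running; restoring with logger.INFO afterwards is my assumption about the default level):
from baselines import logger

# Silence the periodic PPO2 log table.
logger.set_level(logger.DISABLED)

# ... call ppo2's learn() here exactly as you already do ...

# Restore normal logging afterwards if you still want other output.
logger.set_level(logger.INFO)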
Here's the code that I'm trying to run:
import pyautogui
r=pyautogui.locateOnScreen('C:\Users\David\Desktop\index.png',grayscale=False)
print r
It has to be a pixel-perfect match in order to be found. To allow for any sort of deviation you can pass a confidence parameter.
For example:
loc = pyautogui.locateOnScreen(image, grayscale=True, confidence=.5)
However, in order to use the confidence parameter you have to have opencv_python installed. This is easy to install with pip:
./python -m pip install opencv_python
After that is in place, you should be able to account for minor differences.
I was encountering the same problem; what I did is:
import pyautogui
r = None
while r is None:
    r = pyautogui.locateOnScreen('C:\Users\David\Desktop\index.png', grayscale=False)
print r
I think it's just because it takes time to locate the image. If you find a better solution, share it with me :)
I had a similar problem.
My mistake was that I had saved the comparison picture as a JPG first and then as a PNG in MS Paint.
Be sure to save the comparison picture in PNG format. After this the locate function worked for me.
I had the same issue and it kept returning None.
I did several trials and found a solution that works for me.
OS: macOS
I took the screenshot with the system screenshot tool (Command+Shift+5) and saved it, but it seems the saved file contains different pixel information from what is displayed on my screen.
Therefore I used pyautogui's own screenshot function instead to save the image I wanted:
pyautogui.screenshot('num7_.png', region=(260,360, 110, 100))
After that, it works fine regardless of the grayscale parameter.
pyautogui.locateOnScreen('num7_.png')
Box(left=260, top=360, width=110, height=100)
The locateOnScreen() function returns None if the image wasn't found on the screen. Remember, the match has to be pixel-perfect, so be sure to crop index.png to the smallest recognizable size to prevent extra details from ruining your match. Also, make sure the thing you are looking for is not obscured by any other windows on top of it.
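For example, a minimal sketch of that advice with an explicit None check (the click at the end is only there to show how the returned Box is typically used):
import pyautogui

# Use a small, tightly cropped image and check the result before using it.
# Older pyautogui versions return None when nothing is found; newer ones
# raise ImageNotFoundException instead, as noted in another answer below.
box = pyautogui.locateOnScreen('index.png')
if box is None:
    print('index.png was not found on the screen')
else:
    x, y = pyautogui.center(box)
    pyautogui.click(x, y)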
I got this working by using the following:
r = None
while r is None:
    r = pyautogui.locateOnScreen('rbin.PNG', grayscale=True)
print icon_to_click + ' now loaded'
The key is to set grayscale=True.
The official documentation says:
The Locate Functions
NOTE: As of version 0.9.41, if the locate functions can’t find the provided image,
they’ll raise ImageNotFoundException instead of returning None.
So you can check whether an exception was raised or not. Also, you should retry a finite number of times rather than using a while True loop.
import time
import pyautogui

retry_counter = 0
while retry_counter < 5:
    try:
        result = pyautogui.locateOnScreen(IMAGE_PATH_TO_FIND)
        if result:
            time.sleep(1)
            retry_counter = 10  # to break the loop
    except pyautogui.ImageNotFoundException:
        time.sleep(1)  # retry after some time, i.e. 1 sec
        retry_counter += 1
I found that if you program a pause of 1 or 2 seconds (using time.sleep), then it is able to locate the image. It also takes time for Python to locate the image (my computer took about 5 seconds).
I had that problem, but then I cropped the photo to a specific part and it was able to locate it; and yes, it takes time.
Or this can also work:
b = pyautogui.locateCenterOnScreen('calc7key.png')
I found a way to fix my problem: only search for as small an image as possible. A picture that is only 1 pixel is found after 3 seconds, but when I try to search for an image over 500x500 pixels it won't find anything.
I think the pyautogui library needs several recognition points. For example, the number seven key on the Windows 10 calculator: with an image in that format, I get the location on the screen.
Thank you for your comments.
My problem was that I was trying to take a snip of the calculator button. That must produce a different pixel match, because I tried every other option in here and nothing was working. I did a print screen, then cropped it to the button I wanted, and it worked.
Before
import pyautogui
image = '9.png'
loc = pyautogui.locateOnScreen(image, grayscale=True, confidence=.5)
print (loc)
Error:
None
>>>
Solution
import pyautogui
import time
time.sleep(5)
image = '9.png'
loc = pyautogui.locateOnScreen(image, grayscale=True, confidence=.5)
print (loc)
Summary: Just add these two lines:
import time
time.sleep(5)
Output
Box(left=1686, top=248, width=70, height=47)
>>>
If you have taken the screenshot with the Snipping Tool it won't work, so take the screenshot with "Prt Sc" or from the command prompt instead. This worked for me!
I am sure most of you know this, but if you forget to run your Python script or IDE (whether that be Visual Studio, Python IDLE, etc.) as an administrator, then pyautogui will not be able to use the click command. It will still be able to move the mouse around, but it just won't be able to click. This is true for all operating systems: they prevent any software that does not have administrative privileges from clicking, and I think they also prevent the software from using the keyboard, but I am not sure. The OS does this as a security measure so that no software can do malicious things on your computer without your consent, for instance prompting the user to grant the software administrator access and then taking control of the mouse and keyboard to click the "Yes" button for the user. If you're using Linux or Mac then I am sure you already know: you would have to use the "sudo" command. Hope this helps someone ;)
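If you want your script to warn about this up front on Windows, here is a rough sketch of how such a check might look (IsUserAnAdmin is a standard shell32 call; the check itself is not from the answer above):
import ctypes
import sys

# Warn early if the script is not running elevated, since (as described
# above) pyautogui clicks may not work without administrative privileges.
if sys.platform == 'win32' and not ctypes.windll.shell32.IsUserAnAdmin():
    print('Warning: not running as administrator; pyautogui.click() may not work.')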