Android adb sendevent does not execute events correctly - python

I am trying to record screen events and execute them later for replay.
I wrote a small Python script which listens for events, converts them from hexadecimal to decimal, waits for 5 seconds, and then executes the recorded events with adb sendevent.
But for some reason sendevent never executes correctly: sometimes it touches the wrong coordinates, sometimes it touches for too long, and there are also problems with the delays between touches.
I can't understand why this is happening. What I expect is that it should simply replay, since getevent captured all the necessary data(?)
import subprocess
import threading
import os
from time import sleep

eventsToSend = []

def eventSender():
    while True:
        if len(eventsToSend) > 200:
            print("starting to execute in 5 seconds...")
            sleep(5)
            for command in eventsToSend:
                #with open('output.txt', 'a') as f1:
                #    f1.write(command + os.linesep)
                subprocess.Popen(command.split(), stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
            print("done")
            break
eventSenderStarter = threading.Thread(target=eventSender)
eventSenderStarter.start()

def runProcess(exe):
    p = subprocess.Popen(exe, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        # poll() returns None while the subprocess is running
        retcode = p.poll()
        line = p.stdout.readline()
        yield line
        if retcode is not None or len(eventsToSend) > 200:
            print("Executing events...")
            break
print("Listening for events...")
for line in runProcess('adb shell -- getevent /dev/input/event1'.split()):
myLine = line.decode().strip()
splittedLine = myLine.split(" ")
decimalString = ""
for index,hexadecimal in enumerate(splittedLine):
decimal = int(hexadecimal, 16)
if(index==0):
decimalString = decimalString+str(decimal)
if(index>0):
decimalString = decimalString+" "+str(decimal)
eventsToSend.append("adb shell sendevent /dev/input/event1 "+decimalString)
Just connect your phone to the PC and run this script, then play with your screen; after 200 events it will start the replay (be careful because it might press the wrong coordinates :P). In my case the touch device was
/dev/input/event1
so you might need to change event1 when testing.

Consider adding a small delay between the events you send, e.g. time.sleep(0.5). You may have to change the value of 0.5; try different values until it works.
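Applied to the replay loop in the script above, that might look roughly like the sketch below (not tested on a device; subprocess.call is used instead of Popen so each sendevent finishes before the next one starts, and 0.5 is just a starting value to tune):

import subprocess
from time import sleep

for command in eventsToSend:
    subprocess.call(command.split())  # waits for this sendevent to complete
    sleep(0.5)                        # crude pacing between injected events; tune as needed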

Related

PYTHON subprocess cmd.exe closes after first command

I am working on a Python program which wraps the cmd window.
I am using subprocess with PIPE.
If, for example, I write "dir" (via stdin), I use communicate() in order to get the response from cmd, and it does work.
The problem is that in a while True loop this doesn't work more than once; it seems like the subprocess closes itself.
Help me please.
import subprocess

process = subprocess.Popen('cmd.exe', shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=None)
x = ""
while x != "x":
    x = raw_input("insert a command \n")
    process.stdin.write(x + "\n")
    o, e = process.communicate()
    print o
process.stdin.close()
The main problem is that trying to read from subprocess.PIPE deadlocks when the program is still running but there is nothing to read from stdout. communicate() sidesteps this by closing stdin and waiting for the process to exit, which is why it only works once.
A solution is to put the code that reads stdout in another thread and access it via a Queue, which allows reliable sharing of data between threads by timing out instead of deadlocking.
The new thread reads standard output continuously, blocking when there is no more data.
Each line is taken from the queue until a timeout is reached (no more data in the Queue), then the list of lines is printed to the screen.
This approach works for non-interactive programs.
import subprocess
import threading
import Queue

def read_stdout(stdout, queue):
    while True:
        queue.put(stdout.readline())  # This hangs when there is no IO

process = subprocess.Popen('cmd.exe', shell=False, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
q = Queue.Queue()
t = threading.Thread(target=read_stdout, args=(process.stdout, q))
t.daemon = True  # t stops when the main thread stops
t.start()

while True:
    x = raw_input("insert a command \n")
    if x == "x":
        break
    process.stdin.write(x + "\n")
    o = []
    try:
        while True:
            o.append(q.get(timeout=.1))
    except Queue.Empty:
        print ''.join(o)
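For reference, the same reader-thread pattern on Python 3 would look roughly like this (a sketch, reusing the cmd.exe example; queue and raw_input change names, and the pipes are opened in text mode so strings can be written and read directly):

import subprocess
import threading
import queue

def read_stdout(stdout, q):
    while True:
        q.put(stdout.readline())  # blocks until a line is available

process = subprocess.Popen('cmd.exe', shell=False, stdout=subprocess.PIPE,
                           stdin=subprocess.PIPE, universal_newlines=True)
q = queue.Queue()
t = threading.Thread(target=read_stdout, args=(process.stdout, q), daemon=True)
t.start()

while True:
    x = input("insert a command \n")
    if x == "x":
        break
    process.stdin.write(x + "\n")
    process.stdin.flush()  # writes are buffered on Python 3
    lines = []
    try:
        while True:
            lines.append(q.get(timeout=.1))
    except queue.Empty:
        print(''.join(lines))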

How can I track time a python subprocess while it's running dynamically and update a database?

I have a subprocess that encodes a video, and what I would love to do is update a database record with the time it is taking to encode the video (so I can print it out via ajax on a web page).
I am very close - the code I have so far updates the database and encodes the video - but the process/loop gets stuck on the final db.commit() and never exits the while True: loop. Is there a better way to do this? Here is the code I am tinkering with:
time_start = time.time()
try:
    p = subprocess.Popen(["avconv", "-y", "-t", "-i", images, "-i", music_file, video_filename],
                         universal_newlines=True, stdout=subprocess.PIPE)
    while True:
        time_now = time.time()
        elapsed_time = time_now - time_start
        progress_time = "ENCODING TIME" + str(int(elapsed_time)) + " Seconds "
        cursor.execute("UPDATE video SET status = %s WHERE id = %s", [progress_time, video_id])
        db.commit()
    out, err = p.communicate()
    retcode = p.wait()
except IOError:
    pass
else:
    print "] ENCODING OF VIDEO FINISHED:" + str(retcode)
You're right: because you have no way of exiting your infinite loop, it will just spin forever. What you need to do is check p.poll() to see if the process has exited (it returns None while the process is still running). So, something like:
while True:
    if p.poll() is not None:
        break
    ... other stuff ...
or better yet:
while p.poll() is None:
    ... other stuff ...
will cause your loop to terminate when the subprocess is complete. Then you can call p.communicate() to get the output.
I would also suggest using a sleep or delay in there so that your loop doesn't spin using 100% of your CPU. Only check and update your database every second, not continuously. So:
while p.poll() is None:
    time.sleep(1)
    ... other stuff ...
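Put together with the code from the question, a rough sketch might look like this (variable names taken from the question, not a drop-in implementation):

time_start = time.time()
p = subprocess.Popen(["avconv", "-y", "-t", "-i", images, "-i", music_file, video_filename],
                     universal_newlines=True, stdout=subprocess.PIPE)
while p.poll() is None:           # None means avconv is still running
    time.sleep(1)                 # only touch the database once per second
    elapsed = int(time.time() - time_start)
    cursor.execute("UPDATE video SET status = %s WHERE id = %s",
                   ["ENCODING TIME " + str(elapsed) + " Seconds", video_id])
    db.commit()
# note: stdout=subprocess.PIPE is kept from the question; see the next answer
# for the pipe-buffer caveat if avconv writes a lot of output
out, err = p.communicate()
print "] ENCODING OF VIDEO FINISHED:" + str(p.returncode)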
In addition to the infinite loop issue pointed out by @clemej, there is also a possibility of a deadlock because you don't read from the p.stdout pipe in the loop despite stdout=subprocess.PIPE, i.e., while p.poll() is None: will also loop forever if avconv generates enough output to fill its stdout OS pipe buffer.
Also, I don't see the point of updating the progress time in the database while the process is still running. All you need is two records:
video_id, start_time  # running jobs
video_id, end_time    # finished jobs
If the job is not finished, then the progress time is current_server_time - start_time.
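In other words, the elapsed time can be computed on demand when the page is rendered; a minimal sketch (the function and argument names are illustrative only):

from datetime import datetime

def encoding_progress(start_time, end_time=None):
    # for a running job end_time is None, so measure against the current server time
    reference = end_time if end_time is not None else datetime.utcnow()
    return int((reference - start_time).total_seconds())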
If you don't need the output then you could redirect it to devnull:
import os
from subprocess import call
try:
    from subprocess import DEVNULL  # Python 3
except ImportError:
    DEVNULL = open(os.devnull, 'r+b', 0)

start_time = utcnow()
try:
    returncode = call(["avconv", "-y", "-t", "-i", images,
                       "-i", music_file, video_filename],
                      stdin=DEVNULL, stdout=DEVNULL, stderr=DEVNULL)
finally:
    end_time = utcnow()

How to clear stdout in Python subprocess?

This snippet will ping an IP address on Windows and read one line of output every 2 seconds. However, I found that the memory use of the ping.exe process grows slowly after running it, and if I deploy it to ping 1000 IPs in parallel it soon hangs the server. I think it may be because of the stdout buffer. How can I clear stdout or limit its size? Thanks!
...
proc = subprocess.Popen(['c:\windows\system32\ping.exe', '127.0.0.1', '-l', '10000', '-t'],
                        stdout=subprocess.PIPE,
                        creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
while True:
    time.sleep(2)
    os.kill(proc.pid, signal.CTRL_BREAK_EVENT)
    line = proc.stdout.readline()
ping is producing many more lines than you're reading because of the 2-second sleep between reads. I'd move the os.kill call into another thread and use the main thread to read every line from proc.stdout:
import sys, os
import subprocess
import threading
import signal
import time

# Use ctrl-c and ctrl-break to terminate the script/ping
def sigbreak(signum, frame):
    import sys
    if proc.poll() is None:
        print('Killing ping...')
        proc.kill()
    sys.exit(0)

signal.signal(signal.SIGBREAK, sigbreak)
signal.signal(signal.SIGINT, sigbreak)

# executes in a separate thread
def run(pid):
    while True:
        time.sleep(2)
        try:
            os.kill(pid, signal.CTRL_BREAK_EVENT)
        except WindowsError:
            # quit the thread if ping is dead
            break

cmd = [r'c:\windows\system32\ping.exe', '127.0.0.1', '-l', '10000', '-t']
flags = subprocess.CREATE_NEW_PROCESS_GROUP
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, creationflags=flags)
threading.Thread(target=run, args=(proc.pid,)).start()

while True:
    line = proc.stdout.readline()
    if b'statistics' in line:
        # I don't know what you're doing with the ping stats.
        # I'll just print them.
        for n in range(4):
            encoding = getattr(sys.stdout, 'encoding', 'ascii')
            print(line.decode(encoding).rstrip())
            line = proc.stdout.readline()
        print()
Try ping.py instead of juggling with ping.exe.

How to achieve desired results when using the subprocees Popen.send_signal(CTRL_C_EVENT) in Windows?

In Python 2.7 on Windows, according to the documentation, you can send a CTRL_C_EVENT
(Python 2.7 Subprocess Popen.send_signal documentation).
However, when I tried it, I did not receive the expected keyboard interrupt in the subprocess.
This is the sample code for the parent process:
# FILE : parentProcess.py
import subprocess
import time
import signal

CREATE_NEW_PROCESS_GROUP = 512

process = subprocess.Popen(['python', '-u', 'childProcess.py'],
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.STDOUT,
                           universal_newlines=True,
                           creationflags=CREATE_NEW_PROCESS_GROUP)
print "pid = ", process.pid
index = 0
maxLoops = 15
while index < maxLoops:
    index += 1
    # Send one message every 0.5 seconds
    time.sleep(0.5)
    # Send data to the subprocess
    process.stdin.write('Bar\n')
    # Read data from the subprocess
    temp = process.stdout.readline()
    print temp,
    if (index == 10):
        # Send Keyboard Interrupt
        process.send_signal(signal.CTRL_C_EVENT)
This is the sample code for the child process:
# FILE : childProcess.py
import sys

while True:
    try:
        # Get data from main process
        temp = sys.stdin.readline()
        # Write data out
        print 'Foo ' + temp,
    except KeyboardInterrupt:
        print "KeyboardInterrupt"
If I run the file parentProcess.py, I expect to get "Foo Bar" ten times, then a "KeyboardInterrupt", followed by "Foo Bar" 4 times, but I get "Foo Bar" 15 times instead.
Is there a way to get the CTRL_C_EVENT to behave as a keyboard interrupt, just as SIGINT behaves on Linux?
After doing some reading I found some information that seems to contradict the Python documentation regarding CTRL_C_EVENT; in particular it says:
CTRL_C_EVENT
0 - Generates a CTRL+C signal. This signal cannot be generated for process groups.
The following site provides more information about creation flags:
Process Creation Flags.
This method of signal handling by subprocesses worked for me on both Linux and Windows 2008, both using Python 2.7.2, but it uses Ctrl-Break instead of Ctrl-C. See the note about process groups and Ctrl-C in http://msdn.microsoft.com/en-us/library/ms683155%28v=vs.85%29.aspx.
catcher.py:
import os
import signal
import sys
import time

def signal_handler(signal, frame):
    print 'catcher: signal %d received!' % signal
    raise Exception('catcher: i am done')

if hasattr(os.sys, 'winver'):
    signal.signal(signal.SIGBREAK, signal_handler)
else:
    signal.signal(signal.SIGTERM, signal_handler)

print 'catcher: started'
try:
    while True:
        print 'catcher: sleeping...'
        time.sleep(1)
except Exception as ex:
    print ex
    sys.exit(0)
thrower.py:
import signal
import subprocess
import time
import os

args = [
    'python',
    'catcher.py',
]
print 'thrower: starting catcher'
if hasattr(os.sys, 'winver'):
    process = subprocess.Popen(args, creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
else:
    process = subprocess.Popen(args)
print 'thrower: waiting a couple of seconds for catcher to start...'
time.sleep(2)
print 'thrower: sending signal to catcher'
if hasattr(os.sys, 'winver'):
    os.kill(process.pid, signal.CTRL_BREAK_EVENT)
else:
    process.send_signal(signal.SIGTERM)
print 'thrower: i am done'
Try with
win32api.GenerateConsoleCtrlEvent(CTRL_C_EVENT, pgroupid)
or
win32api.GenerateConsoleCtrlEvent(CTRL_BREAK_EVENT, pgroupid)
References:
http://docs.activestate.com/activepython/2.5/pywin3/win32process_CREATE_NEW_PROCESS_GROUP.html
http://msdn.microsoft.com/en-us/library/ms683155%28v=vs.85%29.aspx
Read the info about dwProcessGroupId; the group id should be the same as the process id.
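A minimal sketch of how that call might be wired up (assuming pywin32 is installed and the child is started with CREATE_NEW_PROCESS_GROUP, so its pid doubles as its process group id; CTRL_BREAK_EVENT is used because, as noted above, CTRL_C_EVENT cannot be sent to a process group; the child script name is just the one from the question):

import signal
import subprocess
import time
import win32api  # pywin32

process = subprocess.Popen(['python', '-u', 'childProcess.py'],
                           creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
time.sleep(2)  # give the child a moment to install its signal handlers
# send Ctrl-Break to the child's process group (group id == child's pid here)
win32api.GenerateConsoleCtrlEvent(signal.CTRL_BREAK_EVENT, process.pid)
process.wait()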

Python, Popen and select - waiting for a process to terminate or a timeout

I run a subprocess using:
p = subprocess.Popen("subprocess",
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
stdin=subprocess.PIPE)
This subprocess could either exit immediately with an error on stderr, or keep running. I want to detect either of these conditions - the latter by waiting for several seconds.
I tried this:
SECONDS_TO_WAIT = 10
select.select([],
              [p.stdout, p.stderr],
              [p.stdout, p.stderr],
              SECONDS_TO_WAIT)
but it just returns:
([],[],[])
on either condition. What can I do?
Have you tried using the Popen.poll() method? You could just do this:
p = subprocess.Popen("subprocess",
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
stdin=subprocess.PIPE)
time.sleep(SECONDS_TO_WAIT)
retcode = p.poll()
if retcode is not None:
# process has terminated
This will cause you to always wait 10 seconds, but if the failure case is rare this would be amortized over all the success cases.
Edit:
How about:
t_nought = time.time()
seconds_passed = 0
while p.poll() is None and seconds_passed < 10:
    seconds_passed = time.time() - t_nought
if seconds_passed >= 10:
    pass  # TIMED OUT
This has the ugliness of being a busy wait, but I think it accomplishes what you want.
Additionally, looking at the select call documentation again, I think you may want to change it as follows:
SECONDS_TO_WAIT = 10
select.select([p.stderr],
              [],
              [p.stdout, p.stderr],
              SECONDS_TO_WAIT)
Since you would typically want to read from stderr, you want to know when it has something available to read (i.e. the failure case).
I hope this helps.
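To act on the result, you would check whether the read list came back non-empty within the timeout; a rough sketch (POSIX pipes, reusing p and SECONDS_TO_WAIT from above):

readable, _, _ = select.select([p.stderr], [], [], SECONDS_TO_WAIT)
if readable:
    # stderr became readable within the timeout: likely the immediate-failure case
    error_output = p.stderr.readline()
else:
    # nothing appeared on stderr for SECONDS_TO_WAIT seconds: assume it kept running
    error_output = None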
This is what I came up with. It works whether or not you need to time out the process, but with a semi-busy loop.
def runCmd(cmd, timeout=None):
    '''
    Will execute a command, read the output and return it back.

    @param cmd: command to execute
    @param timeout: process timeout in seconds
    @return: a tuple of three: first stdout, then stderr, then exit code
    @raise OSError: on missing command or if a timeout was reached
    '''
    ph_out = None  # process output
    ph_err = None  # stderr
    ph_ret = None  # return code

    p = subprocess.Popen(cmd, shell=True,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)
    # if timeout is not set wait for process to complete
    if not timeout:
        ph_ret = p.wait()
    else:
        fin_time = time.time() + timeout
        while p.poll() is None and fin_time > time.time():
            time.sleep(1)
        # if timeout reached, raise an exception
        if fin_time < time.time():
            # starting 2.6 subprocess has a kill() method which is preferable
            # p.kill()
            os.kill(p.pid, signal.SIGKILL)
            raise OSError("Process timeout has been reached")
        ph_ret = p.returncode

    ph_out, ph_err = p.communicate()
    return (ph_out, ph_err, ph_ret)
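Example usage of the helper above (the command is arbitrary and the 5-second timeout is just for illustration):

out, err, code = runCmd("echo hello", timeout=5)
print out, err, code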
Here is a nice example:
from threading import Timer
from subprocess import Popen, PIPE
proc = Popen("ping 127.0.0.1", shell=True)
t = Timer(60, proc.kill)
t.start()
proc.wait()
Using select and sleeping doesn't really make much sense. select (or any kernel polling mechanism) is inherently useful for asynchronous programming, but your example is synchronous. So either rewrite your code to use the normal blocking fashion or consider using Twisted:
from twisted.internet.utils import getProcessOutputAndValue
from twisted.internet import reactor

def stop(r):
    reactor.stop()

def eb(reason):
    reason.printTraceback()

def cb(result):
    stdout, stderr, exitcode = result
    # do something

getProcessOutputAndValue('/bin/someproc', []
    ).addCallback(cb).addErrback(eb).addBoth(stop)
reactor.run()
Incidentally, there is a safer way of doing this with Twisted by writing your own ProcessProtocol:
http://twistedmatrix.com/projects/core/documentation/howto/process.html
Python 3.3
import subprocess as sp

try:
    sp.check_call(["/subprocess"], timeout=10,
                  stdin=sp.DEVNULL, stdout=sp.DEVNULL, stderr=sp.DEVNULL)
except sp.TimeoutExpired:
    pass  # timeout (the subprocess is killed at this point)
except sp.CalledProcessError:
    pass  # subprocess failed before timeout
else:
    pass  # subprocess ended successfully before timeout
See TimeoutExpired docs.
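Since the question also wants to look at stderr, a variant of the same 3.3+ API with Popen.communicate(timeout=...) might look like this (a sketch; "/subprocess" is the placeholder command from above):

p = sp.Popen(["/subprocess"], stdout=sp.PIPE, stderr=sp.PIPE, stdin=sp.PIPE)
try:
    out, err = p.communicate(timeout=10)  # returns as soon as the process exits
except sp.TimeoutExpired:
    # still running after 10 seconds: the "keeps running" case
    p.kill()
    out, err = p.communicate()
else:
    # exited within 10 seconds: err holds whatever it wrote to stderr
    print(err.decode())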
If, as you said in the comments above, you're just tweaking the output each time and re-running the command, would something like the following work?
from threading import Timer
import subprocess

WAIT_TIME = 10.0

def check_cmd(cmd):
    p = subprocess.Popen(cmd,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)
    def _check():
        if p.poll() != 0:
            print cmd + " did not quit within the given time period."
    # check whether the given process has exited WAIT_TIME
    # seconds from now
    Timer(WAIT_TIME, _check).start()

check_cmd('echo')
check_cmd('python')
The code above, when run, outputs:
python did not quit within the given time period.
The only downside of the above code that I can think of is the potential for overlapping processes as you keep running check_cmd.
This is a paraphrase of Evan's answer, but it takes the following into account:
Explicitly canceling the Timer object: if the Timer interval is long and the process exits of its "own will", this could hang your script :(
There is an intrinsic race in the Timer approach (the timer may attempt to kill the process just after the process has died, which on Windows raises an exception).
DEVNULL = open(os.devnull, "wb")
process = Popen("c:/myExe.exe", stdout=DEVNULL)  # no need for stdout

def kill_process():
    """Kill process helper"""
    try:
        process.kill()
    except OSError:
        pass  # Swallow the error

timer = Timer(timeout_in_sec, kill_process)
timer.start()
process.wait()
timer.cancel()
