I'm doing some research on iOS that involves attaching lldb to a process. I'm able to do it from the lldb console, but when I try to convert it to a Python script, it gets stuck at "process continue" and never reaches the commands at the end. Can anyone help? Thanks!
import lldb
debugger = lldb.SBDebugger.Create()
debugger.SetAsync(False)
debugger.HandleCommand('platform select remote-ios')
debugger.HandleCommand('process connect connect://localhost:1234')
debugger.HandleCommand('process continue')
#other commands
You are running in synchronous mode, so "process continue" won't return till the process stops for some reason. You didn't set any breakpoints, so short of crashing, there's nothing to make it stop.
If you want to have more control over handling the process as it runs, you might want to try modifying the event-handling example at:
http://llvm.org/svn/llvm-project/lldb/trunk/examples/python/process_events.py
to your purposes.
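For the script above, one minimal fix (a sketch; the breakpoint symbol is a placeholder) is to give the process a reason to stop before continuing, so that "process continue" has a stop event to return on in synchronous mode:
import lldb

debugger = lldb.SBDebugger.Create()
debugger.SetAsync(False)  # synchronous: each command blocks until the process stops
debugger.HandleCommand('platform select remote-ios')
debugger.HandleCommand('process connect connect://localhost:1234')
# 'main' is a placeholder; break wherever you actually want to inspect
debugger.HandleCommand('breakpoint set --name main')
debugger.HandleCommand('process continue')
# control returns here once the breakpoint is hit; run your other commands now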
Hi, I'm working on a Python script that involves a loop, and the loop is failing for some reason (although I roughly know why). The problem is that I have os.system('pause') and also input("prompt:") at the end of the code in order to pause all activity so I can read the error messages before the script completes and terminates, but the script still shuts down. I need a way to HARD pause or freeze it before the window closes abruptly. Need help and any further insight.
P.S. Let me know if you need any more info to better describe this problem.
I assume you are just double-clicking the icon in Windows Explorer. This has the disadvantage you are encountering here: the shell (terminal window) closes when the process finishes, so you can't tell what went wrong if it terminated due to an error.
A better method would be to use the command prompt. If you are not familiar with this, there are many tutorials online.
The reason this will help with your problem is that, once you have navigated to the script's containing directory, you can use python your_script.py (assuming python is in your PATH environment variable) to run the script within the same window.
Then, even if it fails, you can read the error messages as you will only be returned to the command line.
An alternative, hacky method would be to create a script called something like run_pythons.py which uses the subprocess module to call your actual script in the same window and then (no matter how it terminates) waits for your input before terminating itself, so that you can read the error messages.
So something like:
import subprocess
subprocess.call(('python', input('enter script name: ')))
input('press ENTER to kill me')
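A slightly extended sketch of the same idea (assuming Python 3, where input() returns a string) that also reports how the child exited before pausing:
import subprocess

rc = subprocess.call(['python', input('enter script name: ')])
print('script exited with code', rc)
input('press ENTER to kill me')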
I needed something like this at one point. I had a wrapper that loaded a bunch of modules and data and then waited for a prompt to run something. If I had a stupid mistake in a module, it would quit, and the time spent loading all that data into memory (over a minute) would be wasted. I wanted a way to keep that data in memory even if a module had an error, so that I could edit the module and rerun the script.
To do this:
import traceback

while True:
    update = raw_input("Paused. Enter = start, 'your input' = update params, C-C = exit")
    if update:
        update = update.split()
        # irrelevant stuff used to parse my update
    # custom thing to reload all my modules
    fullReload()
    try:
        # my main script that needed all those modules and data loaded
        model_starter.main(stuff, stuff2)
    except Exception as e:
        print(e)
        traceback.print_exc()
        continue
    except KeyboardInterrupt:
        print("I think you hit C-C. Do it again to exit.")
        continue
    except:
        print("OSERROR? sys.exit()? who knows. C-C to exit.")
        continue
This kept all the data loaded that I grabbed from before my while loop started, and prevented exiting on errors. It also meant that I could still ctrl+c to quit, I just had to do it from this wrapper instead of once it got to the main script.
Is this somewhat what you're looking for?
The answer is basically: you have to catch all your exceptions and have a way to restart your loop once you've figured out and fixed the issue.
I've been ripping my hair out over this. I've searched the internet and can't seem to find a solution to my problem. I'm trying to auto-test some code using the gdb module from Python. I can run basic commands and things work, except for stopping a process that's running in the background. Currently I continue my program in the background after a breakpoint with this:
gdb.execute("c&")
I then interact with the running program reading different constant values and getting responses from the program.
Next I need to get a chunk of memory so I run these commands:
gdb.execute("interrupt") #Pause execution
gdb.execute("dump binary memory montiormem.bin 0x0 (&__etext + 4)") #dump memory to file
But when I run the memory dump I get an error saying the command can't be run while the target is running. After the error, the interrupt command is run and the target is paused; then, from the gdb console window, I can run the memory dump manually.
I found a similar issue from a while ago that seems to have gone unanswered here.
I'm using Python 2.7.
I also found this link, which seems to describe the issue, but there's no indication of whether it's present in my build of gdb (which seems unlikely).
I had the same problem. From what I can tell from googling, it is a current limitation of gdb: interrupt simply doesn't work in batch mode (when specifying commands with --ex, or -x file, or on stdin, or sourcing from a file); gdb runs the following commands before actually stopping the execution (inserting a delay doesn't help). Building on @dwjbosman's solution, here's a compact version suitable for feeding to gdb with --ex arguments, for example:
python import threading, gdb
python threading.Timer(1.0, lambda: gdb.post_event(lambda: gdb.execute("interrupt"))).start()
cont
thread apply all bt full # or whatever you wanted to do
It schedules an interrupt after 1 second and resumes the program, then you can do whatever you wanted to do after the pause right in the main script.
I had the same problem, but found that none of the other answers here really work if you are trying to script everything from Python. The issue I ran into was that when I called gdb.execute('continue'), no code in any other Python thread would execute. This appears to be because gdb does not release the Python GIL while the continue command is waiting for the program to be interrupted.
What I found that actually worked for me was this:
import time
import gdb

def delayed_interrupt():
    # queued via post_event, so it runs on gdb's main thread
    time.sleep(1)
    gdb.execute('interrupt')

gdb.post_event(delayed_interrupt)
gdb.execute('continue')
I just ran into this same issue while writing some automated testing scripts. What I've noticed is that the 'interrupt' command doesn't stop the application until after the current script has exited.
Unfortunately, this means that you would need to segment your scripts anytime you are causing an interrupt.
Script 1:
gdb.execute('c&')
gdb.execute('interrupt')
Script 2:
gdb.execute("dump binary memory montiormem.bin 0x0 (&__etext + 4)")
I used multithreading to get around this issue:
import threading
import time
import gdb

def post(cmd):
    def _callable():
        print("exec " + cmd, flush=True)
        gdb.execute(cmd)
    print("schedule " + cmd, flush=True)
    gdb.post_event(_callable)

class ScriptThread(threading.Thread):
    def run(self):
        while True:
            post("echo hello\n")
            time.sleep(1)

x = ScriptThread()
x.start()
Save this as "test_script.py"
Use the script as follows:
gdb
> source test_script.py
Note that you can also pipe "source test_script.py" into gdb, but you need to keep the pipe open.
Once the thread is started GDB will wait for the thread to end and will process any commands you send to it via the "post_event" function. Even "interrupt"!
Is there a way to restart another script in another shell?
I have a script that sometimes gets stuck waiting to read email from Gmail over IMAP. From another script I would like to restart the main one, but without stopping the execution of the second.
I have tried:
os.system(r"C:\Users\light\Documents\Python\BOTBOL\Gmail\V1\send.py")
process = subprocess.Popen(["python", r"C:\Users\light\Documents\Python\BOTBOL\Gmail\V1\send.py"])
but both run the main script in the second script's terminal.
EDIT:
Sorry, by "shell" I meant terminal window.
After your last comment, and since the syntax shows that you are using Windows, I assume that you want to launch a Python script in another console. The magic word here is START if you want the launcher to run in parallel with the new process, or START /W if you want it to wait for the end of the subprocess.
In your case, you could use:
subprocess.call(["cmd.exe", "/c", "START", "C:\Path\To\PYTHON.EXE",
"C:\Users\light\Documents\Python\BOTBOL\Gmail\V1\send.py"])
subprocess has an option called shell which is what you want. os.system() calls are blocking: only after the command completes does the interpreter move to the next line. subprocess.Popen(), on the other hand, is non-blocking. Either way, both spawn a child process from the process running this code; if you want shell features when executing the command, pass shell=True to subprocess.
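That said, since what you actually want is a separate terminal window, here is a minimal sketch (untested; assumes Windows and reuses the path from the question). subprocess.CREATE_NEW_CONSOLE asks Windows to give the child its own console window:
import subprocess

script = r"C:\Users\light\Documents\Python\BOTBOL\Gmail\V1\send.py"
# CREATE_NEW_CONSOLE (Windows only) opens the child in its own console window
subprocess.Popen(["python", script],
                 creationflags=subprocess.CREATE_NEW_CONSOLE)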
I could try to explain everything you need, but I think this video will do it better: YouTube video about multithreading
This will allow you to run two things at once, for example having one thread checking email and another handling inputs, so the program won't stall at those moments; multiple parallel 'shells' of work become possible.
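As a minimal sketch of that idea (the function bodies here are hypothetical placeholders):
import threading

def check_email():
    pass  # hypothetical: poll Gmail over IMAP here

def handle_input():
    pass  # hypothetical: read user input here

# daemon=True so the email thread won't keep the process alive on exit
threading.Thread(target=check_email, daemon=True).start()
handle_input()  # runs in the main thread, in parallel with the email check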
If you really want to have a different window for this, I'm sorry, I can't help.
Hope this was what you were looking for.
When debugging scripts in Python (2.7, running on Linux) I occasionally inject pdb.set_trace() (note that I'm actually using ipdb), e.g.:
import ipdb as pdb

try:
    do_something()
    # I'd like to look at some local variables before running do_something_dangerous()
    pdb.set_trace()
except:
    pass

do_something_dangerous()
I typically run my script from the shell, e.g.
python my_script.py
Sometimes during my debugging session I realize that I don't want to run do_something_dangerous(). What's the easiest way to halt program execution so that do_something_dangerous() is not run and I can quit back to the shell?
As I understand it, pressing ctrl-d (or issuing the debugger's quit command) will simply exit ipdb and the program will continue running (in my example above). Pressing ctrl-c seems to raise a KeyboardInterrupt, but I've never understood the context in which it is raised.
I'm hoping for something like ctrl-q to simply take down the entire process, but I haven't been able to find anything.
I understand that my example is highly contrived, but my question is about how to abort execution from pdb when the code being debugged is set up to catch exceptions. It's not about how to restructure the above code so it works!
I found that pressing ctrl-z to suspend the python/ipdb process, followed by 'kill %1' to terminate it, works well and is reasonably quick to type (I use a bash alias k='kill %1'). I'm not sure if there's anything cleaner/simpler though.
From the module docs:
q(uit)
Quit from the debugger. The program being executed is aborted.
Specifically, this will cause the next debugger function that gets called to raise a BdbQuit exception.
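Note that in code like the example above, a bare except will swallow the BdbQuit that q raises, so the program keeps going anyway. One reliable escape hatch is os._exit(), which ends the process immediately at the OS level, bypassing all exception handling and cleanup; you can type it directly at the (i)pdb prompt:
import os
os._exit(1)  # hard-exits the interpreter: no exceptions raised, no cleanup run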
I am trying to constantly monitor a process which is basically a Python program. If the program stops, then I have to start the program again. I am using another Python program to do so.
For example, say I have to constantly run a process called run_constantly.py. I initially run this program manually, which writes its process ID to the file "PID" (in the location out/PROCESSID/PID).
Now I run another program which has the following code to monitor the program run_constantly.py from a Linux environment:
def Monitor_Periodic_Process():
    TIMER_RUNIN = 1800
    foo = imp.load_source("Run_Module", "run_constantly.py")
    PROGRAM_TO_MONITOR = ['run_constantly.py', 'out/PROCESSID/PID']
    while True:
        # call the function checkPID to see if the program is running or not
        res = checkPID(PROGRAM_TO_MONITOR)
        # if res is 0 then program is not running so schedule it
        if res == 0:
            date_time = datetime.now()
            scheduler.add_cron_job(foo.Run_Module, year=date_time.year, day=date_time.day, month=date_time.month, hour=date_time.hour, minute=date_time.minute + 2)
            scheduler.start()
            scheduler.get_jobs()
            time.sleep(TIMER_NOT_RUNIN)
            continue
        else:
            # the process is running; sleep and then monitor again
            time.sleep(TIMER_RUNIN)
            continue
I have not included the checkPID() function here. checkPID() basically checks whether the process ID still exists (i.e. whether the program is still running); if it does not exist, it returns 0. In the above program, I check if res == 0, and if so, I use Python's scheduler to schedule the program. However, the major problem I am currently facing is that the process ID of this monitoring program and of run_constantly.py turn out to be the same once I schedule run_constantly.py with the scheduler.add_cron_job() function. So if run_constantly.py crashes, the monitoring program still thinks it is running (since both process IDs are the same) and keeps taking the else branch to sleep and monitor again.
Can someone tell me how to solve this issue? Is there a simple way to constantly monitor a program and reschedule it when it has crashed?
There are many programs that can do this.
On Ubuntu there is upstart (installed by default)
Lots of people like http://supervisord.org/
monit, as mentioned by @nathan
If you are looking for a Python alternative, there is a recently released library called circus which looks interesting.
And pretty much every linux distro probably has one of these built in.
The choice is really just down to which one you like better, but you would be far better off using one of these than writing it yourself.
Hope that helps
If you are willing to control the monitored program directly from Python instead of using cron, have a look at the subprocess module:
The subprocess module allows you to spawn new processes,
connect to their input/output/error pipes, and obtain their return codes.
Check questions like "track process status with python" here on SO for examples and references.
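A minimal sketch of that approach (untested; assumes run_constantly.py lives in the current directory): the parent keeps the child's handle, so there is no PID-file ambiguity, and wait() reports exactly when it exits:
import subprocess
import time

while True:
    child = subprocess.Popen(["python", "run_constantly.py"])
    child.wait()  # blocks until the child exits or crashes
    print("run_constantly.py exited with code", child.returncode)
    time.sleep(5)  # brief back-off before restarting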
You could just use monit
http://mmonit.com/monit/
It monitors processes and restarts them (and other things.)
I thought I'd add a more versatile solution, which is one that I personally use all the time as well.
Its name is Immortal (source is at https://github.com/immortal/immortal).
To have it monitor and instantly restart a program if it stops, simply run the following command:
immortal <command>
So in your case I would run run_constantly.py like so:
immortal python run_constantly.py
The command ps aux | grep run_constantly.py should return two process IDs: one for the Immortal supervisor, and one for the command Immortal started (just the regular command). As long as the Immortal process is running, run_constantly.py will stay running.