I have a Python program running as a Windows service which, in my opinion, catches all exceptions. In my development environment I cannot reproduce any situation where the program crashes without an exception being logged, except in two cases: the program is killed via Task Manager, or the computer is powered off.
However, in the target environment (Windows 2000 with all necessary libraries and Python installed), the Windows service quits suddenly about four minutes after a reboot, without logging any exception or reason for the failure. The machine was definitely not powered off.
Does anybody have a suggestion for how to determine what killed the Python program?
EDIT: I cannot use a debugger in the target environment (it is a production system). Therefore I need a way to log the reason for the failure, so I am looking for tools or methods to log additional information at runtime (or at failure time) that can be used for post-mortem analysis.
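One concrete shape this could take is Python's faulthandler module (standard from Python 3.3, and available as a backport on PyPI for older interpreters, so this is only a sketch; it logs hard interpreter crashes such as segfaults, but it cannot log anything if the process is killed from outside):

import faulthandler

# Hypothetical log path; keep the file object alive for the whole process,
# since faulthandler writes to its file descriptor at crash time.
crash_log = open(r"C:\logs\service_crash.log", "w")
faulthandler.enable(file=crash_log, all_threads=True)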
You need to give more information, such as: Is your program multi-threaded? Does the code depend on the version of the Python interpreter you are using, or on any imported modules not present in the target environment?
If you have GDB for Windows, you can run gdb -p pid, where pid is the process ID of the running Python program. If there is a crash, you can get the back trace.
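A minimal session might look like this (1234 stands in for the actual pid; exact behavior depends on your GDB build):

gdb -p 1234        # attach to the running Python process
(gdb) continue     # let it keep running until it crashes
(gdb) backtrace    # after the crash, print the C-level back trace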
You may also want to check the following tools from sysinternals.com (now acquired by Microsoft):
http://technet.microsoft.com/en-us/sysinternals/bb795533
such as ProcDump, Process Monitor, or even Process Explorer (though it is less suited to this than the previous two).
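For instance, ProcDump can be left attached so that it writes a full memory dump when the process dies with an unhandled exception (the dump path here is only illustrative):

procdump -e -ma python.exe c:\dumps\service_crash.dmp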
You may also be able to install a lightweight debugger such as OllyDbg, or use MoonSols' tools to monitor the guest VM's process if you happen to be running in a virtualized environment.
I'm trying to achieve something: I created an exe file that automatically forces the computer to shut down at 11 pm.
I would like to make this script impossible to stop or crash, or even to make the entire system crash if the program is closed.
How can I achieve this?
Note:
I'm on a laptop running Windows 10. I made a Python file and converted it to an exe file with PyInstaller. Then I created a shortcut to that exe file that runs the program with admin rights.
If you mark the process as critical, Windows will trigger a blue-screen crash if that process is stopped or killed.
There is information about how to do this here
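For reference only, a minimal ctypes sketch of that technique; RtlSetProcessIsCritical and RtlAdjustPrivilege are undocumented NT APIs, the script must run elevated, and everything below should be treated as an illustration rather than supported API usage:

import ctypes

# Sketch only: killing a critical process blue-screens Windows.
ntdll = ctypes.WinDLL("ntdll")

prev = ctypes.c_bool()
# 20 = SeDebugPrivilege; it must be enabled before marking the process critical
ntdll.RtlAdjustPrivilege(20, True, False, ctypes.byref(prev))

# Mark the current process critical
ntdll.RtlSetProcessIsCritical(True, None, False)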
Note: although it is possible to do this, it is not a good idea. For example, as suggested by Anders, use a scheduled task instead. Having the system crash could result in information loss or other unintended consequences.
Create a Windows service. You can deny Administrators the ability to stop or pause the service, which should slow them down a little bit.
Or, since we are talking about triggering something at a specific time, you might want to use the Task Scheduler instead.
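For example, a task created from an elevated prompt (the task name and shutdown flags are only illustrative):

schtasks /Create /TN "NightlyShutdown" /TR "shutdown /s /f" /SC DAILY /ST 23:00 /RU SYSTEM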
In any case, you will never fully lock down something like this from an Administrator since they can always take ownership and modify the ACL.
I've been working with Python programs which take several hours to complete, but crash occasionally. To debug, so far I have been adding conditional breakpoints, which drop me into a PDB session whenever a problem occurs. This is great because pinpointing the exact cause of the problem is hard, and the interactive session lets me explore the whole program (including all the stack frames and so on).
The only issue is, if I ever accidentally close or crash my debugging session, I need to start the whole program again, and reaching my breakpoint takes several hours! I would really, really like a way of serializing a PDB session and re-opening it multiple times. Does anything like this exist? I have looked into dill to serialize an interpreter session, but unfortunately several of my types fail to serialize (and the approach isn't robust to code changes down the line). Thanks!
You haven't specified your operating system of choice, but in the Linux world there is the criu utility (https://criu.org/Main_Page), which can be used to save an application's state. There are a lot of pitfalls, especially with tty-based applications (see https://criu.org/Advanced_usage#Shell_jobs_C.2FR), but here is an example.
I have a simple Python application with a pdb breakpoint; let's call it app.py:
print("hello")
import pdb; pdb.set_trace()
print("world")
After running this application with python app.py, you get the expected:
hello
> /home/user/app.py(3)<module>()
-> print("world")
Get your pid with pgrep -f app.py; in my case it was 17060.
Create a folder to dump your process into:
mkdir /tmp/criu
Dump your process with
sudo criu dump -D /tmp/criu -t 17060 --shell-job
Notice that your current process will be killed (AFAIK due to the --shell-job flag; see the link above).
You'll see
(Pdb) [1] 17060 killed python app.py
in your tty
Restore your process with
sudo criu restore -D /tmp/criu --shell-job
Your tty will be restored in the same window where you issued this command.
Since the debugger is attached, you can type c and press Enter to see if it actually worked. Here's the result on my machine:
(Pdb) c
world
Hope that helps; there are a lot of pitfalls that might make this approach unfeasible for you.
The other way is to run your code in a VM and snapshot disk and memory each time. It might not be the best solution resource-wise, but many hypervisors have a nice UI, or even shell utilities, to control the state of virtual machines. Snapshotting tech is mature in every major hypervisor nowadays, so you shouldn't run into any problems. Set up remote debugging and connect with your favorite IDE after bringing your snapshot back.
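For the remote-debugging part, one lightweight option is the third-party remote-pdb package (an assumption on my part, not something built into pdb), which serves the debugger over a TCP socket instead of the local tty:

from remote_pdb import RemotePdb

# Pause here and serve a pdb prompt over TCP;
# attach from another terminal with: telnet 127.0.0.1 4444
RemotePdb("127.0.0.1", 4444).set_trace()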
Edit: There is also an easy way to do this if you are running your applications in containers and your OS supports podman and criu 3.11+:
https://criu.org/Podman
You can use something like
podman run -d --name your_container_name your_image
To snapshot use
podman container checkpoint your_container_id
To restore use
podman container restore your_container_id
All these commands require root privileges. Unfortunately I wasn't able to test this because my distro provides criu 3.8, and podman requires criu 3.11.
The same functionality is available as an experimental flag in Docker; see https://criu.org/Docker
I created a very simple test that launches and closes a piece of software I was testing, using the Python Nose test platform, to track down a bug in the startup sequence of the software I was working on.
The test was set up so that it would launch and close the software about 1,500 times in a single execution.
A few hours later, I discovered that the test was no longer able to launch the software after around 300 iterations; it was timing out while waiting for the process to start. As soon as I logged back in, the test started launching the process without any problem, and all the tests started passing as well.
This is quite puzzling to me. I have never seen this behavior, and it never happened on Windows either.
I am wondering if there is some sort of power-saving state in which the Mac waits for currently running processes to finish and prohibits new processes from starting.
I would really appreciate it if anybody could shed light on this confusion.
I was running Python 2.7.x on High Sierra.
I am not aware of any state where the system flat out denies new processes while old ones are still running.
However, I can easily imagine a situation in which a process may hang because of some unexpected dependency on e.g. the window server. For example, I once noticed that rsvg-convert, a command-line SVG-to-image converter, running in an SSH session, had different fonts available to it depending on whether I was also simultaneously logged in on the console. This behavior went away when I recompiled the SVG libraries to exclude all references to macOS-specific libraries...
I've written a lot of Python scripts. Now I want to run them on another computer, running non-stop to crawl and analyze data and update an SQL database.
Normally I open a command prompt and run the scripts:
python [script path]
But with many scripts I have to open many command prompts, and every script starts its own Python interpreter, so it ends up as a huge mess using a lot of memory.
What should I do to manage these scripts?
You haven't specified what OS your server runs, but assuming it's a Linux server, you should probably research a process-management tool such as Supervisord or systemd. These are tools designed to run and monitor your program automatically, and even restart it if it crashes.
If you're using Ubuntu 16.04, it comes with systemd out of the box; however, I personally find Supervisord easier to configure and use for simple tasks.
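As a sketch, a minimal Supervisord program section could look like this (all paths and names are placeholders):

[program:crawler]
command=/usr/bin/python /home/user/scripts/crawler.py
autostart=true
autorestart=true
stdout_logfile=/var/log/crawler.out.log
stderr_logfile=/var/log/crawler.err.log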
These programs won't necessarily help with your memory-consumption issues, however. Sure, you can place caps on memory use for a process, but that's not really going to help you if it stops your program from working. You're probably best off re-evaluating your code and looking for ways to reduce its memory footprint, or using a server with more RAM.
EDIT:
You've just added that the OS is Windows 10, which makes the above irrelevant. You can use the Windows Task Scheduler to automatically execute long-running tasks.
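For instance, a task that starts a script at boot and keeps it running in the background could be created like this (the paths and task name are placeholders):

schtasks /Create /TN "CrawlerScript" /TR "C:\Python36\pythonw.exe C:\scripts\crawler.py" /SC ONSTART /RU SYSTEM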
You can use pythonw your_script.py and it will run in the background.
I wrote a temperature-logger Python script and registered it as a scheduled task in Windows XP. It has the following command line:
C:\Python26\pythonw.exe "C:\path\to\templogger.py"
It writes data to a file in a local public folder (i.e. fully accessible to anyone who logs in locally).
So far, I have been able to achieve this objective:
1. Get the task to run even before anyone logs in (i.e. at the "Press Ctrl+Alt+Del" screen)
But I'm having problems with these:
1. When I log in, log out, then log back in, the scheduled task is no longer active; I can no longer see it in Task Manager's Processes tab. I suspect it is closed when I log out.
2. I tried setting the task's "Run As..." property to DOMAIN\my-username and also to SYSTEM, but problem #1 above still persists.
SUMMARY:
I want my program to be running as long as Windows is running. It should not matter whether anyone has logged in or out.
P.S.
I asked the same question on Super User, and I was advised to write it as a service, which I know nothing about (except starting and stopping them). I hope to reach a wider audience here on SO.
Is it possible to run a Python script as a service in Windows? If possible, how?
http://agiletesting.blogspot.com/2005/09/running-python-script-as-windows.html
Your scenario is exactly the required use case for a service; unfortunately, tasks are ill-suited to what you are looking to do. That said, writing services in Python is not a walk in the park either. To ease the pain, here are a few links I have perused in the past:
http://agiletesting.blogspot.com/2005/09/running-python-script-as-windows.html
http://mail.python.org/pipermail/python-win32/2008-April/007298.html
I used the second link in particular to create a Windows script that was then compiled to an executable service with py2exe and installed with SrvAny.
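For reference, the skeleton such a service typically follows with the pywin32 package looks roughly like this (the service names and polling interval are placeholders; see the links above for complete examples):

import win32event
import win32service
import win32serviceutil
import servicemanager

class TempLoggerService(win32serviceutil.ServiceFramework):
    # Placeholder names; choose your own service and display names
    _svc_name_ = "TempLogger"
    _svc_display_name_ = "Temperature Logger"

    def __init__(self, args):
        win32serviceutil.ServiceFramework.__init__(self, args)
        self.stop_event = win32event.CreateEvent(None, 0, 0, None)

    def SvcStop(self):
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        win32event.SetEvent(self.stop_event)

    def SvcDoRun(self):
        servicemanager.LogInfoMsg("TempLogger starting")
        # Wake up every 5 seconds to do work until the stop event is signalled
        while win32event.WaitForSingleObject(self.stop_event, 5000) == win32event.WAIT_TIMEOUT:
            pass  # take and record a temperature reading here

if __name__ == "__main__":
    # Supports: install, remove, start, stop, debug
    win32serviceutil.HandleCommandLine(TempLoggerService)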