I'm developing a script that runs a program with other scripts over and over for testing purposes.
How it currently works is I have one Python script which I launch. That script calls the program and loads the other scripts. It kills the program after 60 seconds to launch the program again with the next script.
For some scripts, 60 seconds is too long, so I was wondering if I am able to set a FLAG variable (not in the main script), such that when the script finishes, it sets FLAG, so the main script can read FLAG and kill the process?
Thanks for the help; my writing may be confusing, so please let me know if anything is unclear.
You could use atexit to write a small file (flag.txt) when script1.py exits. mainscript.py could regularly check for the existence of flag.txt, and when it finds it, kill program.exe and exit.
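A minimal sketch of that approach, using the file names from above. The exact command line for program.exe is a guess, and the 60-second timeout is kept as a fallback:

# in script1.py: register a handler that writes flag.txt on exit
import atexit

def write_flag():
    with open("flag.txt", "w") as f:
        f.write("done")

atexit.register(write_flag)

# in mainscript.py: launch program.exe, then poll for flag.txt
import os
import subprocess
import time

proc = subprocess.Popen(["program.exe", "script1.py"])  # invocation is assumed
deadline = time.time() + 60            # fallback: kill after 60 seconds anyway
while time.time() < deadline:
    if os.path.exists("flag.txt"):
        break                          # the script finished early
    time.sleep(0.5)
proc.kill()
if os.path.exists("flag.txt"):
    os.remove("flag.txt")              # reset the flag for the next run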
Edit:
I've set persistent environment variables using this, but I only use it for Python-based installation scripts. Usually I'm pretty shy about messing with the registry. (This is for Windows, btw.)
This seems like a perfect use case for sockets, in particular asyncore.
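For a rough idea of the socket approach, here is a sketch using the plain socket module instead of asyncore (which is deprecated in modern Python); the port number is arbitrary:

# in mainscript.py: wait up to 60 seconds for the child to signal completion
import socket

srv = socket.socket()
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # allow quick reuse between runs
srv.bind(("127.0.0.1", 50007))   # arbitrary localhost port
srv.listen(1)
srv.settimeout(60)               # keep the 60-second limit as a fallback
try:
    conn, _ = srv.accept()       # the child connected: it finished early
    conn.close()
except socket.timeout:
    pass                         # no signal; kill the program anyway

# in the child script, as its last line:
# socket.create_connection(("127.0.0.1", 50007)).close()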
You cannot use environment variables in this way. As you have discovered, they are not persistent after the setting application completes.
I'm trying to achieve something: I created an exe file that automatically forces the computer to shut down at 11 pm.
I would like to make this script impossible to stop or crash, or even make the entire system crash if the program is closed.
How can I achieve this?
Note:
I'm on a laptop running Windows 10. I made a Python file and converted it into an exe with PyInstaller. Then I created a shortcut to that exe file that runs the program with admin rights.
If you mark the process as critical, Windows will trigger a blue-screen crash if that process is stopped or killed.
There is information about how to do this here.
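For reference only, the technique looks roughly like this in Python. This is a sketch built on the undocumented ntdll call RtlSetProcessIsCritical; it must run elevated, and as the note below says, actually doing this is a bad idea:

import ctypes
from ctypes import wintypes

advapi32 = ctypes.WinDLL("advapi32", use_last_error=True)
kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
ntdll = ctypes.WinDLL("ntdll")

SE_PRIVILEGE_ENABLED = 0x00000002
TOKEN_ADJUST_PRIVILEGES = 0x0020

class LUID(ctypes.Structure):
    _fields_ = [("LowPart", wintypes.DWORD), ("HighPart", wintypes.LONG)]

class LUID_AND_ATTRIBUTES(ctypes.Structure):
    _fields_ = [("Luid", LUID), ("Attributes", wintypes.DWORD)]

class TOKEN_PRIVILEGES(ctypes.Structure):
    _fields_ = [("PrivilegeCount", wintypes.DWORD),
                ("Privileges", LUID_AND_ATTRIBUTES * 1)]

# RtlSetProcessIsCritical requires SeDebugPrivilege on the process token
token = wintypes.HANDLE()
advapi32.OpenProcessToken(kernel32.GetCurrentProcess(),
                          TOKEN_ADJUST_PRIVILEGES, ctypes.byref(token))
tp = TOKEN_PRIVILEGES()
tp.PrivilegeCount = 1
advapi32.LookupPrivilegeValueW(None, "SeDebugPrivilege",
                               ctypes.byref(tp.Privileges[0].Luid))
tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED
advapi32.AdjustTokenPrivileges(token, False, ctypes.byref(tp), 0, None, None)

# killing the process now blue-screens the machine until this is undone
ntdll.RtlSetProcessIsCritical(1, None, 0)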
Note: although it is possible to do this, it is not a good idea. For example, as suggested by Anders, use a Scheduled Task instead. Having the system crash could result in information loss or other unintended consequences.
Create a Windows service. You can deny Administrators trying to stop/pause the service; that should slow them down a little bit.
Or since we are talking about triggering something at a specific time, you might want to use the task scheduler instead.
In any case, you will never fully lock down something like this from an Administrator since they can always take ownership and modify the ACL.
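If you go the Task Scheduler route, a sketch of registering the shutdown from Python follows; the task name is made up, and this must be run from an elevated prompt:

import subprocess

# register a daily 23:00 shutdown task running as SYSTEM,
# so ordinary users can't simply end a visible process
subprocess.run([
    "schtasks", "/Create",
    "/TN", "NightlyShutdown",      # arbitrary task name
    "/TR", "shutdown /s /t 0",     # the action: shut down immediately
    "/SC", "DAILY",
    "/ST", "23:00",
    "/RU", "SYSTEM",
], check=True)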
I have looked at this question, but I am not sure whether I understood it correctly.
I have opened PyCharm with one Python script, and it's running (it does topic modeling).
I also have another Python script, which I opened in another PyCharm window on the same server, and I ran it as well.
Now these two programs are running on the same server. I should mention that I have not changed any configuration, neither the server's nor PyCharm's.
Do you think it's OK this way? Or will one script technically not run (in terms of progressing, I mean it just shows as running but makes no progress) until the other script finishes?
Edit Configurations -> Allow parallel run. Done
First, PyCharm will create independent processes on the server, so both scripts will run. You can check it with something like htop - search for processes and verify that they're running.
Second, you don't have to open a second PyCharm window to run the second script. You can run both of them from a single one. There are at least two ways: with run configurations, or by spawning multiple terminal windows and running the scripts from there.
From the Run/Debug Configurations window you can add a Compound configuration that contains multiple configurations that will run in parallel. The Allow parallel run option for child configurations makes no difference in this case.
The default behaviour was changed starting from version 2018.3. You can allow multiple runs by selecting Allow parallel run within the Edit Configurations menu.
I've written a lot of Python scripts. Now I want to run them on another computer that runs non-stop, crawling and analyzing data and updating an SQL database.
Normally I open a command prompt and run the scripts:
python [script directory]
But with many scripts I have to open many command prompts, and every script starts its own Python interpreter, so it ends up a huge mess using a lot of memory.
What should I do to manage these scripts?
You haven't specified what OS your server is, but assuming that it's a Linux server you should probably research a process management tool such as Supervisord or Systemd. These are tools designed to run and monitor your program automatically, and even restart it if it crashes.
If you're using Ubuntu 16.04 then it comes with Systemd out of the box, however I personally find Supervisord easier to configure and use for simple tasks.
These programs won't necessarily help with your memory consumption issues, however. Sure, you can place caps on memory use for a process, but that's not really going to help you if it stops your program from working. You're probably best to re-evaluate your code and look for ways to reduce its memory footprint, or use a server with more RAM.
EDIT:
You've just added that the OS is Windows 10, which makes the above irrelevant. You can use the Windows Task Scheduler to automatically execute long running tasks.
You can use pythonw *.py, and it will run in the background (pythonw runs the script without a console window).
I have a Python script that continuously processes new data and writes to a MongoDB. In the script, it's a while loop with a sleep that runs the code continuously.
What is the recommended way to run the Python script forever, logging errors when they occur, and restarting when it crashes?
Will node.js's forever be suitable? I'm also running node/meteor on the same Ubuntu server.
supervisord is perfect for this sort of thing. While I used to check that programs were still running every couple of minutes with a cron job, supervisord runs all programs as child processes and monitors them, so in the event your program terminates, supervisord will automatically restart it. I no longer need to parse the output of ps to see if a program crashed.
It has a simple declarative config file and configurable logging. By default it creates log files, your-program-name-stderr.log and your-program-name-stdout.log, which are automatically handled by logrotate when supervisord is installed from an OS package manager (Debian for me).
If you don't want to configure supervisord's logging, you should look at logging in python so you can control what goes into those files.
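A minimal sketch of that: log to stdout and let supervisord handle the files.

import logging
import sys

# log to stdout; with redirect_stderr in the config below, everything
# supervisord captures lands in your-program-name-stdout.log
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
logging.info("worker started")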
If you're on a Debian derivative you should be able to install and start the daemon simply by executing apt-get install supervisor as root.
The config file is very straightforward too:
[program:myprogram]
command=/path/to/my/program/script
directory=/path/to/my/program/base
user=myuser
autostart=true
autorestart=true
redirect_stderr=True
supervisorctl also lets you see what your program is doing interactively, and it can start and stop multiple programs with supervisorctl start myprogram, etc.
I recently wrote something similar. The basic pattern I follow is:

import logging
import time

while True:
    try:
        do_work()  # placeholder for the actual functionality
    except SpecificError:  # whatever specific exception you expect
        logging.exception("known failure")
    except Exception:
        logging.exception("unexpected failure")  # catch everything else
    finally:
        time.sleep(600)
To handle reboots you can use init.d or cron jobs.
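For the cron route, the simplest option is a @reboot entry; the path here is hypothetical:

# added via crontab -e: rerun the script after every reboot
@reboot /usr/bin/python /path/to/my/program/script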
If you are writing a daemon, you should probably do it with this command:
http://manpages.ubuntu.com/manpages/lucid/man8/start-stop-daemon.8.html
You can spawn this from a System V /etc/init.d/ script, or use Upstart which is slowly replacing it.
Upstart: http://upstart.ubuntu.com/getting-started.html
System V: http://www.cyberciti.biz/tips/linux-write-sys-v-init-script-to-start-stop-service.html
I find System V easier to write, but if this will ever be packaged and distributed as a Debian package, I recommend writing an Upstart conf.
Definitely keep the sleep so the loop won't hog the CPU.
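The core of such an /etc/init.d script might look something like this; the paths and pidfile location are made up:

# start the script detached, writing a pidfile so it can be stopped later
start-stop-daemon --start --background \
    --make-pidfile --pidfile /var/run/myscript.pid \
    --exec /usr/bin/python -- /path/to/myscript.py

# and the matching stop action
start-stop-daemon --stop --pidfile /var/run/myscript.pid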
I don't know if this is still relevant to you, but I have been reading forever about how to do this and want to share somewhere what I did.
For me, the goal was to have a Python script running always (on my Linux computer). The Python script also has a "while True" loop in it which should theoretically run forever, but if for some reason it crashes, I want the script to restart. Also, when I restart the computer it should run the script.
I am not an expert, but for me the best and most understandable approach was to use systemd (assuming you use Linux).
There are two nice examples of how to do this given here and here, showing how to write your .service files in either /etc/systemd/system or /lib/systemd/system. If you want to be completely correct you should take the former:
" /etc/systemd/system/: units installed by the system administrator" 1
The documentation of systemd here is actually nice to read, even if you are not an expert.
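For illustration, a minimal unit file along those lines; the unit name, paths, and user are made up:

# /etc/systemd/system/myscript.service
[Unit]
Description=My always-on Python script
After=network.target

[Service]
ExecStart=/usr/bin/python3 /home/me/myscript.py
# restart the script if it ever crashes, after a 5-second pause
Restart=always
RestartSec=5
User=me

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable --now myscript.service, and it will also start at boot.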
Hope this helps someone!
I am using Debian, and I have a Python script that I would like to run from rc.local so that it runs on boot. I already have it working with a test file that is meant to run and terminate.
The problem is that this file should eventually run indefinitely using Scheduler. Its job is to do serial reads, a small amount of processing on those reads, and inserts into a MySQL database. However, I am nervous about not being able to cancel the script to get to my login prompt if changes need to be made, since I was unable to terminate the test script early using Ctrl+C (^C).
My hope is that there is some command that I am just missing that will accomplish this. Is there another key combination that will terminate the Python script and end rc.local?
Thanks.
EDIT: Another possible solution that would help me here is if there is a way to start a Python script in the background during boot. It would start the script and then allow login while continuing to run the script in the background.
I'm starting to think this isn't something that's possible to accomplish, so other suggestions for achieving something similar to what I'm trying to do would be helpful as well.
Thanks again.
Seems like it was just a dumb mistake on my part.
I realized the whole point of this was to allow the Python script to run as a background process during boot, so I added the " &" to the end of the script call like you would when running it from the shell, and voilà, I can get to my password prompt by pressing Enter.
I wanted to put this answer here just in case this would be something horribly wrong to do, but it accomplishes what I was looking for.
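For reference, the relevant rc.local line ends up looking something like this; the path is hypothetical:

# in /etc/rc.local, before the final "exit 0"
python /home/pi/myscript.py &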
Making scripts run at boot time with Debian
Put your script in /etc/init.d/. So, if your script is in a file called my_script, it should be located at /etc/init.d/my_script.
Run update-rc.d my_script defaults as root.
Don't forget to make your script executable and include the shebang. That means the first line of the script should be #!/usr/bin/python.
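Put together, the steps look something like this, using the my_script name from above:

chmod +x /etc/init.d/my_script      # make it executable
update-rc.d my_script defaults      # register it for the default runlevels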