We are trying to create a simple command line utility that tracks some metrics over an unknown period of time (start/stop triggered via command line externally).
For example,
python metrics-tool.py --start-collect
... run some additional commands external to metrics-tool ...
python metrics-tool.py --stop-collect
Does anyone have any ideas or suggestions on how an application can receive a command a "second time"? Is this even possible, and is it a good way to do this?
It almost sounds like this should be a service, configurable at runtime by an endpoint?
This does sound more like a service which can be started and stopped via Systemd (or Supervisor), for example.
Using Systemd means there's no need to daemonise your Python process yourself: https://stackoverflow.com/a/30189540/736221
You can of course still daemonise the process yourself if you want to, or if you're using something other than Systemd: https://pagure.io/python-daemon
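For illustration only, here is a minimal sketch of that service idea, assuming a long-running collector that listens on an arbitrary local TCP port (8765 here) while one-shot --start-collect / --stop-collect invocations just forward a command to it; Systemd or Supervisor would keep the service itself alive:

import argparse
import socket
import socketserver
import threading

HOST, PORT = "127.0.0.1", 8765  # arbitrary local control endpoint

collecting = threading.Event()  # flipped by start/stop commands

class ControlHandler(socketserver.StreamRequestHandler):
    def handle(self):
        command = self.rfile.readline().strip().decode()
        if command == "start":
            collecting.set()
        elif command == "stop":
            collecting.clear()
        self.wfile.write(b"ok\n")

def run_service():
    # Long-running collector; Systemd/Supervisor keeps this process alive.
    with socketserver.TCPServer((HOST, PORT), ControlHandler) as server:
        server.serve_forever()

def send_command(command):
    # One-shot CLI invocation: just forward the command to the service.
    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(command.encode() + b"\n")

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--serve", action="store_true")
    parser.add_argument("--start-collect", action="store_true")
    parser.add_argument("--stop-collect", action="store_true")
    args = parser.parse_args()
    if args.serve:
        run_service()
    elif args.start_collect:
        send_command("start")
    elif args.stop_collect:
        send_command("stop")

The same pattern works with a Unix domain socket or a small HTTP endpoint if you prefer.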
I am currently executing:
process connect <url>
from the lldb command line to make a connection. What I would like to do is include this in a Python script to automate making the connection.
What is the python api I need to use?
In the SB API this is SBTarget::ConnectRemote. It can't be an SBProcess method because you don't have a process before you call this...
Note, you probably don't want to provide your own SBListener to SBTarget::ConnectRemote. It (and all the other process creation calls that take an SBListener) will use the Debugger's listener if you provide a nil listener.
For instance, if you are writing a Python command to do the connect, you'll want to let the regular Debugger's event handler deal with process events after you've connected.
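For example, a minimal Python sketch might look like the following; the connect URL and port are placeholders, and the empty SBListener is what makes ConnectRemote fall back to the debugger's own listener:

import lldb

debugger = lldb.SBDebugger.Create()
target = debugger.CreateTarget("")  # or CreateTarget("/path/to/binary")
error = lldb.SBError()
# An empty SBListener here means the Debugger's listener handles process events.
process = target.ConnectRemote(lldb.SBListener(), "connect://localhost:1234", None, error)
if not error.Success():
    print(error)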
On the off chance you want to try handling process events yourself, the example:
http://llvm.org/svn/llvm-project/lldb/trunk/examples/python/process_events.py
will get you started.
How can I access a running Python script's variable? Or access a function to set the variable. I want to access it from the command line or from another Python script; it doesn't matter which.
For example,
I have one script, run_motor.py, running with a variable called mustRun. When the user pushes the stop button, it should access the variable mustRun and change it to false.
If you want to interact with a running python script and modify some variables in it (I don't know why you want to do that, but... meh) you can have a look at Pyrasite.
Here is a demo of Pyrasite on asciinema
This is damn impressive.
By the way, just so you know, that's NOT the best practice for what you want to do. I assume this is for testing purposes, because using that kind of script in production or something like that wouldn't be safe at all...
The easiest way of accomplishing this is to run a small TCP server in a thread and have it change the variable you want to change when it receives a command to do so. Then write a Python script that sends the stop command to that TCP server.
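A rough sketch of that approach, assuming an arbitrary local port (5050) and a one-word "stop" command, could sit inside run_motor.py like this:

import socket
import threading

must_run = True  # the motor loop checks this flag

def control_server(host="127.0.0.1", port=5050):
    # Background thread: waits for a "stop" command and flips the flag.
    global must_run
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen(1)
        while must_run:
            conn, _ = srv.accept()
            with conn:
                if conn.recv(1024).strip() == b"stop":
                    must_run = False

threading.Thread(target=control_server, daemon=True).start()

The stop-button script then only needs to open a connection to 127.0.0.1:5050 and send the bytes b"stop".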
I want to create a build pipeline, and developers need to set a few things in a properties file which gets populated using a front-end GUI.
I tried running a sample interactive CLI script written in Python that just asks for a name and prints it out afterwards, but Jenkins just waited for ages and then hung. I can see that it asked for the input, but there was no way for the user to enter the data.
EDIT: I'm currently running Jenkins as a service. Is there a good plugin anyone recommends, or is the problem the way I created the Python script?
Preference:
I would prefer to use Python because it is a little lightweight, but if people have had success with other languages I can compromise.
Using a GUI menu to populate the data would be cool because I could use option boxes, drop-down menus and make it fancy, but it isn't a necessity; a CLI is considerably better than our current deployment.
BTW, I'm running all this on a Windows 7 laptop with Python 2.7 and Java 1.7.
Sorry for the essay! Hopefully people can help me!
Sorry, but Jenkins is not an interactive application. It is designed for automated execution.
The only viable way to get input to a Jenkins job (and everything that is executed from that job) is with the job parameters that are populated before the job is started. Granted, Jenkins GUI for parameter entry is not the greatest, but it does the job. Once the Jenkins job collected the job parameters at the start of the job, it can pass those parameters to anything it executes (Python, shell, whatever) at any time during the job. Two things have to be true for that to happen:
You need to collect all the input data before the job starts
Whatever your job calls (Python, shell, etc.) needs to be able to receive its input not interactively, but through the command line.
How to get input into program
A well designed script should be able to simply accept parameters on the command line:
./goodscript.sh MyName will be the simplest way of doing it, where the value MyName will be stored in $1, the first parameter of the script. Subsequent command line parameters will be available in variables $2, $3 and so on.
./goodscript.sh -name MyName -age 30 will be a better way of doing it, where the script can take multiple parameters regardless of their order by specifying a parameter name before the parameter value. You can read about using getopt for this method of parameter passing.
Both examples above assume that the goodscript.sh is written well enough to be able to process those command line parameters. If the script does not explicitly process command line parameters, doing the above will be useless.
You can "pipe" some output to an interactive script that is not designed to handle command line parameters explicitly:
echo MyName | ./interactivescript.sh will pass value MyName to the first interactive prompt that interactivescript.sh provides to the user. Problem with this is that you can only pass a value to the first interactive prompt.
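Since the question is about Python, the equivalent of a well-behaved goodscript.sh is a script that takes its input from the command line instead of prompting; a minimal argparse sketch (the parameter names here are made up) would be:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--name", required=True)       # e.g. --name MyName
parser.add_argument("--age", type=int, default=0)  # e.g. --age 30
args = parser.parse_args()

print("Building for %s, age %d" % (args.name, args.age))

Jenkins can then pass its job parameters to such a script in an "execute shell" or Windows batch step, e.g. python goodscript.py --name %NAME% --age %AGE%.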
Jenkins job parameters GUI
Like I said above, you can use Jenkins GUI to gather all sorts of job parameters (dropdown lists, checkboxes, text entry). I assume you know how to setup Jenkins job with parameters. If not, in the job configuration click "This build is parameterized" checkbox. If you can't figure out how to set this up, that's a different question and will need to be explained separately.
However, once your Jenkins job has collected all the parameters up front, you can reference them in your "execute shell" step. If you are using Windows, you will reference them as %PARAM_NAME%, and on Linux as $PARAM_NAME.
Explain what you need help with: getting your script to accept command line parameters, or passing those command line parameters from the Jenkins job GUI, and I will expand this answer further.
I'm running a Python script manually that fetches data in JSON format. How do I automate this script to run automatically on an hourly basis?
I'm working on Windows 7. Can I use tools like Task Scheduler? If I can use it, what do I need to put in the batch file?
Can I use tools like Task Scheduler?
Yes. Any tool that can run arbitrary programs can run your Python script. Pick the one you like best.
If I can use it, what do I need to put in the batch file?
What batch file? Task Scheduler takes anything that can be run, with arguments—a C program, a .NET program, even a document with a default app associated with it. So, there's no reason you need a batch file. Use C:\Python33\python.exe (or whatever the appropriate path is) as your executable, and your script's path (and its arguments, if any) as the arguments. Just as you do when running the script from the command line.
See Using the Task Scheduler in MSDN for some simple examples, and Task Scheduler Schema Elements or Task Scheduler Scripting Objects for reference (depending on whether you want to create the schedule in XML, or via the scripting interface).
You want to create an ExecAction with Path set to "C:\Python33\python.exe" and Arguments set to "C:\MyStuff\myscript.py", and a RepetitionPattern with Interval set to "PT1H". You should be able to figure out the rest from there.
As sr2222 points out in the comments, often you end up scheduling tasks frequently, and needing to programmatically control their scheduling. If you need this, you can control Task Scheduler's scripting interface from Python, or build something on top of Task Scheduler, or use a different tool that's a bit easier to get at from Python and has more helpful examples online, etc.—but when you get to that point, take a step back and look at whether you're over-using OS task scheduling. (If you start adding delays or tweaking times to make sure the daily foo1.py job never runs until 5 minutes after the most recent hourly foo0.py has finished its job, you're over-using OS task scheduling—but it's not always that obvious.)
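As a rough sketch of driving the Task Scheduler scripting interface from Python, assuming pywin32 is installed (the task name, start time and paths are placeholders; the numeric constants are the documented TASK_* values):

import win32com.client  # requires pywin32

scheduler = win32com.client.Dispatch("Schedule.Service")
scheduler.Connect()
root_folder = scheduler.GetFolder("\\")

task_def = scheduler.NewTask(0)

trigger = task_def.Triggers.Create(1)            # 1 = TASK_TRIGGER_TIME
trigger.StartBoundary = "2024-01-01T00:00:00"    # placeholder start time
trigger.Repetition.Interval = "PT1H"             # repeat every hour

action = task_def.Actions.Create(0)              # 0 = TASK_ACTION_EXEC
action.Path = r"C:\Python33\python.exe"
action.Arguments = r"C:\MyStuff\myscript.py"

root_folder.RegisterTaskDefinition(
    "HourlyJsonFetch", task_def,
    6,       # TASK_CREATE_OR_UPDATE
    "", "",  # run as the current user
    3)       # TASK_LOGON_INTERACTIVE_TOKEN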
May I suggest WinAutomation or AutoMate. These two do the exact same thing, except the UI is a little different. I prefer WinAutomation, because the scripts are a little easier to build.
Yes, you can use the Task Scheduler to run the script on an hourly basis.
To execute a python script via a Batch File, use the following code:
start path_to_python_exe path_to_python_file
Example:
start C:\Users\harshgoyal\AppData\Local\Continuum\Anaconda3\python.exe %UserProfile%\Documents\test_script.py
If Python is on the Windows PATH environment variable then you can reduce the syntax to:
start python %UserProfile%\Documents\test_script.py
What I generally do is run the batch file once via Task Scheduler and within the python script I call a thread/timer every hour.
class threading.Timer(interval, function, args=None, kwargs=None)
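A self-rescheduling timer along those lines might look like this, where fetch_json stands in for the real work:

import threading

INTERVAL = 3600  # one hour, in seconds

def fetch_json():
    print("fetching...")  # placeholder for the real JSON fetch

def run_every_hour():
    fetch_json()
    # re-arm the timer so the job repeats hourly
    threading.Timer(INTERVAL, run_every_hour).start()

run_every_hour()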
I would like to daemonize a Python process, and want to ask whether it is good practice to have a daemon running as a parent process that calls another class which opens 10-30 threads.
I'm planning on writing a monitoring script for a group of servers and would like to check every server every 5 minutes, so that each server is checked exactly every 5 minutes.
I would like to have it this way (so to speak, ps auxf-style output):
|monitor-daemon.py
\-check-server.py
\-check-server.py
....
Thank you!
Maybe you should use http://pypi.python.org/pypi/python-daemon
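With python-daemon the usual pattern is a DaemonContext around your main loop; a minimal sketch, where monitor_all_servers stands in for whatever spawns the check-server threads, would be:

import time
import daemon  # pip install python-daemon

def monitor_all_servers():
    while True:
        # spawn the per-server check threads here ...
        time.sleep(300)  # wait 5 minutes between rounds

with daemon.DaemonContext():
    monitor_all_servers()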
You can use supervisord for this. You can configure tasks to respond to events. The events can be created manually, generated automatically by monitoring processes, or triggered at regular intervals.
It is fully customizable and written in Python.
Example:
[program:your_daemon_name]
command=your_daemon_process
# Add extra options here according to the manual...
[eventlistener:your_monitor_name]
command=your_monitor_process
# Triggered after a program changes from STARTING to RUNNING
events=PROCESS_STATE_RUNNING
# Add extra options here according to the manual...
Or, if you want the event listener to respond to process output, use the PROCESS_COMMUNICATION_STDOUT event, or TICK_60 for a check every minute. The logs can be redirected to files and such so you can always view the state.
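An event listener talks to supervisord over stdin/stdout with a simple protocol (announce READY, read the event header and payload, acknowledge with RESULT 2 / OK); a bare-bones sketch of what your_monitor_process could look like:

import sys

def write_stdout(s):
    # only protocol data may go to stdout; supervisord reads it
    sys.stdout.write(s)
    sys.stdout.flush()

while True:
    write_stdout("READY\n")                        # ready for the next event
    header = dict(t.split(":") for t in sys.stdin.readline().split())
    payload = sys.stdin.read(int(header["len"]))   # event body
    # ... react to the event here (kick off server checks, log, etc.) ...
    write_stdout("RESULT 2\nOK")                   # acknowledge the event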
There's really not much to creating your own daemonize function: The source for Advanced Programming in the Unix Environment (2nd edition) is freely available: http://www.apuebook.com/src.tar.gz -- you're looking for the apue.2e/daemons/init.c file.
There is a small helper program that does all the work of creating a proper daemon; it can be used to wrap arbitrary programs, which might save some hassle.