I wrote a simple script using python-daemon which prints to sys.stdout:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import daemon
import sys
import time

def main():
    with daemon.DaemonContext(stdout=sys.stdout):
        while True:
            print "matt daemon!!"
            time.sleep(3)

if __name__ == '__main__':
    main()
The script works as I would hope, except for one major flaw--it interrupts my input when I'm typing in my shell:
(daemon)modocache $ git clomatt daemon!!
matt daemon!!ne
matt daemon!! https://github.com/schacon/cowsay.git
(daemon)modocache $
Is there any way for the output to be displayed in a non-intrusive way? I'm hoping for something like:
(daemon)modocache $ git clo
matt daemon!! # <- displayed on new line
(daemon)modocache $ git clo # <- what I had typed so far is displayed on a new line
Please excuse me if this is a silly question, I'm not too familiar with how shells work in general.
Edit: Clarification
The reason I would like this script to run daemonized is that I want to provide updates to the shell user from within the shell, such as printing weather updates to the console in a non-intrusive way. If there is a better way to accomplish this, please let me know. But the purpose is to display information from within the terminal (not via, say, Growl notifications), without blocking.
If it doesn't need to be an "instant" notification, and you can wait until the next time the user runs a command, you can bake all kinds of things into your bash shell prompt. Mine tells me the time and the git repository status for the directory I'm in, for example.
The shell variable for "normal user" shell prompts is PS1, so Googling for bash PS1 or bash prompt customisation will get you some interesting examples.
Here are some links:
Some basic customisations
A more complex example: git bash prompt
In general, you can include the output of any arbitrary script in the prompt string. Be aware, however, that high-latency commands will delay printing of the prompt string until they can be evaluated, so it may be a good idea to cache information. (For example, if you want to display the weather from a weather website, don't make your bash prompt go out and retrieve the webpage every time the prompt is displayed!)
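As a rough sketch of the caching idea (the cache path, script name, and the fetch_weather stub are all made up for illustration), a small helper can refresh a cached weather string only when it is stale, so the prompt itself stays fast:
import os
import sys
import time

CACHE = os.path.expanduser('~/.prompt_weather_cache')   # hypothetical cache file
MAX_AGE = 15 * 60                                        # refresh at most every 15 minutes

def fetch_weather():
    # Placeholder for the slow part, e.g. an HTTP request to a weather service.
    return 'Sunny 21C'

def cached_weather():
    # Only do the expensive fetch when the cache file is missing or stale.
    if not os.path.exists(CACHE) or time.time() - os.path.getmtime(CACHE) > MAX_AGE:
        with open(CACHE, 'w') as f:
            f.write(fetch_weather())
    with open(CACHE) as f:
        return f.read().strip()

if __name__ == '__main__':
    sys.stdout.write(cached_weather())
The prompt could then pick it up with command substitution, e.g. by putting $(python ~/bin/prompt_weather.py) inside PS1 (that script path is made up too).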
A daemon process, by definition, should run in the background. Therefore it should write to a log file.
So either you redirect its output to a log file (via shell redirection, or by handing it to some syslog daemon) or you make it write to a log file in the Python code.
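For instance, the script from the question could write to a log file just by handing DaemonContext an open file object instead of sys.stdout; a small sketch (the log path is arbitrary):
import daemon
import time

def main():
    # DaemonContext accepts file objects for stdout/stderr and redirects
    # everything the daemon prints into them.
    logfile = open('/tmp/matt_daemon.log', 'w+')
    with daemon.DaemonContext(stdout=logfile, stderr=logfile):
        while True:
            print "matt daemon!!"
            time.sleep(3)

if __name__ == '__main__':
    main()
You can then follow it from any terminal with tail -f /tmp/matt_daemon.log, without it writing over your prompt.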
Update:
man write
man wall
http://linux.die.net/man/1/write, http://linux.die.net/man/1/wall
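If you really do want to push text onto users' terminals the way write/wall do, the daemon could shell out to them; a minimal sketch (reusing the message from the question, and assuming wall accepts the message as an argument, as the util-linux version does):
import subprocess
import time

while True:
    # wall broadcasts the message to every logged-in terminal;
    # write <user> <tty> would target a single session instead.
    subprocess.call(['wall', 'matt daemon!!'])
    time.sleep(3)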
It probably is best practice for a daemon to write to a log file. But could you not write to stderr and get the behavior desired above, with the interwoven lines?
Take a look at the logging library (part of the standard library). This can be made to route debug and runtime data either to the console or a file (or anywhere for that matter) depending on the state of the system.
It provides several log levels, e.g. error, debug, info, and each can be configured differently.
See the documentation on the logging module.
import logging
logging.basicConfig(filename='example.log',level=logging.DEBUG)
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')
Related
I made a simple project in Python that pings a server every few seconds, and I want to store the ping data in a .txt file. (It might also be cool to put it in a GUI, but I need it in a txt file for now.) Right now it just shows the ping output in the terminal, and since I'm new to coding I have no idea how to make it go into a txt file.
(here's my code btw)
import os
import time

while 1:
    os.system('ping 1.1.1.1 -n 1')
    time.sleep(5)
I didn't try much because I couldn't figure anything out; I looked things up, but nothing was what I wanted.
(Also, I'm a noob at coding anyway.)
You'll have to run your code with Popen instead of os.system (which is a bad idea in most cases, anyway, for security reasons).
With Popen (python.org -> documentation is your friend!) you can capture the output of the programs you run. You can then write that to a file object. (That's a built-in type in python. Again, official documentation on this is good and comes with examples!)
I honestly don't see a reason to write the raw output of ping to a file. Wouldn't you just care about whether the ping worked and was reasonably fast? Maybe extract that information and log that instead!
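A rough sketch of that approach, keeping the Windows-style ping invocation and the 5-second interval from the question (the log file name is made up):
import subprocess
import time

while True:
    # Capture ping's output instead of letting it go straight to the terminal.
    proc = subprocess.Popen(['ping', '1.1.1.1', '-n', '1'],
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                            universal_newlines=True)
    output, _ = proc.communicate()
    ok = (proc.returncode == 0)
    with open('ping_log.txt', 'a') as f:
        # One timestamped summary line, then the raw output for reference.
        f.write('%s ok=%s\n' % (time.strftime('%Y-%m-%d %H:%M:%S'), ok))
        f.write(output)
    time.sleep(5)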
As much as I hate regurgitating questions, it's a necessary evil to get an answer to the issue I'll present next.
Using python3, tkinter and the subprocess package, my goal is to write a control panel to start and stop different terminal windows with a specific set of commands to run applications/sessions of the ROS application stack, including the core.
As such, the code would look like this per executable I wish to control:
import subprocess

class TestProc(object):
    def __init__(self):
        pass

    def start(self):
        self.process = subprocess.Popen(["gnome-terminal", "-c", "'cd /path/to/executable/script.sh; ./script.sh'"])
        print("Process started.")

    def stop(self):
        self.process.terminate()
        print("Process terminated.")
Currently, it is possible to start a terminal window and the assigned commands/processes, yet two issues persist:
gnome-terminal is set to launch a terminal window and then hand control to the processes inside; as such, I have no further control once it has started. A possible solution is to use xterm, yet that poses a slew of other issues. I also need environment variables from the user's .bashrc and/or exports to be available.
Certain "global" commands, e.g. cd or roslaunch, are unavailable in the terminal sessions, perhaps due to the order of execution (e.g. the commands are run before the bash profile is loaded), which prevents any usable terminal at all.
Thus, the question rings: How would I be able to start and stop a new terminal window that would run up to two commands/processes in the user environment?
There are a couple of approaches you can take; the most flexible here is also the most complicated, so you'd want to consider whether you really need it.
If you only need to show the output of the script, you can simply pipe the output to a file or to a named pipe and then capture it by reading/tailing that file. This is the simplest approach, as long as the script doesn't actually need any user interaction.
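A minimal sketch of that first option, with a hypothetical script path and log file: launch the script with its output redirected to a file, then read or tail that file from your control panel.
import subprocess

# Hypothetical paths, for illustration only.
SCRIPT = '/path/to/executable/script.sh'
LOG_PATH = '/tmp/script_output.log'

log = open(LOG_PATH, 'w')
# The script runs in the background; everything it prints lands in the log file.
proc = subprocess.Popen(['bash', SCRIPT], stdout=log, stderr=subprocess.STDOUT)

# The control panel can then poll LOG_PATH (or run `tail -f` on it) to display output,
# and proc.terminate() still works for the stop button.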
If you really only need to spawn a script that runs in the background, and you need to simulate user interaction but don't actually need to accept real user input, you can use an expect-style approach (using the pexpect library).
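A rough pexpect sketch; the script path, prompt text, and reply are invented, so adjust them to whatever your script actually asks:
import pexpect

# Spawn the script and drive its prompts automatically.
child = pexpect.spawn('/path/to/executable/script.sh')   # hypothetical path
child.expect('Continue\\? \\[y/n\\]')                    # invented prompt, for illustration
child.sendline('y')
child.expect(pexpect.EOF)                                 # wait for the script to finish
print(child.before.decode())                              # everything the script printed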
If you need to actually allow the real user to interact with the program, then you have two approaches. The first is to embed the VTE widget into your application; this is the most seamless integration, as it makes the terminal look like part of your application, but it's also the heaviest.
Another approach is to start gnome-terminal as you've done here; this necessarily spawns a new window.
If you need to both script some interaction and allow some user input, you can do this by spawning your script in a tmux session. Use the tmux send-keys command to automate the scripted part, then spawn a terminal emulator for the user to interact with via tmux attach, as in the sketch below. If you need to go back and forth between the automated part and the interactive part, you can combine this approach with expect.
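A sketch of the tmux variant, driving real tmux commands from Python; the session name and script path are assumptions, and the gnome-terminal call uses the newer -- syntax:
import subprocess

SESSION = 'ros_panel'   # hypothetical session name

# Start a detached tmux session; the shell inside it is interactive,
# so the user's .bashrc is sourced as usual.
subprocess.check_call(['tmux', 'new-session', '-d', '-s', SESSION])

# Automate the scripted part by typing commands into the session.
subprocess.check_call(['tmux', 'send-keys', '-t', SESSION,
                       'cd /path/to/executable && ./script.sh', 'Enter'])

# Give the user a window onto the same session for the interactive part.
subprocess.Popen(['gnome-terminal', '--', 'tmux', 'attach', '-t', SESSION])

# Stopping later is just: tmux kill-session -t ros_panel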
All Maya script logs and errors are printed in the History tab; this is the output from all commands and Python scripts.
To make debugging scripts easier, I want all of these logs sent somewhere on a server. How can I intercept the output and hand it to my own script? From there I will do whatever is necessary, and the output will go either to a remote console or to files on the server.
So the task is to intercept the output. How do I do that?
You can also redirect Script Editor history using Maya's scriptEditorInfo command (documented in the Maya command reference).
An example usage of this would be something like:
import maya.cmds as cmds
outfile = r'/path/to/your/outfile.txt'
# begin output capture
cmds.scriptEditorInfo(historyFilename=outfile, writeHistory=True)
# stop output capture
cmds.scriptEditorInfo(writeHistory=False)
There is also cmdFileOutput, which you can either call interactively or enable/disable via the MAYA_CMD_FILE_OUTPUT setting; see its documentation for details.
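A quick sketch of the interactive use of cmdFileOutput (the output path is arbitrary):
import maya.cmds as cmds

# Open a log file; cmdFileOutput returns a descriptor used to close it later.
log_id = cmds.cmdFileOutput(open=r'/path/to/your/outfile.txt')

# ... run the commands/scripts whose output you want captured ...

cmds.cmdFileOutput(close=log_id)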
Lastly, you can augment Maya's startup with the -log flag to write the Output Window text to another location. With this, however, you do not get the Script Editor output, but it could be all you need, given what you are trying to log.
It sounds like you need a real-time error tracker like Sentry.
Sentry provides logging integrations built exactly for this purpose: they forward client-side logging to the server, with richer error/debug handling.
Here is an example of rerouting the Maya Script Editor to a terminal.
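A minimal sketch of that route with the Python SDK, assuming the sentry_sdk package is available in Maya's Python; the DSN string is a placeholder, use the one from your own Sentry project:
import logging
import sentry_sdk

# Placeholder DSN; Sentry issues a real one per project.
sentry_sdk.init(dsn='https://examplePublicKey@o0.ingest.sentry.io/0')

logging.basicConfig(level=logging.INFO)

# With the default logging integration, records at ERROR and above are sent
# to the Sentry server as events; lower levels become breadcrumbs.
logging.error('Something went wrong in the Maya script')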
What I need to do is execute a Python program/script when the user presses print, and not let the print job spool before this program quits.
The reason is that the print driver is not open source, and I need to change user settings (in this case a department id and password) that would normally be per-user. As this is a kiosk (different users share the same account), I need to make sure the settings are reset and the user is prompted before the print job is spooled, so that different users won't pick up each other's jobs.
I have created a program to handle the settings; I only need a way to start it and to keep the spool job from starting before the user has finished with it.
I've tried to search/google this but can't really find an answer. Do I need to spool the job through a CUPS filter first, or is there a smarter way to handle this?
I found the perfect solution for my problem: tea4cups, which acts as a wrapper for CUPS.
Using a tea4cups prehook solved my issue.
I ran into some issues along the way, though, so I'll note them here in case someone comes down the same road.
tea4cups is written for Python 2 and my system default is Python 3; this produced some unexpected errors, such as "wrong key" in the CUPS log.
To solve this I edited "/usr/lib/cups/backend/tea4cups" and changed the shebang line:
#! /usr/bin/env python
into:
#! /usr/bin/env python2
My prehook needed to start a Python program that uses the X display, and this did not work out of the box. The program also needs to be started as the user who actually submitted the print job. To get these two things to work I had to write the prehook as follows:
prehook_popUp : su $TEAUSERNAME -c "DISPLAY=:0.0 python /usr/share/candepid/PopUp.py"
I am using Supervisor (process controller written in python) to start and control my web server and associated services. I find the need at times to enter into pdb (or really ipdb) to debug when the server is running. I am having trouble doing this through Supervisor.
Supervisor allows the processes to be started and controlled with a daemon called supervisord, and offers access through a client called supervisorctl. This client allows you to attach to one of the processes it has started and bring it to the foreground using the 'fg' command, like this:
supervisor> fg webserver
All logging data gets sent to the terminal. But I do not get any text from the pdb debugger. It does accept my input so stdin seems to be working.
As part of my investigation I was able to confirm that neither print nor raw_input sends any text out either; but in the case of raw_input, stdin is indeed working.
I was also able to confirm that this works:
sys.stdout.write('message')
sys.stdout.flush()
I thought that when I issued the fg command it would be as if I had run the process in the foreground in a standard terminal, but it appears that supervisorctl is doing something more. Regular printing does not flush, for example. Any ideas?
How can I get pdb, standard prints, etc to work properly when connecting to the foreground terminal using the fg command in supervisorctl?
(Possible helpful ref: http://supervisord.org/subprocess.html#nondaemonizing-of-subprocesses)
It turns out that Python buffers its output stream by default. In certain cases (such as this one) that results in output being held back.
Idioms like this exist:
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
to force the buffer to zero.
But the better alternative I think is to start the base python process in an unbuffered state using the -u flag. Within the supervisord.conf file it simply becomes:
command=python -u script.py
ref: http://docs.python.org/2/using/cmdline.html#envvar-PYTHONUNBUFFERED
Also note that this dirties up your log file - especially if you are using something like ipdb with ANSI coloring. But since it is a dev environment it is not likely that this matters.
If this is an issue - another solution is to stop the process to be debugged in supervisorctl and then run the process temporarily in another terminal for debugging. This would keep the logfiles clean if that is needed.
It could be that your webserver redirects its own stdout (internally) to a log file (i.e. it ignores supervisord's stdout redirection), and that prevents supervisord from controlling where its stdout goes.
To check if this is the case, you can tail -f the log, and see if the output you expected to see in your terminal goes there.
If that's the case, see if you can find a way to configure your webserver not to do that, or, if all else fails, try working with two terminals (one for input, one for output).