All Maya script logs and errors are printed in the Script Editor's history tab; this is the output of every command and Python script. To make debugging scripts easier, I want all of these logs sent somewhere on a server. How can I intercept the output and hand it to my own script? From there I will do whatever is necessary, and the output will go either to a remote console or to files on the server.
The task is to intercept the output. How do I do it?
You can also redirect Script Editor history using Maya's scriptEditorInfo command.
An example usage of this would be something like:
import maya.cmds as cmds
outfile = r'/path/to/your/outfile.txt'
# begin output capture
cmds.scriptEditorInfo(historyFilename=outfile, writeHistory=True)
# stop output capture
cmds.scriptEditorInfo(writeHistory=False)
There is also cmdFileOutput, which you can either call interactively or enable/disable via the MAYA_CMD_FILE_OUTPUT environment variable; see the Maya documentation.
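A sketch of the interactive form, based on the documented open/close flags (the path is a placeholder):
import maya.cmds as cmds
# open a log file; cmdFileOutput returns an index used to close it later
log_index = cmds.cmdFileOutput(open=r'/path/to/your/cmdlog.txt')
# ... everything echoed to the Script Editor is now also written to the file ...
cmds.cmdFileOutput(close=log_index)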
Lastly, you can start Maya with the -log flag to write the Output Window text to another location. This does not capture the Script Editor output, but it may be all you need given what you are trying to log.
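For example, assuming maya is on your PATH and the path is writable:
maya -log /tmp/mayaOutput.log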
It sounds like you need a real-time error tracker like Sentry. Sentry provides logging modules made exactly for this purpose: server/client logging with richer error and debug handling.
Here is one way to reroute the Maya Script Editor output.
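A minimal sketch, redirecting Maya's Python output through the standard logging module to a remote socket; the host and port are placeholders for your own collector, and only text that flows through sys.stdout/sys.stderr is captured:
import sys
import logging
import logging.handlers

logger = logging.getLogger('maya_remote')
logger.setLevel(logging.DEBUG)
# placeholder collector; SocketHandler ships pickled LogRecords, so the
# server side needs to unpickle them
logger.addHandler(logging.handlers.SocketHandler('log.example.com', 9020))

class StreamToLogger(object):
    """File-like object that forwards writes to a logger."""
    def __init__(self, logger, level):
        self.logger = logger
        self.level = level
    def write(self, message):
        message = message.strip()
        if message:
            self.logger.log(self.level, message)
    def flush(self):
        pass

# replace Maya's output streams so Python prints and errors are forwarded
sys.stdout = StreamToLogger(logger, logging.INFO)
sys.stderr = StreamToLogger(logger, logging.ERROR)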
Related
I am trying to use Paramiko to access the input and output of a program running inside of a screen session. Let's assume there is a screen session screenName with a single window running the program whose I/O I wish to access. When I try something like client.exec_command('screen -r screenName'), for example, I get the message "Must be connected to a terminal" on stdout.
Searching around, I learned that for some programs, one needs to request a "pseudo-terminal" by adding the get_pty=True parameter to exec_command. When I try this, my code just hangs, and my terminal becomes unresponsive.
What else can I try to make this work?
To provide a little more context to my problem, I am creating a web app that allows users to perform basic operations that I have previously done only in a PuTTy terminal. A feature I wish to have is a kind of web terminal designed to interact with the input and output of this particular program. Effectively all I want to do is continuously pipe output from the program to my backend, and pipe input to the program.
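For what it's worth, one approach that often works with interactive programs like screen is paramiko's invoke_shell(), which allocates a PTY and hands you a channel you can poll without blocking. A sketch, assuming client is a connected SSHClient and the session screenName exists:
import time

# request an interactive shell with a PTY, which screen requires
channel = client.invoke_shell()
channel.send('screen -r screenName\n')

# poll for output instead of blocking on a read that never returns
time.sleep(1)
while channel.recv_ready():
    print(channel.recv(4096).decode())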
I am trying to automate a scenario where I have a terminal window open with multiple tabs. I am able to move between the tabs, but my problem is how to pass control to another terminal tab while my Perl script runs in a different tab.
Example: I have a terminal with Tab1, Tab2, Tab3, and Tab4 open. I run the Perl script in Tab3 and want to pass some commands to Tab1. How can I do this?
I currently use the GUI tool X11::GUITest and keyboard shortcuts to switch between tabs; any alternative suggestion is welcome. My ultimate aim is to pass control to a different tab.
The main thing to understand is that each tab runs a different instance of the terminal and, more importantly, a different instance of the shell (worth mentioning, since your choice of words suggested this was not clear). So "passing control" in such a scenario will most probably involve inter-process communication (IPC).
Now that opens up a range of possibilities. You could, for example, have a Python/Perl script running in the target shell (tab) that listens on a Unix socket for commands in the form of text, which the script can then execute. In Python you have the subprocess module (call, Popen) and os (exec*) for this. If you have to transfer control back to the calling process, I would suggest subprocess, since it also lets you send return codes back; a sketch of the listener follows.
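A rough sketch of that listener in Python (the socket path is arbitrary); run it in the target tab, then have the script in Tab3 connect to the socket and send command text:
import os
import socket
import subprocess

SOCK_PATH = '/tmp/tab1.sock'  # arbitrary path, must match the sender
if os.path.exists(SOCK_PATH):
    os.remove(SOCK_PATH)

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCK_PATH)
server.listen(1)

while True:
    conn, _ = server.accept()
    command = conn.recv(4096).strip()
    if command:
        # run the received text as a shell command in this tab's shell
        rc = subprocess.call(command, shell=True)
        conn.sendall(str(rc))  # report the return code back to the sender
    conn.close()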
Switching between tabs is a different action and has no consequences for the calling/called processes. You have already mentioned how you intend to do that.
I am using Supervisor (process controller written in python) to start and control my web server and associated services. I find the need at times to enter into pdb (or really ipdb) to debug when the server is running. I am having trouble doing this through Supervisor.
Supervisor allows the processes to be started and controlled with a daemon called supervisord, and offers access through a client called supervisorctl. This client allows you to attach to one of the foreground processes that has been started using a 'fg' command. Like this:
supervisor> fg webserver
All logging data gets sent to the terminal. But I do not get any text from the pdb debugger. It does accept my input so stdin seems to be working.
As part of my investigation I was able to confirm that neither print nor raw_input sends any text out either; but in the case of raw_input, stdin is indeed working.
I was also able to confirm that this works:
sys.stdout.write('message')
sys.stdout.flush()
I thought that when I issued the fg command it would be as if I had run the process in the foreground in a standard terminal... but it appears that supervisorctl is doing something more. Regular printing does not flush, for example. Any ideas?
How can I get pdb, standard prints, etc to work properly when connecting to the foreground terminal using the fg command in supervisorctl?
(Possible helpful ref: http://supervisord.org/subprocess.html#nondaemonizing-of-subprocesses)
It turns out that Python buffers its output stream by default. In certain cases (such as this one) this results in output being held back in the buffer.
Idioms like this exist:
import os, sys
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)  # reopen stdout with a zero-length buffer (Python 2)
to force the buffer to zero.
But the better alternative I think is to start the base python process in an unbuffered state using the -u flag. Within the supervisord.conf file it simply becomes:
command=python -u script.py
ref: http://docs.python.org/2/using/cmdline.html#envvar-PYTHONUNBUFFERED
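As that reference notes, setting PYTHONUNBUFFERED has the same effect as -u, so an equivalent supervisord.conf stanza (assuming your program section is named webserver) would be:
[program:webserver]
command=python script.py
environment=PYTHONUNBUFFERED="1"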
Also note that this dirties up your log file - especially if you are using something like ipdb with ANSI coloring. But since it is a dev environment it is not likely that this matters.
If this is an issue - another solution is to stop the process to be debugged in supervisorctl and then run the process temporarily in another terminal for debugging. This would keep the logfiles clean if that is needed.
It could be that your webserver redirects its own stdout (internally) to a log file (i.e. it ignores supervisord's stdout redirection), and that prevents supervisord from controlling where its stdout goes.
To check if this is the case, you can tail -f the log, and see if the output you expected to see in your terminal goes there.
If that's the case, see if you can find a way to configure your webserver not to do that, or, if all else fails, try working with two terminals... (one for input, one for output)
I wrote a simple script using python-daemon which prints to sys.stdout:
#!/usr/bin/env python
#-*- coding: utf-8 -*-
import daemon
import sys
import time

def main():
    with daemon.DaemonContext(stdout=sys.stdout):
        while True:
            print "matt daemon!!"
            time.sleep(3)

if __name__ == '__main__':
    main()
The script works as I would hope, except for one major flaw--it interrupts my input when I'm typing in my shell:
(daemon)modocache $ git clomatt daemon!!
matt daemon!!ne
matt daemon!! https://github.com/schacon/cowsay.git
(daemon)modocache $
Is there any way for the output to be displayed in a non-intrusive way? I'm hoping for something like:
(daemon)modocache $ git clo
matt daemon!! # <- displayed on new line
(daemon)modocache $ git clo # <- what I had typed so far is displayed on a new line
Please excuse me if this is a silly question, I'm not too familiar with how shells work in general.
Edit: Clarification
The reason I would like this script to run daemonized is that I want to provide updates to the shell user from within the shell, such as printing weather updates to the console in a non-intrusive way. If there is a better way to accomplish this, please let me know. But the purpose is to display information from within the terminal (not via, say, Growl notifications), without blocking.
If it doesn't need to be an "instant" notification, and you can wait until the next time the user runs a command, you can bake all kinds of things into your bash shell prompt. Mine tells me the time and the git repository status for the directory I'm in, for example.
The shell variable for "normal user" shell prompts is PS1, so Googling for bash PS1 or bash prompt customisation will get you some interesting examples.
Here are some links:
Some basic customisations
A more complex example: git bash prompt
In general, you can include the output of any arbitrary script in the prompt string. Be aware, however, that high-latency commands will delay printing of the prompt string until they can be evaluated, so it may be a good idea to cache information. (For example, if you want to display the weather from a weather website, don't make your bash prompt go out and retrieve the webpage every time the prompt is displayed!)
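As a sketch of the caching idea (the file path is a placeholder, and something like a cron job is assumed to refresh it periodically):
# show the cached weather at the start of the prompt; reading a file is cheap
PS1='[$(cat /tmp/weather.txt 2>/dev/null)] \u@\h:\w\$ '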
A daemon process, by definition, runs in the background, and therefore it should write to a log file.
So either you redirect its output to a logfile (with a shell redirect, or by handing it over to some syslog daemon), or you make it write to a logfile in the Python code.
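With python-daemon that is a one-line change to the DaemonContext in the question; a sketch, with the log path as a placeholder:
import daemon

log = open('/tmp/mattd.log', 'a+')  # placeholder path
# route the daemon's stdout/stderr to the log file instead of the tty
with daemon.DaemonContext(stdout=log, stderr=log):
    run_forever()  # stands in for the question's while-loop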
Update: see man write and man wall (http://linux.die.net/man/1/write, http://linux.die.net/man/1/wall).
It probably is best practice to write to a log file for daemons. But couldn't you write to stderr and get the behavior desired above, with interwoven lines?
Take a look at the logging library (part of the standard library). This can be made to route debug and runtime data either to the console or a file (or anywhere for that matter) depending on the state of the system.
It provides several log levels, e.g. error, debug, info, and each can be configured differently.
See the documentation for the logging module.
import logging
logging.basicConfig(filename='example.log',level=logging.DEBUG)
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')
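To route output to the console or a file depending on severity, attach multiple handlers with different levels; a sketch, with illustrative names and levels:
import logging

logger = logging.getLogger('myapp')
logger.setLevel(logging.DEBUG)

# everything from DEBUG up goes to the file
file_handler = logging.FileHandler('example.log')
file_handler.setLevel(logging.DEBUG)
logger.addHandler(file_handler)

# only WARNING and above reach the console
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.WARNING)
logger.addHandler(console_handler)

logger.debug('this goes to the file only')
logger.warning('this goes to the file and the console')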
I'm attempting to start a server app (written in Erlang; it opens ports and listens for HTTP requests) from the command line using pexpect (or even directly using subprocess.Popen()).
The app starts fine and logs (via pexpect) to the screen fine, and I can interact with it via the command line as well...
The issue is that the server won't listen for incoming requests. The app listens when I start it manually, by typing commands at the command line; starting it via subprocess/pexpect somehow stops the app from listening...
When I start it manually, "netstat -tlp" shows the app as listening; when I start it via Python (subprocess/pexpect), netstat does not register the app...
I have a feeling it has something to do with the environment, the way Python forks things, etc.
Any ideas?
thank you
basic example:
note:
"-pz" just adds ./ebin to the module search path for the erl VM (library search path)
"-run" runs moduleName, without any parameters.
command_str = "erl -pz ./ebin -run moduleName"
child = pexpect.spawn(command_str)
child.interact() # Give control of the child to the user
All of this stuff works correctly, which is strange. I have logging inside my code and all the log messages are output as they should be. The server wouldn't listen even when I started its process via a bash script, so I don't think it's the Python code that's causing it (that's why I have a feeling it's something about the way the new OS process is started).
It could be to do with the way that command line arguments are passed to the subprocess.
Without more specific code, I can't say for sure, but I had this problem working on sshsplit ( https://launchpad.net/sshsplit )
To pass arguments correctly (in this example "ssh -ND 3000"), you should use something like this:
openargs = ["ssh", "-ND", "3000"]
print "Launching %s" %(" ".join(openargs))
p = subprocess.Popen(openargs, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
This will not only allow you to see exactly what command you are launching, but should correctly pass the values to the executable. Although I can't say for sure without seeing some code, this seems the most likely cause of failure (could it also be that the program requires a specific working directory, or configuration file?).
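To rule out the working-directory and configuration theories, you can pass both the directory and the environment to Popen explicitly (a sketch; the directory is a placeholder):
import os
import subprocess

p = subprocess.Popen(
    ["erl", "-pz", "./ebin", "-run", "moduleName"],
    cwd="/path/to/app",        # placeholder: the directory erl expects to run from
    env=os.environ.copy(),     # hand the child your full shell environment
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)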