I have a Python script that runs continuously in the background and prints some logs on standard output.
However, if I log in as the same user on the same machine via ssh, I cannot see that output, since (I guess) I have opened a different shell.
Is there any way to make the standard output of this process visible in every shell where I am logged in with the same username as the one that launched the process?
Alternatively, I thought of redirecting the output to a file and opening that file… however, I would prefer to avoid such a solution.
Thanks in advance for any suggestion…
In UNIX you could write to all shells using the command wall. I'm not sure if there is a Python binding for wall though.
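For what it's worth, wall reads its message from standard input, so no dedicated binding is really needed; a minimal sketch using subprocess:
import subprocess

def broadcast(message: str) -> None:
    # `wall` with no arguments reads the message from stdin and writes
    # it to every logged-in user's terminal
    subprocess.run(["wall"], input=message.encode(), check=True)

broadcast("log line from the background script")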
I have seen that indeed screen does what I need...
Thanks Ignacio
I'm using Node to execute a Python script. The Python script SSHes into a server and then runs a Pig job. I want to be able to get the standard out from the Pig job and display it in the browser.
I'm using the Pexpect library to make the SSH calls, but it will not print the output of the Pig call until it has completed entirely (at least the way I have it written). Any tips on how to restructure it?
child.sendline(command)
child.expect(COMMAND_PROMPT)  # blocks until the prompt reappears, i.e. until the job is done
print(child.before)           # everything the job printed before the prompt
I know I shouldn't be expecting the command prompt (because that will only show up when the process ends), but I'm not sure what I should be expecting.
Repeating my comment as an answer, since it solved the issue:
If you set child.logfile_read to a writable file-like object (e.g. sys.stdout), Pexpect will forward the output there as it reads it.
import sys

child.logfile_read = sys.stdout  # echoes everything Pexpect reads, as it reads it
child.sendline(command)
child.expect(COMMAND_PROMPT)
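For context, a fuller, hedged sketch of the whole flow; the host, prompt regex, and Pig command below are placeholders, and on Python 3 you should pass encoding='utf-8' to spawn so that str, not bytes, is written to sys.stdout:
import sys
import pexpect

COMMAND_PROMPT = r"\$ "  # assumed shell prompt pattern

child = pexpect.spawn("ssh user@server", encoding="utf-8")
child.logfile_read = sys.stdout             # stream output as soon as it is read
child.expect(COMMAND_PROMPT)
child.sendline("pig myjob.pig")             # the long-running job
child.expect(COMMAND_PROMPT, timeout=None)  # wait for it, echoing output live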
I'm working on a script that should check on certain system events (like opening of a file, or changing of a registry key) and start further actions depending on that. But I haven't found a clean way to get the information into my script.
I'm looking for a way to get the output of Sysinternals Process Monitor into another program. This should happen without user interaction, in close to real time, so saving to a CSV/XML file and then reading it doesn't work.
I've looked at using the backing file, but that is in Process Monitor's PML format, which I haven't found documented anywhere.
Does anybody know a way how I can get the output of Process Monitor into my script?
Or an other (not too messy) way to get a real time list of opened files, registry keys etc into a python program?
Thanks!
If you want to parse stdout or a file, and you're OK with a 32-bit-only solution, try Dr Strace or ntstrace.
You could also look into ospy or another ProcMon alternative. ospy is open source, so at the very least you could look at its source code for capturing events.
Here is a list of alternatives to ProcMon.
I had a program that scraped certain data from certain web pages and, when the web pages changed, acted accordingly.
How would one set up the program so it continues to run in the background?
I don't need any specifics; I'm just really confused about this concept and would appreciate whatever help anybody has to offer.
start path-to-pythonw.exe your-code.py
pythonw means "run without a console window". start means "launch in the background".
If your Python is installed system-wide, you can probably just run start your-code.pyw, since the .pyw extension is associated with pythonw.exe.
Remember that you cannot use print (to stdout) in this case, because there is no console to print to.
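If you still want diagnostics, a hedged alternative is to write them to a log file instead of printing; the file name below is arbitrary:
import logging

logging.basicConfig(filename="scraper.log",
                    level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
logging.info("background scraper started")  # goes to the file, not a console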
If you want to be able to just start your process and have it background itself and do a few more typical things that "daemon" processes do in Unix, look here: How do you create a daemon in Python?
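For a rough idea of what such a daemon does under the hood, here is a bare-bones sketch of the classic Unix double-fork; a real daemon should also redirect its standard streams, write a pid file, and handle signals:
import os
import sys

def daemonize():
    if os.fork() > 0:
        sys.exit(0)   # parent returns to the shell immediately
    os.setsid()       # start a new session, detaching from the terminal
    if os.fork() > 0:
        sys.exit(0)   # first child exits; the grandchild lives on
    os.chdir("/")     # don't keep any directory pinned
    os.umask(0)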
There is no concept of "background" in Windows. But the UNIX shell concept of a background process can be reasonably emulated by running your Python script as a Windows service. There are a couple of suggestions in this question: Is it possible to run a Python script as a service in Windows? If possible, how?
For casual use, I suggest that you learn how to use srvany from the second answer.
You simply need to leave your program running! Please google "python daemon" and see how to implement a persistent background process in Python.
Now, you cannot know when a website changes unless you poll it. If the website is well designed, the page you are trying to poll will have a "Last-Modified" header; you can make a "HEAD" request every so often (be nice: don't poll like crazy) and act when the Last-Modified value is newer than the one on record. If the site is not well designed, it will not have a reliable Last-Modified or ETag header; in that case you will have to fetch the page, parse it, and check for changes yourself.
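As a minimal sketch of that polling loop, where the URL and the ten-minute interval are placeholders and the comparison is simplified to "the header changed":
import time
import urllib.request

URL = "https://example.com/page"
last_seen = None

while True:
    req = urllib.request.Request(URL, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        modified = resp.headers.get("Last-Modified")
    if modified is not None and modified != last_seen:
        if last_seen is not None:
            print("page changed, act accordingly")
        last_seen = modified
    time.sleep(600)  # be nice: don't poll like crazy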
Cheers.
I am currently working on a project where the test script has to connect to many (3-10) remote computers over SSH and do some stuff there.
I started to use pexpect; it is as simple as can be, and it works fine.
I want to see the communication during the test. I know it is possible to redirect the log to the screen, but then the logs from the different computers get mixed together.
What I would like is to open a new terminal window (or console, or whatever) for every new spawn object. That way I could watch each connection's communication in its own window. Additionally, I would like to keep the possibility of spawn.interact() in every window.
I feel that this should be possible somehow, but I don't know how. I think some file handle (or pipe) would have to be passed to the new window somehow(?)
(SecureCRT does something like this: it has tabbed console windows and can access them separately, but it is a commercial product.)
Or let me make the problem more simple.
If I do this, I can open a new shell in a new window:
from subprocess import Popen

p = Popen(["cygstart", "bash"])  # opens a new bash in a new window under Cygwin
How can I read from and write to this shell from my (parent) script, so that I can see the communication in this new window?
I would really appreciate it, if one of you could point me in the right direction.
It is enough if you tell me what to read or what to search for (on Google), because I have not found anybody with this kind of problem.
The environment is cygwin.
Thanks in advance
br:drv
Have you tried using the logfile parameter?
import pexpect

child = pexpect.spawn('some_command')  # on Python 3, add encoding='utf-8' so str, not bytes, is logged
mylog = open('/tmp/mylog', 'w')
child.logfile = mylog  # every byte sent and received is copied here
This will automatically log all communication to the file, including commands you enter after calling spawn.interact()
More info available on the website: http://pexpect.sourceforge.net/pexpect.html
Search for 'logfile' to find the relevant documentation.
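To get the separate-window behaviour you describe, one hedged approach under Cygwin (building on your own cygstart observation) is to give each spawn its own log file and tail it in a fresh terminal:
import subprocess
import pexpect

def spawn_with_window(command, logpath):
    # spawn with an encoding so the text-mode log file accepts the writes
    child = pexpect.spawn(command, encoding="utf-8")
    child.logfile = open(logpath, "w")  # full transcript of this session
    # open a new Cygwin terminal that follows the transcript live
    subprocess.Popen(["cygstart", "bash", "-c", "tail -f " + logpath])
    return child

host1 = spawn_with_window("ssh user@host1", "/tmp/host1.log")
host2 = spawn_with_window("ssh user@host2", "/tmp/host2.log")
Each child still supports interact() from the parent script, while the tail windows show the per-host traffic separately.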
I was working with Python in a Linux terminal. When I typed:
help(somefunction)
It printed the appropriate output, but then my screen was stuck, and at the bottom of the terminal was "(end)".
How do I get unstuck? Thanks in advance.
The standard on GNU (or other Unix-like) systems is to use the environment variable PAGER for the command that should receive output for viewing one screenful ("page") at a time.
Mine is set to:
$ echo $PAGER
less
Yours might be set to more, or a different command, or not set at all in which case a system-wide default command will be used.
It sounds like yours is modelled after the more program. The program is showing you page-by-page output, and in this case telling you you're at the end.
Most of them (basically, any pager more modern than more) allow you to go forward and backward in the output by using the cursor control keys (arrows and PgUp/PgDown), and many other operations besides.
Since you can do all these things wherever you are in the output, the program needs an explicit command from you to know that you're done navigating the output. In all likelihood that command is the keypress q.
For more information on how to drive your pager, e.g. less, read its manpage with the command man less (which, of course, will show pages of output using the pager program :-)
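If you would rather skip paging entirely, one hedged option is to point PAGER at cat before the first paged help() call; pydoc consults the variable lazily, so it must be set early in the session:
import os

print(os.environ.get("PAGER", "not set; a system default will be used"))
os.environ["PAGER"] = "cat"  # disable paging: output scrolls straight past
help(len)                    # printed directly, no "(end)" prompt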
That program uses your pager, which is by default more. You can exit just by pressing q.