I used Python 3.6 to write a simple HTTP server that redirects all requests.
The file I wrote can be found here.
I can see output in both Windows 8.1 CMD and Ubuntu 16.04.3 Bash.
However, whichever of the methods below I try, it doesn't work: the log is never saved to the file.
nohup python3 ./filename.py > ./logfile 2>&1 &
python3 ./filename.py > ./logfile 2>&1 &
setsid ./filename.py > ./logfile 2>&1 &
I tried to use:
import sys
logfile = open('logfile.log','w')
sys.stdout = logfile
sys.stdin = logfile
sys.stderr = logfile
It didn't work.
By default, Python's stdout and stderr are buffered. As other responders have noted, if your log files are empty then (assuming your logging is correct) the output has not been flushed.
The link to the script is no longer valid, but you can try running your script either as python3 -u filename.py or the equivalent PYTHONUNBUFFERED=x python3 filename.py. This causes the stdout and stderr streams to be unbuffered.
A full example that uses the standard library's http.server module to serve files from the current directory:
PYTHONUNBUFFERED=x python3 -m http.server &> http.server.log & echo $! > http.server.pid
All output (stdout & stderr) is redirected to http.server.log, which can be tailed, and the process ID of the server is written to http.server.pid so that you can kill the process by kill $(cat http.server.pid).
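An alternative, if you prefer not to rely on -u or the environment variable, is to flush from inside the script itself. A minimal sketch (not the original script, which is no longer available):
import time

# Flush each log line explicitly so it reaches the redirected log file in
# real time, without needing -u or PYTHONUNBUFFERED.
for i in range(3):
    print("handled request", i, flush=True)  # flush=True pushes the line out immediately
    time.sleep(1)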
I've tried your code on Ubuntu 16.04 and it worked like a charm.
import sys
logfile = open('logfile.log','w')
sys.stdout = logfile
sys.stdin = logfile
sys.stderr = logfile
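For what it's worth, the usual reason this approach looks like it "doesn't work" is that the file object buffers writes until the buffer fills or the file is closed. A sketch of the same idea with line buffering enabled (my suggestion, not part of the original code):
import sys

# buffering=1 enables line buffering for text-mode files, so each completed
# line shows up in logfile.log right away instead of sitting in the buffer.
logfile = open('logfile.log', 'w', buffering=1)
sys.stdout = logfile
sys.stderr = logfile
# Reassigning sys.stdin to a write-mode file is unnecessary and can be dropped.

print("this line appears in logfile.log immediately")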
Related
I have a Python script (myscript.py) as follows:
#!/bin/python
import os
import optparse
import subprocess
import sys
sys.stdout.flush()
print("I can see this message on Jenkins console output")
cmd="sshpass -p 'xxx' ssh test#testmachine 'cmd /c cd C:\stage && test.bat'"
retval=subprocess.call(cmd,shell=True)
print retval
In Jenkins, I have a job with an "Execute shell" step as follows:
#!/bin/sh
./myscript.py
Problem:
Jenkins console shows only "I can see this message on Jenkins console output".
Any output from the subprocess call is not printed to the console.
If I SSH (PuTTY) to Server A and run the same command (./myscript.py) in a shell, I can see the output of the subprocess call.
How can I get the output of the subprocess call onto the Jenkins console?
FYI: As you can see from my command, the subprocess call runs a batch file on Windows; Jenkins is running on Linux; there is SSH set up between the two machines.
Edit:
My test.bat looks like this:
echo off
RMDIR /S /Q C:\Test
IF %ERRORLEVEL% NEQ 0 (
ECHO Could not delete
EXIT /b %ERRORLEVEL%
)
If I run this batch file locally on the Windows server, it returns 1 (because I am holding a file open in the Test folder).
But when the Python script calls this batch file through the subprocess call, all I get is a zero for retval.
Why is this, and how do I fix it? If I can capture the correct retval, I can make the Jenkins job fail.
Edit 12/12:
Hello!! Anybody! Somebody! Help!
I wonder if it has anything to do with stdout being buffered.
Can you try setting PYTHONUNBUFFERED before running your command?
export PYTHONUNBUFFERED=true
In my Jenkins environment, executing Python scripts with the unbuffered option makes the output appear immediately, like this:
python3 -u some_script.py
More information is in the help output (python3 --help):
-u : force the stdout and stderr streams to be unbuffered;
this option has no effect on stdin; also PYTHONUNBUFFERED=x
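Independent of PYTHONUNBUFFERED, explicitly flushing around the subprocess call also keeps the parent's prints from being lost or reordered when Jenkins reads them through a pipe. A rough sketch (the echo command is just a placeholder for the sshpass/ssh line from the question):
import subprocess
import sys

print("about to run the remote command")
sys.stdout.flush()  # push buffered output out before the child writes to the same pipe

cmd = "echo output-from-child"  # placeholder for the real sshpass/ssh command
retval = subprocess.call(cmd, shell=True)

sys.stdout.flush()
print("child exited with %d" % retval)
sys.exit(retval)  # a non-zero exit code marks the Jenkins build as failed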
TL;DR
The fix is to use some conditional execution (the || operator) on rmdir to fix the errorlevel being returned.
Investigation
This was a corker of a bug, with quite a few twists and turns! We initially suspected that the stdout chain was broken somehow, so we looked into that by using explicit pipes in Popen, and then by removing sshpass from your command so that the output came from ssh directly.
However, that didn't do the trick, so we moved on to looking at the return code of the command. With sshpass removed, ssh should return the result of the command that was run. However, this was always 0 for you.
At this point, I found a known bug in Windows that rmdir (which is the same as rd) doesn't always set errorlevel correctly. The fix is to use some conditional execution (the || operator) on rmdir to fix up the errorlevel.
See batch: Exit code for "rd" is 0 on error as well for full details.
When you execute your script in a shell, Python passes your shell's STDOUT to the subprocess as its STDOUT, so everything the subprocess writes is printed to your terminal. I'm not sure why, but when you're executing in Jenkins, the subprocess does not inherit the shell's STDOUT, so its output is not displayed.
In all likelihood, the best way to solve your problem is to PIPE the STDOUT (and STDERR for good measure) and print it after the process ends. Also, if you exit with the subprocess's exit code and that code is not 0, Jenkins will mark the build as failed.
import subprocess
import sys

p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, shell=True)
out, err = p.communicate()  # read both streams fully and wait for the process;
                            # avoids the deadlock wait() + read() can hit on large output
print('Got the following output from the script:\n' + out.decode())
print('Got the following errors from the script:\n' + err.decode())
print('Script returned exit code: %d' % p.returncode)
sys.exit(p.returncode)
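If Python 3.5+ happens to be available on the Jenkins agent, subprocess.run is a more compact way to express the same thing; a sketch under that assumption:
import subprocess
import sys

cmd = "echo output-from-child"  # placeholder for the sshpass/ssh command from the question
result = subprocess.run(cmd, shell=True,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print('Got the following output from the script:\n' + result.stdout.decode())
print('Got the following errors from the script:\n' + result.stderr.decode())
sys.exit(result.returncode)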
I have a question about bash syntax regarding launching scripts from within a bash script.
My questions are:
I've seen the following syntax:
#!/bin/bash
python do_something.py > /dev/null 2>&1 &
Can you please explain what is directed to /dev/null, and what 2>&1 means when /dev/null has already been given before it?
In addition, if I have a line like:
python do_something.py > 2>&1 &
how is that different?
If I have the same Python file in many paths, how can I tell the processes apart after running ps -ef | grep python?
When I do so, I get a list of processes that are all called do_something.py; it would be nice to have the full execution path for each PID. How can I do that?
NOTE: The python file launched is writing its own log files.
OK, disclaimer: I don't have access to bash right now, so I might be wrong.
Let's break your command: python do_something.py > /dev/null 2>&1 &
python do_something.py will run your command
> /dev/null will redirect stdout to /dev/null
2>&1 will redirect stderr to stdout
& will fork your process and run in background
So your command will discard stdout/stderr and run in the background, which
is equivalent to the command python do_something.py >& /dev/null & [1][2]
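For comparison, roughly the same plumbing expressed from Python with the subprocess module (do_something.py is the script from your question):
import subprocess

# stdout -> /dev/null, stderr -> wherever stdout points (also /dev/null);
# Popen returns immediately, so the child keeps running in the background.
p = subprocess.Popen(['python', 'do_something.py'],
                     stdout=subprocess.DEVNULL,
                     stderr=subprocess.STDOUT)
print('started background process with pid', p.pid)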
python do_something.py > 2>&1 &:
> 2 will redirect stdout to a file named 2
>&1 will redirect stdout to stdout (yes stdout to stdout)
& will fork your process and run in background
So this command is almost equivalent to python do_something.py >2 &:
it redirects the output to a file named 2 (e.g.: echo 'yes' > 2>&1)
Note: >&1 here duplicates stdout onto itself, so it is effectively a no-op.
Since you ran your command with &, it is forked and runs in the background,
so I'm not aware of any way to do it in that case. You can still look in the
/proc directory [3] to see which directory your command was run from, though.
[1]: What is the difference between &> and >& in bash?
[2]: In the shell, what does “ 2>&1 ” mean?
[3]: ls -l /proc/$PROCESSID/cwd
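Building on [3], a Linux-only sketch that lists every do_something.py process together with the directory it was started from, so identically named scripts can be told apart:
import os

for pid in filter(str.isdigit, os.listdir('/proc')):
    try:
        with open('/proc/%s/cmdline' % pid, 'rb') as f:
            # cmdline is NUL-separated; turn it back into a readable string
            cmdline = f.read().replace(b'\x00', b' ').decode()
        if 'do_something.py' in cmdline:
            print(pid, os.readlink('/proc/%s/cwd' % pid), cmdline)
    except OSError:
        continue  # process exited or we lack permission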
1) stdout (standard output) is redirected to /dev/null, and stderr (error messages) is then redirected to wherever stdout now points, i.e. also to /dev/null, so neither reaches the console.
1>filename : Redirect stdout to file "filename."
1>>filename: Redirect and append stdout to file "filename."
2>filename : Redirect stderr to file "filename."
2>>filename: Redirect and append stderr to file "filename."
&>filename : Redirect both stdout and stderr to file "filename."
3) Using the ps auxww flags, you will see the full command line (including the script's path) both in your terminal window and from shell scripts. From the ps manual:
-w Wide output. Use this option twice for unlimited width.
Answers:
1, 2. > redirects whatever the command (in your case python do_something.py) prints to stdout into a file called /dev/null. /dev/null is a kind of black hole: whatever you write to it disappears.
2>&1 redirects the output of stderr (whose fd is 2) to stdout (whose fd is 1).
Refer to I/O redirection for more info about redirections.
Refer to this link for more info about /dev/null.
I have a Python 2.7.10 script that looks like this:
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())
logger.info("Hello, world!");
Is it possible to somehow run this script from PowerShell on a Windows machine so that it produces no output? I've tried redirecting output to a file:
C:\Python2.7\python.exe C:\Users\User\script.py > output.txt
But it didn't help, and the script still writes the Hello, world! string to the console.
Python's logging.StreamHandler writes to stderr by default, so redirecting only stdout is not enough; you need to redirect all output to null.
In cmd:
command > nul 2>&1
or, to silence just stderr:
command 2> nul
Finally, to an output file:
command > a.txt 2>&1
In PowerShell:
2>&1>$null
2>&1 | out-null
From:
https://serverfault.com/questions/132963/windows-redirect-stdout-and-stderror-to-nothing
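The reason the stdout-only redirect did not help is that logging.StreamHandler defaults to sys.stderr. A small sketch that makes this explicit, in case editing the script is an option:
import logging
import sys

logger = logging.getLogger()
logger.setLevel(logging.INFO)

# StreamHandler() with no argument writes to sys.stderr, which is why
# "> output.txt" (stdout only) did not capture the message.
logger.addHandler(logging.StreamHandler(sys.stdout))   # send it to stdout instead
# logger.addHandler(logging.NullHandler())             # or discard it entirely

logger.info("Hello, world!")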
I'm trying to run a python script with the nice level set.
nice -n 5 python3 blah.py
runs as expected and sends text output to the screen. However, I would like to pipe the output to a text file and run this all in the background so I can go and check on the progress remotely.
However,
nice -n 5 python3 blah.py > log.txt &
creates the log file log.txt but doesn't write anything to it, so I'm not sure where the standard output is going or how to direct it to my text file.
I eventually solved this using the command
nice -n 5 python3 -u blah.py >log.txt &
-u forces the stdout and stderr streams to be unbuffered (it has no effect on stdin). This allows the output of the Python script to be written to the text file while the process is running.
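On Python 3.7+ you could also make stdout line-buffered from inside the script instead of passing -u; a sketch (the loop stands in for blah.py's real work):
import sys
import time

# Switch stdout to line buffering so each print reaches log.txt as soon as
# the line is complete, even when stdout is redirected to a file.
sys.stdout.reconfigure(line_buffering=True)

for step in range(3):
    print("progress:", step)
    time.sleep(1)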
I'm guessing you're running the command over SSH and want to log out between starting it and checking the log. To do this, run:
nohup nice -n 5 python3 blah.py > log.txt &
This prevents the program from being killed when you log out. nohup also redirects stderr to stdout, which might also be part of what's causing an empty log.txt file.
I've tried "nosetests p1.py > text.txt" and it is not working.
What is the proper way to pipe this console output?
Try:
nosetests -s p1.py > text.txt 2>&1
One last, obvious tip: if you are not in the test file's directory, add the path before the .py file.
I want to add more detail here.
On my version (v1.3.7), nose logs to stderr instead of the expected stdout. Why nose logs to stderr instead of stdout is beyond me. So the solution is to redirect the stderr stream to your file; the > redirect operator sends only stdout by default. The --nocapture / -s flag stops nose from capturing your own print statements.
$ nosetests -s -v test/ > stdout.log 2> stderr.log
One thing to note: although stderr seems to be flushed on each write, stdout does not get flushed, so if you are tailing the stdout.log file you will not see output until nose or the OS decides to flush. If you think it's not working, try exiting the test runner, which will cause stdout to flush.
See this answer on redirecting linux streams.
The -s parameter means stdout is not captured.