Perfect Wrapper (in Python)

I run a configuration management tool which calls /usr/bin/dpkg but does not show the stdout/stderr.
Something goes wrong, and I want to debug the root of the problem.
I want to see all calls to dpkg together with their stdout/stderr.
I moved the original /usr/bin/dpkg to /usr/bin/dpkg-orig and wrote a wrapper:
#!/usr/bin/env python
import os
import sys
import datetime
import subx
import psutil

cmd = list(sys.argv)
cmd[0] = 'dpkg-orig'

def parents(pid=None):
    if pid == 1:
        return '\n'
    if pid is None:
        pid = os.getpid()
    process = psutil.Process(pid)
    lines = [parents(process.ppid())]
    lines.append('Parent: %s' % ' '.join(process.cmdline()))
    return '\n'.join(lines)

result = subx.call(cmd, assert_zero_exit_status=False)

with open('/var/tmp/dpkg-calls.log', 'a') as fd:
    fd.write('----------- %s\n' % datetime.datetime.now())
    fd.write('%s\n' % parents())
    fd.write('stdout:\n%s\n\n' % result.stdout)
    sys.stdout.write(result.stdout)
    fd.write('stderr:\n%s\n' % result.stderr)
    fd.write('ret: %s\n' % result.ret)

sys.stderr.write(result.stderr)
sys.exit(result.ret)
Then I ran the configuration management tool again and searched for non-zero "ret:" lines.
The output:
Parent: /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install openssl-foo-bar-aptguettler.cert
Parent: python /usr/bin/dpkg --force-confold --force-confdef --status-fd 67 --no-triggers --unpack --auto-deconfigure /var/cache/apt/archives/openssl-foo-bar-aptguettler.cert_1-2_all.deb
stdout:
stderr:
dpkg: error: unable to read filedescriptor flags for <package status and progress file descriptor>: Bad file descriptor
ret: 2
This happens because my wrapper is not perfect yet.
The tool which calls dpkg passes an extra file descriptor to it (the --status-fd 67 seen above), but reading from that descriptor does not work through my wrapper.
My goal:
- Capture all calls to dpkg and write them to a logfile (works)
- Write out the parent processes (works)
- The parent process of dpkg should not notice a difference and must not fail as above (does not work yet)
Any idea how to achieve this?

I wrote a simple Python script which solves this:
https://github.com/guettli/wrap_and_log_calls
A wrapper to log all calls to a Linux command.
Particular use case: my configuration management tool calls
/usr/bin/dpkg. An error occurs, but unfortunately the tool does not
show me the whole stdout/stderr, so I have no clue what's wrong.
General use case: wrap a Linux command like /usr/bin/dpkg and log
all calls to it.
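The key detail in the log above is that apt-get hands the child an extra file descriptor (--status-fd 67), which the wrapped dpkg must be able to inherit. A minimal Python 3 sketch of such a pass-through wrapper using only the standard library (the function name, paths, and logging format here are mine, not taken from the linked repository):

```python
import datetime
import subprocess
import sys

def run_logged(cmd, log_path):
    """Log the invocation, run it, and pass through all open fds."""
    with open(log_path, 'a') as log:
        log.write('----------- %s\n' % datetime.datetime.now())
        log.write('cmd: %s\n' % ' '.join(cmd))
    # close_fds=False lets the child inherit descriptors such as the
    # one apt-get passes via --status-fd; capturing stdout/stderr only
    # replaces fds 1 and 2, so the extra descriptor stays untouched.
    result = subprocess.run(cmd, close_fds=False,
                            capture_output=True, text=True)
    sys.stdout.write(result.stdout)
    sys.stderr.write(result.stderr)
    with open(log_path, 'a') as log:
        log.write('stdout:\n%s\n' % result.stdout)
        log.write('stderr:\n%s\n' % result.stderr)
        log.write('ret: %s\n' % result.returncode)
    return result.returncode

if __name__ == '__main__':
    sys.exit(run_logged(['/usr/bin/dpkg-orig'] + sys.argv[1:],
                        '/var/tmp/dpkg-calls.log'))
```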

Related

Python - shell command causes InterfaceError on file download

Recently we replaced curl with aria2c in order to download files faster from our backend servers for later conversion to different formats.
Now, for some reason, we run into the following issue with aria2c:
Pool callback raised exception: InterfaceError(0, '')
It's not clear to us where this InterfaceError occurs or what it actually means. Moreover, we can run the same command manually without any problems.
Please also have a look at our download function:
from os import makedirs
from subprocess import CalledProcessError, run

def download_file(descriptor):
    """
    Creates the WORKING_DIR structure and downloads the descriptor.
    The descriptor should be a URI (processed via aria2c).
    Returns the created resource path.
    """
    makedirs(WORKING_DIR + 'output/', exist_ok=True)
    file_path = WORKING_DIR + decompose_uri(descriptor)['fileNameExt']
    print(file_path)
    try:
        print(descriptor)
        exec_command(f'aria2c -x16 "{descriptor}" -o "{file_path}"')
    except CalledProcessError as err:
        log('DEBUG', f'Aria2C error: {err.stderr}')
        raise VodProcessingException("Download failed. Aria2C error")
    return file_path

def exec_command(string):
    """
    Shell command interface.
    Returns returncode, stdout, stderr.
    """
    log('DEBUG', f'[Command] {string}')
    output = run(string, shell=True, check=True, capture_output=True)
    return output.returncode, output.stdout, output.stderr
Is stdout here maybe misunderstood by Python, which then drops into this InterfaceError?
Thanks in advance.
Since I just wanted aria2c to download files faster (it supports multiple connections), I switched to a tool called "axel" instead. It also supports multiple connections, without the excessive overhead aria2c had, at least in my situation.
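Independent of which download tool is used, running the command without an intermediate shell rules out quoting problems when file names contain spaces or shell metacharacters. A hedged sketch of `exec_command` (same name as in the question, but accepting either a string or an argument list):

```python
import shlex
import subprocess

def exec_command(command):
    """Run a command without an intermediate shell.

    Passing an argument list avoids shell-quoting problems;
    check=True still raises CalledProcessError on a non-zero
    exit status, as in the original download_file().
    """
    args = shlex.split(command) if isinstance(command, str) else command
    output = subprocess.run(args, check=True,
                            capture_output=True, text=True)
    return output.returncode, output.stdout, output.stderr
```

The call site in `download_file` can stay the same, because `shlex.split` handles the quoted `"{descriptor}"` and `"{file_path}"` arguments.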

Live and verbose command output using Python subprocess [duplicate]

I'd like to be able to git clone a large repository using Python, via some library, and importantly I'd like to see the progress of the clone as it happens. I tried pygit2 and GitPython, but they don't seem to show their progress. Is there another way?
You can use RemoteProgress from GitPython. Here is a crude example:
import git

class Progress(git.remote.RemoteProgress):
    def update(self, op_code, cur_count, max_count=None, message=''):
        print('update(%s, %s, %s, %s)' % (op_code, cur_count, max_count, message))

repo = git.Repo.clone_from(
    'https://github.com/gitpython-developers/GitPython',
    './git-python',
    progress=Progress())
Or use this update() function for a slightly more refined message format:
def update(self, op_code, cur_count, max_count=None, message=''):
    print(self._cur_line)
If you simply want to get the clone information, there is no need to install GitPython; you can read it directly from the standard error stream with the built-in subprocess module.
import os
from subprocess import Popen, PIPE, STDOUT

os.chdir(r"C:\Users")  # the repo storage directory you want
url = "https://github.com/USER/REPO.git"  # target repo address

# Note: when passing a list of arguments, shell=True must not be used.
proc = Popen(
    ["git", "clone", "--progress", url],
    stdout=PIPE, stderr=STDOUT, text=True
)
for line in proc.stdout:
    if line:
        print(line.strip())  # now you get all terminal clone output text
You can see the relevant clone options by running git help clone:
--progress
Progress status is reported on the standard error stream by default
when it is attached to a terminal, unless --quiet is specified. This
flag forces progress status even if the standard error stream is not
directed to a terminal.
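The snippet above streams git's progress lines but never checks whether the clone actually succeeded. A small generic helper (the name `stream_command` is mine) that echoes merged stdout/stderr line by line and returns the exit status, so callers can detect failure:

```python
from subprocess import Popen, PIPE, STDOUT

def stream_command(args):
    """Run a command, echoing its merged stdout/stderr as it arrives,
    and return the exit status once the process finishes."""
    proc = Popen(args, stdout=PIPE, stderr=STDOUT, text=True)
    for line in proc.stdout:
        print(line.rstrip())
    return proc.wait()

# Usage sketch: git exits non-zero if the clone failed.
# if stream_command(["git", "clone", "--progress", url]) != 0:
#     raise SystemExit("clone failed")
```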

Running Newman command with subprocess

I have a Newman command that is part of my script. I'd like the entire script to quit (or go back to the main menu) when the collection encounters an error.
import shlex
from subprocess import CalledProcessError, Popen, PIPE
from io import TextIOWrapper

def run_sh(command):
    process = Popen(shlex.split(command), stdout=PIPE)
    for line in TextIOWrapper(process.stdout, newline=""):
        print(line, end="")

cmd = 'newman run "My_Collection.postman_collection.json" --folder "My_Folder" -e ../../Postman/Environments/My_Environment.json -d "CREDS.txt" -r cli,csv -n 1 --reporter-csv-includeBody --reporter-csv-export ./RESPONSES.csv'
run_sh(cmd) is part of a bigger script. I'd like the script not to carry on to the next step if errors are encountered while running the collection. Using try/except won't work, because run_sh itself completes successfully (it runs the collection, but the collection yields errors).
Example of an error:
# failure detail
1. AssertionError Status code is 202
expected response to have status code 202 but got 401
at assertion:0 in test-script
inside "MY FOLDER"
What would be the best way to go about this?
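One way, relying on newman's behaviour of exiting with a non-zero status when a run fails: keep streaming the output as before, then check the return code once the process ends. A sketch (same `run_sh` name as in the question; raising RuntimeError is my choice, any abort mechanism works):

```python
import shlex
from io import TextIOWrapper
from subprocess import Popen, PIPE

def run_sh(command):
    """Stream the command's output, then fail loudly on a non-zero exit.

    newman exits with a non-zero status when any assertion in the
    collection fails, so checking the return code after the output
    stream ends is enough to stop the surrounding script.
    """
    process = Popen(shlex.split(command), stdout=PIPE)
    for line in TextIOWrapper(process.stdout, newline=""):
        print(line, end="")
    ret = process.wait()
    if ret != 0:
        raise RuntimeError(f"command failed with exit code {ret}: {command}")
    return ret
```

The caller can then wrap `run_sh(cmd)` in try/except (or let the exception propagate) to return to the main menu instead of carrying on.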

How to check whether a shell command returned nothing or something

I am writing a script to extract something from a specified path. I am returning those values into a variable. How can I check whether the shell command returned something or nothing?
My Code:
def any_HE():
    global config, logger, status, file_size
    config = ConfigParser.RawConfigParser()
    config.read('config2.cfg')
    for section in sorted(config.sections(), key=str.lower):
        components = dict()  # start with empty dictionary for each section
        # Retrieving the username and password from config for each section
        if not config.has_option(section, 'server.user_name'):
            continue
        env.user = config.get(section, 'server.user_name')
        env.password = config.get(section, 'server.password')
        host = config.get(section, 'server.ip')
        print "Trying to connect to {} server.....".format(section)
        with settings(hide('warnings', 'running', 'stdout', 'stderr'), warn_only=True, host_string=host):
            try:
                files = run('ls -ltr /opt/nds')
                if files != 0:
                    print '{} -- Something'.format(section)
                else:
                    print '{} -- Nothing'.format(section)
            except Exception as e:
                print e
I tried checking for 1 or 0 and True or False, but nothing seems to work. On some servers the path /opt/nds does not exist, so in that case nothing will be in files. I want to differentiate between something being returned to files and nothing being returned.
First, you're hiding stdout.
If you get rid of that, you'll get a string with the output of the command on the remote host. You can then split it by os.linesep (assuming the same platform), but you should also take care of other things like SSH banners and colour codes in the retrieved output.
As perror already commented, the Python subprocess module offers the right tools.
https://docs.python.org/2/library/subprocess.html
For your specific problem you can use the check_output function.
The documentation gives the following example:
import subprocess
subprocess.check_output(["echo", "Hello World!"])
which returns "Hello World!\n".
plumbum is a great library for running shell commands from a Python script. E.g.:
from plumbum.cmd import ls
from plumbum import ProcessExecutionError

cmd = ls['-ltr']['/opt/nds']  # construct the command
try:
    files = cmd().splitlines()  # run the command
    if ...:
        print(...)
except ProcessExecutionError:
    # command exited with a non-zero status code
    ...
On top of this basic usage (and unlike the subprocess module), it also supports things like output redirection and command pipelining, and more, with easy, intuitive syntax (by overloading Python operators, such as '|' for piping).
In order to get more control over the process you run, you need to use the subprocess module.
Here is an example:
import subprocess
task = subprocess.Popen(['ls', '-ltr', '/opt/nds'], stdout=subprocess.PIPE)
print(task.communicate())
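With the standard subprocess module, the "something or nothing" check can be made explicit by looking at both the exit status and the output. A hedged sketch run locally rather than over Fabric (the function name is mine):

```python
import subprocess

def listing_or_none(path):
    """Return the `ls -ltr` output for `path`, or None when the
    command failed (e.g. the directory does not exist) or printed
    nothing at all."""
    result = subprocess.run(['ls', '-ltr', path],
                            capture_output=True, text=True)
    if result.returncode != 0 or not result.stdout.strip():
        return None
    return result.stdout
```

Distinguishing the empty case by return value (rather than comparing the output to 0, as in the question) makes the caller's if/else straightforward.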

python-daemon not logging stdout redirection

I am using python-daemon in code that has print statements in it. I want to send them to a file, so I ran the following:
python server.py >> log.out
However, nothing goes into log.out.
Can anyone tell me what I need to do?
Thanks.
The DaemonContext object allows redirecting stdout/stderr/stdin when you create the object. For example:
import os
import daemon

if __name__ == '__main__':
    here = os.path.dirname(os.path.abspath(__file__))
    out = open('checking_print.log', 'w+')
    with daemon.DaemonContext(working_directory=here, stdout=out):
        for i in range(1, 1000):
            print('Counting ... %s' % i)
You should be able to cat checking_print.log and see the output from the print statements.
A good reference for the DaemonContext object is PEP 3143.
If you have an error in your code, it will not be written to the file, because stderr is not redirected. See http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-3.html
Try creating this file:
print('stdout')
raise Exception('stderr')
If it's already running as a daemon you'll most likely need to force redirection of STDOUT, STDERR etc. You can read more on I/O Redirection here.
python server.py 2>log.out >&2
