Why Does Azure DevOps Server interleave output - python

Sorry in advance, I can't post actual code because of security restrictions at my job, but I'll try to make a contrived example.
I am working with Python 3.6.1, running a module in an Azure Pipeline (ADS 2019). In the module, output is produced using a dictionary with the following structure:
# dummy data; assume files could be in any order in any category
{
    "compliant": ['file1.py', 'file2.py'],  # list of files which pass
    "non-compliant": ['file3.py'],          # list of files which fail
    "incompatible": ['file4.py']            # list of files which could not be tested due to exceptions
}
When a failure occurs, one of our customers wants the script to output the command for a script that can be run to correct the non-compliant files. The program is written similarly to the following:
result = some_func()  # returns the above dict
print('compliant:')
for file in result['compliant']:
    print(file)
print('non-compliant:')
for file in result['non-compliant']:
    print(file)
print('incompatible:')
for file in result['incompatible']:
    print(file)
# prints a string to sys.stderr similar to: python -m script arg1 arg2 ...
# the script that is output is based on the arguments used to call this one
print_command_to_fix(sys.argv)
When run normally I get the correct output, as follows:
#correct output: occurs on bash and cmd
compliant:
file1.py
file2.py
non-compliant:
file3.py
incompatible:
file4.py
python -m script arg1 arg2 arg_to_fix
When I run on the Azure Pipeline, though, the output gets interleaved as follows:
#incorrect output: occurs only on azure pipeline runs
compliant:
python -m script arg1 arg2 arg_to_fix
file1.py
file2.py
non-compliant:
file3.py
incompatible:
file4.py
Whether I use print or sys.stderr.write, it doesn't resolve the interleaving, and I'm assuming print_command_to_fix() is somehow being called asynchronously. But my guess probably isn't accurate, since I haven't been working with ADS or Python for very long.
TL;DR: What am I doing wrong to get the above interleaved output on Pipelines only?
Edit: clarified certain points and fixed typos

Discovered the answer after a few hours of troubleshooting.
ADS tracks both of the program's output streams, but does so asynchronously. The error was caused by outputting to both stdout and stderr. Sending all output to a single stream resolved the issue. The approach I took ended up being something like the following:
result = some_func()  # returns the above dict
output = []
output.append('compliant:')
output.extend(result['compliant'])
output.append('non-compliant:')
output.extend(result['non-compliant'])
output.append('incompatible:')
output.extend(result['incompatible'])
# returns a string similar to: python -m script arg1 arg2 ...
# the script that is output is based on the arguments used to call this one
output.append(format_command_to_fix(sys.argv))
print('\n'.join(output))
Alternatively, I imagine other techniques that keep the output on a single stream (or otherwise synchronize it) should resolve it as well; one such option is sketched below.
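For example, here is a minimal sketch of the redirect approach, assuming (as in the question) that print_command_to_fix writes to sys.stderr:
# Sketch only: route stderr into stdout so the pipeline captures a single stream.
import contextlib
import sys

result = some_func()  # returns the above dict
with contextlib.redirect_stderr(sys.stdout):
    for category in ('compliant', 'non-compliant', 'incompatible'):
        print(category + ':')
        for file in result[category]:
            print(file)
    print_command_to_fix(sys.argv)  # its sys.stderr writes now land on stdout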

Related

How to run multiple python scripts using single python(.py) script

I have written multiple Python scripts that are to be run sequentially to achieve a goal, e.g.:
my-directory/
a1.py,
xyz.py,
abc.py,
....,
an.py
All these scripts are in the same directory, and now I want to write a single Python (.py) script that can run them all in sequence. I don't know how to write it, and I have Windows 10, so the bash script method isn't applicable.
What's the best possible way to write an efficient migration script on Windows?
Using a master Python script is a possibility (and it's cross-platform, as opposed to batch or shell): scan the directory, open each file, and execute its contents.
import glob, os

os.chdir(directory)  # locate ourselves in the directory
for script in sorted(glob.glob("*.py")):
    with open(script) as f:
        contents = f.read()
    exec(contents)
(There was an execfile function in Python 2 but it's gone; in Python 3 we have to read the file contents and pass them to exec, which also works in Python 2.)
In that example, order is determined by the script name. To impose a different order, use an explicit list of Python scripts instead:
for script in ["a.py","z.py"]:
That method doesn't create subprocesses. It just runs the scripts as if they were concatenated together (which can be an issue if some files aren't closed and are used by following scripts). Also, if an exception occurs, it stops the whole list of scripts, which is probably not so bad since it keeps the following scripts from working on bad data.
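If that lack of isolation is a concern, here is a minimal sketch of the same loop using subprocess instead of exec; each script then runs in its own interpreter, so open files and globals don't leak between them:
import glob
import subprocess
import sys

for script in sorted(glob.glob("*.py")):
    # check=True aborts the sequence on the first script that exits non-zero
    subprocess.run([sys.executable, script], check=True)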
You can wrap each script's code in a function, for example in script2 like this:
script2.py
def main():
    print('Hello World!')
And import script2's function with this:
script1.py
from script2 import *
main()
I hope this is helpful (tell me if I didn't answer your question; I'm Italian..)

How to run a .py file from a .py file in an entirely different project

For the life of me I can't figure this one out.
I have 2 applications built in Python, so 2 projects in different folders. Is there a command I can use in the first application to run a file from the second, e.g. documents/project2/test2.py?
I tried something like os.system('') and exec(), but that only seems to work if the file is in the same folder. How can I give a command a path like documents/project2 and then, for example:
exec(documents/project2 python test2.py) ?
Short version:
Is there a command that runs python test2.py while test2 is in a completely different folder/project?
Thanks for all feedback!
There's a number of approaches to take.
1 - Import the .py
If the path to the other Python script can be made relative to your project, you can simply import the .py. This will cause all the code at the 'root' level of the script to be executed and makes functions as well as type and variable definitions available to the script importing it.
Of course, this only works if you control how and where everything is installed. It's the most preferable solution, but only works in limited situations.
from ..other_package import myscript
2 - Evaluate the code
You can load the contents of the Python file like any other text file and execute them. This is considered more of a security risk but, given the interpreted nature of Python, in normal use it is not much worse than an import.
Here's how:
with open('/path/to/myscript.py', 'r') as f:
    exec(f.read())
Note that, if you need to pass values to code inside the script, or out of it, you probably want to use files in this case.
I'd consider this the least preferable solution, due to it being a bit inflexible and not very secure, but it's definitely very easy to set up.
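If files feel too heavy for passing values, note that exec() also accepts a globals dictionary, which can be used to pass values in and read results back out. A small sketch (the variable names here are made up):
namespace = {'input_value': 42}            # values the script can read
with open('/path/to/myscript.py', 'r') as f:
    exec(f.read(), namespace)
# anything the script assigned at module level is now available here
print(namespace.get('result'))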
3 - Call it like any other external program
From a Python script, you can call any other executable, that includes Python itself with another script.
Here's how:
from subprocess import run

run(['python', 'path/to/myscript.py'])
This is generally the preferable way to go about it. You can use the command line to interface with the script, and capture the output.
You can also pipe in text with stdin= or capture the output from the script with stdout=, using subprocess.Popen directly.
For example, take this script, called quote.py
import sys
text = sys.stdin.read()
print(f'In the words of the poet:\n"{text}"')
This takes any text from standard input and prints it, with some extra text, to standard output like any Python script. You could call it like this:
dir | python quote.py
To use it from another Python script:
from subprocess import Popen, PIPE
s_in = b'something to say\nright here\non three lines'
p = Popen(['python', 'quote.py'], stdin=PIPE, stdout=PIPE)
s_out, _ = p.communicate(s_in)
print('Here is what the script produced:\n\n', s_out.decode())
Try this:
exec(open("FilePath").read())
It should work if you got the file path correct.
Mac example:
exec(open("/Users/saudalfaris/Desktop/Test.py").read())
Windows example:
exec(open("C:\Projects\Python\Test.py").read())

How to run one python file from another python file?

To start off, I am a beginner in python so I am not even sure if my question makes sense or is even possible.
I have 2 python files, app.py and compare.py. compare.py takes in two arguments (file paths) to run. So for example, when I want to run it, I do python compare.py ./image1.jpg ./image2.jpg. The return I get is some text printed to the terminal, such as Comparison Done, The distance is 0.544.
Now, I want to run this compare.py from inside app.py and get a string with whatever compare.py would usually output to the terminal. So for example:
result = function('compare.py ./image1.jpg ./image2.jpg') and result will have the required string. Is this possible?
You can use os.popen:
In app.py:
import os
output = os.popen('python compare.py ./image1.jpg ./image2.jpg').readlines()
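If you also need the exit code or stderr, a roughly equivalent sketch using subprocess.run (Python 3.7+ for the capture_output and text parameters; paths as in the question):
import subprocess

completed = subprocess.run(
    ['python', 'compare.py', './image1.jpg', './image2.jpg'],
    capture_output=True, text=True)
output = completed.stdout  # e.g. "Comparison Done, The distance is 0.544"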

Send Arguments from Batch File to Jython/Python

I wrote a Sikuli script (Jython) to test a webpage. The script contains multiple tests, which means that when one of them kills Sikuli, the ones after it will not run. To fix this, I'd like to call each test via a batch file instead. So it is currently set up similar to this:
tests = [test1, test2, test3, test4]
for test in tests:
    run test
Obviously, that is a simplified version... so what I'd like to do is convert the list into 4 batch files. The first batch file would call the script with test1 as an argument; the second would send test2 as an argument, etc. I could then create another batch file to iterate through them. However, I don't know how to communicate between a batch file and jython, other than just plainly running the script.
This question refers to both the batch file and jython scripts - I'm assuming you have to do something special in each.
Any help would be appreciated.
Thanks.
The question is not 100% clear to me; I assume this is the answer:
First of all, you need to pass some arguments to Sikuli script started from batch file using --args option, for example:
YourPath\runIDE.cmd -r YourPath\YourScript.sikuli --args test1 test2 someOtherOption
Second, you must receive the arguments in the script using the sys.argv variable. It works the same in either Python or Jython. Code sample:
import sys

print sys.argv
for a in sys.argv:
    if a == "test1":
        print "Do something"
    elif a == "test2":
        print "Do something else"
https://docs.python.org/2/library/sys.html#sys.argv

Python CLI program unit testing

I am working on a python Command-Line-Interface program, and I find it boring when doing testings, for example, here is the help information of the program:
usage: pyconv [-h] [-f ENCODING] [-t ENCODING] [-o file_path] file_path

Convert text file from one encoding to another.

positional arguments:
  file_path

optional arguments:
  -h, --help            show this help message and exit
  -f ENCODING, --from ENCODING
                        Encoding of source file
  -t ENCODING, --to ENCODING
                        Encoding you want
  -o file_path, --output file_path
                        Output file path
When I make changes to the program and want to test something, I must open a terminal, type the command (with options and arguments), press enter, and see if any error occurs while running. If an error does occur, I must go back to the editor, check the code from top to bottom, guess where the bug is, make small changes, add print lines, return to the terminal, run the command again...
Recursively.
So my question is: what is the best way to test a CLI program? Can it be as easy as unit testing a normal Python script?
I think it's perfectly fine to test functionally on a whole-program level. It's still possible to test one aspect/option per test. This way you can be sure that the program really works as a whole. Writing unit-tests usually means that you get to execute your tests quicker and that failures are usually easier to interpret/understand. But unit-tests are typically more tied to the program structure, requiring more refactoring effort when you internally change things.
Anyway, using py.test, here is a little example for testing a latin1 to utf8 conversion for pyconv::
# content of test_pyconv.py

import pytest

# we reuse a bit of pytest's own testing machinery, this should eventually come
# from a separately installable pytest-cli plugin.
pytest_plugins = ["pytester"]

@pytest.fixture
def run(testdir):
    def do_run(*args):
        args = ["pyconv"] + list(args)
        return testdir._run(*args)
    return do_run

def test_pyconv_latin1_to_utf8(tmpdir, run):
    input = tmpdir.join("example.txt")
    content = unicode("\xc3\xa4\xc3\xb6", "latin1")
    with input.open("wb") as f:
        f.write(content.encode("latin1"))
    output = tmpdir.join("example.txt.utf8")
    result = run("-flatin1", "-tutf8", input, "-o", output)
    assert result.ret == 0
    with output.open("rb") as f:
        newcontent = f.read()
    assert content.encode("utf8") == newcontent
After installing pytest ("pip install pytest") you can run it like this::
$ py.test test_pyconv.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.5dev1
collected 1 items
test_pyconv.py .
========================= 1 passed in 0.40 seconds =========================
The example reuses some internal machinery of pytest's own testing by leveraging pytest's fixture mechanism, see http://pytest.org/latest/fixture.html. If you forget about the details for a moment, you can just work from the fact that "run" and "tmpdir" are provided for helping you to prepare and run tests. If you want to play, you can try to insert a failing assert-statement or simply "assert 0" and then look at the traceback or issue "py.test --pdb" to enter a python prompt.
Start from the user interface with functional tests and work down towards unit tests. It can feel difficult, especially when you use the argparse module or the click package, which take control of the application entry point.
The cli-test-helpers Python package has examples and helper functions (context managers) for a holistic approach on writing tests for your CLI. It's a simple idea, and one that works perfectly with TDD:
Start with functional tests (to ensure your user interface definition) and
Work towards unit tests (to ensure your implementation contracts)
Functional tests
NOTE: I assume you develop code that is deployed with a setup.py file or is run as a module (-m).
Is the entrypoint script installed? (tests the configuration in your setup.py)
Can this package be run as a Python module? (i.e. without having to be installed)
Is command XYZ available? etc. Cover your entire CLI usage here!
Those tests are simplistic: They run the shell command you would enter in the terminal, e.g.
import os

def test_entrypoint():
    exit_status = os.system('foobar --help')
    assert exit_status == 0
Note the trick to use a non-destructive operation (e.g. --help or --version) as we can't mock anything with this approach.
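A similar sketch for the "run as a Python module" check from the list above (foobar is the example package name used throughout this answer):
import os

def test_runas_module():
    """Can this package be run as a Python module?"""
    exit_status = os.system('python -m foobar --help')
    assert exit_status == 0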
Towards unit tests
To test single aspects inside the application you will need to mimic things like command line arguments and maybe environment variables. You will also need to catch your script's exit so the tests don't fail on SystemExit exceptions.
Example with ArgvContext to mimic command line arguments:
import pytest
from unittest.mock import patch
from cli_test_helpers import ArgvContext
import foobar.cli

@patch('foobar.command.baz')
def test_cli_command(mock_command):
    """Is the correct code called when invoked via the CLI?"""
    with ArgvContext('foobar', 'baz'), pytest.raises(SystemExit):
        foobar.cli.main()
    assert mock_command.called
Note that we mock the function that we want our CLI framework (click in this example) to call, and that we catch SystemExit that the framework naturally raises. The context managers are provided by cli-test-helpers and pytest.
Unit tests
The rest is business as usual. With the above two strategies we've overcome the control a CLI framework may have taken away from us. The rest is usual unit testing. TDD-style hopefully.
Disclosure: I am the author of the cli-test-helpers Python package.
So my question is: what is the best way to test a CLI program? Can it be as easy as unit testing a normal Python script?
The only difference is that when you run Python module as a script, its __name__ attribute is set to '__main__'. So generally, if you intend to run your script from command line it should have following form:
import sys

# function and class definitions, etc.
# ...

def foo(arg):
    pass

def main():
    """Entry point to the script"""
    # Do parsing of command line arguments and other stuff here. And then
    # make calls to whatever functions and classes that are defined in your
    # module. For example:
    foo(sys.argv[1])

if __name__ == '__main__':
    main()
Now there is no difference, how you would use it: as a script or as a module. So inside your unit-testing code you can just import foo function, call it and make any assertions you want.
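For instance, a minimal sketch of such a test, assuming the module above is saved as mymodule.py (a hypothetical name):
import unittest
from mymodule import foo  # hypothetical module name for the script above

class FooTests(unittest.TestCase):
    def test_foo_accepts_an_argument(self):
        self.assertIsNone(foo('some-arg'))  # foo() above simply returns None

if __name__ == '__main__':
    unittest.main()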
Maybe too little too late,
but you can always use
import os

result = os.system('<insert your command with options here>')
assert 0 == result
In that way, you can run your program as if it was from command line, and evaluate the exit code.
(Update after I studied pytest)
You can also use capsys.
(from running pytest --fixtures)
capsys
Enable text capturing of writes to sys.stdout and sys.stderr.
The captured output is made available via ``capsys.readouterr()`` method
calls, which return a ``(out, err)`` namedtuple.
``out`` and ``err`` will be ``text`` objects.
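A small usage sketch of that fixture; the import and call here are assumptions standing in for your own entry point:
from pyconv import main  # hypothetical import; use your own entry point

def test_prints_conversion_message(capsys):
    main(['-f', 'latin1', '-t', 'utf8', 'example.txt'])  # hypothetical signature
    out, err = capsys.readouterr()
    assert 'example.txt' in out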
This isn't for Python specifically, but what I do to test command-line scripts is to run them with various predetermined inputs and options and store the correct output in a file. Then, to test them when I make changes, I simply run the new script and pipe the output into diff correct_output -. If the files are the same, it outputs nothing. If they're different, it shows you where. This will only work if you are on Linux or OS X; on Windows, you will have to get MSYS.
Example:
python mycliprogram --someoption "some input" | diff correct_output -
To make it even easier, you can add all these test runs to your 'make test' Makefile target, which I assume you already have. ;)
If you are running many of these at once, you could make it a little more obvious when one fails by adding a fail tag:
python mycliprogram --someoption "some input" | diff correct_output - || tput setaf 1 && echo "FAILED"
The short answer is yes, you can use unit tests, and you should. If your code is well structured, it should be quite easy to test each component separately, and if you need to, you can always mock sys.argv to simulate running it with different arguments; a sketch follows.
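Here is one way to do that as a sketch; pyconv.main is a hypothetical entry point that reads sys.argv itself, as most argparse programs do:
from unittest import mock

import pyconv  # hypothetical module exposing main()

def test_main_with_fake_args():
    fake_argv = ['pyconv', '-f', 'latin1', '-t', 'utf8', 'input.txt']
    with mock.patch('sys.argv', fake_argv):
        pyconv.main()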
pytest-console-scripts is a Pytest plugin for testing python scripts installed via console_scripts entry point of setup.py.
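A test with that plugin typically looks something like the following sketch; script_runner is the fixture the plugin provides, and myscript is a hypothetical console_scripts entry point:
def test_version(script_runner):
    result = script_runner.run('myscript', '--version')
    assert result.success
    assert result.stderr == ''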
For Python 3.5+, you can use the simpler subprocess.run to call your CLI command from your test.
Using pytest:
import subprocess

def test_command__works_properly():
    try:
        result = subprocess.run(
            ['command', '--argument', 'value'],
            check=True, capture_output=True, text=True)
    except subprocess.CalledProcessError as error:
        print(error.stdout)
        print(error.stderr)
        raise error
The output can be accessed via result.stdout, result.stderr, and result.returncode if needed.
The check parameter causes an exception to be raised if an error occurs. Note Python 3.7+ is required for the capture_output and text parameters, which simplify capturing and reading stdout/stderr.
Given that you are explicitly asking about testing for a command line application, I believe that you are aware of unit-testing tools in python and that you are actually looking for a tool to automate end-to-end tests of a command line tool. There are a couple of tools out there that are specifically designed for that. If you are looking for something that's pip-installable, I would recommend cram. It integrates well with the rest of the python environment (e.g. through a pytest extension) and it's quite easy to use:
Simply write the commands you want to run, prefixed with $, and the expected output on the lines that follow. For example, the following would be a valid cram test:
$ echo Hello
Hello
By having four spaces in front of expected output and two in front of the test, you can actually use these tests to also write documentation. More on that on the website.
You can use standard unittest module:
# python -m unittest <test module>
or use nose as a testing framework. Just write classic unittest files in separate directory and run:
# nosetests <test modules directory>
Writing unit tests is easy; just follow the online manual for unittest.
I would not test the program as a whole; this is not a good test strategy and may not actually catch the actual spot of the error. The CLI interface is just a front end to an API. You test the API via your unit tests, and then, when you make a change to a specific part, you have a test case to exercise that change.
So, restructure your application so that you test the API and not the application itself. But you can have a functional test that actually does run the full application and checks that the output is correct.
In short, yes testing the code is the same as testing any other code, but you must test the individual parts rather than their combination as a whole to ensure that your changes do not break everything.
