I'm trying to use pytest with a simple example, saved as "test_lesson1.py", with the directory structure shown below.
import pytest
import numpy as np

TOL = 2e-2


def squared(x):
    return x**2


def test_squared():
    x = 4
    expected = 16
    computed = squared(x)
    msg = "fail"
    np.testing.assert_allclose(expected, computed, rtol=TOL, err_msg=msg)
Example directory structure:
proj
|--tests
|--|--test_lesson1.py
I am on Windows but using Git Bash as my terminal. How can I run pytest? Here is what I am entering in the terminal:
alias py='C:/Users/name/anaconda3/envs/myenv/python.exe'
. C:/Users/name/anaconda3/etc/profile.d/conda.sh
conda activate myenv
cd tests/
py -c "pytest test_lesson1.py"
which returns
File "<string>", line 1
pytest test_lesson1.py
^
SyntaxError: invalid syntax
(myenv)
I can confirm that python works with py -c "print('hello world')", which prints as expected.
pytest is a command that executes all tests in all files whose names match the form test_*.py or *_test.py in the current directory and its subdirectories, so you don't need python to invoke it.
Just run
pytest test_lesson1.py
On the other hand, print is a Python function, so you do need python to invoke it.
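If you prefer to keep going through your aliased interpreter, pytest can also be invoked as a module, which behaves essentially the same as the pytest command:

py -m pytest test_lesson1.py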
The following command works:
$ pycco *.py
# generates literate-style documentation
# for all .py files in the current folder
And the following snippet in my tox.ini file works as expected:
[testenv:pycco]
deps =
    pycco
commands =
    pycco manage.py
# generates literate-style documentation
# for manage.py
But if I try to use a glob:
[testenv:pycco]
deps =
    pycco
commands =
    pycco *.py
...I get the following error:
File "/home/user/Documents/project/.tox/pycco/lib/python3.7/site-packages/pycco/main.py", line 79, in generate_documentation
code = open(source, "rb").read().decode(encoding)
FileNotFoundError: [Errno 2] No such file or directory: '*.py'
How can I pass *.py to pycco via tox?
The problem here is that pycco does not support glob expansion. What makes pycco *.py work in a terminal is that, before the command is executed, the shell expands *.py into an actual list of files and passes those to the OS.
When tox runs your command there is no shell involved, so whatever you write is passed to the OS as is; pycco therefore receives the literal argument *.py, hence the error.
You can work around this by either explicitly listing the file paths or using the python interpreter to do the expansion:
python -c 'from glob import glob; import subprocess; subprocess.check_call(["pycco"] + glob("*.py"))'
Put the above command inside your tox commands section and things will work, because Python is now doing the expansion of "*.py" into an actual file list instead of the shell.
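For example, the environment could look like this (a sketch based on your existing tox.ini; the exact quoting may need tweaking for your tox version):

[testenv:pycco]
deps =
    pycco
commands =
    python -c 'from glob import glob; import subprocess; subprocess.check_call(["pycco"] + glob("*.py"))'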
You cannot do this directly because pycco does not (currently) support glob expansions. Instead you can create a shell script execute_pycco.sh as follows:
#!/bin/sh
pycco *.py
Update tox.ini as follows:
[testenv:pycco]
deps =
    pycco
commands =
    ./execute_pycco.sh
You will now execute your shell script in the "pycco" environment created by tox. This method also allows you to define more elaborate scripts:
#!/bin/sh
filelist=$( find . -name '*.py' | grep -v ".tox" )
# make a list of all .py files in all subfolders,
# except the .tox/ subfolder
pycco -ip $filelist
# generate literate-style documentation for all
# files in the list
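Note that the script has to be executable (chmod +x execute_pycco.sh). Depending on your tox version, you may also need to add it to allowlist_externals (called whitelist_externals in older tox releases) in the [testenv:pycco] section so that tox does not complain about running a command from outside the virtualenv.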
I am working in a Python virtual environment and we have a Makefile with the following target:
test:
    source .env && PYTHONPATH=. PY_ENV=testing py.test ${ARGS} --duration=20
File .env lives in the main directory next to Makefile. It contains some environmental variables needed for testing certain APIs.
When I take the line out of the file and run it in my terminal, everything works fine and all tests run.
However, if I run make test, I get this error:
$ make test
source .env && PYTHONPATH=. PY_ENV=testing py.test --duration=20
/usr/bin/sh: line 0: source: .env: file not found
make: *** [test] Error 1
(venv)
To me it looks like, when this command is run from within the Makefile, it can't see the .env file, but I have no idea how to solve it.
The source command isn't looking up the file in the current working directory. As mentioned in man source:
Read and execute commands from filename in the current shell
environment and return the exit status of the last command executed
from filename. If filename does not contain a slash, filenames in
PATH are used to find the directory containing filename.
Change the file path like so:
test:
    source ./.env && PYTHONPATH=. PY_ENV=testing py.test ${ARGS} --duration=20
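Alternatively (assuming GNU make and bash are available), you can tell make to run its recipes with bash itself rather than sh; bash invoked this way is not in POSIX mode, so source also searches the current directory:

SHELL := /bin/bash

test:
    source .env && PYTHONPATH=. PY_ENV=testing py.test ${ARGS} --duration=20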
Note that this error does not occur in bash version < 4. This is due to an implementation bug when run under POSIX mode (what make uses, since its default shell is sh, which is usually bash --posix). The correct behaviour was first mentioned in the documentation of bash-2.05 (revision 28ef6c31, file doc/bashref.info):
When Bash is not in POSIX mode, the current directory is searched if
FILENAME is not found in `$PATH'.
These older versions searched the current directory regardless of POSIX mode. It was only in bash-4.0-rc1 (revision 3185942a, file general.c) that this was corrected. Running git diff 3185942a~ 3185942a general.c outputs this section:
@@ -69,6 +69,7 @@ posix_initialize (on)
   if (on != 0)
     {
       interactive_comments = source_uses_path = expand_aliases = 1;
+      source_searches_cwd = 0;
     }
I want to have a little script that will find, run, and report on all the tests in the folder, like this one:
#!/bin/bash
coverage run -m unittest discover
coverage report -m
But when I run it, I get some errors that I do not get on Windows (like using super() without arguments). As I understand it, this is because the built-in default version of Python on Linux is 2.x, whereas I am using 3.6. How should I change the script so that it uses the Python 3.6 interpreter?
EDIT:
So here's one of the files with tests that I run:
#!/usr/bin/env python3
import unittest
import random
import math
import sort_functions as s
from comparison_functions import less, greater


class BaseTestCases:
    class BaseTest(unittest.TestCase):
        sort_func = None

        def setUp(self):
            self.array_one = [101, -12, 99, 3, 2, 1]
            self.array_two = [random.random() for _ in range(100)]
            self.array_three = [random.random() for _ in range(500)]
            self.result_one = sorted(self.array_one)
            self.result_two = sorted(self.array_two)
            self.result_three = sorted(self.array_three)

        def tearDown(self):
            less.calls = 0
            greater.calls = 0

        def test_sort(self):
            result_one = self.sort_func(self.array_one)
            result_two = self.sort_func(self.array_two)
            result_three = self.sort_func(self.array_three)
            self.assertEqual(self.result_one, result_one)
            self.assertEqual(self.result_two, result_two)
            self.assertEqual(self.result_three, result_three)

        # and some more tests here


class TestBubble(BaseTestCases.BaseTest):
    def setUp(self):
        self.sort_func = s.bubble_sort
        super().setUp()

# and some more classes looking like this
And the error:
ERROR: test_key (test_sort_func.TestBubble)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/lelik/Desktop/Sorters/test_sort_func.py", line 67, in setUp
super().setUp()
TypeError: super() takes at least 1 argument (0 given)
First, install coverage for your python3 (assuming you have python3 and pip installed):
sudo python3 -m pip install coverage
Then, to run coverage under python3, invoke it as a module, for example python3 -m coverage report -m.
So your final script should look like this:
#!/bin/bash
python3 -m coverage run -m unittest discover
python3 -m coverage report -m
You can also replace python3 with the path to your Python binary, for example /usr/bin/python3, so you can call it this way as well:
#!/bin/bash
/usr/bin/python3 -m coverage run -m unittest discover
/usr/bin/python3 -m coverage report -m
The problem is that the coverage command on your Linux host has been installed for Python 2. That is, somewhere there exists a coverage script that starts with:
#!/usr/bin/python
And on your system, /usr/bin/python is python 2.
The best solution here is probably to set up a Python 3 virtual environment for running your tests (and then install coverage into that virtualenv). You may also want to investigate tox, which will handle this for you automatically.
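A minimal sketch of that approach (the .venv directory name is just an example):

python3 -m venv .venv
source .venv/bin/activate
pip install coverage
coverage run -m unittest discover
coverage report -m

Inside the activated virtualenv, both python and coverage refer to the Python 3 installation, so the original script works unchanged.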
I am trying to do some scripting with IPython, but I am finding that it behaves very differently in a script than in an interactive shell.
For example, I can run the following interactively:
In [1]: %profile
default
In [2]: ls /
bin/ cdrom/ etc/ initrd.img# lib/ lib64/ media/ opt/ root/ sbin/ sys/ usr/ vmlinuz#
boot/ dev/ home/ initrd.img.old# lib32/ lost+found/ mnt/ proc/ run/ srv/ tmp/ var/ vmlinuz.old#
In [3]: mkdir tmpdir
In [4]: cd tmpdir
/home/alex/tmp/tmpdir
No problem.
However, none of these commands works when I run them in a script:
#!/usr/bin/ipython3
%profile
ls /
mkdir tmpdir
cd tmpdir
I get an error:
$ ./tmp.py
File "/home/alex/tmp/tmp.ipython", line 3
%profile
^
SyntaxError: invalid syntax
I have tried running this by:
calling the file directly as above,
calling it explicitly with IPython: ipython3 tmp.py
passing the -i or --profile=sh arguments to ipython when calling it with ipython
changing the file extension to .ipython and .ipy
My question:
Why is it behaving differently in a script to the shell? How can I get IPython to run these commands in a script?
Those commands work interactively thanks to IPython magic, but they are shell commands and do not work in plain Python. To run them from a script, consider the subprocess library: where you would have spaces in a shell command, use separate list elements instead.
import subprocess
subprocess.check_call(['ls'])
subprocess.check_call(['ls', '-a'])
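Applied to the commands in your script, a rough plain-Python equivalent might look like this (note that cd has to become os.chdir, because a child process cannot change the parent's working directory; the %profile magic has no plain-Python counterpart):

import os
import subprocess

subprocess.check_call(['ls', '/'])    # ls /
os.makedirs('tmpdir', exist_ok=True)  # mkdir tmpdir
os.chdir('tmpdir')                    # cd tmpdir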
I am executing a Python script with multiple command-line parameters by using a shell script.
The command I run to execute the shell script is:
./scripts/run_qa.sh data/questions/questions.txt data/lexicons/paralex data/weights/paralex.txt data/database > output.txt
The run_qa.sh file looks like this (please explain how it works):
#!/bin/bash
set -u
set -e
if [ $# != 4 ]; then
    echo "Usage: run.sh questions lexicon weights db"
    exit 1
fi
questions=$1
lexicon=$2
weights=$3
db=$4
PYTHONPATH=$PWD/python python -m lex.gearman_worker $lexicon $weights $db < $questions
I tried to execute the python command as below on the command line:
python -m python/lex/gearman_worker.py data/lexicons/paralex data/weights/paralex.txt data/database > output.txt
which gives error :
/usr/bin/python: Import by filename is not supported.
Update 1:
The gearman_worker.py file imports other modules like this:
import lex.parse
import lex.semantics
from collections import namedtuple
from collections import defaultdict
The import line gives an error like this:
ImportError: No module named lex.lexicon
Update 2 (executed in a Linux terminal):
export PYTHONPATH=$/mnt/paralex-evaluation-gearman/python
PYTHONPATH = ./python python -m python/lex/gearman_worker data/lexicons/paralex data/weights/paralex.txt data/database > output.txt
gives:
PYTHONPATH: command not found
Then
python -m python/lex/gearman_worker data/lexicons/paralex data/weights/paralex.txt data/database > output.txt
gives:
File "/mnt/paralex-evaluation-gearman/python/lex/gearman_worker.py", line 3, in <module>
import lex.lexicon
ImportError: No module named lex.lexicon
You just need to execute the following command:
PYTHONPATH=./python python -m lex.gearman_worker ARGUMENT_2 ARGUMENT_3 ARGUMENT_4 < ARGUMENT_1
If that doesn't work then you may have to export the PYTHONPATH setting:
export PYTHONPATH=${PWD}/python
python -m lex.gearman_worker ARGUMENT_2 ARGUMENT_3 ARGUMENT_4 < ARGUMENT_1
The original arguments that you would pass to the script are listed as ARGUMENT_N.
The script just:
sets some sensible shell options (set -e exits on the first failing command, set -u treats unset variables as an error; see the documentation for set)
tests that the right number of arguments have been supplied
invokes the command above
Your attempt to invoke it:
misses the PYTHONPATH setting which is present in the script
passes gearman_worker as a file path rather than as a Python module import
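With the concrete paths from your original invocation (run from the project root, so that ./python is found), the command becomes:

PYTHONPATH=$PWD/python python -m lex.gearman_worker data/lexicons/paralex data/weights/paralex.txt data/database < data/questions/questions.txt > output.txt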