Make pytest use Python 3.7 instead of 2.7 - python

I'm new to using Python on Kubuntu. I'm writing some simple functions and also writing tests with pytest to practise test-driven development (although I know it's wasted on such easy functions, it's just for the sake of practice).
Because I'm a beginner, I'm writing the code in an editor and executing it in the terminal; as a next step I'll use an IDE like Thonny. I have installed Python 3.7, although Python 2.7 seems to be the system default. Nonetheless, the file with the functions works fine. I'm printing some f-strings and that works fine as well. The first line of the file is a shebang that tells the system to use Python 3.7 (#!/usr/bin/env python3.7). However, to execute the tests I run pytest in the console, as indicated by the pytest introduction. Alas, I get a syntax error, because pytest seems to be running under Python 2.7, which of course doesn't know f-strings.
I verified that pytest is indeed using Python 2.7 by executing pytest --version, which confirmed it.
My question is: how can I make pytest use Python 3.7 so the test passes, or at least so the SyntaxError goes away? Replacing the f-string with a normal string makes the test pass, so I'm assuming this is the only problem.
Any help is highly appreciated. Many thanks in advance. I hope I gave all the relevant information; if more is needed, I'll provide it gladly.

You can install pytest with pip3. On most systems, pip3 installs packages for Python 3.x, while plain pip installs for Python 2.x.
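As a minimal sketch (assuming pip3 and python3.7 are both on your PATH), install pytest for Python 3 and then invoke it through the interpreter explicitly; running python3.7 -m pytest guarantees the tests are collected and executed by that interpreter, regardless of which pytest script happens to come first on the PATH:
pip3 install pytest
python3.7 -m pytest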

Related

shebang line - multiple allowed interpreters by order

I've got a script that requires python3.7 or above.
I've got multiple systems, each with different Python versions. All of them have either python3.7 or python3.8, but plain "python3" can also be python3.6.
Is there a way to make a shebang line take either python3.7 or python3.8, and fail if neither exists, of course?
I tried a billion options and nothing works. I was hoping for something like:
#!/usr/bin/python3.7 || /usr/bin/python3.8
or
#!/usr/bin/python3.7
#!/usr/bin/python3.8
but nada. I also couldn't google it up, which kinda makes me sad, but who knows, maybe the answer is out there somewhere and I'm just using bizarre jargon.
These are not systems I can change defaults on, and installing python3.7 on each such setup is sadly not an option (that would make life awesome).
The shebang simply needs to point to an actual executable.
That executable can be one which finds a suitable version of the Python interpreter; see for example the py launcher, which was popular on Windows for a long time but is now also being made available on other platforms.
Ultimately, something somewhere needs to be on the user's PATH, which obviously you will not always have control over. If you are targeting unsophisticated users, perhaps look into providing your code in a compiled executable, a prepackaged Docker image, or something vaguely like that.
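As a sketch of one common workaround (not the only one, and the exact interpreter names are taken from the question): make the file both a valid sh script and a valid Python script, so the shebang points at /bin/sh and the second line re-execs the first suitable interpreter it can find:
#!/bin/sh
"exec" "$(command -v python3.7 || command -v python3.8)" "$0" "$@"
# the rest of the file is ordinary Python; sh never reads past the exec line
In sh, the second line replaces the shell with whichever interpreter command -v finds first, re-running this same file; in Python, that same line is just a harmless string literal, so the rest of the file can be ordinary Python 3.7 code. If neither interpreter exists, the exec fails and the script exits with an error, which matches the "fail if none exist" requirement.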

python PowerShell issues

Edited to include a specific example
I am learning to use Python and work in a Windows 10 environment (although on 3 separate computers). I have gotten past the "add python to %PATH%" issue but continue to see odd outcomes when I call simple scripts from PowerShell compared to running them from the Python terminal. For example, I often see that lines of code are duplicated, even simple print statements. A second repeating issue is that some (but not all) modules work fine in the Python terminal but can't be found when running a script from PowerShell, despite using pip install within PowerShell to install them in the first place.
Rather than seeking help on a specific issue, I'm hoping for some guidance on how PowerShell and Python interact that might help me understand or identify commonalities in these issues I keep experiencing from within PowerShell. If that's as simple as redirecting me towards another source, even that would be appreciated. But my search results always turn up the common issue of PowerShell not knowing what Python is and needing to add Python to the PATH. As indicated, I have already gotten past this and can get 50% of my Python to work from PowerShell. There must be something, perhaps obvious to others, that I'm missing that can help me understand why some things aren't working while others are.
Thanks for taking the time to consider my problem; any advice is greatly appreciated.
Here is an example from some code I am trying to get working, but in troubleshooting I have taken out a lot of code and am now only running what is shown: a dataframe definition and a print statement. Everything works line by line in Python, but when I call the script from PowerShell, the print statement executes twice.
#!/usr/bin/python3
import pandas as pd
joedata = {'fpr': [0.2,0.4,0.8], 'tpr':[0.9,0.5,0.1]}
joeframe = pd.DataFrame(data=joedata)
print(joeframe)
Concerning modules: you probably have multiple versions of Python installed on your system. Calling pip in PowerShell doesn't mean it will install packages for the Python installation that is invoked by default when you run python in PowerShell. So you should try to figure out which installation pip is installing packages into and which one is actually used when executing a script.
EDIT: I tested the example that you gave in PowerShell and I don't have the problem; it works fine.
Moreover, for me, without doing any special configuration, the pip and python commands in PowerShell refer to the same installation of Python. Since everything should line up by default, you could consider uninstalling Python, running the installer again, and letting it manage the PATH, etc.
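A quick way to check which interpreter each command belongs to (a sketch; pandas is just an example package here), and to install with python -m pip so that pip is guaranteed to target the same interpreter that runs your scripts:
python -c "import sys; print(sys.executable)"
python -m pip --version
python -m pip install pandas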

Importing Python module prints docstring of different, unrelated script?

I've encountered this issue with two separate modules now, one that I attempted to download myself (Quartz; could probably be the way I installed it, but let's ignore this scenario for now) and another that I installed using pip install (Pandas; let's focus on this one).
I wrote a two-line script that includes just import pandas and print('test'), for testing purposes. When I execute this in the terminal, instead of printing test to confirm the script runs correctly, it prints the docstring for another completely unrelated script:
[hidden]~/Python/$ python3 test.py
Usage: python emailResponse.py [situation] - copy situation response
The second line is a docstring I wrote for a simple fetch script for responding to emails, which is unrelated. What's worse is, if I just invoke python3 in the terminal and try import pandas, it prints that same docstring and drops me out of Python 3 back into the terminal shell / bash (sorry if this is not the right verbiage; still learning). The same thing happens with import Quartz as well, but no other modules are affected (at least, that I'm aware of).
I'm at a complete loss why this might be the case. It was easy enough to avoid using Quartz, but I need Pandas for work purposes and this issue is starting to directly affect my work.
Any idea why this might be the case?
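One way to narrow this down (a diagnostic sketch, not a confirmed fix): ask the import machinery which file it would load for the name pandas, without executing the module, and see whether one of your own scripts is shadowing the installed package:
python3 -c "import importlib.util; print(importlib.util.find_spec('pandas').origin)"
If that prints a path to one of your own files rather than to the installed pandas package, that file is being found first on sys.path and its module-level code runs on import, which would explain the stray usage message.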

How to statically determine whether a source file supports python 3, for flake8?

How can I statically determine whether a Python source file supports Python 3?
The question may be impossible to answer as stated there, so I'll be more specific:
My application for this is to pick a version of flake8 to run that won't give spurious SyntaxErrors. For that reason, I'm looking for (at least) some heuristic that will tell me whether I should run a Python 3 copy of flake8, or a Python 2 copy.
I'm using flake8 from my editor, which shows me lint errors as I type, and I'm most concerned with the fact that sometimes I lose naming errors (such as F821), as a side effect of pyflakes/flake8 thinking that something is a syntax error that's in fact correct syntax: when it encounters a syntax error it (understandably) appears to give up with things like naming errors.
This is nigh impossible. There are way too many codepaths to test.
Moreover, code can be written to run on both Python 2 and 3, and flake8 doesn't always like the tricks used to make this possible unless the project specifically tests with flake8 and has marked such sites to be excluded. So you could either get false positives (errors from both the Python 2 and the Python 3 version of flake8), or the code will simply work on Python 2 and 3 without any warnings.
You could use tox to manage version support for a given project instead; have tox figure out what flake8 command to use (which may be multiple):
[tox]
envlist = py27,py35,flake8-27,flake8-35
# ...
[testenv:flake8-27]
basepython = python2.7
deps = flake8
commands =
    flake8 projectdir
[testenv:flake8-35]
basepython = python3.5
deps = flake8
commands =
    flake8 projectdir
and use tox -e flake8-27 or tox -e flake8-35.
If you only care about SyntaxError (which you specifically mention in your question), you can simply try to compile the file with Python 2 and 3:
python -m compileall
python3 -m compileall
If either of these commands fails, you at least know that the code does not work with that Python version. The reverse is of course not true: If the code compiles in a specific version of Python, that doesn't guarantee you that it will work correctly in that version of Python. It just tells you there are no SyntaxErrors.
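If you want the same SyntaxError check from inside an editor plugin rather than from the command line, here is a minimal sketch (the argument handling and exit codes are my own assumptions): parse the file with the built-in compile() under each interpreter and treat a SyntaxError as "this version's flake8 would choke on it":
# run this once under python2 and once under python3; exit status 0 means
# the file at least parses under that interpreter
import sys

def parses(path):
    with open(path, "rb") as f:
        source = f.read()
    try:
        compile(source, path, "exec")
        return True
    except SyntaxError:
        return False

if __name__ == "__main__":
    sys.exit(0 if parses(sys.argv[1]) else 1)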

Is there something like 'autotest' for Python unittests?

Basically, growl notifications (or other callbacks) when tests break or pass. Does anything like this exist?
If not, it should be pretty easy to write. The easiest way would be to:
1. Run python-autotest myfile1.py myfile2.py etc.py
2. Check if files-to-be-monitored have been modified (possibly just if they've been saved).
3. Run any tests in those files.
4. If a test fails, but in the previous run it passed, generate a growl alert. Same with tests that fail then pass.
5. Wait, and repeat steps 2-5.
The problem I can see there is if the tests are in a different file. The simple solution would be to run all the tests after each save, but with slower tests this might take longer than the time between saves, and/or could use a lot of CPU power, etc.
The best way to do it would be to actually see what bits of code have changed: if function abc() has changed, only run tests that interact with it. While this would be great, I think it'd be extremely complex to implement.
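For reference, a minimal sketch of the loop described above (plain mtime polling rather than real filesystem watching, no notifications, no dependency tracking; the unittest discovery command is an assumption about how your tests are run):
import os
import subprocess
import sys
import time

def snapshot(paths):
    # map each existing file to its last-modified time
    return {p: os.path.getmtime(p) for p in paths if os.path.exists(p)}

def watch(paths, interval=1.0):
    last = snapshot(paths)
    while True:
        time.sleep(interval)
        current = snapshot(paths)
        if current != last:
            last = current
            subprocess.call(["python", "-m", "unittest", "discover"])

if __name__ == "__main__":
    watch(sys.argv[1:])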
To summarise:
Is there anything like the Ruby tool autotest (part of the ZenTest package), but for Python code?
How do you check which functions have changed between two revisions of a script?
Is it possible to determine which functions a command will call? (Somewhat like a reverse traceback)
I found autonose to be pretty unreliable but sniffer seems to work very well.
$ pip install sniffer
$ cd myproject
Then instead of running "nosetests", you run:
$ sniffer
Or instead of nosetests --verbose --with-doctest, you run:
$ sniffer -x--verbose -x--with-doctest
As described in the readme, it's a good idea to install one of the platform-specific filesystem-watching libraries, pyinotify, pywin32 or MacFSEvents (all installable via pip etc)
autonose created by gfxmonk:
Autonose is an autotest-like tool for python, using the excellent nosetest library.
Autonose tracks filesystem changes and automatically re-runs any changed tests or dependencies whenever a file is added, removed or updated. A file counts as changed if it has itself been modified, or if any file it imports has changed.
...
Autonose currently has a native GUI for OSX and GTK. If neither of those are available to you, you can instead run the console version (with the --console option).
I just found this: http://www.metareal.org/p/modipyd/
I'm currently using thumb.py, but as my current project transitions from a small project to a medium sized one, I've been looking for something that can do a bit more thorough dependency analysis, and with a few tweaks, I got modipyd up and running pretty quickly.
Guard is an excellent tool that monitors for file changes and triggers tasks automatically. It's written in Ruby, but it can be used as a standalone tool for any task like this. There's a guard-nosetests plugin to run Python tests via nose.
Guard supports cross-platform notifications (Linux, OSX, Windows), including Growl, as well as many other great features. One of my can't-live-without dev tools.
One very useful tool that can make your life easier is entr. It is written in C and uses kqueue or inotify under the hood.
The following command runs your test suite whenever any *.py file in your project changes.
ls */**.py | entr python -m unittest discover -s test
Works for BSD, Mac OS, and Linux. You can get entr from Homebrew.
Maybe buildbot would be useful http://buildbot.net/trac
For your third question, maybe the trace module is what you need:
>>> def y(a): return a*a
>>> def x(a): return y(a)
>>> import trace
>>> tracer = trace.Trace(countfuncs = 1)
>>> tracer.runfunc(x, 2)
4
>>> res = tracer.results()
>>> res.calledfuncs
{('<stdin>', '<stdin>', 'y'): 1, ('<stdin>', '<stdin>', 'x'): 1}
res.calledfuncs contains the functions that were called. If you specify countcallers = 1 when creating the tracer, you can get caller/callee relationships. See the docs of the trace module for more information.
You can also try to get the calls via static analysis, but this can be dangerous due to the dynamic nature of Python.
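For the static-analysis route, a rough sketch using the standard-library ast module (it only sees literal call sites, so anything going through getattr, callbacks or other dynamic dispatch is missed, which is exactly the danger mentioned above):
import ast

def called_names(path):
    # collect the names that appear in call expressions in one source file
    with open(path) as f:
        tree = ast.parse(f.read(), filename=path)
    names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            func = node.func
            if isinstance(func, ast.Name):
                names.add(func.id)
            elif isinstance(func, ast.Attribute):
                names.add(func.attr)
    return sorted(names)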
Django's development server has a file change monitor that watches for modifications and automatically reloads itself. You could re-use this code to launch unit tests on file modification.
Maybe Nose http://somethingaboutorange.com/mrl/projects/nose/ has a plugin http://somethingaboutorange.com/mrl/projects/nose/doc/writing_plugins.html
Found this: http://jeffwinkler.net/2006/04/27/keeping-your-nose-green/
You can use nodemon for this task by watching .py files and executing manage.py test. The command would be: nodemon --ext py --exec "python manage.py test".
nodemon is an npm package, however, so it assumes you have Node installed.
Check out pytddmon. Here is a video demonstration of how to use it:
http://pytddmon.org/?page_id=33
