Checking Python code correctness

In C++ I have a compiler that tells me if something is wrong with my code after refactoring. How can I make sure that Python code is at least correct after changes? There may be some stupid error, like a wrong function name, that is pretty easy to catch at compile time.
Thanks

Looks like PyChecker or pylint are what you're looking for

Use an editor / IDE that supports code highlighting. E.g., Notepad++ has a word-highlighting feature that I find very useful.
Use unit tests.
Stupid errors will be weeded out first, so I wouldn't worry too much about that type of error. It's the "smart" errors you should be afraid of.

Use tools such as pylint or PyChecker.
Write unit tests.

Unit test. http://docs.python.org/library/unittest.html
If your tests are written at a reasonable level of granularity, it can be as fast to unit test as it is to run lint or a compiler.
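For instance, a minimal test module might look like this (mymodule and add are hypothetical stand-ins for your own code), and it runs in well under a second:
# test_mymodule.py -- a minimal unittest sketch
import unittest
from mymodule import add

class TestAdd(unittest.TestCase):
    def test_add_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()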

Static analysis (from the IDE, or from tools like Pylint and PyChecker) is a very quick and effective way to catch simple errors and enforce a common style.
Unit tests are a great way to ensure the code lives up to its contract.
Code reviews and pair programming are among the best ways to find errors of all sorts, and to spread knowledge in a team.
All of these options require some time to set up and to execute. However, the gains are tremendous, and far higher than the investment.

Eclipse has a good Python plugin for syntax highlighting and debugging.

Pylint does almost exactly what you are looking for.
You can also force the compilation of your Python files. That will show basic syntax errors (it doesn't have all the capabilities of a C++ compiler).
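A small sketch of that forced-compilation step, using only the standard library (the src/ directory name is an assumption):
# check_compile.py -- byte-compile every .py file under src/ and exit
# non-zero on syntax errors, without executing any of the code.
import compileall
import sys

ok = compileall.compile_dir("src", quiet=1)
sys.exit(0 if ok else 1)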
I've read this article and decided to make an automated build system with PyDev and Ant. It compiles the Python files and runs the unit tests. The next step is to integrate pylint into that process.
I hope it helps.

As with other languages, you should use assertions liberally throughout your code. Use assertions when the program must rely on a predicate being true in order to keep running, not as exception/error handling. An assertion should be used to check for irrecoverable errors and force the program to crash early. More on assertions (and Python error checking in general).
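A minimal illustration (the function is hypothetical):
def apply_discount(price, discount):
    # The caller must pass a discount in [0, 1]; anything else is a bug,
    # not a recoverable condition, so assert and let the program crash.
    assert 0 <= discount <= 1, "discount must be between 0 and 1"
    return price * (1 - discount)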

You may need this:
python -m py_compile script.py

You might also want to check out PEP8 as a style guide for Python Code.

Related

How can I find Python 3 code only on GitHub?

Is there a way to know which code on GitHub is in Python 3? So far, I haven't found any mention of it.
Thanks
Very simply:
Yes, you can find Python3 code.
No, you cannot do so effectively.
Code on GitHub is not identified by language version -- obviously, since you would have found that in your investigations before asking. Yes, you can generally determine the language of a code file, but only with detailed examination -- you would almost need to pass the file to a Python compiler and reject any with syntax errors. That is not an efficient process.
You can reduce the search somewhat by gleaning *.py files and then looking for frequent, Python 3-specific features, such as print calls using parentheses (an optional style in Python 2, mandatory in Python 3). This merely reduces the problem; it does not give you a good search mechanism.
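A rough sketch of the "pass it to a compiler" heuristic mentioned above: run this under a Python 3 interpreter over a cloned repository and reject any .py file that fails to compile (the cloned_repo path is an assumption, and code that is valid under both 2 and 3 will still pass):
import os

def looks_like_python3(path):
    # Try to compile the source with the running (Python 3) interpreter;
    # a SyntaxError strongly suggests the file is not Python 3 code.
    with open(path, "rb") as f:
        source = f.read()
    try:
        compile(source, path, "exec")
        return True
    except SyntaxError:
        return False

for root, dirs, files in os.walk("cloned_repo"):
    for name in files:
        if name.endswith(".py"):
            path = os.path.join(root, name)
            print(path, looks_like_python3(path))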

How to find unused code in Python web site?

We have been using Django for a long time. Some old code is not being used any more. How can I find which code is not being used and remove it?
I used coverage.py with unit tests, which works fine and shows which parts of the code are never used, but the test coverage is very low. Is there any way to use it with a WSGI server to find which code has never served any web requests?
It depends on what you mean by unused code.
For unreachable dead code -- functions that are never called, classes that are never instantiated -- you can use a purely static code analyzer to find it. Pylint is a good option. Bear in mind that this is not 100% accurate; false positives are possible:
# static analysis can't detect methods called this way
func = getattr(obj, "func_name")
func()
For code that is reachable but never actually executed, you have to rely on tools like coverage.py and improve your test coverage.
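If you do want coverage data from real web traffic rather than tests, one rough sketch (an assumption about a Django-style wsgi.py entry point, not a built-in feature) is to start coverage.py when the WSGI worker boots and save the data when it exits:
# wsgi.py -- hypothetical: measure which code real requests execute
import atexit
import coverage

cov = coverage.Coverage(data_file="/tmp/wsgi.coverage")  # path is an assumption
cov.start()

def _save_coverage():
    cov.stop()
    cov.save()

atexit.register(_save_coverage)

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()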
On a well-tested project coverage would be ideal, but with some untested legacy code I don't think there is a magical tool.
You could write a big test loading all the pages and run coverage to get some indication.
Cowboy style:
If it's not some critical code and you're fairly sure it's unused (i.e. not handling payments, etc.), comment it out, check that the tests pass, deploy, and wait a week or so before removing it for good (or putting it back if you got a notification).
As other answers indicate, coverage.py accurately finds out which parts of the code are never executed, but it requires your code to actually be run to perform the analysis. Vulture, on the other hand, uses static analysis to find dead (unused) code in Python programs. Also, if you run vulture on both your library and test suite, you might be able to find untested code.
Vulture is a standard PyPI package and can be installed using pip:
$ pip install vulture
Run vulture using the command:
$ vulture apps/ tests/ --exclude settings
Bear in mind that due to Python's dynamic nature there may be some false positives, but they can be dealt with by means of "whitelists" -- please refer to this answer by Rahul for more information on how to use vulture with Django and tackle false positives.
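For example, a whitelist can be just a Python file that references the names vulture flags incorrectly; you then pass it as an extra argument (the module and attribute names here are hypothetical):
# whitelist.py -- reference false positives so vulture treats them as used
from apps.orders import views

views.legacy_export  # only ever called via getattr() at runtime
Then run it together with your code:
$ vulture apps/ tests/ whitelist.py --exclude settings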

A tool to validate the structure of a Python Package?

I started writing Python code not too long ago and everything just works, but I have been having problems writing a package. I was wondering if there is such a thing as a "package validation tool". I know I could just start up a REPL and start importing the module, but... is there a better way? Is there a tool that could tell me "you have these possible errors"?
Or maybe there is something in the middle: is there a way to test a Python package's structure?
As always, thanks in advance!
If you call a module using:
python -m module
Python will load/execute the module, so you will catch crude syntax errors. Also, if the module has a block like:
if __name__ == "__main__":
    do_something()
it will be called. For some small self-contained modules, I often use this block to run tests.
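For example, a small self-contained module might end like this (the function is just an illustration):
# mymodule.py
def double(x):
    return x * 2

if __name__ == "__main__":
    # Quick self-test: runs only when the file is executed directly
    # (e.g. python -m mymodule), never on import.
    assert double(2) == 4
    assert double("ab") == "abab"
    print("all self-tests passed")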
Given the very dynamic nature of Python, it is very hard to check for correctness if the module author is not using TDD. There is no silver bullet here. There are tools that will check for "code smells" and compliance with standards (dynamic languages tend to generate a profusion of linters).
pylint
PyChecker
PyFlakes
PEP8
A good IDE like PyCharm can help, if you like IDEs.
These tools can help, but are still far from the assurance of static languages where the compiler can catch many errors at compile time. For example, Go seems to be designed to have a very pedantic compiler. Haskell programs are said to be like mathematical proofs.
If you are coming from a language with strong compile-time checks, just relax. Python is kind of a "throw it against the wall and see if it sticks" language. Some of the Python "macho" principles:
duck typing
EAFP
We are all consenting adults
There is no tool to test the package structure per se, and I'm unsure of what would be tested. Almost any structure is a valid structure...
But there are some tools to help you test your package data if you are distributing your module; they may be useful:
Pyroma will check the packages meta data.
check-manifest will check the MANIFEST.in file.
I have both of them installed, and I also use zest.releaser, which has some basic sanity checks of its own. But none of these will check that the code is OK, and they won't look for missing __init__.py files, for example.
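Typical usage, run from the root of the source tree (assuming both tools are installed from PyPI):
$ pip install pyroma check-manifest
$ pyroma .
$ check-manifest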

How can I ensure good test coverage of my big Python project?

I have a very large python project with a very large test suite. Recently we have decided to quantify the quality of our test-coverage.
I'm looking for a tool to automate the test coverage report generation. Ideally I'd like to have attractive, easy-to-read reports, but I'd settle for less attractive reports if I could make it work quickly.
I've tried Nose, which is not good enough: it is incompatible with distribute / setuptools' namespace package feature. Unfortunately, nose coverage will never work for us, since we make abundant use of this feature. That's a real shame, because Nose seems to work really nicely in Hudson (mostly).
As an alternative, I've heard that there's a way to do a Python coverage analysis in Eclipse, but I've not quite locked-down the perfect technique.
Any suggestions welcome!
FYI we use Python 2.4.4 on Windows XP 32bit
Have you tried using coverage.py? It underlies "nose coverage", but can be run perfectly well outside of nose if you need to.
If you run your tests with (hypothetically) python run_my_tests.py, then you can measure coverage with coverage run run_my_tests.py, then get HTML reports with coverage html.
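In other words, assuming run_my_tests.py really is your single entry point, the whole flow is just:
$ coverage run run_my_tests.py
$ coverage html
The HTML report ends up in the htmlcov/ directory by default.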
From your description, I'm not sure what problem you had with nose, especially whether it was a nose issue, or a coverage.py issue. Provide some more details, and I'm sure we can work through them.
Ned has already mentioned his excellent coverage.py module.
If the problem you're having is something nose specific, you might want to consider using another test runner. I've used py.test along with the pytest_coverage plugin that lets you generate coverage statistics. It also has a pytest_nose plugin to help you migrate.
However, I don't understand exactly what the problem you're facing is. Can you elaborate a little on the "distribute / setuptools' namespace package feature" you mentioned? I'm curious to know what the problem is.

Are there any visual tools for Python unit tests?

I'm writing quite a few unit tests and using nosetests to run them. Nose certainly makes it nice and easy to run tests, but the output can be pretty cluttered at the best of times, and a downright mess at others, depending on warnings and errors.
I was wondering if there are any visual tools that wrap nose so that the feedback is cleaner. A results grid showing test names, run times and success/failure status indicated with colours would be a huge visual aid. Better still, if it could split up the error messages and tracebacks on a case-by-case basis, it would really cut down on the amount of clutter when a large number of tests fail after a major change.
This is starting to read like a wishlist, but does anything even close to this exist?
There's a very similar question here.
It doesn't look like there's a standardized GUI for nosetests (as in, standardized for just the Python interface). There seem to be a few GUI plugins for nosetests, depending on which IDE you're using -- a quick search brought up this link for Komodo.
However, there are many other GUI based unit testing frameworks out there for Python - have you tried PyUnit?
Well, what you can do is use the Jenkins continuous integration server. We use it with py.test and PHPUnit (it is really simple to use it with any other testing framework), and it provides some fancy (and useful!) graphs containing:
Number of tests that run
Number of tests passing/failing
The stack trace for the failed tests
Execution times for tests
Code coverage for the tests
You can also relate it to git/svn/mercurial...
Here is an example of how to configure Hudson (for the record, Jenkins is a fork of Hudson, so the instructions are still valid :) in order to provide the elements from your wishlist.
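One concrete detail worth knowing (an assumption about a py.test setup, not part of that walkthrough): have py.test write JUnit-style XML so Jenkins can publish the results:
$ py.test --junitxml=results.xml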
An easier option is to use an IDE such as PyCharm; it has integration with nosetests and will tell you which tests passed/failed and their run times. However, Jenkins stores historical data.
Good luck :)
Have you taken a look at the Nose plugins page? A quick glance revealed at least a couple that stated they could provide a GUI interface.
