I am developing Python in Eclipse. As a result, my Python source files and test files are in different directories.
The question is: how do I run specific test files in the test folder from the command line? These tests obviously depend on files in the src folder.
Cheers
Edit: if I run
python test/myTestFile.py
I get dependency errors, eg. ImportError: No module named SrcFile1
You need to make sure your PYTHONPATH is set correctly so the command-line interpreter can find your packages, or run your test cases from within Eclipse PyDev. Update: another option is running your tests with nose, which might make things a bit easier since it can auto-discover packages and test cases.
If your project is laid out like so:
/home/user/dev/
|-- src/
|   `-- pkg1/
|       `-- mod1.py
`-- test/
    `-- mod1_test.py
Use: PYTHONPATH=$HOME/dev/src python test/mod1_test.py. I'd also recommend using distribute and virtualenv to set up your project for development.
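If you prefer not to set PYTHONPATH externally, a common (if less clean) alternative is to prepend the src directory from within the test file itself. This is a sketch; the relative layout is assumed from the tree above:

```python
import os
import sys

# Prepend the sibling src directory (relative to this test file) so that
# "import pkg1.mod1" resolves without setting PYTHONPATH externally.
src_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "src"))
sys.path.insert(0, src_dir)
print(src_dir.endswith("src"))  # -> True
```

This keeps the test runnable with a plain `python test/mod1_test.py`, at the cost of hard-coding the layout into the test.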
Updated in response to question in comments:
This shows how the PYTHONPATH environment variable extends Python's package search path:
% PYTHONPATH=foo:bar python -c 'import sys; print sys.path[:3]'
['', '/home/user/foo', '/home/user/bar']
# exporting the variable makes it sticky for your current session. you can
# add this to your shell's resource file (e.g. ~/.profile) or source
# it from a textfile to save typing:
% export PYTHONPATH=bar:baz
% python -c 'import sys; print sys.path[:3]'
['', '/home/user/bar', '/home/user/baz']
% python -c 'import sys; print sys.path[:3]'
['', '/home/user/bar', '/home/user/baz']
The above should get you going in the short term. Using distribute and
virtualenv has a higher one-time setup cost, but you get longer-term benefits
from them. When you get a chance, read some of the many tutorials on SO for setting these up to see if they're a good fit for your project.
There are two principal solutions to this. Either you use e.g. the PYTHONPATH environment variable to tell the tests where the source is, or you make the tests and production code part of the same module tree by inserting the relevant __init__.py files. In the latter approach, the tree may look something like this:
|-- qbit
| |-- __init__.py
| |-- master.py
| |-- policy.py
| |-- pool.py
| |-- synchronize.py
| `-- worker.py
`-- test
|-- __init__.py
|-- support.py
|-- test_policy.py
|-- test_synchronize.py
`-- test_worker.py
__init__.py can be an empty file.
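To make the layout concrete, here is a runnable sketch that recreates a miniature version of this tree in a temporary directory, with empty __init__.py files, and confirms that `from qbit import policy` resolves once the project root is on sys.path (the module content is a placeholder):

```python
import sys
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
for pkg in ("qbit", "test"):
    (root / pkg).mkdir()
    (root / pkg / "__init__.py").write_text("")      # empty is fine
(root / "qbit" / "policy.py").write_text("RULE = 'allow'\n")

sys.path.insert(0, str(root))  # running from the project root does this for you
from qbit import policy

print(policy.RULE)  # -> allow
```

In a real project you get the same effect by running the tests from the project root, e.g. `python -m test.test_policy`.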
Context
While trying to generate type hints for all python files in some project named projectname, I am experiencing some difficulties applying the stubs. The project directory is:
projectname/
|-- src/
|   `-- projectname/
|       |-- __main__.py
|       `-- some_folder/
|           `-- some_script.py
|-- tests/
|   `-- projectname/
|       |-- test_something.py
|       `-- another_folder/
|           `-- test_another_thing.py
|-- setup.py
`-- README.md
The main code is executed with:
python -m src.projectname
and the tests are executed with:
python -m pytest
Attempt I
Based on this issue, I tried running:
monkeytype run src/projectname/__main__.py
Which generates the monkeytype.sqlite3 file in the root directory of the project. However, when I try to apply the generated type hints with:
monkeytype stub src.projectname.__main__.py
monkeytype apply src.projectname.__main__.py
or:
monkeytype stub src.projectname.__main__
monkeytype apply src.projectname.__main__
I get:
No traces found for module src.projectname.__main__
No traces found for module src.projectname.__main__.py
respectively. And for:
monkeytype apply src.projectname
It says:
No traces found for module src.projectname
Attempt II
Based on this issue, in which it says one should apply the stub to the "modulename" I tried running:
monkeytype run src/projectname/some_folder/some_script.py
monkeytype run src/projectname.some_folder.some_script.py
monkeytype run src/projectname.some_folder.some_script
monkeytype run some_script.py
monkeytype run some_script
from the root directory of the project, and they all produce the same error. I do not yet know how to determine what the "modulename" of my project is.
Attempt III
Based on this answer, I think I should create an additional file that calls the __main__.py file and executes its code, to be able to generate stubs for it, or the modules imported by it. However, that would seem in conflict with the quote:
We don't collect traces for main because it could be different on subsequent runs and it would be confusing to look up such traces.
Question
How can I apply the generated stubs to all the (touched) .py files in the project?
Misunderstanding
I think the key misunderstanding I had can be clarified with:
MonkeyType currently does not automatically create typings for a Python project.
Issue
As described in the issues and the readme, MonkeyType only generates type hints for the modules that the traced __main__.py file imports and uses. Even though I am relatively confident that a large fraction of my files are used when I run __main__.py, most of them are not directly imported by __main__.py. Instead, they are imported by the code that __main__.py imports.
Manual-"Solution"
To "automatically"/semi-manually generate the type hints, you need to write a separate another_python.py file that:
imports each other.py file for which you want to automatically create type hints, and
calls each function in every other.py file for which you want to automatically create type hints.
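Such a driver might look like the following runnable sketch. The stand-in other.py is generated on the fly so the example is self-contained; in real use you would import your own modules and call their functions with representative arguments:

```python
import inspect
import sys
import tempfile
from pathlib import Path

# Stand-in for one of your "other.py" modules (placeholder content).
tmp = Path(tempfile.mkdtemp())
(tmp / "other.py").write_text("def greet(name='world'):\n    return 'hi ' + name\n")
sys.path.insert(0, str(tmp))

import other

# Call every function in the module so a tracer (e.g. invoked via
# `monkeytype run another_python.py`) records call types for them.
results = []
for name, fn in inspect.getmembers(other, inspect.isfunction):
    try:
        results.append((name, fn()))
    except TypeError:
        pass  # functions that need arguments must be called with real data
print(results)  # -> [('greet', 'hi world')]
```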
Bit of automation
One could at least automate applying the stubs that were generated, by walking over the directories and trying to apply the stubs where they exist:
monkeytype run src/projectname/__main__.py
# List all .py files in the project:
for f in $(find src/ -name '*.py'); do echo $f; done
# List all .py files in the project with `.` instead of `/`:
for f in $(find src/ -name '*.py'); do echo ${f//\//.}; done
# Apply monkeytype type hints to each module for which traces were found
# (strip the ".py" suffix: monkeytype expects module names, not file paths).
for f in $(find src/ -name '*.py'); do m=${f%.py}; monkeytype apply ${m//\//.}; done
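The same path-to-module conversion can be done in Python, as a sketch; note that the .py suffix must be dropped, because `monkeytype apply` expects a module name, not a file path (shelling out to monkeytype itself is left to the reader):

```python
from pathlib import Path

def module_name(path):
    # src/pkg/mod.py -> src.pkg.mod: drop the ".py" suffix, then join the
    # remaining path parts with dots.
    return ".".join(Path(path).with_suffix("").parts)

print(module_name("src/projectname/some_folder/some_script.py"))
# -> src.projectname.some_folder.some_script
```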
Additional Issues
I had a file src/projectname/some_file.py containing some_function():, which was definitely called by __main__.py, yet no stubs like some_function() -> None: were generated for this file. In total, 1 out of 50 files was changed. I retried auto-generating the stubs using the tests I wrote, with pyannotate by Dropbox, and that worked more effectively.
I use pants to manage a Python project that uses protocol buffers. Pants places the generated _pb2.py and _pb2.pyi files under a separate dist/codegen tree. Is it possible to get VS Code autocomplete to work when using the _pb2 modules?
The file tree looks like this:
.
|-- dist/
| `-- codegen/
| `-- src/
| `-- project/
| |-- data_pb2.py
| `-- data_pb2.pyi
`-- src/
`-- project/
|-- __init__.py
|-- code.py
`-- data.proto
And in code.py I have import statements like this:
from project import data_pb2
I've tried setting python.analysis.extraPaths to ["dist/codegen/src"] in settings.json. This makes pylance stop complaining that data_pb2 is missing. But autocomplete still does not work, and pylance has no type information for members of data_pb2.
Replace your python.analysis.extraPaths setting with the following:
"python.analysis.extraPaths": [
"./dist/codegen/src"
],
And add the following code to your code.py:
import sys
sys.path.append("./dist/codegen/src")
You can use Python implicit namespace packages (PEP 420) to make this work. Namespace packages allow modules within the same package to reside in different directories, which lets pylance and other tools work correctly when code is split between src and dist/codegen/src.
To use implicit namespace packages, you just need to remove src/project/__init__.py and leave "python.analysis.extraPaths" set to ["dist/codegen/src"].
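As a runnable sketch of what PEP 420 permits, the following splits one top-level package across two directory trees (no __init__.py anywhere), mirroring the src and dist/codegen/src layout above; the module contents are placeholders:

```python
import sys
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
for sub, mod, body in [
    ("src", "code.py", "X = 1\n"),
    ("dist/codegen/src", "data_pb2.py", "Y = 2\n"),
]:
    pkg = root / sub / "project"
    pkg.mkdir(parents=True)            # note: no __init__.py is created
    (pkg / mod).write_text(body)

sys.path += [str(root / "src"), str(root / "dist/codegen/src")]
from project import code, data_pb2    # both halves of "project" resolve

print(code.X + data_pb2.Y)  # -> 3
```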
See also the GitHub issue microsoft/pylance-release#2855, which describes using implicit namespace packages to make pylance work correctly in a similar situation.
I'm using PyCharm CE to develop a project with a structure similar to this:
test_python/
|-- app
| |-- __init__.py
| |-- mymodule.py
| |-- mymodule.pyc
| `-- test_mymodule.py
|-- config.py
`-- tests
|-- __init__.py
|-- test_config.py
`-- test_models.py
When I try to run my test scripts such as test_config.py, I get:
$ python tests/test_config.py
Traceback (most recent call last):
File "tests/test_config.py", line 1, in <module>
from config import app_config
ImportError: No module named config
I have read a lot of other SO posts that talk about needing an __init__.py file in all directories that are packages (which I have done already). Many also suggest messing around with sys.path. My problem with the latter approach is that I never had to meddle with paths before. I'm not sure if something changed with my dev environment setup, but here it is:
Python 2.7 | macOS Sierra | PyCharm CE
I have tried to install a virtual environment with virtualenv but didn't see a difference. Here is the sample project on GitHub if you'd like to run it yourself.
It seems that there is a problem with folder depth. Try replacing the import with from ..config import app_config (relative imports use dots, not ../, and only work when the test is run as part of a package, e.g. python -m tests.test_config).
I think it will work if you change the working directory on your run configuration to test_python. All the packages in your import statements are relative to some entry in your Python path, and the working directory is usually in the Python path.
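To illustrate the point, this runnable sketch creates a throwaway config.py in a temporary directory and starts a fresh interpreter with that directory as its working directory; the bare import then succeeds purely because of where Python was launched (the module content is a placeholder):

```python
import os
import subprocess
import sys
import tempfile

tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "config.py"), "w") as f:
    f.write("app_config = {'DEBUG': True}\n")

# Run from tmp: the working directory lands on sys.path, so the bare
# "from config import ..." resolves, just as it would in PyCharm with the
# working directory set to the project root.
proc = subprocess.run(
    [sys.executable, "-c",
     "from config import app_config; print(app_config['DEBUG'])"],
    cwd=tmp, capture_output=True, text=True,
)
print(proc.stdout.strip())  # -> True
```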
I have a program with several submodules. I want the submodules to be usable independently, so that I can use some of them in other programs as well. However, the submodules have inter-dependencies, requiring aspects of each other to run. What is the least problematic way to deal with this?
Right now, I have structured my program like this:
myapp/
|-- __init__.py
|-- app.py
|-- special_classes
| |-- __init__.py
| `-- tools
| `-- __init__.py
|-- special_functions
| |-- __init__.py
| `-- tools
| `-- __init__.py
`-- tools
|-- __init__.py
|-- classes.py
`-- functions.py
Where each submodule is a git submodule of its parent.
The advantage of this is that I can manage and develop each submodule independently, and adding one of these submodules to a new project is as simple as running git clone and git submodule add. Because I work in a managed, shared computing environment, this also makes the program easy to run, since user environment management and software versioning and installation are contentious issues.
The disadvantage is that in this example I now have 3 copies of the tools submodule, which are independent of each other and have to be manually updated every time any of them changes. Doing any sort of development on the submodules becomes very cumbersome. Similarly, it has tripled the number of unit tests I run, since the tests get run in each submodule and there are 3 copies of the tools module.
I have seen various importing methods, such as those mentioned here but that does not seem like an ideal solution for this.
I have read about how to create a formal Python package here but this seems like a large undertaking and will make it much more difficult for my end users to actually install and run the program.
Another relevant question asked here
It's better to have a single tools package in the parent and import it from the submodules. That feels by far the best approach to me.
I have a small python application that I would like to make into a downloadable / installable executable for UNIX-like systems. I am under the impression that setuptools would be the best way to make this happen but somehow this doesn't seem to be a common task.
My directory structure looks like this:
myappname/
|-- setup.py
`-- myappname/
    |-- __init__.py
    |-- myappname.py
    `-- src/
        |-- __init__.py
        |-- mainclassfile.py
        `-- morepython/
            |-- __init__.py
            |-- extrapython1.py
            `-- extrapython2.py
The file which contains if __name__ == "__main__": is myappname.py. This file has a line at the top, import src.mainclassfile.
When this is downloaded, I would like for a user to be able to do something like:
$ python setup.py build
$ python setup.py install
And then it will be an installed executable which they can invoke from anywhere on the command line with:
$ myappname arg1 arg2
The important parts of my setup.py are like:
from setuptools import setup, find_packages
setup(
name='myappname',
scripts=['myappname/myappname.py'],
package_dir={'myappname': 'myappname'},
packages=find_packages(),
)
Current state
By running:
$ sudo python setup.py install
And then in a new shell:
$ myappname.py
I am getting a No module named error
The problem here is that your package layout is broken.
It happens to work in-place, at least in 2.x. Why? You're not accessing the package as myappname; rather, the same directory that is that package's directory is also the top-level script directory, so you end up getting any of its siblings via old-style relative import.
Once you install things, of course, you'll end up with the myappname package installed in your site-packages, and then a copy of myappname.py installed somewhere on your PATH, so relative import can't possibly work.
The right way to do this is to put your top-level scripts outside the package (or, ideally, into a bin directory).
Also, your module and your script shouldn't have the same name. (There are ways you can make that work, but… just don't try it.)
So, for example:
myappname/
|-- setup.py
|-- myscriptname.py
|-- myappname/
| |-- __init__.py
| |-- src/
| |-- __init__.py
| |-- mainclassfile.py
Of course so far, all this does is make it break in in-place mode the exact same way it breaks when installed. But at least that makes things easier to debug, right?
Anyway, your myscriptname.py then has to use an absolute import:
import myappname.src.mainclassfile
And your setup.py has to find the script in the right place:
scripts=['myscriptname.py'],
Finally, if you need some code from myscriptname.py to be accessible inside the module, as well as in the script, the right thing to do is to refactor it into two files—but if that's too difficult for some reason, you can always write a wrapper script.
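As a side note, a console_scripts entry point achieves the same end without a separate wrapper file, since setuptools generates the launcher itself. The target path myappname.src.mainclassfile:main below is an assumption; point it at whichever function should act as main():

```python
# In setup.py, instead of scripts=['myscriptname.py'] (the target
# "myappname.src.mainclassfile:main" is a placeholder):
entry_points = {
    "console_scripts": [
        "myappname = myappname.src.mainclassfile:main",
    ],
}

# The generated "myappname" command imports the module and calls main(),
# so the package is always accessed via absolute imports.
command, target = [s.strip()
                   for s in entry_points["console_scripts"][0].split("=", 1)]
print(command, target)  # -> myappname myappname.src.mainclassfile:main
```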
See Arranging your file and directory structure and related sections in the Hitchhiker's Guide to Packaging for more details.
Also see PEP 328 for details on absolute vs. relative imports (but keep in mind that when it refers to "up to Python 2.5" it really means "up to 2.7", and "starting in 2.6" means "starting in 3.0").
For a few examples of packages that include scripts that get installed this way via setup.py (and, usually, easy_install and pip), see ipython, bpython, modulegraph, py2app, and of course easy_install and pip themselves.