In a default installation of cedet-1.0, completion can only track global-scope symbols in the current file. This does not differ much from the built-in completion functions (dabbrev-expand or hippie-expand).
It cannot complete symbols from imported modules or from class properties.
Not to mention that it cannot handle 'self'.
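A minimal illustration of the failing cases (the module and names here are only hypothetical):

import os.path

class Walker:
    def __init__(self, root):
        self.root = root

    def run(self):
        # no candidates are offered after "os.path." (imported module)
        full = os.path.join(self.root, "data")
        # no candidates are offered after "self." (class properties)
        return full, self.root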
Is it possible to tweak semantic to do these things?
P.S.
The ECB code browser successfully sees all imports, base classes, and so on.
It is only the symbol completion that works incorrectly, or is not set up properly.
CEDET support for each language is slightly different. In the case of Python, the 1.0 release of CEDET hadn't been configured to convert a Python import into a file name. In addition, 'self' is similar to 'this' in C++, and needs to be added by the completion logic since it isn't declared. These two features were added to the bzr repository in January of this year. I am not a Python programmer, but I recall reports that this fixed a range of the most basic smart-completion features, so that symbols from imported libraries now work. There was also new code in bzr for Python system paths.
Thus, I recommend downloading CEDET from bzr to get these features and see if it now does what you would expect for smart completion.
I created the class CustomErr (which inherits from Python's Exception) a long time ago and I use it in a few hundred places in the code. It has a member method declared as def logErr(self).
After some time I found that this function should have a mandatory keyword argument logger, so I changed the declaration of the function to def logErr(self, *, logger).
After this change I expected a few hundred compiler errors, or at least warnings (so it would be easy for me to make the needed changes), but that didn't happen - not a single error is reported. I only get a runtime exception when this function is called.
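A minimal reproduction of the situation (names simplified; this is only a sketch, not the real project code):

class CustomErr(Exception):
    def logErr(self, *, logger):  # logger is now a required keyword-only argument
        logger.error(str(self))

# An old call site that was never updated. CPython byte-compiles this file
# without any complaint; the problem only shows up when the line actually runs:
#   TypeError: logErr() missing 1 required keyword-only argument: 'logger'
CustomErr("something failed").logErr()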
Did I expect too much from the Python compiler and/or my IDE (I'm using Eclipse with PyDev 8.1, Python 3.8.7)?
Do I need a Python linter?
PyDev by default doesn't really provide this checking -- it does have built-in code analysis, but that focuses on being very fast and having very few false positives, and it doesn't really delve into type-checking territory.
In PyDev the recommended way to handle this is through an external linter or type checker.
PyDev has integrations with Flake8, Mypy and PyLint (the related configuration in the preferences is under PyDev > Editor > Code Analysis > Flake8/Mypy/PyLint).
If you're using type hints in your code, enabling Mypy is probably a no-brainer, and it is actually required if you want that typing used for code analysis in PyDev. Out of the box PyDev will use that info for code completion, but there are no plans to add it for code analysis, as I'm a bit worried about duplicating what's already available in Mypy and I don't see much benefit in having yet another type checker.
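For example, Mypy flags the outdated call sites statically, even before any annotations are added to the file (a sketch; the exact message wording depends on the Mypy version):

# running "mypy custom_err.py" on this file reports the last line, roughly:
#   error: Missing named argument "logger" for "logErr" of "CustomErr"
class CustomErr(Exception):
    def logErr(self, *, logger):
        logger.error(str(self))

CustomErr("boom").logErr()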
PyLint may also be interesting, as it does catch some interesting errors (although in my experience it also has a higher false-positive ratio, so it may need to be tweaked a bit more). If you don't use type hints (and thus wouldn't really get much from Mypy), I think PyLint is worthwhile, as it tries to cover more ground than PyDev does.
As a note, if you're not using type checking/Mypy, what you want is probably not achievable with 100% certainty by any tool, because it's difficult to determine that something like parameter.logErr means the logErr in your CustomErr and not some other logErr elsewhere (discovering that the type of parameter is CustomErr is really hard).
p.s.: the type checker/linter runs automatically for each file as you save it, but you can also run it for a whole project by right-clicking a project/file and then selecting PyDev > Code Analysis.
Does VS Code record all functions, classes and variables in a table?
Or how does VS Code find definitions and references?
I want to know how VS Code implements the "Go to Definition" feature.
Go to Definition:
If a language supports it, you can go to the definition of a symbol by pressing F12.
If you press Ctrl and hover over a symbol, a preview of the declaration will appear:
Tip: You can jump to the definition with Ctrl+Click or open the definition to the side with Ctrl+Alt+Click.
Go to Type Definition:
Some languages also support jumping to the type definition of a symbol by running the Go to Type Definition command from either the editor context menu or the Command Palette. This will take you to the definition of the type of a symbol. The command editor.action.goToTypeDefinition is not bound to a keyboard shortcut by default but you can add your own custom keybinding.
The explanation above is copy-pasted from this page:
this
This is a very complicated process, and there is a ton of software-engineering research related to this area. Most IDEs/extensions do a best-effort job, and that is good enough for many use cases. It also helps if one follows a widely used build system and sticks to widely accepted code organization.
Finding the definition of a call is a complicated process and usually involves building an AST (Abstract Syntax Tree) of the program and finding where a function is defined (a minimal sketch of that step follows the list below). There are two important considerations:
type system of the language
organization of code around the build system
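As a rough illustration of the AST step, Python's own ast module is enough to index where functions are defined within a single file; real tools additionally resolve imports, types and the build system:

import ast

source = '''
def handler(msg):
    return msg.upper()

class Worker:
    def run(self):
        return handler("hi")
'''

# Walk the syntax tree and record every function definition with its line number.
tree = ast.parse(source)
definitions = {
    node.name: node.lineno
    for node in ast.walk(tree)
    if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
}
print(definitions)  # e.g. {'handler': 2, 'run': 6}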
For (mostly) statically typed languages like C/C++, finding the function declaration is not too difficult, but finding the definition usually involves finding the source file (.c/.cpp) it lives in. Without knowledge of the build system (cmake, qmake) this is very difficult, and hence most IDEs have a hard time supporting it.
For languages like Rust, where it is customary to follow a rigid system of code organization, simple grepping combined with some type information is sufficient to find most function definitions.
For dynamically typed languages (Python, JavaScript), it is very hard to find the right definition, and IDEs/extensions usually support a well-known coding style and module system. TL;DR: it helps to follow the crowd in this case. In my experience, PyCharm does a very good job of finding the definition of a function invocation.
I am currently developing a Python application which I continually performance test, simply by recording the runtime of various parts.
A lot of the code relates only to the testing environment and would not exist in the real-world application. I have these parts separated into functions, and at the moment I comment out the calls when testing. This requires me to remember which calls refer to test-only components (they are quite interleaved, so I cannot group the functionality).
I was wondering if there is a better solution to this. The only idea I have had so far is creating a 'mode' boolean and inserting if statements, though this feels needlessly messy. I was hoping there might be some more standardised testing method that I am unaware of.
I am new to Python, so I may have overlooked some simple solutions.
Thank you in advance
There are libraries for testing, like those in the development-tools section of the standard library. If you have not used such tools yet, you should start to do so - they help a lot with testing (especially unittest).
Normally Python runs programs in debug mode, with __debug__ set to True (see the docs on assert) - you can switch off debug mode with the command-line switches -O or -OO for optimization (see the docs).
There is also a page specifically about using assertions in the Python Wiki.
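A minimal sketch of how that looks in practice (the function names are hypothetical): calls guarded by __debug__ run normally, but are skipped - and even stripped from the bytecode - when the script is started with python -O:

import time

def timing_check():
    # test-environment-only instrumentation (hypothetical example)
    start = time.perf_counter()
    real_work()
    print("elapsed:", time.perf_counter() - start)

def real_work():
    pass

if __debug__:       # True by default, False under "python -O"
    timing_check()
else:
    real_work()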
I'd say that if you're commenting out several parts of your code when switching between debug and release mode, you're doing it wrong. Take a look, for example, at the logging library: with it you can choose the logging level you want simply by changing a single parameter.
Try to avoid commenting out specific parts of your debug code; instead, have one or more variables that control the mode (debug, release, ...) your script runs in. You could also use some of the built-in ones Python already provides.
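For example, with the logging module the debug-only output is controlled by a single level setting (a sketch; the names are illustrative):

import logging

logging.basicConfig(level=logging.DEBUG)  # change to logging.WARNING for 'release' runs
log = logging.getLogger(__name__)

def process(items):
    log.debug("processing %d items", len(items))  # emitted only when the level allows it
    return [item * 2 for item in items]

print(process([1, 2, 3]))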
I don't miss having to type out type declarations all the time - that's one of the many reasons I love Python.
But I like to at least have some "variable used before set", "variable set but never used" and "wrong number of arguments" checks, to avoid trivial errors.
pylint is reportedly the most stringent static analysis tool for Python, so I've been using it to get such checks.
However, pylint has been a bit of a disappointment for me. For example, I have 3 modules that do compression in 3 different ways (subprocess, ctypes, the bz2 module), and I want to add a 4th (the new lzma module in CPython 3.3, which hasn't been released yet). These modules all have slightly different portability and behavior. I want pylint to be able to check these alternatives, but it seems to get lost.
So I started using a tiny "dispatch module" that does nothing but choose between the different compression alternatives, and that wouldn't be pylint'ed. That sort of works, but not as well as I'd like: it lets me pylint most of my code, but it seems to make pylint unable to check uses of the compression code.
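The dispatch module amounts to something like this (a simplified sketch; the backend module names are hypothetical):

# compress_dispatch.py - simplified sketch, hypothetical backend modules
try:
    import bz2_backend as backend          # e.g. a wrapper around the bz2 module
except ImportError:
    import subprocess_backend as backend   # e.g. a wrapper that shells out to bzip2

def compress(data):
    return backend.compress(data)

def decompress(data):
    return backend.decompress(data)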
Is there some way of getting static analysis for Python that allows checking multiple modules all providing the same interface, as well as calls into those modules from the same code? Maybe pyflakes? Or a magic "# pylint:" hint I don't know about?
Thanks!
I'm noticing that even for system modules, code completion doesn't work too well.
For example, if I have a simple file that does:
import re
pattern = r"\d+"   # a sample pattern, so the snippet runs on its own
line = "abc 123"   # a sample input line
p = re.compile(pattern)
m = p.search(line)
If I type p., I don't get completion for the methods I'd expect to see (I don't see search(), for example, but I do see others, such as func_closure(), func_code()).
If I type m., I don't get any completion whatsoever (I'd expect .groups() in this case).
This doesn't seem to affect all modules. Has anyone seen this behaviour and knows how to correct it?
I'm running Vim 7.2 on WinXP, with the latest pythoncomplete.vim from vim.org (0.9), running python 2.6.2.
Completion for this kind of thing is tricky, because it would need to execute the actual code to work.
For example, p.search() could return None or a MatchObject, depending on the data that is passed to it.
This is why omni-completion does not work here, and probably never will. It works for things that can be statically determined, for example a module's contents.
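A short illustration of why this cannot be determined statically: the same call returns different types depending on the data it sees at runtime:

import re

p = re.compile(r"\d+")
m1 = p.search("abc 123")    # a match object, because the pattern matched
m2 = p.search("no digits")  # None, because nothing matched
print(type(m1), type(m2))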
I never got the built-in omnicomplete to work for any language. I had the most success with pysmell (which seems to have been updated slightly more recently on GitHub than in the official repo). I still didn't find it reliable enough to use consistently, but I can't remember exactly why.
I've resorted to building an extensive set of snipMate snippets for my primary libraries and using the default tab completion to supplement them.