Vim Python omni-completion failing to work on system modules

I'm noticing that even for system modules, code completion doesn't work too well.
For example, if I have a simple file that does:
import re
p = re.compile(pattern)
m = p.search(line)
If I type p., I don't get completion for the methods I'd expect to see (I don't see search(), for example, but I do see others, such as func_closure() and func_code()).
If I type m., I don't get any completion whatsoever (I'd expect .groups() in this case).
This doesn't seem to affect all modules. Has anyone seen this behaviour and found a way to correct it?
I'm running Vim 7.2 on WinXP, with the latest pythoncomplete.vim from vim.org (0.9), running python 2.6.2.

Completion for this kind of thing is tricky, because it would need to execute the actual code to work.
For example, p.search() could return None or a MatchObject, depending on the data that is passed to it.
This is why omni-completion does not work here, and probably never will. It works for things that can be statically determined, for example a module's contents.
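For instance, the same call yields different types depending on the input, which no static analysis can resolve:
import re
p = re.compile(r"\d+")
m1 = p.search("abc123")   # a match object
m2 = p.search("abc")      # None -- same call, different data
print(type(m1), type(m2))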

I never got the built-in omnicomplete to work for any language. I had the most success with pysmell (which seems to have been updated slightly more recently on GitHub than in the official repo). I still didn't find it reliable enough to use consistently, but I can't remember exactly why.
I've resorted to building an extensive set of snipMate snippets for my primary libraries and using the default tab completion to supplement.

Related

How can I feed keys into a terminal for unit-testing purposes

I'm working on a plug-in for Vim, and I'd like to test that it behaves correctly at start-up, when users edit files, etc.
To do this, I'd like to start a terminal and feed keys into it.
I'm thinking of doing it all from a Python script. Is there a way to do this?
In pseudo-python it might look something like this:
#start a terminal. Here konsole
konsole = os.system('konsole --width=200 --height=150')
#start vim in that terminal
konsole.feed_keys("vim\n")
#run the vim function to be tested
konsole.feed_keys(":let my_list = MyVimFunction()\n")
#save the return value to the file system
konsole.feed_keys(":writefile(my_list, '/tmp/result')\n")
#load result into python
with open('/tmp/result', 'r') as myfile:
    data = myfile.read()
#validate the result
assertEqual('expect result', data)
I think you should verify the core functionality of your plugin inside Vim, using unit tests. There's a wide variety of Vim plugins, but most provide some additional mappings or commands to be invoked by the user, and they usually leave behind some side effects in the buffer, or output, or opened windows. That can be verified from inside Vim. There are various approaches for that; mine is the runVimTests test framework, and the plugin page has links to several alternatives.
With the core functionality thus covered, there's little left to test "interactively". (I mean stuff like forgotten debug output, too long execution times, display mess-ups.) Since you're usually a heavy user of Vim and your plugin yourself, that mostly covers it.
Of course, if your plugin embeds itself tightly into Vim (like an "IDE for XXX"; though this is usually frowned upon), you may consider some external test driver. Maybe others will contribute pointers to some general-purpose, terminal-driven test frameworks. I'm almost sure such exist.
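For instance, a minimal sketch of such an external driver using pexpect (this assumes pexpect is installed and a POSIX pseudo-terminal; MyVimFunction is the asker's hypothetical function):
import pexpect
# Spawn Vim in a pseudo-terminal and feed it keystrokes.
vim = pexpect.spawn('vim', encoding='utf-8')
vim.send(':let my_list = MyVimFunction()\r')
vim.send(':call writefile(my_list, "/tmp/result")\r')
vim.send(':qa!\r')
vim.expect(pexpect.EOF)
# Load the result back into Python for assertions.
with open('/tmp/result') as f:
    data = f.read()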
I maintain a plugin that permits running unit tests on VimL functions and feeding the quickfix window with the results, but I use a couple of other tools to check the state of the buffer after some actions, and even run the whole thing from Travis -> vimrunner+rspec, plus VimFlavour for installing the dependencies. (I vaguely remember a Python alternative inspired by vimrunner.)
It mostly works well. Alas, it uses the client-server feature and :redir (instead of the more recent execute() function). Even with the use of :silent, :redir catches noise, which it returns to the client. Thus I sometimes fight tests that fail for very odd reasons. I also find myself inserting pseudo-pauses to be sure that Vim has finished interpreting what I've fed it.
You'll find examples of use in some of my plugins. See for instance the lh-brackets or lh-cpp tests (the .travis.yml file + .rspec/ directory + Rakefile + Gemfile + some helpers from vim-UT).

Development vs Release Python Code

I am currently developing a Python application which I continually performance test, simply by recording the runtime of various parts.
A lot of the code relates only to the testing environment and would not exist in the real-world application. I have these parts separated into functions, and at the moment I comment out the calls when testing. This requires me to remember which calls refer to test-only components (they are quite interleaved, so I cannot group the functionality).
I was wondering if there is a better solution to this. The only idea I have had so far is creating a 'mode' boolean and inserting if statements, though this feels needlessly messy. I was hoping there might be some more standardised testing method that I am unaware of.
I am new to Python, so I may have overlooked a simple solution.
Thank you in advance
There are libraries for testing, like those in the development tools section of the standard library. If you haven't used such tools yet, you should start to do so; they help a lot with testing (especially unittest).
Normally Python runs programs in debug mode, with __debug__ set to True (see the docs on assert); you can switch debug mode off with the command-line switches -O or -OO for optimization (see the docs).
There is also something specifically about using assertions on the Python Wiki.
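For illustration, a minimal sketch of timing code that vanishes in release mode (the timed decorator here is my own invention, not from the standard library):
import time
def timed(func):
    if not __debug__:              # False under python -O / -OO
        return func                # release mode: no wrapper at all
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        print('%s took %.6fs' % (func.__name__, time.time() - start))
        return result
    return wrapper
@timed
def work():
    time.sleep(0.1)
work()   # prints the timing normally; silent when run with python -O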
I'd say that if you're commenting out several parts of your code when switching between debug and release mode, you're doing it wrong. Take a look, for example, at the logging library: with it you can select the logging level you want by changing a single parameter.
Try to avoid commenting out specific parts of your debug code; instead, have one or more variables that control the mode (debug, release, ...) your script runs in. You could also use some built-in ones Python already provides, such as __debug__.
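A minimal sketch of that idea (DEBUG_MODE is a hypothetical flag; it could equally come from sys.argv or an environment variable):
import logging
DEBUG_MODE = True   # hypothetical switch between debug and release
logging.basicConfig(level=logging.DEBUG if DEBUG_MODE else logging.WARNING)
log = logging.getLogger(__name__)
log.debug('timing harness enabled')   # emitted only in debug mode
log.warning('always shown')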

How to dynamically interpose C functions from Python on Linux (without LD_PRELOAD)?

How do I, at run-time (no LD_PRELOAD), intercept/hook a C function like fopen() on Linux, a la Detours for Windows? I'd like to do this from Python (hence I'm assuming the program is already running a CPython VM) and also reroute to Python code. I'm fine with just hooking shared-library functions. I'd also like to do this without having to change the way the program is run.
One idea is to roll my own tool based on ptrace(), or on rewriting code found with dlsym() or in the PLT, and targeting ctypes-generated C-callable functions, but I thought I'd ask here first. Thanks.
One of the ltrace developers describes a way to do this. See this post, which includes a full patch for catching dynamically loaded libraries. To call it from Python, you'll probably need to write a C module.
google-perftools has its own implementation of Detours under src/windows/preamble_patcher*. This is Windows-only at the moment, but I don't see any reason it wouldn't work on any x86 machine, except for the fact that it uses Win32 functions to look up symbol addresses.
A quick scan of the code shows these Win32 functions in use, all of which have Linux equivalents:
GetModuleHandle/GetProcAddress: get the function address; dlsym can do this.
VirtualProtect: allow modification of the assembly; mprotect.
GetCurrentProcess: getpid.
FlushInstructionCache: apparently a no-op, according to the comments.
It doesn't seem too hard to get this compiled and linked into Python, but I'd send a message to the perftools devs and see what they think.
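On the Python side, the ctypes-generated C-callable functions the question mentions can be produced like this (a sketch assuming Linux/glibc; the PLT or ptrace rewrite that actually redirects callers to the hook's address is the part the links above cover, and is not done here):
import ctypes
libc = ctypes.CDLL('libc.so.6')
# FILE *fopen(const char *path, const char *mode)
FOPEN_PROTO = ctypes.CFUNCTYPE(ctypes.c_void_p, ctypes.c_char_p, ctypes.c_char_p)
real_fopen = FOPEN_PROTO(('fopen', libc))
def my_fopen(path, mode):
    print('fopen intercepted: %r (mode %r)' % (path, mode))
    return real_fopen(path, mode)    # reroute back to the real function
hook = FOPEN_PROTO(my_fopen)         # a C-callable function pointer
addr = ctypes.cast(hook, ctypes.c_void_p).value
print('hook callable lives at 0x%x' % addr)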

Python - When Is It Ok to Use os.system() to issue common Linux commands

Spinning off from another thread: when is it appropriate to use os.system() to issue commands like rm -rf, cd, make, xterm, or ls?
Considering there are Python analogues of the above commands (except make and xterm), I'm assuming it's safer to use these built-in Python functions than to use os.system().
Any thoughts? I'd love to hear them.
Rule of thumb: if there's a built-in Python function that achieves the same functionality, use it. Why? It makes your code portable across different systems, more secure, and probably faster, as there is no need to spawn an additional process.
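For example, the commands from the question map to built-ins roughly like this (a sketch; the paths are placeholders):
import os
import shutil
shutil.rmtree('/tmp/scratch', ignore_errors=True)   # rm -rf /tmp/scratch
os.chdir('/tmp')                                    # cd /tmp
print(os.listdir('.'))                              # ls
make and xterm, as the question notes, have no built-in equivalents.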
One of the problems with system() is that it implies knowledge of the shell's syntax and language for parsing and executing your command line. This creates the potential for a bug where you didn't validate input properly, and the shell might interpret something like variable substitution, or determine where an argument begins or ends, in a way you don't expect. Also, another OS's shell might have syntax that diverges from your own, including very subtle divergence that you won't notice right away. For reasons like these, I prefer to use execve() instead of system() -- you can pass argv tokens directly and not have to worry about something in the middle (mis-)parsing your input.
Another problem with system() (this also applies to using execve()) is that when you code that, you are saying, "look for this program, and pass it these args". This makes a couple of assumptions which may lead to bugs. First is that the program exists and can be found in $PATH. Maybe on some system it won't. Second, maybe on some system, or even a future version of your own OS, it will support a different set of options. In this sense, I would avoid doing this unless you are absolutely certain the system you will run on will have the program. (Like maybe you put the callee program on the system to begin with, or the way you invoke it is mandated by something like POSIX.)
Lastly... There's also a performance hit associated with looking for the right program, creating a new process, loading the program, etc. If you are doing something simple like a mv, it's much more efficient to use the system call directly.
These are just a few of the reasons to avoid system(). Surely there are more.
Darin's answer is a good start.
Beyond that, it's a matter of how portable you plan to be. If your program is only ever going to run on a reasonably "standard" and "modern" Linux, then there's no reason to re-invent the wheel; if you tried to rewrite make or xterm, they'd be sending the men in the white coats for you. If it works and you don't have platform concerns, knock yourself out and simply use Python as glue!
If compatibility across unknown systems were a big deal, you could look for libraries that do what you need in a platform-independent way, or look into ways to call on-board utilities whose names, paths, and mechanisms differ depending on which kind of system you're on.
The only time that os.system might be appropriate is for a quick-and-dirty solution for a non-production script or some kind of testing. Otherwise, it is best to use built-in functions.
Your question seems to have two parts. You mention calling commands like "xterm", "rm -rf", and "cd".
Side note: calling 'cd' through a sub-shell is pointless -- the child shell changes its own directory and then exits, leaving the parent's working directory untouched. I bet that was a trick question...
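A quick demonstration:
import os
os.system('cd /tmp')    # the child shell changes its own directory, then exits
print(os.getcwd())      # the parent process's directory is unchanged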
As far as other command-level things you might want to do, like "rm -rf SOMETHING", there is already a Python equivalent. This answers the first part of your question, but I suspect you are really asking about the second part.
The second part of your question can be rephrased as "should I use system() or something like the subprocess module?".
I have a simple answer for you: just say NO to using "system()", except for prototyping.
It's fine for verifying that something works, or for that "quick and dirty" script, but there are just too many problems with os.system():
1. It forks a shell for you -- fine if you need one.
2. It expands wildcards for you -- fine unless you don't have any.
3. It handles redirects -- fine if you want that.
4. It dumps output to stderr/stdout and reads from stdin by default.
5. It tries to understand quoting, but it doesn't do very well (try 'Cmd" > "Ofile').
6. Related to #5, it doesn't always grok argument boundaries (i.e. arguments with spaces in them might get screwed up).
Just say no to "system()"!
I would suggest that you only use os.system for things that don't already have equivalents within the os module. Why make your life harder?
The os.system call is starting to be 'frowned upon' in Python. The 'new' replacement is subprocess.call or subprocess.Popen in the subprocess module. Check the docs for subprocess.
The other nice thing about subprocess is that you can read stdout and stderr into variables and process them without having to redirect to other file(s).
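For instance, a minimal sketch of capturing output with Popen (the command is just an example):
import subprocess
proc = subprocess.Popen(['ls', '-l', '/tmp'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()   # reads both streams into variables
print(proc.returncode)
print(out)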
Like others have said above, there are modules for most things. Unless you're trying to glue together many other commands, I'd stick with the things included in the library. If you're copying files, use shutil; for working with archives you've got modules like tarfile/zipfile, and so on.
Good luck.

Reading Command Line Arguments of Another Process (Win32 C code)

I need to be able to list the command line arguments (if any) passed to other running processes. I have the PIDs already of the running processes on the system, so basically I need to determine the arguments passed to process with given PID XXX.
I'm working on a core piece of a Python module for managing processes. The code is written as a Python extension in C and will be wrapped by a higher level Python library. The goal of this project is to avoid dependency on third party libs such as the pywin32 extensions, or on ugly hacks like calling 'ps' or taskkill on the command line, so I'm looking for a way to do this in C code.
I've Googled around and found some brief suggestions of using CreateRemoteThread() to inject myself into the other process and then run GetCommandLine(), but I was hoping someone might have some working code samples and/or better suggestions.
UPDATE: I've found full working demo code and a solution using NtQueryInformationProcess on CodeProject: http://www.codeproject.com/KB/threads/GetNtProcessInfo.aspx - It's not ideal, since it's "unsupported" to cull the information directly from the NTDLL structures, but I'll live with it. Thanks to all for the suggestions.
UPDATE 2: I managed through more Googling to dig up a C version that does not use C++ code, and is a little more direct/concisely pointed toward this problem. See http://wj32.wordpress.com/2009/01/24/howto-get-the-command-line-of-processes/ for details.
Thanks!
To answer my own question, I finally found a CodeProject solution that does exactly what I'm looking for:
http://www.codeproject.com/KB/threads/GetNtProcessInfo.aspx
As #Reuben already pointed out, you can use NtQueryProcessInformation to retrieve this information. Unfortuantely it's not a recommended approach, but given the only other solution seems to be to incur the overhead of a WMI query, I think we'll take this approach for now.
Note that this seems to not work if using code compiled from 32bit Windows on a 64bit Windows OS, but since our modules are compiled from source on the target that should be OK for our purposes. I'd rather use this existing code and should it break in Windows 7 or a later date, we can look again at using WMI. Thanks for the responses!
UPDATE: A more concise and C only (as opposed to C++) version of the same technique is illustrated here:
http://wj32.wordpress.com/2009/01/24/howto-get-the-command-line-of-processes/
The cached solution:
http://74.125.45.132/search?q=cache:-wPkE2PbsGwJ:windowsxp.mvps.org/listproc.htm+running+process+command+line&hl=es&ct=clnk&cd=1&gl=ar&client=firefox-a
in CMD
WMIC /OUTPUT:C:\ProcessList.txt PROCESS get Caption,Commandline,Processid
or
WMIC /OUTPUT:C:\ProcessList.txt path win32_process get Caption,Processid,Commandline
Also:
http://mail.python.org/pipermail/python-win32/2007-December/006498.html
http://tgolden.sc.sabren.com/python/wmi_cookbook.html#running_processes
seems to do the trick:
import wmi
c = wmi.WMI()
for process in c.Win32_Process():
    print process.CommandLine
By using psutil ( https://github.com/giampaolo/psutil ):
>>> import psutil, os
>>> psutil.Process(os.getpid()).cmdline()
['C:\\Python26\\python.exe', '-O']
The WMI approach mentioned in another response is probably the most reliable way of doing this. Looking through MSDN, I spotted what looks like another possible approach; it's documented, but it's not clear whether it's fully supported. In MSDN's language, it "may be altered or unavailable in future versions of Windows..."
In any case, provided that your process has the right permissions, you should be able to call NtQueryInformationProcess with a ProcessInformationClass of ProcessBasicInformation. In the returned PROCESS_BASIC_INFORMATION structure, you should get back a pointer to the target process's Process Environment Block (as the field PebBaseAddress). The ProcessParameters field of the PEB will give you a pointer to an RTL_USER_PROCESS_PARAMETERS structure, whose CommandLine field is a UNICODE_STRING structure. (Be careful not to make too many assumptions about the string; there are no guarantees that it will be NUL-terminated, and it's not clear whether you'll need to strip the name of the executed application off the beginning of the command line.)
I haven't tried this approach--and as I mentioned above, it seems a bit... iffy (read: non-portable)--but it might be worth a try. Best of luck...
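For what it's worth, here is a rough ctypes sketch of that walkthrough, so it can be tried without writing the C extension first. It assumes a 64-bit Python reading a 64-bit target with sufficient rights; the 0x20/0x70 offsets are the conventional x64 PEB layout values and, as noted above, are not guaranteed by Microsoft:
import ctypes
import os
import struct
from ctypes import wintypes

ntdll = ctypes.WinDLL('ntdll')
kernel32 = ctypes.WinDLL('kernel32', use_last_error=True)
kernel32.OpenProcess.restype = wintypes.HANDLE

PROCESS_QUERY_INFORMATION = 0x0400
PROCESS_VM_READ = 0x0010
ProcessBasicInformation = 0

class PROCESS_BASIC_INFORMATION(ctypes.Structure):
    _fields_ = [('Reserved1', ctypes.c_void_p),
                ('PebBaseAddress', ctypes.c_void_p),
                ('Reserved2', ctypes.c_void_p * 2),
                ('UniqueProcessId', ctypes.c_void_p),
                ('Reserved3', ctypes.c_void_p)]

def _read(handle, address, size):
    # Copy `size` bytes out of the target process's address space.
    buf = ctypes.create_string_buffer(size)
    got = ctypes.c_size_t()
    if not kernel32.ReadProcessMemory(handle, ctypes.c_void_p(address),
                                      buf, ctypes.c_size_t(size),
                                      ctypes.byref(got)):
        raise ctypes.WinError(ctypes.get_last_error())
    return buf.raw

def get_command_line(pid):
    handle = kernel32.OpenProcess(
        PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, False, pid)
    if not handle:
        raise ctypes.WinError(ctypes.get_last_error())
    try:
        pbi = PROCESS_BASIC_INFORMATION()
        ntdll.NtQueryInformationProcess(
            handle, ProcessBasicInformation,
            ctypes.byref(pbi), ctypes.sizeof(pbi), None)
        # PEB + 0x20 -> ProcessParameters pointer (x64 layout)
        params, = struct.unpack('<Q', _read(handle, pbi.PebBaseAddress + 0x20, 8))
        # RTL_USER_PROCESS_PARAMETERS + 0x70 -> CommandLine (UNICODE_STRING)
        length, = struct.unpack('<H', _read(handle, params + 0x70, 2))
        buf_ptr, = struct.unpack('<Q', _read(handle, params + 0x78, 8))
        return _read(handle, buf_ptr, length).decode('utf-16-le')
    finally:
        kernel32.CloseHandle(handle)
Under those assumptions, get_command_line(os.getpid()) should print the interpreter's own command line.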
If you aren't the parent of these processes, then this is not possible using documented functions :( If you are the parent, you can do your CreateRemoteThread trick, but otherwise you will almost certainly get Access Denied unless your app has admin rights.
