I'm trying to add pre- and post-process actions when building a project with SCons.
The SConstruct and SConscript files are at the top level of the project.
Pre-process actions:
Generating code (by calling different tools):
-> without knowing in advance exactly which files will be generated by this pre-process (an additional pre-process step for discovering which files were generated could be created in order to feed SCons with them)
-> running external scripts (Python, Perl), executed before compilation
Post-process actions:
-> running external tools or external scripts that should be executed after linking
What I tried until now:
For pre-process:
To use os.system from Python in order to run a command (works fine, but I'm looking for an "SCons solution").
To use the AddPreAction(target, action) function from SCons. Unfortunately this action runs only just before linking, after the sources have already been compiled, as the SCons user manual states: "The specified pre_action would be executed before scons calls the link command that actually generates the executable program binary foo, not before compiling the foo.c file into an object file."
For post-process:
To use AddPostAction(target, action) and this works fine, fortunately.
I'm looking for solutions that will make SCons aware of these pre- and post-processes.
My question is the following:
What is the best approach, for the requirements stated above, using SCons? Is there a way to execute pre-process actions before compilation using SCons built-in functions?
You don't give very much detail about what you've tried to get your pre-processing part working. In general, you should try to create real Builders for the code generation part... this will make the detection and handling of dependencies easier for SCons (and for you as the user ;) ). You may want to check out our Wiki at https://bitbucket.org/scons/scons/wiki/ToolsForFools , where we explain in great detail how to write new Builders.
If you need to run additional scripts on every build, you should be able to trigger these fine with os.system() or an appropriate subprocess call right at the start of your top-level SConstruct, for example.
But what I get from your latest edit (and I'll refer mainly to the first of the questions you asked) is that you're trying to model some sort of "staged" build process. You think you need a "preprocess" stage that you can hook into and create all the additional headers and sources you might need, by calling your scripts. My guess is that you're trying to rewrite something like an original make/autotools setup and would like to reuse parts wherever possible, which isn't a bad idea of course. But SCons isn't stage-driven, it's dependency-driven... so your current approach is a bad fit and might lead to problems sooner or later.
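To illustrate the first case, a minimal sketch (the generator script name here is made up):

# Top of SConstruct: unconditionally run a generator script on every build.
# 'generate_sources.py' is a placeholder for whatever script you need to run.
import subprocess

subprocess.check_call(['python', 'generate_sources.py'])

env = Environment()
env.Program('foo', ['foo.c'])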
The best thing you can do is to forget Pre- and PostActions and get your dependencies straight. In addition to writing your own Builder(s) to replace your scripts, you'd have to implement a proper Emitter for each of these Builders. This Emitter (check the Tools guide mentioned above) would have to parse the input file that goes into the script, and return the list of filenames that will be generated when the script actually runs. Like this, SCons knows a priori which files will get generated once the build script runs, and can already use these names for resolving dependencies (even if the actual files don't exist yet).
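To sketch the idea (a sketch only; the tool name codegen.py, the spec file, and predict_generated_files() are made-up placeholders, see the Tools guide for the real details):

def codegen_emitter(target, source, env):
    # Parse the input spec and predict which files the script will create,
    # so SCons knows the target names before the script ever runs.
    generated = predict_generated_files(str(source[0]))  # your parsing logic
    return generated, source

codegen = Builder(action='python codegen.py $SOURCE',
                  emitter=codegen_emitter)

env = Environment(BUILDERS={'CodeGen': codegen})
# the placeholder target gets replaced by whatever the emitter returns
nodes = env.CodeGen('placeholder', 'spec.def')
env.Program('foo', nodes)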
For the post-processing part: this is usually handled by using the standard Python atexit handler. See e.g. "How do I run some code after every build in scons?" for an example.
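In an SConstruct that could look roughly like this (a sketch, modeled on the GetBuildFailures idiom from the SCons man page):

# Run a post-process step once SCons exits, but only if nothing failed.
import atexit

def post_build():
    from SCons.Script import GetBuildFailures
    if GetBuildFailures():
        return  # skip the post-process when the build failed
    # call your external tool/script here, e.g. via subprocess
    print('build finished, running post-process...')

atexit.register(post_build)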
Related
I am interested in using doit to automate the build process of a python package.
If possible, I would like doit to re-execute a task if any of the user-created source files it depends on have changed.
From my understanding, the best way to accomplish this would be to use the file_dep key and a list of the dependent source files; however, I am having a lot of trouble generating this list.
I've tried using sys.modules and inspect.getmembers(), but these solutions can't deal with import statements that do not import a module, such as from x import Y, which is unfortunately a common occurrence in the package I am developing.
Another route I investigated was to use the snakefood tool, which initially looks like it would do exactly what I wanted, generate a list of file dependencies for every file in a given path.
Unfortunately, this tool seems to have limited Python 3 support, making it useless for my package.
Does anyone have any insight into how to get snakefood-like features in Python 3, or is the only option to change all of my source code to only import modules?
The doit tutorial itself is about creating a graph of Python module imports!
It uses the import_deps package, which is similar to snakefood.
Note that for your use-case you will need to modify file_dep itself during the task action's execution. To achieve that, you need to pass the task parameter to your action (as described here).
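A rough sketch of what that can look like in a dodo.py (compute_imported_files() is a stand-in for the import_deps-based scan shown in the tutorial):

# dodo.py -- recompute file_dep from the module's imports at run time
def compute_imported_files(path):
    # placeholder: walk the import graph, e.g. with import_deps' ModuleSet,
    # and return the set of project files that 'path' depends on
    ...

def update_deps(task):
    # doit passes the task instance when an action takes a 'task' parameter
    task.file_dep.update(compute_imported_files('mypkg/mymodule.py'))

def task_build():
    return {
        'actions': [update_deps, 'python setup.py build'],
        'file_dep': ['mypkg/mymodule.py'],
    }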
I have a Python script that I would like to run in my Hype 3 build. The script takes several inputs and outputs an answer based on the input (but that's handled in the script). How could I do that?
Tumult Hype's Export Scripts infrastructure allows code to be run upon export and/or preview. This can be Python code; in fact, the sample scripts we have are all in Python.
General info: https://tumult.com/hype/export-scripts/
Developer docs & code: https://github.com/tumult/hype-export-scripts/
With this, you can arbitrarily modify Hype’s output however you see fit.
You'll get a new File > Export as HTML5 > … menu item; this allows choosing a location to save. I don't think there's any way to bypass the save dialog at this point; you can probably just ignore it. (Though I guess you could also be clever and have a Preview put your document wherever you want, since previewing doesn't have a prompt.)
I'm working on a plug-in for Vim, and I'd like to test that it behaves correctly during start-up, when users edit files, etc.
To do this, I'd like to start a terminal and feed keys into it.
I'm thinking of doing it all from a Python script. Is there a way to do this?
In pseudo-Python it might look something like this:
# start a terminal, here Konsole
konsole = os.system('konsole --width=200 --height=150')

# start vim in that terminal
konsole.feed_keys("vim\n")

# run the vim function to be tested
konsole.feed_keys(":let my_list = MyVimFunction()\n")

# save the return value to the file system
konsole.feed_keys(":call writefile(my_list, '/tmp/result')\n")

# load the result into python
with open('/tmp/result', 'r') as myfile:
    data = myfile.read()

# validate the result
assertEqual('expected result', data)
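For what it's worth, the feed_keys idea above maps fairly closely onto the pexpect library, which runs the program in a pseudo-terminal. A rough sketch (assumes pexpect is installed and the plugin under test is on Vim's runtimepath):

import pexpect

vim = pexpect.spawn('vim', timeout=10)
vim.send(':let my_list = MyVimFunction()\r')
vim.send(':call writefile(my_list, "/tmp/result")\r')
vim.send(':qa!\r')
vim.expect(pexpect.EOF)

with open('/tmp/result') as f:
    data = f.read()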
I think you should verify the core functionality of your plugin inside Vim, using unit tests. There's a wide variety of Vim plugins, but most provide some additional mappings or commands, to be invoked by the user, and they usually leave behind some side effects in the buffer, or output, or opened windows. That can be verified from inside Vim. There are various approaches for that; mine is the runVimTests test framework; the plugin page has links to several alternatives.
With the core functionality thus covered, there's little left to test "interactively". (I mean stuff like forgotten debug output, too long execution times, display mess-ups.) Since you're usually a heavy user of Vim and your plugin yourself, that mostly covers it.
Of course, if your plugin embeds itself tightly into Vim (like an "IDE for XXX"; though this is usually frowned upon), you may consider some external test driver. Maybe others will contribute pointers to some general-purpose, terminal-driven test frameworks. I'm almost sure such exist.
While I maintain a plugin that permits running unit tests on VimL functions and feeding the quickfix window with the results, I use another couple of tools to check the state of the buffer after some actions, and even run the whole thing from Travis -> vimrunner+rspec, plus VimFlavour for installing the dependencies. (I vaguely remember a Python alternative inspired by vimrunner.)
It mostly works well. Alas, it uses the client-server feature and :redir (instead of the more recent execute() function). Even with the use of :silent, :redir catches noise which it returns to the client. Thus I sometimes fight tests that fail for very odd reasons. I also find myself inserting some pseudo-pauses to be sure that Vim has finished interpreting what I've fed it.
You'll find examples of use in some of my plugins. See for instance the lh-brackets or lh-cpp tests (.travis.yml file + .rspec/ directory + Rakefile + Gemfile + some helpers from vim-UT).
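To give an idea of the client-server approach from Python (a sketch; it needs a Vim compiled with +clientserver and, on Unix, a running X session):

import subprocess

# start a Vim server named TEST
subprocess.Popen(['vim', '--servername', 'TEST'])
# ... wait until TEST appears in the output of `vim --serverlist` ...

# send keys to it, as in the pseudocode from the question
subprocess.check_call(['vim', '--servername', 'TEST', '--remote-send',
                       ':call writefile(MyVimFunction(), "/tmp/result")<CR>'])

# or evaluate an expression and capture the result directly
out = subprocess.check_output(['vim', '--servername', 'TEST',
                               '--remote-expr', 'string(MyVimFunction())'])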
tl;dr:
How can I cache the results of a Python function to disk and in a later session use the cached value if and only if the function code and all of its dependencies are unchanged since I last ran it?
In other words, I want to make a Python caching system that automatically watches out for changed code.
Background
I am trying to build a tool for automatic memoization of computational results from Python. I want the memoization to persist between Python sessions (i.e. be reusable at a later time in another Python instance, preferably even on another machine with the same Python version).
Assume I have a Python module mymodule with some function mymodule.func(). Let's say I already solved the problem of serializing/identifying the function arguments, so we can assume that mymodule.func() takes no arguments if it simplifies anything.
Also assume that I guarantee that the function mymodule.func() and all its dependencies are deterministic, so mymodule.func() == mymodule.func().
The task
I want to run the function mymodule.func() today and save its results (and any other information necessary to solve this task). When I want the same result at a later time, I would like to load the cached result instead of running mymodule.func() again, but only if the code in mymodule.func() and its dependencies are unchanged.
To simplify things, we can assume that the function is always run in a freshly started Python interpreter with a minimal script like this:
import mymodule
from some_save_module import some_save_function  # placeholder serialization helper

result = mymodule.func()
some_save_function(result, 'filename')
Also, note that I don't want to be overly conservative. It is probably not too hard to use the modulefinder module to find all modules involved when running the first time, and then not use the cache if any module has changed at all. But this defeats my purpose, because in my use case it is very likely that some unrelated function in an imported module has changed.
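For reference, that conservative variant is indeed short (a sketch; run_func.py stands for the minimal driver script above):

# Hash every module file involved in the run; any change invalidates the cache.
import hashlib
from modulefinder import ModuleFinder

finder = ModuleFinder()
finder.run_script('run_func.py')

digest = hashlib.sha256()
for name, mod in sorted(finder.modules.items()):
    if mod.__file__:  # skip built-in modules
        with open(mod.__file__, 'rb') as f:
            digest.update(f.read())
cache_key = digest.hexdigest()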
Previous work and tools I have looked at
joblib memoizes results tied to the function name, and also saves the source code so we can check if it is unchanged. However, as far as I understand, it does not check upstream functions (those called by mymodule.func()).
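For reference, the joblib pattern looks like this (it hashes the function's own source and arguments, but not its callees):

from joblib import Memory

memory = Memory('cachedir', verbose=0)

@memory.cache
def func():
    ...

result = func()  # recomputed if func's own source changed; callees are not checked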
The ast module gives me the Abstract Syntax Tree of any Python code, so I guess I can (in principle) figure it all out that way. How hard would this be? I am not very familiar with the AST.
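To give an idea, here is a sketch of the AST route, recursively hashing the source of every module-level function reachable by name (deliberately incomplete: it ignores attribute access, closures, classes, and rebinding):

import ast
import hashlib
import inspect
import textwrap
import types

def fingerprint(func, seen=None):
    # hash func's source, plus the sources of module-level functions it names
    seen = set() if seen is None else seen
    if func in seen:
        return b''
    seen.add(func)
    src = textwrap.dedent(inspect.getsource(func))
    digest = hashlib.sha256(src.encode()).digest()
    for node in ast.walk(ast.parse(src)):
        if isinstance(node, ast.Name):
            obj = func.__globals__.get(node.id)
            if isinstance(obj, types.FunctionType):
                digest += fingerprint(obj, seen)
    return digest

# cache key for mymodule.func:
# key = hashlib.sha256(fingerprint(mymodule.func)).hexdigest()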
Can I use any of the black magic that's going on inside dill?
More trivia than a solution: IncPy, a finished/deceased research project, implemented a Python interpreter that does this by default, always. Nice idea, but it never made it outside the lab.
Grateful for any input!
As programmers we read more than we write. I've started working at a company that uses a couple of "big" Python packages; packages or package families with a high KLOC count. Case in point: Zope.
My problem is that I have trouble navigating this codebase fast/easily. My current strategy is:
1. I start reading a module I need to change/understand
2. I hit an import which I need to know more of
3. I find out where the source code for that import is by placing a Python debug (pdb) statement after the imports and echoing the module, which tells me its source file
4. I navigate to it, in the shell or the Vim file explorer
5. Most of the time the module itself imports more modules, and before I know it I've got 10 KLOC "on my plate"
Alternatively:
1. I see a method/class I need to know more of
2. I do a search (ack-grep) for the definition of that method/class across the whole codebase (which can be a pain because the codebase is partly in ~/.buildout-eggs)
3. I find one or more pieces of code that define that method/class
4. I have to deduce which one of them is the one I need to read
This costs a lot of time, which is understandable for a big codebase. But I get the feeling that navigating a large and unknown Python codebase is a common enough problem.
So I'm looking for technical tools or strategic solutions for this problem.
...
I just can't imagine hardcore Python programmers using the strategies outlined above.
In Vim, I like NERDTree (a file browser) and taglist.vim (a source code browser --> http://www.vim.org/scripts/script.php?script_id=273).
Also in Vim, you can use CTRL-] to jump to a definition (:h CTRL-]):
1. Download Exuberant Ctags: http://ctags.sourceforge.net/
2. Follow the install directions and put it somewhere on your PATH.
3. From the 'root' directory of your source code, make a tags file from the shell: "ctags -R"
4. (Make sure you have :set noautochdir, and make sure :pwd is the root directory from step 3.)
5. Go into Vim, cursor over some function or class name, hit CTRL-].
   - By default, if there are multiple matches for the tag, it shows you everywhere it was imported and where it was declared.
   - If the tag only has one match, it immediately jumps to it.
6. Then use Ctrl+O and Ctrl+I to move back and forth from where you were.
(Repeat the above steps for the source code of particular libraries you use; I usually keep a separate Vim window open to study stuff.)
I use IPython's ?? command.
You just need to figure out how to import the things you want to look at, then add ?? to the end of the module, class, function, or method name to view its source code. Command completion helps with figuring out long names as well.
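For example:

In [1]: import json

In [2]: json.dumps??   # shows the docstring and the full source of json.dumps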
Try red pill: https://github.com/klen/python-mode