I was trying to install autoclose.vim in Vim. I noticed I didn't have a ~/.vim/plugin folder, so I accidentally made a ~/.vim/plugins folder (notice the extra 's' in plugins). I then added au FileType python set rtp+=~/.vim/plugins to my .vimrc, because from what I've read, that should automatically source the scripts in that folder.
The plugin didn't load for me until I realized my mistake and took out the extra 's' from 'plugins'. I'm confused because this new path isn't even defined in my runtime path. I'm basically wondering why the plugin loaded when I had it in ~/.vim/plugin but not in ~/.vim/plugins?
:help load-plugins outlines how plugins are loaded.
Adding a folder to your rtp alone does not suffice; it must have a plugin subdirectory. For example, given :set rtp+=/tmp/foo, a file /tmp/foo/plugin/bar.vim would be detected and loaded, but neither /tmp/foo/plugins/bar.vim nor /tmp/foo/bar.vim would be.
You are on the right track with set rtp+=..., but there's a bit more to it than meets the eye (rtp is non-recursive, help indexing, many corner cases), so it is not a very good idea to manage it by yourself, unless you are ready for a months-long drop in productivity.
If you want to store all your plugins in a special directory you should use a proper runtimepath/plugin-management solution. I suggest Pathogen (rtp-manager) or Vundle (plugin-manager) but there are many others.
In addition to @Nikita Kouevda's answer: modifying rtp in a FileType autocommand may be too late for Vim to load any plugins from the modified runtimepath: if the event fires after the vimrc was sourced, plugins from the new addition are not guaranteed to be loaded; if it fires after the VimEnter event, plugins from the new addition are guaranteed not to be sourced automatically.
If you want to source autoclose only when you edit python files, you should use :au FileType python :source ~/.vim/macros/autoclose.vim (note: macros here can be any subdirectory except plugin and the directories found in $VIMRUNTIME, or even a directory that is not in runtimepath at all).
If you want to use autoclose only when you edit python files, you should check the plugin's source and documentation; there must be support on the plugin side for this to work.
Or, if autoclose does not support this, use the :au FileType command from the paragraph above, but prepend the :source with something that records Vim's state (commands, mappings and autocommands), append the same after it, find the differences in the state, and then on each :au BufEnter delete the differences if the filetype is not python and restore them otherwise: this is hacky and may introduce strange bugs. An example of state-recording and diff-determining code may be found here.
All folders in the rtp (runtimepath) option need to have the same folder structure as your $VIMRUNTIME ($VIMRUNTIME is usually /usr/share/vim/vim{version}). So it should have the same subdirectory names, e.g. autoload, doc, plugin (whichever you need, but having the same names is key). The plugins should be in their corresponding subdirectories.
Let's say /path/to/dir (in your case it's ~/.vim) is in your rtp; vim will
look for global plugins in /path/to/dir/plugin
look for file-type plugins in /path/to/dir/ftplugin
look for syntax files in /path/to/dir/syntax
look for help files in /path/to/dir/doc
and so on...
vim only looks for a couple of recognized subdirectories† in /path/to/dir. If you have some unrecognized subdirectory name in there (like /path/to/dir/plugins), vim won't see it.
†"recognized" here means that a subdirectory of the same name can be found in /usr/share/vim/vim{version} or wherever you have vim installed.
I'm trying to build python from source, using the prefix option to control the target directory where it gets installed.
After a successful installation, some files in the target directory contain entries referencing the working directory from which I actually built.
Example files which have entries for abs_srcdir & abs_builddir:
lib/python3.9/_sysconfigdata__linux_x86_64-linux-gnu.py
lib/python3.9/config-3.9-x86_64-linux-gnu/Makefile
How can I avoid this?
I am a bit unfamiliar with this process in Python, but I can tell that these are part of Autoconf's Preset Output Variables.
From docs:
Some output variables are preset by the Autoconf macros. Some of the Autoconf macros set additional output variables, which are mentioned in the descriptions for those macros. See Output Variable Index, for a complete list of output variables. See Installation Directory Variables, for the list of the preset ones related to installation directories. Below are listed the other preset ones, many of which are precious variables (see Setting Output Variables, AC_ARG_VAR).
You can see the variables you mentioned here - B.2 Output Variable Index. Since these are preset variables, I don't see how you can exclude them. Manually removing the entries post-installation, or writing some sort of cleanup script, seems like the only way you can solve this.
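If you do go the script route, a minimal sketch might look like this (the recorded build directory and the replacement value are assumptions; substitute whatever abs_srcdir/abs_builddir actually contain in your build):

# Hypothetical post-install cleanup: replace the build paths recorded in
# the generated files with a neutral placeholder.
files = [
    'lib/python3.9/_sysconfigdata__linux_x86_64-linux-gnu.py',
    'lib/python3.9/config-3.9-x86_64-linux-gnu/Makefile',
]
build_dir = '/home/user/cpython-src'  # assumed: whatever path was recorded

for path in files:
    with open(path) as f:
        text = f.read()
    with open(path, 'w') as f:
        f.write(text.replace(build_dir, '/usr/local/src/python'))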
If this were done in GNU Make, you could use the filter-out text function.
I need to export thousands of files with the GameFbXExporter Plugin from Maya, and I was wondering if there was any way to script those exports, knowing that the parameters are fine in every file. All I need to do is fill in the path section and the name of the exported file in FBX, then launch the export itself with the plugin.
I'm kind of lost and don't know how to do this. Could someone help me understand how to achieve that, please?
Thank you
The game exporter is written in MEL, so you can interact with it from Python using the maya.mel module. This will open the dialog, for example:
import maya.mel as mel
mel.eval("gameFbxExporter();")
Unfortunately, a quick look at the actual game exporter scripts (which are in your Maya install directory, in the scripts/others directory -- they all start with the prefix "gameFBX") makes it look like the UI is hopelessly entangled with the actual act of exporting; it doesn't seem to expose anything that just exports the current file in a batch-friendly way.
The operative procedure is called gameExp_FBXExport, defined in "gameFbxExporter.mel." It appears that the actual business of exporting is delegated to the regular FBX plugin -- all the other stuff in the game exporter is just managing FBX presets, selecting parts of the scene to export (if you have the scenes set up that way) and then calling the FBX plugin. So, you may be able to batch the process using Python by looping over your files and calling FBXExport() from Python. This will export the current file to FBX:
import maya.cmds as cmds
cmds.FBXExport('-file', 'path/to/file.fbx')
It will just use whatever FBX settings are currently active, so you will need to be confident that the files are correctly set up. You'll be tempted to write it as cmds.FBXExport(f='path/to/file'), but that won't work -- the FBX plugin commands don't use regular Python keyword-argument syntax.
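As a sketch of what the batch loop might look like (the scene list and output directory are placeholders, not something the game exporter provides):

import os
import maya.cmds as cmds

# Hypothetical batch export: open each scene and dump it to FBX.
scenes = ['scenes/prop_a.ma', 'scenes/prop_b.ma']  # your file list
out_dir = 'export'

for scene in scenes:
    cmds.file(scene, open=True, force=True)  # force discards unsaved changes
    name = os.path.splitext(os.path.basename(scene))[0]
    cmds.FBXExport('-file', '%s/%s.fbx' % (out_dir, name))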
If your current settings rely on the export-selected functionality, you'll need to figure out how to cache the correct selections -- if you're using the "export selections set" functionality, you should be able to have your exporter find the set by name and then select it before exporting.
cmds.select("name_of_selection_set")
cmds.FBXExport('-file', 'path/to/file.fbx')
You can use the other FBX plugin commands (documented here) to inspect and manipulate the settings in your files as you go along.
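For example, a couple of the stock FBX plugin option commands can be driven through MEL (verify against the docs that these are the options your pipeline needs):

import maya.cmds as cmds
import maya.mel as mel

cmds.loadPlugin('fbxmaya', quiet=True)  # make sure the FBX plugin is loaded
mel.eval('FBXExportSmoothingGroups -v true')
mel.eval('FBXExportFileVersion -v FBX201400')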
Most professional users don't use the GameExport pipeline, precisely because it's very opaque and not batch-friendly. In the long run you'll probably want to write a simple system that provides standard settings for different file types and exports the FBXes directly, without the GameExporter. While it's a non-trivial project, it's going to be easier to maintain and expand than hacking your way around the edges of Autodesk's version, which is, frankly, pretty lame.
If you're not already familiar with it http://tech-artists.org/ is a great place to look for pipeline help and advice.
I'm using python watchdog to keep track of what files have been changed locally. Because I'm not keeping track of an entire directory but specific files, I'm using watchdog's event.src_path to check if the changed file is the one I'm looking for.
I'm using the FileSystemEventHandler and on_modified, printing the src_path. However, when I edit a file that should have the path /home/user/project/test in gedit, I get two paths, one that looks like /home/user/project/.goutputstream-XXXXXX and one that looks something like this: home/user/project/. I never get the path I'm expecting. I thought there may have been something wrong with watchdog or my own code, but I tested the exact same process in vi, nano, my IDE (PyCharm), Sublime Text, Atom...and they all gave me the src_path I'm expecting.
I'm wondering if there is a workaround for gedit, since gedit is the default text editor for many Linux distributions...Thanks in advance.
From the Watchdog GitHub readme:
Vim does not modify files unless directed to do so. It creates backup files
and then swaps them in to replace the files you are editing on the
disk. This means that if you use Vim to edit your files, the
on-modified events for those files will not be triggered by watchdog.
You may need to configure Vim appropriately to disable this
feature.
As the quote says, your issue is due to how these text editors modify files. Basically, rather than directly modifying the file, they create "buffer" files that store the edited data. In your case this file is probably .goutputstream-XXXXXX. When you hit save, your original file is deleted and the buffer file is renamed into its place. So your second path is probably the result of the original file being deleted. Sometimes these files serve as backups instead, but they still cause similar issues.
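You can see this in the events themselves: on a gedit-style save, the file you care about shows up as the destination of a move, not as a modification. A minimal sketch of handling both cases (using the watched path from the question):

import time
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

WATCHED = '/home/user/project/test'

class Handler(FileSystemEventHandler):
    def on_modified(self, event):
        if event.src_path == WATCHED:
            print('modified in place:', event.src_path)

    def on_moved(self, event):
        # gedit writes a .goutputstream-XXXXXX buffer, then renames it
        # over the original, so the saved file appears as dest_path here
        if event.dest_path == WATCHED:
            print('saved via rename:', event.dest_path)

observer = Observer()
observer.schedule(Handler(), '/home/user/project')
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()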
By far the easiest method to solve this issue is to disable the weird way of saving in your chosen text editor. In gedit this is done by unchecking the "Create a backup copy of file before saving" option within preferences. This will stop those backup files from being created and simplify life for watchdog.
Image and preference info shamelessly stolen from this AskUbuntu question
For more information (and specific information for solving vim/vi) see this issue on the watchdog GitHub.
Basically for Vim you need to run these commands to disable the backup/swapping in feature:
:set nobackup
:set nowritebackup
You can add them to your .vimrc to automate the task.
paster serve has a --reload option to auto-restart the served WSGI application when any of the Python source files or the CONFIG_FILE changes.
How can I make paster initiate an auto-restart also when some other file (not a Python source file) changes?
UPDATE
The watch_file() function suggested by mksh looks like the solution to the problem. However, mksh suggested adding its invocation to the application's entry point, which seems more invasive than it should be. Can I (non-intrusively) extend Paste's serve command, adding a new option which would result in an invocation of watch_file() with filenames read from the app's section of CONFIG_FILE?
See Paster source link
So, you can watch your non-source files as simply as putting lines like these at the bottom of your application's entry point:
from paste.reloader import watch_file

# logic that puts the list of your non-source file names (paths suitable
# for open()) into the iterable non_source_file_list

for non_source_file in non_source_file_list:
    watch_file(non_source_file)
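Regarding the update: one lightly intrusive alternative is to read the file list from your own key in the app's config section and register the files in the app factory. A sketch, where the watch_files key is an invention of this example, not a Paste feature:

from paste.reloader import watch_file

def make_app(global_conf, watch_files='', **app_conf):
    # hypothetical INI key: watch_files = /etc/myapp/a.cfg /etc/myapp/b.cfg
    for name in watch_files.split():
        watch_file(name)

    def app(environ, start_response):  # stand-in for your real WSGI app
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['hello']
    return app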
In general, try to rely more on the source code than on the documentation when working with modern, pythonically written frameworks such as Paste; their code is mostly well documented and, furthermore, self-documenting.
[NOTE: This question is similar to but not the same as this one.]
Visual Studio defines several dozen "Macros" which are sort of simulated environment variables (completely unrelated to C++ macros) which contain information about the build in progress. Examples:
ConfigurationName Release
TargetPath D:\work\foo\win\Release\foo.exe
VCInstallDir C:\ProgramFiles\Microsoft Visual Studio 9.0\VC\
Here is the complete set of 43 built-in Macros that I see (yours may differ depending on which version of VS you use and which tools you have enabled):
ConfigurationName IntDir RootNamespace TargetFileName
DevEnvDir OutDir SafeInputName TargetFramework
FrameworkDir ParentName SafeParentName TargetName
FrameworkSDKDir PlatformName SafeRootNamespace TargetPath
FrameworkVersion ProjectDir SolutionDir VCInstallDir
FxCopDir ProjectExt SolutionExt VSInstallDir
InputDir ProjectFileName SolutionFileName WebDeployPath
InputExt ProjectName SolutionName WebDeployRoot
InputFileName ProjectPath SolutionPath WindowsSdkDir
InputName References TargetDir WindowsSdkDirIA64
InputPath RemoteMachine TargetExt
Of these, only four (FrameworkDir, FrameworkSDKDir, VCInstallDir and VSInstallDir) are set in the environment used for build-events.
As Brian mentions, user-defined Macros can be defined such as to be set in the environment in which build tasks execute. My problem is with the built-in Macros.
I use a Visual Studio Post-Build Event to run a python script as part of my build process. I'd like to pass the entire set of Macros (built-in and user-defined) to my script in the environment, but I don't know how. Within my script I can access regular environment variables (e.g., Path, SystemRoot) but NOT these "Macros". All I can do now is pass them one-by-one as named options which I then process within my script. For example, this is what my Post-Build Event command line looks like:
postbuild.py --t="$(TargetPath)" --c="$(ConfigurationName)"
Besides being a pain in the neck, there is a limit on the size of the Post-Build Event command line, so I can't pass dozens of Macros using this method even if I wanted to, because the command line gets truncated.
Does anyone know if there is a way to pass the entire set of Macro names and values to a command that does NOT require switching to MSBuild (which I believe is not available for native VC++) or some other make-like build tool?
You might want to look into PropertySheets. These are files containing Visual C++ settings, including user macros. The sheets can inherit from other sheets and are attached to VC++ projects using the PropertyManager View in Visual Studio. When you create one of these sheets, there is an interface for creating user macros. When you add a macro using this mechanism, there is a checkbox for setting the user macro as an environment variable. We use this type of mechanism in our build system to rapidly set up projects to perform out-of-place builds. Our various build directories are all defined as user macros. I have not actually verified that the environment variables are set in an external script called from post-build. I tend to use these macros as command line arguments to my post-build scripts - but I would expect accessing them as environment variables should work for you.
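If the checkbox behaves as advertised, reading such a macro from the post-build script is then just an environment lookup. A small sketch, where BuildOutputDir is a made-up user macro name:

import os

# Hypothetical: a user macro defined in a property sheet with
# "set as environment variable" checked.
build_output_dir = os.environ.get('BuildOutputDir')
if build_output_dir is None:
    raise SystemExit('BuildOutputDir was not exported to the environment')
print('building into', build_output_dir)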
This is a bit hacky, but it could work.
Why not call multiple .py scripts in a row?
Each script can pass in a small subset of the parameters and append the values to a temp text file. The final script then reads the temp text file and works off of it.
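A sketch of that idea (the file name and macro names are just placeholders):

# collect.py -- invoked several times from the post-build event, e.g.
#   collect.py TargetPath "$(TargetPath)"
#   collect.py ConfigurationName "$(ConfigurationName)"
import sys

with open('macros.tmp', 'a') as f:
    f.write('%s=%s\n' % (sys.argv[1], sys.argv[2]))

# final.py -- reads the accumulated macros back into a dict
macros = {}
with open('macros.tmp') as f:
    for line in f:
        name, _, value = line.rstrip('\n').partition('=')
        macros[name] = value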
I agree that this method is filled with danger and WTF's, but sometimes you have to just hack stuff together.
As far as I can tell, the method described in the question is the only way to pass build variables to a Python script.
Perhaps Visual Studio 2010 has something better?