Python Paste and auto-restart on non-Python source file change

paster serve has a --reload option that auto-restarts the served WSGI application whenever one of the Python source files or the CONFIG_FILE changes.
How can I make paster also initiate an auto-restart when some other file (not a Python source file) changes?
UPDATE
The watch_file() function suggested by mksh looks like the solution to the problem. However, mksh suggested adding its invocation to the application's entry point, which seems more invasive than it should be. Can I (non-intrusively) extend Paste's serve command with a new option that would invoke watch_file() with filenames read from the app's section of the CONFIG_FILE?

See the Paster source (link).
So you can watch your non-source files simply by putting lines like these at the bottom of your application's entry point:
from paste.reloader import watch_file

#
# logic that puts a list of your non-source file names, suitable
# for open(), into the iterable non_source_file_list
#
for non_source_file in non_source_file_list:
    watch_file(non_source_file)
In general, try to rely more on source code than on documentation when working with modern, pythonically written frameworks such as Paste; their code is mostly well documented and, furthermore, self-documenting.
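Regarding the UPDATE: a less invasive alternative to patching the serve command itself is to register the extra files from inside the app factory, reading their names from the app's section of CONFIG_FILE. A minimal sketch, assuming a hypothetical space-separated watch_files setting and a hypothetical myproject.wsgiapp module providing the WSGI application:

from paste.reloader import watch_file

def make_app(global_conf, watch_files='', **app_conf):
    # "watch_files" is an assumed setting in the app's section of CONFIG_FILE,
    # e.g.  watch_files = %(here)s/templates/base.html %(here)s/data/rules.csv
    for path in watch_files.split():
        watch_file(path)
    # build and return the actual WSGI application as usual
    from myproject.wsgiapp import application  # hypothetical module
    return application

These registrations only take effect when paster serve is run with --reload, since that is when the reloader monitor is active.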

Related

Execute GameFbxExporter Maya with Python

I need to export thousands of files with the GameFbxExporter plugin from Maya, and I was wondering if there is any way to script those exports, knowing that the parameters are fine in every file. All I need to do is fill in the path and the name of the exported FBX file, then launch the export itself with the plugin.
I'm kind of lost and don't know how to do this. Could someone help me understand how to approach this, please?
Thank you
The game exporter is written in MEL, so you can interact with it from Python using the maya.mel module. This will open the dialog, for example:
import maya.mel as mel
mel.eval("gameFbxExporter();")
Unfortunately, a quick look at the actual game exporter scripts (which are in your Maya install directory, in the scripts/others directory -- they all start with the prefix "gameFBX") makes it look like the UI is hopelessly entangled with the actual act of exporting; it doesn't seem to expose anything that just exports the current file in a batch-friendly way.
The operative procedure is called gameExp_FBXExport, defined in "gameFbxExporter.mel". It appears that the actual business of exporting is delegated to the regular FBX plugin -- all the other stuff in the game exporter just manages FBX presets, selects parts of the scene to export (if you have the scenes set up that way) and then calls the FBX plugin. So you may be able to batch the process in Python by looping over your files and calling FBXExport(). This will export a file to FBX:
import maya.cmds as cmds
cmds.FBXExport('-file', 'path/to/file.fbx')
It will just use whatever FBX settings are currently active, so you will need to be confident that the files are correctly set up. You'll be tempted to write it as cmds.FBXExport(f='path/to/file'), but that won't work -- the FBX plugin commands don't use regular Python flag syntax.
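Wrapping that in a loop over your scene files might look roughly like the sketch below (not part of the original answer; the scene list, output paths and the force-open are assumptions):

import maya.cmds as cmds

# hypothetical list of scene files to batch-export
scene_files = ['scenes/chair.ma', 'scenes/table.ma']

for scene in scene_files:
    # open each scene, discarding unsaved changes
    cmds.file(scene, open=True, force=True)
    # write the FBX next to the source scene, swapping the extension
    fbx_path = scene.rsplit('.', 1)[0] + '.fbx'
    cmds.FBXExport('-file', fbx_path)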
If your current settings rely on the export-selected functionality you'll need to figure out how to cache the correct selections -- if you're using the "export selections set" functionality you should be able to have your exporter find the set by name and then select it before exporting.
cmds.select("name_of_selection_set")
cmds.FBXExport('-file', 'path/to/file.fbx')
You can use the other FBX plugin commands (documented here) to inspect and manipulate the settings in your files as you go along.
Most professional users don't use the GameExport pipeline precisely because it's very opaque and not batch-friendly. In the long run you'll probably want to write a simple system that provides standard settings for different file types and exports the FBXes directly, without the GameExporter. While it's a non-trivial project, it's going to be easier to maintain and expand than hacking your way around the edges of Autodesk's version, which is, frankly, pretty lame.
If you're not already familiar with it http://tech-artists.org/ is a great place to look for pipeline help and advice.

Code inside of a python file deleted

The code inside a Python file was randomly deleted; is there any way to restore it? The file is 3.70 KB, but when it is opened and pasted into a text document there is nothing.
Open it with Python to see what it contains:
with open('deleted.py', 'rb') as f:
    print(repr(f.read()))
Since you are a new user, I am assuming you are new to code development. Therefore, you should look at some version control tools like:
SVN
Github
Gitlab
There are some more, but these are the most common ones. They are used to store your code and to revert your code if you mess up. They are also used to merge code when different programmers are changing it. For the moment this will not help, but it will help in the future.
For now you may look at some restore tools, but I highly doubt that you will be able to recreate the file. Another possibility: if you have an IDE, look at your command history. Maybe you executed the script and can find the executed script as commands in the command history.

Python watchdog event not returning entire src_path

I'm using python watchdog to keep track of what files have been changed locally. Because I'm not keeping track of an entire directory but specific files, I'm using watchdog's event.src_path to check if the changed file is the one I'm looking for.
I'm using the FileSystemEventHandler and on_modified, printing the src_path. However, when I edit a file that should have the path /home/user/project/test in gedit, I get two paths: one that looks like /home/user/project/.goutputstream-XXXXXX and one that looks like /home/user/project/. I never get the path I'm expecting. I thought there might be something wrong with watchdog or my own code, but I tested the exact same process in vi, nano, my IDE (PyCharm), Sublime Text and Atom, and they all gave me the src_path I'm expecting.
I'm wondering if there is a workaround for gedit, since gedit is the default text editor for many Linux distributions. Thanks in advance.
From the Watchdog GitHub readme:
Vim does not modify files unless directed to do so. It creates backup files
and then swaps them in to replace the files you are editing on the
disk. This means that if you use Vim to edit your files, the
on-modified events for those files will not be triggered by watchdog.
You may need to configure Vim appropriately to disable this
feature.
As the quote says, your issue is due to how these text editors modify files. Rather than directly modifying the file, they create "buffer" files that store the edited data. In your case this file is probably .goutputstream-XXXXXX. When you hit save, your original file is deleted and the buffer file is renamed into its place, so your second path is probably the result of the original file being deleted. Sometimes these files serve as backups instead, but they still cause similar issues.
By far the easiest method to solve this issue is to disable the weird way of saving in your chosen text editor. In gedit this is done by unchecking the "Create a backup copy of file before saving" option within preferences. This will stop those backup files from being created and simplify life for watchdog.
Image and preference info shamelessly stolen from this AskUbuntu question
For more information (and specific information for solving vim/vi) see this issue on the watchdog GitHub.
Basically, for Vim you need to run these commands to disable the backup/swap-in behaviour:
:set nobackup
:set nowritebackup
You can add them to your .vimrc to automate the task.
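If changing the editor's settings isn't an option, another workaround is to handle the replace-on-save pattern in code by also reacting to created and moved events whose destination is the file you care about. A minimal sketch (not from the original answers; the target path and the print are placeholders):

import os
import time

from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

TARGET = '/home/user/project/test'  # hypothetical file to track

class TargetHandler(FileSystemEventHandler):
    def _check(self, path):
        # react only when the event concerns the file we care about
        if os.path.abspath(path) == TARGET:
            print('target changed:', path)

    def on_modified(self, event):
        self._check(event.src_path)

    def on_created(self, event):
        self._check(event.src_path)

    def on_moved(self, event):
        # gedit-style saves rename a temporary file onto the target,
        # so the interesting path is dest_path, not src_path
        self._check(event.dest_path)

observer = Observer()
observer.schedule(TargetHandler(), os.path.dirname(TARGET))
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()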

Vim plugins don't always load?

I was trying to install autoclose.vim in Vim. I noticed I didn't have a ~/.vim/plugin folder, so I accidentally made a ~/.vim/plugins folder (notice the extra 's' in plugins). I then added au FileType python set rtp += ~/.vim/plugins to my .vimrc, because from what I've read, that should allow me to automatically source the scripts in that folder.
The plugin didn't load for me until I realized my mistake and took out the extra 's' from 'plugins'. I'm confused because this new path isn't even defined in my runtime path. I'm basically wondering why the plugin loaded when I had it in ~/.vim/plugin but not in ~/.vim/plugins?
:help load-plugins outlines how plugins are loaded.
Adding a folder to your rtp alone does not suffice; it must have a plugin subdirectory. For example, given :set rtp+=/tmp/foo, a file /tmp/foo/plugin/bar.vim would be detected and loaded, but neither /tmp/foo/plugins/bar.vim nor /tmp/foo/bar.vim would be.
You are on the right track with set rtp+=..., but there's a bit more to it (rtp is non-recursive, help indexing, many corner cases) than meets the eye, so it is not a very good idea to do it by yourself -- unless you are ready for a months-long drop in productivity.
If you want to store all your plugins in a special directory you should use a proper runtimepath/plugin-management solution. I suggest Pathogen (rtp-manager) or Vundle (plugin-manager) but there are many others.
In addition to Nikita Kouevda's answer: modifying rtp in a FileType autocommand may be too late for Vim to load any plugins from the modified runtimepath: if the event fires after the vimrc has been sourced, plugins from the new addition are not guaranteed to be loaded; if it fires after the VimEnter event, plugins from the new addition are guaranteed not to be sourced automatically.
If you want to source autoclose only when you edit Python files, you should use :au FileType python :source ~/.vim/macros/autoclose.vim (note: macros, or any other subdirectory except plugin and those found in $VIMRUNTIME, or even any directory not found in runtimepath at all).
If you want to use autoclose only when you edit Python files, you should check the plugin's source and documentation; there must be support on the plugin side for this to work.
Or, if autoclose does not support this, use the :au FileType command from the above paragraph, but prepend the source with something that records Vim's state (commands, mappings and autocommands), append the same after the source, find the differences in the state, and delete the differences on each :au BufEnter if the filetype is not python (restoring them otherwise): hacky and may introduce strange bugs. An example of state-recording and diff-determining code may be found here.
All folders in the rtp (runtimepath) option need to have the same folder structure as your $VIMRUNTIME ($VIMRUNTIME is usually /usr/share/vim/vim{version}). So a folder should have the same subdirectory names, e.g. autoload, doc, plugin (whichever you need, but having the same names is key), and the plugins should be in their corresponding subdirectories.
Let's say /path/to/dir (in your case ~/.vim) is in your rtp; vim will
look for global plugins in /path/to/dir/plugin
look for file-type plugins in /path/to/dir/ftplugin
look for syntax files in /path/to/dir/syntax
look for help files in /path/to/dir/doc
and so on...
vim only looks for a couple of recognized subdirectories† in /path/to/dir. If you have some unrecognized subdirectory name in there (like /path/to/dir/plugins), vim won't see it.
† "recognized" here means that a subdirectory of the same name can be found in /usr/share/vim/vim{version} or wherever you have vim installed.

Configuring celery with a .conf file

Good day.
I set up a separate project from the main one, called myproject-celery. It is a buildout-based project which contains the async part of my project. For convenience I want to have a file containing this machine's configuration. I know that Celery provides a Python config file, but I do not like that configuration style.
Let's say I have a configuration in a Yaml config file named myproject.yaml
What I want to achieve:
./bin/celery worker --config /absolute/path/to/project/myproject.yaml --app myproject.celery
The problem really is that I want to specify the file's location, because it can change. I tried writing a custom loader class, but I failed, because I do not even know why and when the many custom methods of this class are called (the only doc that I found is http://docs.celeryproject.org/en/latest/reference/celery.loaders.base.html?highlight=loader#id1 and it's no help for me). I tried to do something at import time in the app module, but I cannot pass the file path to that module's code... The only solution that I came up with was using a custom environment variable that contains the path, but I do not see why it can't be a launch parameter like in most apps that I use (referring to Pyramid with its paster serve myproject.ini).
So the question:
What do I have to do to set up the config from a file that I could specify by an absolute path?
EDIT:
The question was not answered, so I posted an issue on Celery's GitHub and will wait for a response.
https://github.com/celery/celery/issues/1100
Looking at celery.loaders.base, it looks like the method you want to override is read_configuration:
import yaml  # assumes PyYAML is available
from celery.datastructures import DictAttr
from celery.loaders.base import BaseLoader

class YAMLLoader(BaseLoader):
    def read_configuration(self):
        # Load the YAML file here and return a DictAttr instance, e.g.:
        with open('/absolute/path/to/project/myproject.yaml') as f:
            return DictAttr(yaml.safe_load(f))
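To have the worker pick up such a loader, Celery can (if I remember its loader selection correctly) be pointed at the class via the CELERY_LOADER environment variable, e.g. CELERY_LOADER=myproject.loaders.YAMLLoader (a hypothetical module path), with the loader itself deciding where to read the configuration path from (a hard-coded location or an environment variable).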
