I am writing a script to export Alembic caches of animation in a massive project containing lots of Maya files. Our main character is having an issue: along the way his eyes somehow ended up with the same name. This has created issues with the Alembic export. Does Maya already have some sort of cleanup function that can correct matching names?
Any two objects can have the same name, but never the same DAG path. In your script, make sure all your ls, listRelatives, etc. calls have the fullPath, longName, or long flags set, so you always operate on full DAG paths rather than the possibly conflicting short names.
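For example, a minimal sketch (the node names are made up):
import maya.cmds as cmds

# long=True / fullPath=True return unique DAG paths such as
# "|char|head|eye_L|eyeShape" instead of possibly ambiguous short names.
shapes = cmds.ls(type="mesh", long=True)
for shape in shapes:
    parents = cmds.listRelatives(shape, parent=True, fullPath=True)
    print(shape, "->", parents)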
To my knowledge, Maya (and its Python API) does not offer anything like that.
You'll have to run a snippet before export to check for duplicates.
Or, alternatively, use an already existing script and run that.
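For instance, a quick duplicate check (a sketch; it relies on the fact that Maya returns a full DAG path, containing "|", for any node whose short name is not unique):
import maya.cmds as cmds

# Nodes with unique short names come back as plain names; clashing
# nodes come back as full paths containing "|".
duplicates = [node for node in cmds.ls(dag=True) if "|" in node]
for node in duplicates:
    print("Duplicate short name:", node)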
I'm trying to build Python from source, using the --prefix option to control the target directory where it gets installed.
After a successful installation, some files in the target directory contain entries pointing at the working directory I actually built from.
Example files which have entries for abs_srcdir & abs_builddir:
lib/python3.9/_sysconfigdata__linux_x86_64-linux-gnu.py
lib/python3.9/config-3.9-x86_64-linux-gnu/Makefile
How can I avoid this?
I am a bit unfamiliar with the build process in Python, but I can tell that these are part of Autoconf's Preset Output Variables.
From docs:
Some output variables are preset by the Autoconf macros. Some of the Autoconf macros set additional output variables, which are mentioned in the descriptions for those macros. See Output Variable Index, for a complete list of output variables. See Installation Directory Variables, for the list of the preset ones related to installation directories. Below are listed the other preset ones, many of which are precious variables (see Setting Output Variables, AC_ARG_VAR).
You can see the variables you mentioned in B.2 Output Variable Index. Since these are preset variables, I don't see how you can exclude them. Manually removing them post-installation, or writing some sort of script to do so, seems like the only way to solve this.
If this were done in GNU Make, you could use the filter-out text function.
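If you do go the script route, a minimal post-install cleanup sketch in Python (the prefix and build paths below are assumptions; adjust them to your setup):
from pathlib import Path

# Assumed values -- replace with your actual --prefix and build directory.
prefix = Path("/opt/python3.9")
build_dir = "/home/me/build/cpython"

targets = [
    prefix / "lib/python3.9/_sysconfigdata__linux_x86_64-linux-gnu.py",
    prefix / "lib/python3.9/config-3.9-x86_64-linux-gnu/Makefile",
]
for path in targets:
    # Blank out the baked-in build directory entries (abs_srcdir, abs_builddir).
    path.write_text(path.read_text().replace(build_dir, ""))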
I need to export thousands of files with the GameFbxExporter plugin from Maya, and I was wondering if there is any way to script those exports, knowing that the parameters are fine in every file. All I need to do is fill in the path section and the name of the exported FBX file, then launch the export itself with the plugin.
I'm kind of lost and don't know how to do this. Could someone help me understand how to achieve that, please?
Thank you
The game exporter is written in MEL, so you can interact with it from Python using the maya.mel module. This will open the dialog, for example:
import maya.mel as mel
mel.eval("gameFbxExporter();")
Unfortunately, a quick look at the actual game exporter scripts (which are in your Maya install directory in the scripts/others directory -- they all start with the prefix "gameFBX") makes it look like the UI is hopelessly entangled with the actual act of exporting; it doesn't seem to expose anything which just exports the current file in a batch-friendly way.
The operative procedure is called gameExp_FBXExport, defined in "gameFbxExporter.mel". It appears that the actual business of exporting is delegated to the regular FBX plugin -- all the other stuff in the game exporter just manages FBX presets, selects parts of the scene to export (if you have the scenes set up that way), and then calls the FBX plugin. So you may be able to batch the process by looping over your files in Python and calling FBXExport(). This will export the current scene to FBX:
import maya.cmds as cmds
cmds.FBXExport('-file', 'path/to/file.fbx')
It will just use whatever FBX settings are currently active, so you will need to be confident that the files are correctly set up. You'll be tempted to write it as cmds.FBXExport(f='path/to/file'), but that won't work -- the FBX plugin commands don't use regular Python flag syntax.
If your current settings rely on the export-selected functionality, you'll need to figure out how to cache the correct selections -- if you're using the "export selections set" functionality, you should be able to have your exporter find the set by name and then select it before exporting:
cmds.select("name_of_selection_set")
cmds.FBXExport('-file', 'path/to/file.fbx')
You can use the other FBX plugin commands -- documented here -- to inspect and manipulate the settings in your files as you go along.
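Putting it together, a rough batch sketch (the directory paths and the selection-set name are assumptions; it reuses whatever FBX settings each file already has):
import os
import maya.cmds as cmds

SCENE_DIR = "path/to/scenes"   # where your .ma/.mb files live
EXPORT_DIR = "path/to/fbx"     # where the FBX files should go

for scene in os.listdir(SCENE_DIR):
    if not scene.endswith((".ma", ".mb")):
        continue
    cmds.file(os.path.join(SCENE_DIR, scene), open=True, force=True)
    # If the file relies on an export-selection set, select it first.
    if cmds.objExists("name_of_selection_set"):
        cmds.select("name_of_selection_set")
    out = os.path.join(EXPORT_DIR, os.path.splitext(scene)[0] + ".fbx")
    cmds.FBXExport('-file', out)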
Most professional users don't use the game export pipeline precisely because it's very opaque and not batch-friendly. In the long run you'll probably want to write a simple system that provides standard settings for different file types and exports the FBXes directly, without the game exporter. While it's a non-trivial project, it's going to be easier to maintain and extend than hacking your way around the edges of Autodesk's version, which is, frankly, pretty lame.
If you're not already familiar with it http://tech-artists.org/ is a great place to look for pipeline help and advice.
It's kind of an open question, but please bear with me.
I am working on several projects (mainly with pandas) and I have created my standard approach to manage them:
1. create a main folder for all files in a project
2. create a data folder
3. have all the output in another folder
and so on.
One of my main activities is data cleaning, and in order to standardize it I have created a dictionary file where I store the various translation of the same entity, e.g. USA, US, United States, and so on, so that the files I am producing are consistent.
Every time I create a new project, I copy the dictionary file in the data directory and then:
xls = pd.ExcelFile(r"data/dictionary.xlsx")
df_area = xls.parse("area")
and afterwards, to translate the country names into my standard, I call:
join_column, how_join = "country", "inner"
df_ct = pd.concat([
    df_ct.merge(df_area, left_on=join_column, right_on="country_name", how=how_join),
    df_ct.merge(df_area, left_on=join_column, right_on="alternative01", how=how_join),
])
and finally I check that I am not losing any records to a missed join.
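That check is something like this sketch (df_raw here is a stand-in for the frame before the merge):
unmatched = set(df_raw["country"]) - set(df_ct["country"])
if unmatched:
    print("Countries lost in the join:", unmatched)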
Over and over the same thing.
I would like to have a way to remove all this unnecessary copy and paste (of the file and of the code). Also, the files I used in the first projects are already deprecated, and I need to update them (and sometimes the code) when I process new data. Sometimes I also lose track of where the latest dictionary file is! Overall it's a lot of maintenance, which I believe could be saved.
Is creating my own package the way to go, or is it a little too ambitious?
Is there another shortcut? It's not a lot of code overall, but it's multiplied across several projects.
Thanks for any insight, your time going through this is appreciated.
In the end I decided to create my own package.
It required some time, so I am happy to share the details of the process (I run Python on Jupyter and Windows).
The first step is to decide where to store the code.
In my case it was C:\Users\my_user\Documents
You need to add this directory to the list of directories where Python looks for packages. This is achieved by running the following statements:
import sys
sys.path.append("C:\\Users\\my_user\\Documents")
In order to run the above statements each time you start Python, they must be included in a file in the startup directory (which might vary depending on your installation):
C:\Users\my_user\.ipython\profile_default\startup
The file can be named "00-first.py" ("50-middle.py" or "99-last.py" will also work).
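The startup file then just contains the same two statements:
import sys
sys.path.append("C:\\Users\\my_user\\Documents")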
To verify everything is working, restart Python and run the command:
print(sys.path)
You should see your directory in the output at this point.
Create a folder with the package name in your directory, and a subfolder (I prefer not to have code in the main package folder):
C:\Users\my_user\Documents\my_package\my_subfolder
Put an empty file named __init__.py in each of the two folders: my_package and my_subfolder. At this point you should already be able to import your empty package from Python:
import my_package as my_pack
Inside my_subfolder, create a file (my_code.py) which will store the actual code:
def my_function(name):
    print("Hello " + name)
Modify the outer __init__.py file to include shortcuts. Add the following:
from my_package.my_subfolder.my_code import my_function
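At this point the full layout is:
C:\Users\my_user\Documents
└── my_package
    ├── __init__.py
    └── my_subfolder
        ├── __init__.py
        └── my_code.py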
You should now be able to run the following in Python:
my_pack.my_function("World!")
Hope you find it useful!
I developed a Python library for merging large numbers of XML files in a very specific way. These XML files are split up and altered by multiple users in my group and it would be much easier to put everything into a Git repo and have git-merge manage everything via my Python code.
It seems that implementing my code for git-mergetool is possible, but I would have to write my own code to handle the conflict output of the internal git-merge (i.e. parse the <<<<<<<, =======, >>>>>>> conflict markers), which would be more time consuming.
So, is there a way to have Git's merge command automatically use my Python code instead of its internal git-merge?
You can implement a custom merge driver that's used for certain filetypes instead of Git's default merge driver.
Relevant documentation is in gitattributes(5).
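A minimal setup sketch (the driver name "xmlmerge" and the script name xml_merge.py are assumptions). Git hands the driver the ancestor (%O), current (%A), and other (%B) versions; the driver must leave the merged result in the %A file and exit 0 on success:
# .gitattributes
*.xml merge=xmlmerge

# .git/config (or set it via `git config`)
[merge "xmlmerge"]
    name = merge XML files with my Python library
    driver = python xml_merge.py %O %A %B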
Some related StackOverflow questions:
Git - how to force merge conflict and manual merge on selected file
How do I tell git to always select my local version for conflicted merges on a specific file?
I want to automate the entire process of creating ngs, bit, and mcs files in Xilinx, and have these files automatically associated with certain folders in the SVN repository. What I need to know is: is there a log file created in the back end of the Xilinx GUI which records all the commands I run, e.g. open project, load file, synthesize, etc.?
Also, the other thing I have not been able to find is a log file that records the entire process of synthesis, map, place and route, and generate programming file, and especially records any errors the tool encountered during these processes.
If any of you can point me to such files, if they exist, that would be great. I haven't gotten much out of my search, but maybe I didn't look hard enough.
Thanks!
Well, it is definitely a nice project idea, but a good amount of work. There's always a reason why an IDE was built. A simple search yields the "Command Line Tools User Guide" for various versions of Xilinx ISE, e.g. for 14.3: 380 pages covering
Overview and list of features
Input and output files
Command line syntax and options
Report and message information
ISE is a GUI for various command line executables, most of them are located in the subfolder 14.5/ISE_DS/ISE/bin/lin/ (in this case: Linux executables for version 14.5) of your ISE installation root. You can review your current parameters for each action by right clicking the item in the process tree and selecting "Process properties".
On the Python side, consider using the subprocess module:
The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes.
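For example, a minimal sketch (the tool and the script file name are assumptions; check the user guide for the exact syntax of xst, ngdbuild, map, par, and bitgen):
import subprocess

# Run one ISE command-line tool (here xst, the synthesis step) and keep its log.
# "design.xst" is a hypothetical synthesis script file.
result = subprocess.run(["xst", "-ifn", "design.xst"],
                        capture_output=True, text=True)
with open("xst_run.log", "w") as log:
    log.write(result.stdout)
if result.returncode != 0:
    print("xst failed; see xst_run.log")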
Is this the entry point you were looking for?
As phineas said, what you are trying to do is quite an undertaking.
I've been there, done that, and there are countless challenges along the way. For example, if you want to move generated files to specific folders, how do you classify those files in order to figure out which is which? I've created a project called X-MimeTypes that attempts to classify the files, but you then need a tool to parse the EDA MIME type database and use that to determine which files are which.
However, there is hope. To answer the two main questions you've pointed out:
1. To be able to automatically move generated files to predetermined paths. From what you are saying, it seems like you want to do this to make the versioning process easier? There is already a tool that does this for you, based on "design structures" that you create and that can be shared within a team. The tool is called Scineric Workspace, so check it out. It also has built-in Git and SVN support, which ignores things according to the design structure and in most cases filters out everything generated by vendor tools without you having to worry about it.
2. You are looking for a log file that shows all commands that were run. As phineas said, you can check out the Command Line Tools User Guides for ISE, but be aware that the commands have changed again in Vivado. The log file of each process also usually states the exact command, with its parameters, that was called; this should be close to the top of the report. If you are looking for one log file that contains everything, that does not exist. Again, Scineric Workspace supports invoking flows from the major vendors (ISE, Vivado, Quartus) and produces one log file for all processes together, while still allowing each process to create its own log file. Errors, warnings, etc. are also marked properly in this big report. Scineric has a Tcl shell mode as well, so your Python tool can run it in the background and parse the complete log file it creates.
If you have more questions on the above, I will be happy to help.
Hope this helps,
Jaco