I have been trying to figure out how to write my own script that bakes Substance materials to files in Maya, or to find documentation that describes the relevant commands and the format in which they should be used. Has anyone made a script using the Substance commands that I could look at for reference? All I have found is this list of commands in the Substance plugin information:
sbs_IsSubstanceRelocalized()
sbs_SetBakeFormat()
sbs_GetGlobalTextureHeight()
sbs_GetEditionModeScale()
sbs_GetChannelsNamesFromSubstanceNode()
sbs_AffectTheseAttributes()
sbs_GetSubstanceBuildVersion()
sbs_SetEditionModeScale()
sbs_GetBakeFormat()
sbs_GetEngine()
sbs_GetGlobalTextureWidth()
sbs_GoToMarketPlace()
sbs_GetGraphsNamesFromSubstanceNode()
sbs_GetAllInputsFromSubstanceNode()
sbs_AffectedByAllInputs()
sbs_EditSubstance()
sbs_GetPackageFullPathNameFromSubstanceNode()
sbs_SetGlobalTextureWidth()
sbs_SetEngine()
sbs_SetGlobalTextureHeight()
Please help!
Here is a short MEL script I use to load a Substance:
// load plugin should be "substance" on windows
loadPlugin "libSubstance";
// create sbs node
string $sbsnode = `shadingNode -asTexture substance`;
// load sbsar with absolute path
setAttr -type "string" ($sbsnode + ".package") "/usr/autodesk/maya/substance/substances/Aircraft_Metal.sbsar";
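For experimenting from Python instead of MEL, here is a rough maya.cmds sketch built only from the commands listed in the question; the plugin does not document their arguments, so the argument forms used below (the substance node's name for the queries, plain integers for the resolution setters) are assumptions you would need to verify in your own Maya session:
import maya.cmds as cmds

cmds.loadPlugin('substance')            # 'libSubstance' on Linux, per the MEL above
sbs_node = cmds.shadingNode('substance', asTexture=True)
cmds.setAttr(sbs_node + '.package',
             '/usr/autodesk/maya/substance/substances/Aircraft_Metal.sbsar',
             type='string')

# Query what the plugin currently reports (assumed: the graph/channel queries
# take the substance node's name as their only argument).
print(cmds.sbs_GetEngine())
print(cmds.sbs_GetBakeFormat())
print(cmds.sbs_GetGlobalTextureWidth(), cmds.sbs_GetGlobalTextureHeight())
print(cmds.sbs_GetGraphsNamesFromSubstanceNode(sbs_node))
print(cmds.sbs_GetChannelsNamesFromSubstanceNode(sbs_node))

# Assumed setter usage: output resolution as plain integers.
cmds.sbs_SetGlobalTextureWidth(1024)
cmds.sbs_SetGlobalTextureHeight(1024)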
Related
After building and installing the Python engine shipped with Matlab R2019b in Anaconda
(TestEnvironment) PS C:\Program Files\MATLAB\R2019b\extern\engines\python> C:\Users\USER\Anaconda3\envs\TestEnvironment\python.exe .\setup.py build -b C:\Users\USER\MATLAB\build_temp install
for Python 3.7, I wrote a simple script to test a couple of features I'm interested in:
import matlab.engine as ml_e
# Start Matlab engine
eng = ml_e.start_matlab()
# Load MAT file into engine. The result is a dictionary
mat_file = "samples/lena.mat"
lenaMat = eng.load("samples/lena.mat")
print("Variables found in \"" + mat_file + "\"")
for key in lenaMat.keys():
print(key)
# print(lenaMat["lena512"])
# Use the variable from the MAT file to display it as an image
eng.imshow(lenaMat["lena512"], [])
I have a problem with imshow() (or any similar function that displays a figure in the Matlab GUI): the figure shows up briefly and then disappears, which at least confirms that it can be used. The only way I have found to keep it on the screen is to add an infinite loop at the end:
while True:
continue
For obvious reasons this is not a good solution. I am not looking to convert the Matlab data to NumPy or similar and display it with matplotlib or other third-party libraries (I am aware that SciPy can load MAT files, for example). The reason is simple: I would like to work with Matlab itself (including loading whole environments), and for debugging purposes I'd like to be able to show this or that result without having to jump through the hoops of converting the data manually.
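For what it's worth, instead of the busy loop you could block on console input in Python, or let MATLAB itself block until the figure is closed. A small sketch based on the script above (the uiwait line is a commented-out alternative):
import matlab.engine as ml_e

eng = ml_e.start_matlab()
lenaMat = eng.load("samples/lena.mat")

# nargout=0 tells the engine we do not want imshow's figure handle back.
eng.imshow(lenaMat["lena512"], [], nargout=0)

# Either block in Python until the user presses Enter ...
input("Press Enter to close the figure and exit ...")

# ... or let MATLAB block until the current figure is closed:
# eng.eval("uiwait(gcf)", nargout=0)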
I'm trying to load the English model for StanfordNLP (Python) from my local machine, but I am unable to find the proper import statements to do so. What commands can be used? Is there a pip installation available to load the English model?
I have tried using the download command to do so; however, my machine requires all files to be added locally. I downloaded the English jar files from https://stanfordnlp.github.io/CoreNLP/ but am unsure whether I need both the English and the English KBP versions.
The directory set for the model download is /home/sf.
# pip install stanfordnlp   (run in the shell first to install stanfordnlp)
import stanfordnlp
stanfordnlp.download("en") # after answering 'Y', you can set a custom directory path
local_dir_store_model = "/home/sf"
english_model_dir = "/home/sf/en_ewt_models"
tokenizer_en_pt_file = "/home/sf/en_ewt_models/en_ewt_tokenizer.pt"
nlp = stanfordnlp.Pipeline(models_dir=local_dir_store_model, processors='tokenize,mwt,lemma,pos')
doc = nlp("""One of the most wonderful things in life is to wake up and enjoy a cuddle with somebody; unless you are in prison""")
doc.sentences[0].print_tokens()
I am not clear on exactly what you want to do.
If you want to run the all-Python pipeline, you can download the model files and run them from Python code by specifying the path for each annotator, as in this example:
import stanfordnlp
config = {
'processors': 'tokenize,mwt,pos,lemma,depparse', # Comma-separated list of processors to use
'lang': 'fr', # Language code for the language to build the Pipeline in
'tokenize_model_path': './fr_gsd_models/fr_gsd_tokenizer.pt', # Processor-specific arguments are set with keys "{processor_name}_{argument_name}"
'mwt_model_path': './fr_gsd_models/fr_gsd_mwt_expander.pt',
'pos_model_path': './fr_gsd_models/fr_gsd_tagger.pt',
'pos_pretrain_path': './fr_gsd_models/fr_gsd.pretrain.pt',
'lemma_model_path': './fr_gsd_models/fr_gsd_lemmatizer.pt',
'depparse_model_path': './fr_gsd_models/fr_gsd_parser.pt',
'depparse_pretrain_path': './fr_gsd_models/fr_gsd.pretrain.pt'
}
nlp = stanfordnlp.Pipeline(**config) # Initialize the pipeline using a configuration dict
doc = nlp("Van Gogh grandit au sein d'une famille de l'ancienne bourgeoisie.") # Run the pipeline on input text
doc.sentences[0].print_tokens()
If you want to run the Java server with the Python interface, you need to download the Java jar files and start the server. Full info here: https://stanfordnlp.github.io/CoreNLP/corenlp-server.html
Then you can access the server with the Python interface. Full info here: https://stanfordnlp.github.io/stanfordnlp/corenlp_client.html
But just to be clear, the jar files should not be used with the pure Python pipeline. Those are for running the Java server.
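For the server route, here is a minimal, untested sketch of the Python client side, assuming CoreNLP has been downloaded locally and the server can be started from it; the annotator list is only an example:
from stanfordnlp.server import CoreNLPClient

# Start (or connect to) a CoreNLP Java server and annotate some text.
text = "Chris Manning is a nice person."
with CoreNLPClient(annotators=['tokenize', 'ssplit', 'pos', 'lemma'],
                   timeout=30000, memory='4G') as client:
    ann = client.annotate(text)
    # ann is a Document protobuf; print the tokens of the first sentence.
    for token in ann.sentence[0].token:
        print(token.word, token.pos)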
I am trying to produce a quick reworking of some educational materials on music, showing how the associated media assets (images, audio files) can be created from "code" in a Jupyter notebook using the Python music21 package.
It seems the simplest steps are the hardest. For example, how do I create an empty staff,
or a staff populated by notes but without a clef at the start?
If I do something like:
from music21 import *
s = stream.Stream()
s.append(note.Note('G4', type='whole'))
s.append(note.Note('A4', type='whole'))
s.append(note.Note('B4', type='whole'))
s.append(note.Note('C5', type='whole'))
s.show()
I get a staff with a clef and barlines added automatically.
Try creating a stream.Measure object, so that barlines before the notes don't appear.
Music21 puts barlines, clefs, etc. in by default. You can manually put in a time signature of 4/1 and a treble clef and hide them with .style.hideObjectOnPrint (or just .hideObjectOnPrint on older music21 versions). You will probably also need to set .rightBarline = bar.Barline('none') or something like that for the end.
It is possible, but I haven't ever fully tried all the parts of it.
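Putting those suggestions together, here is an untested sketch of what that might look like (the 4/1 time signature, treble clef, and 'none' barline follow the advice above; exact behaviour may vary between music21 versions):
from music21 import stream, note, meter, clef, bar

m = stream.Measure()

# Add a 4/1 time signature and a treble clef, but hide both on output.
ts = meter.TimeSignature('4/1')
ts.style.hideObjectOnPrint = True
m.append(ts)

cl = clef.TrebleClef()
cl.style.hideObjectOnPrint = True
m.append(cl)

# Four whole notes fill the 4/1 measure exactly, so no internal barlines appear.
for pitch in ['G4', 'A4', 'B4', 'C5']:
    m.append(note.Note(pitch, type='whole'))

# Suppress the final barline as well.
m.rightBarline = bar.Barline('none')

m.show()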
I want to make some operations automatic, but I ran into trouble exporting the image after baking it. At first I tried to use "bpy.ops.object.bake_image()" to bake the image, but the resulting image does not become active in the UV editor.
The bake succeeds, but the result image doesn't appear in the UV editor. It needs to be selected there before I can export the file.
So I searched the documentation and found the other command, "bpy.ops.object.bake()". It has a "save_mode" parameter, but I still ran into an obstacle using it: it always fails with: RuntimeError: error: No active image found in material "material" (0) for object "1.001".
Here is the official documentation for these two commands:
https://docs.blender.org/api/blender_python_api_2_78a_release/bpy.ops.object.html?highlight=bake#bpy.ops.object.bake
Can anyone give me a solution or some advice on how to get this working?
Many of Blender's operators require a certain context before they will work; for bpy.ops.image.save() that includes the UV/Image editor having an active image. While there are ways to override the current context to make them work, it is often easier to use other methods.
The Image object can save() itself. If it is a new image you will first need to set its filepath; you may also want to set its file_format.
img = bpy.data.images['imagename']
img.filepath = '/path/to/save/imagename.png'
img.file_format = 'PNG'
img.save()
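As for the "No active image found" error from bpy.ops.object.bake(): that usually means the material has no active Image Texture node for the baker to write into. A rough sketch of one way to handle the whole bake-and-save flow (assuming the Cycles engine, a single material on the active object, and hypothetical names such as "bake_result" and the /tmp path):
import bpy

# Make sure we are baking with Cycles.
bpy.context.scene.render.engine = 'CYCLES'

obj = bpy.context.active_object
img = bpy.data.images.new("bake_result", width=1024, height=1024)

# The baker writes into the *active* Image Texture node of the material,
# which is exactly what the "No active image found" error is complaining about.
mat = obj.active_material
mat.use_nodes = True
tex_node = mat.node_tree.nodes.new('ShaderNodeTexImage')
tex_node.image = img
mat.node_tree.nodes.active = tex_node

bpy.ops.object.bake(type='DIFFUSE')

# Save the baked result without going through the UV/Image editor.
img.filepath_raw = '/tmp/bake_result.png'
img.file_format = 'PNG'
img.save()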
I've got a Microsoft Word document with an embedded macro. I've managed to load the document using this example: Loading a document on OpenOffice using an external Python program.
Now I'm trying to get the macro code from my document, but can't figure out how to do this. I've stumbled upon an interface that can probably be used (http://www.openoffice.org/api/docs/common/ref/com/sun/star/document/XEmbeddedScripts.html), though it's unclear to me how to use it from Python.
So how can I extract macros text from document using Python UNO?
Which version of LO are you using?
Normally, I would do something like
doc = desktop.loadComponentFromURL(url, "_blank", 0, () )
# the Basic Script Library/Libraries
the_basic_libs = doc.BasicLibraries
if the_basic_libs.hasElements():
the_standard = the_basic_libs.getByName("Standard")
the_one = the_standard.getByName("Module1")
print(the_one)
But my version (LO 4.1.3.2) gives me a "no such element exception", though I can see and access the element using MRI (or the GUI).
Maybe it is a flaw in LO or UNO... or the fact that we are testing with a *.doc file.
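One thing worth trying before blaming the file format is to enumerate whatever libraries and modules the document actually exposes, instead of assuming "Standard"/"Module1". A small, untested sketch along those lines (loadLibrary() is called because libraries are not always loaded up front):
# doc is the component returned by loadComponentFromURL(), as above.
the_basic_libs = doc.BasicLibraries
for lib_name in the_basic_libs.getElementNames():
    # Libraries may need to be loaded before their modules are readable.
    the_basic_libs.loadLibrary(lib_name)
    lib = the_basic_libs.getByName(lib_name)
    for mod_name in lib.getElementNames():
        print("--- {} / {} ---".format(lib_name, mod_name))
        print(lib.getByName(mod_name))   # module source code as a string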