What I want to do is duplicate a controller to the other side and rename/replace _L with _R. So I just have to select a controller, and it will create a group, then another group to mirror it on the right side, and rename that other group to _R. Then unparent the first group to world. That's all I want to do, but I'm stuck on renaming. I know I have to sort the list in reverse order to rename it, but whenever I do, Maya says:
More than one object matches name
The duplicated object has a different parent name and the same children names. Please tell me how I should do it and what I'm missing.
import maya.cmds as cmds
list = cmds.ls(sl=1)
grp = cmds.group(em=1, name=("grp" + list[0]))
# creating constraint to match transform and deleting it
pc = cmds.pointConstraint(list, grp, o=[0,0,0], w=1)
oc = cmds.orientConstraint(list, grp, o=[0,0,0], w=1)
cmds.delete(pc, oc)
# parenting it to controller
cmds.parent(list, grp)
# creating new group to reverse it to another side
Newgrp = cmds.group(em=1)
cmds.parent(grp, Newgrp)
Reversedgrp = cmds.duplicate(Newgrp)
cmds.setAttr(Reversedgrp[0] +'.sx', -1)
selection = cmds.ls(Reversedgrp, long=1)
selection.sort(key=len, reverse=1)
Renaming in Maya is very annoying, because the names are your only handle to the objects themselves.
The usual trick is basically:
Duplicate the items with the rr flag, so you only get the top nodes
Use listRelatives with the ad and full flags to get all the children of the duplicated top node in long form, like |Parent|Child|Grandchild. In this form the entire hierarchy above the name is listed in order (you can get this form with cmds.ls(l=True) on objects as well)
Sort that list and then reverse it. This will put the longest path names first, so you can start with the leaf nodes and work your way upwards
Now loop through the items and apply your renaming pattern
So something like this, though you probably want to replace the selection here with something you control:
import maya.cmds as cmds
dupes = cmds.duplicate(cmds.ls(sl=True), rr=True) # duplicate, return only roots
dupes += cmds.listRelatives(dupes, ad=True, f=True) # add children as long names
longnames = cmds.ls(dupes, l=True) # make sure we have long name for root
longnames.sort() # usually these sort automatically, but it's good to be safe
for item in longnames[::-1]: # this is shorthand for 'walk through the list backwards'
    shortname = item.rpartition("|")[-1] # get the last bit of the name
    cmds.rename(item, shortname.replace("r","l")) # at last, rename the item
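To make the sorting / rpartition part less mysterious, here is a tiny standalone illustration with made-up long names (nothing Maya-specific, so it runs in any Python interpreter):
paths = ['|grp_ctrl_L', '|grp_ctrl_L|ctrl_L', '|grp_ctrl_L|ctrl_L|ctrlShape_L']
paths.sort()                                 # parents sort before their children
for item in paths[::-1]:                     # reversed: deepest (leaf) nodes come first
    shortname = item.rpartition("|")[-1]     # text after the last '|', e.g. 'ctrlShape_L'
    print(shortname.replace("_L", "_R"))     # prints ctrlShape_R, ctrl_R, grp_ctrl_R
Renaming the leaves first means the long names of the parents stay valid until it is their turn.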
thanks "theodox" it was very usefull. but still little bit confused in sorting, long names, short names and .rpartition... but anyway i have created this script finally.
import maya.cmds as cmds
_list = cmds.ls(sl=1)
grp = cmds.group(em=1, name=("grp_"+ _list[0]))
#creating constraint to match transform and deleting it.
pc=cmds.pointConstraint( _list, grp, o=[0,0,0],w=1 )
oc=cmds.orientConstraint( _list, grp, o=[0,0,0],w=1 )
cmds.delete(pc,oc)
cmds.parent( _list, grp )
Newgrp=cmds.group(em=1)
cmds.parent(grp,Newgrp)
#duplicating new group and reversing it to negative side
dupes = cmds.duplicate(cmds.ls(Newgrp,s=0), rr=True) # duplicate, return only roots
cmds.setAttr( dupes[0] +'.sx', -1 )
#renaming
dupes += cmds.listRelatives(dupes, ad=True, f=True) # add children as long names
longnames = cmds.ls(dupes, l=True,s=0) # make sure we have long name for root
longnames.sort() # usually these sort automatically, but it's good to be safe
print(longnames)
for item in longnames[::-1]: # this is shorthand for 'walk through the list backwards'
    shortname = item.rpartition("|")[-1] # get the last bit of the name
    cmds.rename(item, shortname.replace("_L","_R")) # at last, rename the item
#ungrouping back to world and deleting unused nodes
cmds.parent( grp, world=True )
duplicatedGrp=cmds.listRelatives(dupes[0], c=True)
cmds.parent( duplicatedGrp, world=True )
cmds.delete(dupes[0],Newgrp)
Anyone can use this code for mirroring controllers; just change the "_L","_R" tokens in the rename command to match your naming convention.
Thank you.
How do I pass the following commands into the latex environment?
\centering (I need landscape tables to be centered)
and
\caption* (I need to skip the table numbering for a panel)
In addition, I would need to add parentheses and asterisks to the t-statistics, meaning row-specific formatting on the dataframes.
For example:
Current:
variable             value
const                2.439628
t stat               13.921319
FamFirm              0.114914
t stat               0.351283
founder              0.154914
t stat               2.351283
Adjusted R Square    0.291328
I want this:
variable             value
const                2.439628
t stat               (13.921319)***
FamFirm              0.114914
t stat               (0.351283)
founder              0.154914
t stat               (1.651283)**
Adjusted R Square    0.291328
I'm doing my research papers in DataSpell. All empirical work is in Python, and then I use LaTeX (TeXiFy) to create the PDF within DataSpell. Because of this workflow, I can't edit the tables in the LaTeX code, since they get overwritten every time I run the Jupyter notebook.
In case it helps, here's an example of how I pass a table to the latex environment:
# drop index to column
panel_a.reset_index(inplace=True)
# write Latex index and cut names to appropriate length
ind_list = [
"ageFirm",
"meanAgeF",
"lnAssets",
"bsVol",
"roa",
"fndrCeo",
"lnQ",
"sic",
"hightech",
"nonFndrFam"
]
# assign the list of values to the column
panel_a["index"] = ind_list
# format column names
header = ["", "count","mean", "std", "min", "25%", "50%", "75%", "max"]
panel_a.columns = header
with open(
    os.path.join(r"/.../tables/panel_a.tex"), "w"
) as tf:
    tf.write(
        panel_a
        .style
        .format(precision=3)
        .format_index(escape="latex", axis=1)
        .hide(level=0, axis=0)
        .to_latex(
            caption = "Panel A: Summary Statistics for the Full Sample",
            label = "tab:table_label",
            hrules=True,
        ))
You're asking three questions in one. I think I can do you two out of three (I hear that "ain't bad").
How to pass \centering to the LaTeX env using Styler.to_latex?
Use the position_float parameter. Simplified:
df.style.to_latex(position_float='centering')
How to pass \caption*?
This one I don't know. Perhaps useful: Why is caption not working.
How to apply row-specific formatting?
This one's a little tricky. Let me give an example of how I would normally do this:
import pandas as pd

df = pd.DataFrame({'a': ['some_var', 't stat'], 'b': [1.01235, 2.01235]})
df.style.format({'a': str,
                 'b': lambda x: "{:.3f}".format(x) if x < 2 else '({:.3f})***'.format(x)})
Result:
You can see from this example that style.format accepts a callable (here nested inside a dict, but you could also do: .format(func, subset='value')). So, this is great if each value itself is evaluated (x < 2).
The problem in your case is that the evaluation is over some other value, namely a (not supplied) P value combined with panel_a['variable'] == 't stat'. Now, assuming you have those P values in a different column, I suggest you create a for loop to populate a list that becomes like this:
fmt_list = ['{:.3f}','({:.3f})***','{:.3f}','({:.3f})','{:.3f}','({:.3f})***','{:.3f}']
Now, we can apply a function to df.style.format, and pop/select from the list like so:
fmt_list = ['{:.3f}','({:.3f})***','{:.3f}','({:.3f})','{:.3f}','({:.3f})***','{:.3f}']

def func(v):
    fmt = fmt_list.pop(0)
    return fmt.format(v)

panel_a.style.format({'variable': str, 'value': func})
Result:
This solution is admittedly a bit "hacky", since modifying a globally declared list inside a function is far from good practice; e.g. if you modify the list again before calling func, its functionality is unlikely to result in the expected behaviour or worse, it may throw an error that is difficult to track down. I'm not sure how to remedy this other than simply turning all the floats into strings in panel_a.value inplace. In that case, of course, you don't need .format anymore, but it will alter your df and that's also not ideal. I guess you could make a copy first (df2 = df.copy()), but that will affect memory.
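For what it's worth, a minimal sketch of that copy-based alternative (assuming the columns are named 'variable' and 'value' as in the example, that fmt_list lines up with the rows, and with panel_a_fmt as just a name for the copy):
fmt_list = ['{:.3f}','({:.3f})***','{:.3f}','({:.3f})','{:.3f}','({:.3f})***','{:.3f}']
panel_a_fmt = panel_a.copy()                  # keep the original floats intact
panel_a_fmt['value'] = [fmt.format(v)         # pre-format every row as a string
                        for fmt, v in zip(fmt_list, panel_a_fmt['value'])]
# panel_a_fmt can now go straight to .style.to_latex() without a .format() step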
Anyway, hope this helps. So, in full you add this as follows to your code:
fmt_list = ['{:.3f}','({:.3f})***','{:.3f}','({:.3f})','{:.3f}','({:.3f})***','{:.3f}']

def func(v):
    fmt = fmt_list.pop(0)
    return fmt.format(v)

with open(fname, "w") as tf:
    tf.write(
        panel_a
        .style
        .format({'variable': str, 'value': func})
        ...
        .to_latex(
            ...
            position_float='centering'
        ))
I need to find all projects and shared projects within a Gitlab group with subgroups. I managed to list the names of all projects like this:
group = gl.groups.get(11111, lazy=True)
# find all projects, also in subgroups
projects=group.projects.list(include_subgroups=True, all=True)
for prj in projects:
    print(prj.attributes['name'])
print("")
What I am missing is to list also the shared projects within the group. Or maybe to put this in other words: find out all projects where my group is a member. Is this possible with the Python API?
So, inspired by the answer from sytech, I found out that it was not working in the first place, as the shared projects were still hidden in the subgroups. So I came up with the following code, which digs through all the various levels of subgroups to find all shared projects. I assume this can be written much more elegantly, but it works for me:
# group definition
main_group_id = 11111

# create empty list that will contain the final result
list_subgroups_id_all = []

# create empty list that acts as temporary storage of the results outside the function
list_subgroups_id_stored = []

# function to create a list of subgroups of a group (id)
def find_subgroups(group_id):
    # retrieve group object
    group = gl.groups.get(group_id)
    # create empty list to store ids of subgroups
    list_subgroups_id = []
    # iterate through the group to find the ids of all subgroups
    for sub in group.subgroups.list():
        list_subgroups_id.append(sub.id)
    return list_subgroups_id

# function to iterate over the various groups for subgroup detection
def iterate_subgroups(group_id, list_subgroups_id_all):
    # for a given id, find existing subgroups (id) and store them in a list
    list_subgroups_id = find_subgroups(group_id)
    # add the found items to the list storage variable, so that the results are not overwritten
    list_subgroups_id_stored.append(list_subgroups_id)
    # for each found subgroup_id, test if it is already part of the total id list;
    # if not, store it and test for more subgroups
    for test_id in list_subgroups_id:
        if test_id not in list_subgroups_id_all:
            # add it to the total subgroup id list (final results list)
            list_subgroups_id_all.append(test_id)
            # check whether test_id contains more subgroups
            list_subgroups_id_tmp = iterate_subgroups(test_id, list_subgroups_id_all)
            # if so, append to the stored subgroup list that is currently checked
            list_subgroups_id_stored.append(list_subgroups_id_tmp)
    return list_subgroups_id_all

# find all subgroups, subsubgroups, etc. and store their ids in a list
list_subgroups_id_all = iterate_subgroups(main_group_id, list_subgroups_id_all)
print("***ids of all subgroups***")
print(list_subgroups_id_all)
print("")
print("***names of all subgroups***")
list_names = []
for ids in list_subgroups_id_all:
    group = gl.groups.get(ids)
    group_name = group.attributes['name']
    list_names.append(group_name)
print(list_names)
print("")
# print all directly integrated projects of the main group, also those in subgroups
print("***integrated projects***")
group = gl.groups.get(main_group_id)
projects = group.projects.list(include_subgroups=True, all=True)
for prj in projects:
    print(prj.attributes['name'])
print("")
# print all shared projects
print("***shared projects***")
for sub in list_subgroups_id_all:
    group = gl.groups.get(sub)
    for shared_prj in group.shared_projects:
        print(shared_prj['path_with_namespace'])
print("")
One question that remains - at the very beginning I retrieve the main group by its id (here: 11111), but can I actually also get this id by looking for the name of the group? Something like: group_id = gl.group.get(attribute={'name','foo'}) (not working)?
You can get the shared projects by the .shared_projects attribute:
group = gl.groups.get(11111)
for proj in group.shared_projects:
    print(proj['path_with_namespace'])
However, you cannot use the lazy=True argument to gl.groups.get.
>>> group = gl.groups.get(11111, lazy=True)
>>> group.shared_projects
AttributeError: shared_projects
I'm writing some basic script to create basic charts using Python's svgwrite. I have successfully been able to create groups with other items, such as circles, paths, and lines. However, when I add several text elements to a group, they are not shown as grouped when I open the SVG figure with Inkscape. The text shows up all right, it is just not grouped.
This is my piece of code:
# Create group for constellation names
const_names = dwg.add(dwg.g(id='constellation_names',
                            stroke='none',
                            fill=config.constellation_name_font.color.get_hex_rgb(),
                            fill_opacity=config.constellation_name_font.color.get_float_alpha(),
                            font_size=config.constellation_name_font.size*pt,
                            font_family=config.constellation_name_font.font_family))
log.warning("Constellation name groups are not working!")
if config.constellation_name_enable:
    w, h = constellation.get_mean_position()
    # Add every text item into the group
    const_names.add(dwg.text(constellation.name,
                             insert=(w*pt, h*pt),
                             )
                    )
Turns out this was a type-8 error (I had a bug in the code). This is how my code ended up looking; all text instances are grouped in a single group.
def _add_constellation_names(dwg, constellations, config):
    const_names = dwg.add(dwg.g(id='constellation_names',
                                stroke='none',
                                fill=config.constellation_name_font.color.get_hex_rgb(),
                                fill_opacity=config.constellation_name_font.color.get_float_alpha(),
                                font_size=config.constellation_name_font.size*pt,
                                font_family=config.constellation_name_font.font_family))
    for constellation in constellations:
        kwargs = {}
        if constellation.custom_color != None:
            kwargs["fill"] = constellation.custom_color.get_hex_rgb()
            kwargs["fill_opacity"] = constellation.custom_color.get_float_alpha()
        w, h = constellation.get_mean_position()
        const_names.add(dwg.text(constellation.get_display_name(),
                                 insert=(w*pt, h*pt),
                                 text_anchor="middle",
                                 **kwargs,
                                 )
                        )
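For completeness, a hypothetical usage sketch (Drawing() and save() are standard svgwrite calls; constellations and config are the same objects as above, and the file name is made up):
import svgwrite

dwg = svgwrite.Drawing('star_chart.svg')
_add_constellation_names(dwg, constellations, config)
dwg.save()   # all constellation names end up inside the single 'constellation_names' <g> element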
Sorry; I know there are a thousand 'make unique list' threads. I've tried to solve this on my own, or to hack another "make unique list" solution, but I've been unsuccessful with my not amazing python skills.
I have a list of video file names (these are shots in a movie). For any given shot I want to remove duplicates, keyed on part of the path (the shot folder, e.g. de05_001); only the one with the highest tk_ value should end up in the final list.
e.g. for shot de05_001, only the tk_3 entry should end up in the list.
Input (with duplicates):
raw_list = ['D:\\de05\\de05_001\\postvis\\tk_2\\blasts\\tb205_de05_001.POSTVIS.mov',
'D:\\de05\\de05_001\\postvis\\tk_3\\blasts\\tb205_de05_001.POSTVIS.mov',
'D:\\de05\\de05_002\\postvis\\tk_1\\blasts\\tb205_de05_002.POSTVIS.mov',
'D:\\de05\\de05_017\\postvis\\tk_2\\blasts\\tb205_de05_017.POSTVIS.mov',
'D:\\de05\\de05_019\\postvis\\tk_2\\blasts\\tb205_de05_019.POSTVIS.mov',
'D:\\de05\\de05_019\\postvis\\tk_3\\blasts\\tb205_de05_019.POSTVIS.mov',
'D:\\de05\\de05_019\\postvis\\tk_4\\blasts\\tb205_de05_019.POSTVIS.mov',
'D:\\de05\\de05_019\\postvis\\tk_1\\blasts\\tb205_de05_019.POSTVIS.mov', ]
Output (duplicates removed, only highest tk_ numbers remain):
outputList = ['D:\\de05\\de05_001\\postvis\\tk_3\\blasts\\tb205_de05_001.POSTVIS.mov',
'D:\\de05\\de05_002\\postvis\\tk_1\\blasts\\tb205_de05_002.POSTVIS.mov',
'D:\\de05\\de05_017\\postvis\\tk_2\\blasts\\tb205_de05_017.POSTVIS.mov',
'D:\\de05\\de05_019\\postvis\\tk_4\\blasts\\tb205_de05_019.POSTVIS.mov', ]
Any help would be great. Thank you.
One way would be to create a dictionary and keep reassigning the keys, so you only end up with the last value in the dictionary:
import os

raw_list1 = [
    'D:\\\\de05\\de05_019\\postvis\\tk_2\\blasts\\tb205_de05_019.POSTVIS.mov',
    'D:\\\\de05\\de05_019\\postvis\\tk_3\\blasts\\tb205_de05_019.POSTVIS.mov',
    'D:\\\\de05\\de05_019\\postvis\\tk_4\\blasts\\tb205_de05_019.POSTVIS.mov',
    'D:\\\\de05\\de05_019\\postvis\\tk_1\\blasts\\tb205_de05_019.POSTVIS.mov',
    'D:\\\\tw05\\tw05_036\\postvis\\tk_9\\blasts\\tb205_tw05_036.POSTVIS.mov',
    'D:\\\\tw05\\tw05_036\\postvis\\tk_13\\blasts\\tb205_tw05_036.POSTVIS.mov'
]

raw_list2 = [
    'D:\\de05\\de05_001\\postvis\\tk_2\\blasts\\tb205_de05_001.POSTVIS.mov',
    'D:\\de05\\de05_001\\postvis\\tk_3\\blasts\\tb205_de05_001.POSTVIS.mov',
    'D:\\de05\\de05_002\\postvis\\tk_1\\blasts\\tb205_de05_002.POSTVIS.mov',
    'D:\\de05\\de05_017\\postvis\\tk_2\\blasts\\tb205_de05_017.POSTVIS.mov',
    'D:\\de05\\de05_019\\postvis\\tk_2\\blasts\\tb205_de05_019.POSTVIS.mov',
    'D:\\de05\\de05_019\\postvis\\tk_3\\blasts\\tb205_de05_019.POSTVIS.mov',
    'D:\\de05\\de05_019\\postvis\\tk_4\\blasts\\tb205_de05_019.POSTVIS.mov',
    'D:\\de05\\de05_019\\postvis\\tk_1\\blasts\\tb205_de05_019.POSTVIS.mov',
]

def path_split(p, folders=None):
    folders = folders or []
    head, tail = os.path.split(p)
    if not tail:
        return folders
    return path_split(head, [tail] + folders)

for raw_list in (raw_list1, raw_list2):
    results = {}
    for p in raw_list:
        # Split your path accordingly. For something simple you could have just done
        # s.split('\\'), but since we're working with paths, we might as well use os.path.split
        shot1, shot2, folder1, take, folder2, file_name = path_split(p)
        # If something like 'de05_019' defines your shot, make that the key
        key = shot2
        # Extract the take number into an integer
        new_take_num = int(take.split('_')[-1])
        # Try finding the take you already stored (default to Nones)
        existing_take_num, existing_path = results.get(key, (None, None))
        # See if the new take is bigger than the existing one, based on the take number.
        # Lambda is there for comparison, meaning I'm only comparing the take numbers,
        # not the paths. I'll link the docs to max in the comments.
        value = max((existing_take_num, existing_path), (new_take_num, p),
                    key=lambda take_num_and_path: take_num_and_path[0])
        # Assign the value (which is either the existing take, or the new take)
        results[key] = value
    for res in sorted(results.values()):
        print res
    print '*' * 80
This outputs (you could also just print res[1] to only print the path):
(4, 'D:\\\\de05\\de05_019\\postvis\\tk_4\\blasts\\tb205_de05_019.POSTVIS.mov')
(13, 'D:\\\\tw05\\tw05_036\\postvis\\tk_13\\blasts\\tb205_tw05_036.POSTVIS.mov')
********************************************************************************
(1, 'D:\\de05\\de05_002\\postvis\\tk_1\\blasts\\tb205_de05_002.POSTVIS.mov')
(2, 'D:\\de05\\de05_017\\postvis\\tk_2\\blasts\\tb205_de05_017.POSTVIS.mov')
(3, 'D:\\de05\\de05_001\\postvis\\tk_3\\blasts\\tb205_de05_001.POSTVIS.mov')
(4, 'D:\\de05\\de05_019\\postvis\\tk_4\\blasts\\tb205_de05_019.POSTVIS.mov')
********************************************************************************
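If you prefer something more compact, the same dictionary idea can be squeezed into a small helper. This is only a sketch, assuming the raw_list2 layout (the shot folder is the third path component and the take folder looks like tk_<number>):
import re

def dedupe_latest_takes(paths):
    best = {}                                        # shot folder -> (take number, path)
    for p in paths:
        shot = p.split('\\')[2]                      # e.g. 'de05_001'
        take = int(re.search(r'tk_(\d+)', p).group(1))
        if shot not in best or take > best[shot][0]:
            best[shot] = (take, p)
    return sorted(path for _, path in best.values())

outputList = dedupe_latest_takes(raw_list2)          # matches the desired output above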
I've got a function which parses a sentence by building up a chart. But Python holds on to whatever memory was allocated during that function call. That is, I do
best = translate(sentence, grammar)
and somehow my memory goes up and stays up. Here is the function:
from string import join
from heapq import nsmallest, heappush
from collections import defaultdict

MAX_TRANSLATIONS = 4  # or choose something else

def translate(f, g):
    words = f.split()
    chart = {}
    for col in range(len(words)):
        for row in reversed(range(0, col+1)):
            # get rules for this subspan
            rules = g[join(words[row:col+1], ' ')]
            # ensure there's at least one rule on the diagonal
            if not rules and row == col:
                rules = [(0.0, join(words[row:col+1]))]
            # pick up rules below & to the left
            for k in range(row, col):
                if (row, k) and (k+1, col) in chart:
                    for (w1, e1) in chart[row, k]:
                        for (w2, e2) in chart[k+1, col]:
                            heappush(rules, (w1+w2, e1+' '+e2))
            # add all rules to chart
            chart[row, col] = nsmallest(MAX_TRANSLATIONS, rules)
    (w, best) = chart[0, len(words)-1][0]
    return best
g = defaultdict(list)
g['cela'] = [(8.28, 'this'), (11.21, 'it'), (11.57, 'that'), (15.26, 'this ,')]
g['est'] = [(2.69, 'is'), (10.21, 'is ,'), (11.15, 'has'), (11.28, ', is')]
g['difficile'] = [(2.01, 'difficult'), (10.08, 'hard'), (10.19, 'difficult ,'), (10.57, 'a difficult')]
sentence = "cela est difficile"
best = translate(sentence, g)
I'm using Python 2.7 on OS X.
Within the function, you set rules to an element of grammar; rules then refers to that element, which is a list. You then add items to rules with heappush, which (as lists are mutable) means grammar holds on to the pushed values via that list. If you don't want this to happen, use copy when assigning rules or deepcopy on the grammar at the start of translate. Note that even if you copy the list to rules, the grammar will record an empty list every time you retrieve an element for a missing key.
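For example, a minimal sketch of the copy idea, using the same names as the question's code: inside translate, take a copy of the grammar entry instead of aliasing it, and use .get() so that looking up a missing span does not insert an empty list into the defaultdict.
# copy the entry, so heappush can no longer grow the lists stored inside g
rules = list(g.get(join(words[row:col+1], ' '), []))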
Try running gc.collect() (from the standard gc module) after you run the function.