New to Python, I'm trying to fine-tune a plotnine graph and explore what can be done in the theme() function. I'm just wondering what the general way is to find out what else is available for me to play with.
theme(plot_title=element_text(size=10, text='tile'),
      axis_title_y=element_text(size=7, text='tlab'),
      axis_title_x=element_text(size=7, text='xlab'))
In plotnine, these options are called themeables.
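If you want a quick list from inside Python, one option (a minimal sketch, assuming nothing beyond an installed plotnine) is to read the theme() docstring, which should enumerate the themeables it accepts:

import plotnine

# The theme() docstring lists themeables and the element type
# (element_text, element_line, element_rect, ...) each one expects.
help(plotnine.theme)

The plotnine API documentation also has a dedicated section describing each themeable in detail.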
I'm getting very confused trying to set up my simulation correctly in PyDrake. What I want is an actuated robot (with, e.g., an InverseDynamicsController on it) together with an object in the scene that the robot will manipulate. However, I'm struggling to sort out how to create and use the MultibodyPlant, SceneGraph, Context, and Simulator combination correctly.
Here is roughly what I've tried to do:
import numpy as np
from pydrake.all import (AddMultibodyPlantSceneGraph, DiagramBuilder,
                         FindResourceOrThrow, InverseDynamicsController,
                         Parser, Simulator)

builder = DiagramBuilder()
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=1e-4)
parser = Parser(plant, scene_graph)

# Add my robot
robot = parser.AddModelFromFile(robot_urdf)
robot_base = plant.GetFrameByName('robot_base')
plant.WeldFrames(plant.world_frame(), robot_base)

# Add my object
parser.AddModelFromFile(FindResourceOrThrow("drake/my_object.urdf"))
plant.Finalize()

# Add my controller
Kp = np.full(6, 100)
Ki = 2 * np.sqrt(Kp)
Kd = np.full(6, 1)
controller = builder.AddSystem(
    InverseDynamicsController(plant, Kp, Ki, Kd, False))
controller.set_name("sim_controller")
builder.Connect(plant.get_state_output_port(robot),
                controller.get_input_port_estimated_state())
builder.Connect(controller.get_output_port_control(),
                plant.get_actuation_input_port())

# Get the diagram, simulator, and contexts
diagram = builder.Build()
simulator = Simulator(diagram)
context = simulator.get_mutable_context()
plant_context = plant.GetMyContextFromRoot(context)
However, this has some undesirable qualities. First, once I've added the object, I get this error:
Failure at systems/controllers/inverse_dynamics_controller.cc:32 in SetUp(): condition 'num_positions == dim' failed.
Second, with the object added, the object's pose becomes part of my InverseKinematics problem, and when I call SetPositions on plant_context I have to set both my arm joints AND the pose of the object, when I feel I should only be setting the robot's joint positions with SetPositions.
I realize I've done something wrong with this setup, and I'm just wondering: what is the correct way to get an instance of Simulator that I can run simulations with, which has both an actuated robot and a manipulable object? Am I supposed to create multiple plants? Multiple contexts? Who shares what with whom?
I'd really appreciate some advice on this, or a pointer to an example. Drake is great, but I struggle to find minimal examples that do what I want.
Yes, you can add a separate MultibodyPlant for control. The error you're seeing occurs because InverseDynamicsController requires the plant's number of positions to match the dimension of your gains, and the free-floating object adds its pose to the plant's state. See https://github.com/RobotLocomotion/drake/blob/master/examples/planar_gripper/planar_gripper_simulation.cc for an example. The setup is similar to yours, though it's in C++; you can try mimicking the way the diagram is wired up there.
When you do have two plants, you want to call SetPositions on the simulation plant (not the control plant). You can set only the robot positions by using ModelInstanceIndex.
# Add my robot
robot = parser.AddModelFromFile(robot_urdf)
...
plant.SetPositions(plant_context, robot, robot_positions)
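Here is a minimal Python sketch of the two-plant pattern, reusing the names from your snippet (robot_urdf, builder, plant, robot, and the gains); treat it as a sketch of the idea rather than the exact wiring from the planar gripper example:

from pydrake.all import MultibodyPlant, Parser, InverseDynamicsController

# Control plant: contains ONLY the robot, so its num_positions matches
# the dimension of the gains and the InverseDynamicsController check passes.
control_plant = MultibodyPlant(time_step=1e-4)
Parser(control_plant).AddModelFromFile(robot_urdf)
control_plant.WeldFrames(control_plant.world_frame(),
                         control_plant.GetFrameByName('robot_base'))
control_plant.Finalize()

controller = builder.AddSystem(
    InverseDynamicsController(control_plant, Kp, Ki, Kd, False))

# Wire the controller to the robot's ports on the SIMULATION plant,
# using the model instance so the free-floating object is excluded.
builder.Connect(plant.get_state_output_port(robot),
                controller.get_input_port_estimated_state())
builder.Connect(controller.get_output_port_control(),
                plant.get_actuation_input_port(robot))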
I can't find a way to assign a material to a CachedGeometry with Python scripting.
On StaticGeometry I can do it with .set_material, but that function doesn't exist on CachedGeometry.
Do you have a solution?
Thank you!
I assume you mean GeometryCache when you talk about CachedGeometry, or am I mistaken?
Assuming I'm not (apologies if I am), you will be able to do this by modifying the 'materials' property on your GeometryCache object.
import unreal

# Get pre-existing objects
cached_geo_asset = unreal.load_asset('<GEO_CACHE_PATH>')
mat_asset = unreal.load_asset('<MATERIAL_PATH>')

# Display materials before update.
print('Before')
mats = cached_geo_asset.get_editor_property('materials')
print(mats)

# Override the existing materials to use only the one loaded above.
cached_geo_asset.set_editor_property('materials', [mat_asset])

# Display materials after update.
print('After')
mats = cached_geo_asset.get_editor_property('materials')
print(mats)
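If you would rather swap a single slot than override the whole list, something along these lines should also work (an untested sketch, same assumptions as above):

# Replace only the first material slot, keeping any other slots intact.
mats = list(cached_geo_asset.get_editor_property('materials'))
mats[0] = mat_asset
cached_geo_asset.set_editor_property('materials', mats)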
I hope this helps! I haven't used GeometryCache much, so I would be curious to hear if this solution works out for you.
For a task for school, I have to write my own fit function using the least squares method. The problem is that I don't know how to do that; specifically, I don't know how to minimize my function to calculate the fit parameters. My fit function is also not linear, so my book says I have to guess some initial values for the fit parameters and then minimize the function, but I still don't know how to do that. The code below is what I have right now; I got it from somebody, but I don't understand what it does :).
Thanks in advance!
from scipy.optimize import minimize

# Lorentzian peak: amplitude A, centre mu, width gamma, constant background.
def fit(x, mu, gamma, back, A):
    return A * (gamma / ((x - mu)**2 + gamma**2)) + back

# Chi-squared: error-weighted sum of squared residuals. Positie,
# Intensiteit and FoutI are the measured positions, intensities and
# uncertainties, and must be defined before this runs.
def Ls_rechte(y):
    Ls = 0
    for i in range(len(Positie)):
        Ls = Ls + (Intensiteit[i] - fit(Positie[i], y[0], y[1], y[2], y[3]))**2 / FoutI[i]**2
    return Ls

nu = len(Positie) - 4  # degrees of freedom: data points minus 4 fit parameters
mini = minimize(Ls_rechte, (150, 0, 100, 1))  # initial guess for (mu, gamma, back, A)
display(mini)  # display() works in a notebook; use print(mini) in a plain script
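To see it run end to end, here is a hedged sketch with synthetic data; the true parameter values and noise level below are made up purely for illustration, and you would use your measured arrays instead:

import numpy as np
from scipy.optimize import minimize

# Fake a Lorentzian data set with known parameters plus Gaussian noise.
Positie = np.linspace(100, 200, 200)
FoutI = np.full_like(Positie, 2.0)
rng = np.random.default_rng(0)
Intensiteit = fit(Positie, 150.0, 5.0, 100.0, 500.0) + rng.normal(0.0, FoutI)

nu = len(Positie) - 4
mini = minimize(Ls_rechte, (150, 1, 100, 1))
print(mini.x)         # fitted (mu, gamma, back, A), ideally near the true values
print(mini.fun / nu)  # reduced chi-squared; roughly 1 indicates a good fit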
I am trying to create a script that would help me automate the creation of a spine rig, but I am running into a problem. I am following the tutorial provided here and I am working on the step where you skin the curve to the IK joints.
However, when I try to use mc.bindSkin(), I keep getting an error:
Error: RuntimeError: file[directory]/maya/2016.5/scripts\createRigSpine.py line 200: Maya command error)
It's too late right now for me to do much experimenting, but I was hoping someone could help me, or tell me if I'm using the wrong commands.
mc.select(crvSpine, jntIkMidSpine, jntIkChest)
mc.bindSkin(crvSpine, jntIkMidSpine, jntIkChest, tsb=True)
(I have also tried mc.bindSkin() and mc.bindSkin(tsb=True))
Ideally, I want the settings to be:
Bind To: Selected Joints
Bind Method: Closest Distance
Skinning Method: Classic Linear
Normalize Weights: Interactive
Edit: I wanted to use skinCluster, not bindSkin.
You should use the skinCluster command to bind your curve to the joints, and you can actually do it without selecting anything!
Try this:
import maya.cmds as mc
influences = [jntIkMidSpine, jntIkChest]
scls = mc.skinCluster(influences, crvSpine, name='spine_skinCluster', toSelectedBones=True, bindMethod=0, skinMethod=0, normalizeWeights=1)[0]
# alternatively, if you don't want such a long line of code:
#
influences = [jntIkMidSpine, jntIkChest]
kwargs = {
    'name': 'spine_skinCluster',  # or whatever you want to call it...
    'toSelectedBones': True,
    'bindMethod': 0,
    'skinMethod': 0,
    'normalizeWeights': 1,
}
scls = mc.skinCluster(influences, crvSpine, **kwargs)[0]
# OR just use the short names for the kwargs...
#
influences = [jntIkMidSpine, jntIkChest]
scls = mc.skinCluster(influences, crvSpine, n='spine_skinCluster', tsb=True, bm=0, sm=0, nw=1)[0]
If you wanted to, you could also explicitly set the weights you want for each cv of the curve. You could use the skinPercent command, or even just use setAttr for the various weight attrs in the skinCluster (that's a little more difficult, but not much).
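For example, a short sketch using skinPercent with the names from the snippet above (the weight values here are arbitrary, just for illustration):

# Explicitly weight the first cv of the curve between the two joints.
mc.skinPercent(scls, '{}.cv[0]'.format(crvSpine),
               transformValue=[(jntIkMidSpine, 0.75), (jntIkChest, 0.25)])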
The cmds.bindSkin() command is made for binding bones to geometry. It's not suitable for binding to IK handles only, so you need to specify which joints you want to bind to.
For example:
import maya.cmds as mc
mc.select('ikHandle1','nurbsCircle1','joint5')
mc.bindSkin('ikHandle1','nurbsCircle1','joint5')
# the order of selection is vital
For constraining selected objects use the commands like this:
mc.pointConstraint('ikHandle1','nurbsCircle1', weight=5.0)
To find out what constraints are available to you, use Rigging module – Constrain menu – Parent, Point, Orient, Scale, Aim, Pole Vector.
I was using the wrong command. mc.skinCluster is what I wanted to use, not mc.bindSkin.
I am writing a program that tries to compare two methods. I would like to generate control flow graphs (CFGs) for all matched methods and use a topological sort to compare the two graphs.
RPython, the translation toolchain behind PyPy, offers a way of grabbing the flow graph (in the pypy/rpython/flowspace directory of the PyPy project) for type inference.
This works quite well in most cases but generators are not supported. The result will be in SSA form, which might be good or bad, depending on what you want.
There's a Python package called staticfg which does exactly this: generation of control flow graphs from a piece of Python code.
For instance, putting the first quicksort Python snippet from Rosetta Code in qsort.py, the following code generates its control flow graph.
from staticfg import CFGBuilder
cfg = CFGBuilder().build_from_file('quick sort', 'qsort.py')
cfg.build_visual('qsort', 'png')
Note that it doesn't seem to understand more advanced control flow like comprehensions.
I found that py2cfg gives a better representation of the control flow graph (CFG) than the one from staticfg.
https://gitlab.com/classroomcode/py2cfg
https://pypi.org/project/py2cfg/
Let's take this function in Python:
def fib():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

fib_gen = fib()
for _ in range(10):
    next(fib_gen)
Image from staticfg:
Image from py2cfg:
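For completeness, a minimal sketch of how the py2cfg image can be produced, assuming the snippet above is saved as fib.py; py2cfg started as a fork of staticfg, so the builder API below mirrors staticfg's and is an assumption on my part:

from py2cfg import CFGBuilder

cfg = CFGBuilder().build_from_file('fib', 'fib.py')
cfg.build_visual('fib_py2cfg', 'png')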
http://pycallgraph.slowchop.com/ looks like what you need.
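Basic usage looks like this (note that pycallgraph records a dynamic call graph at runtime, not a static CFG, and needs Graphviz installed to render the image):

from pycallgraph import PyCallGraph
from pycallgraph.output import GraphvizOutput

def helper():
    return sum(range(10))

def main():
    return [helper() for _ in range(3)]

# Records every call made inside the block and writes pycallgraph.png.
with PyCallGraph(output=GraphvizOutput()):
    main()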
The Python trace module also has a --trackcalls option that can serve as an entry point to the call-tracing machinery in the stdlib.
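For example, running it on the fib.py file from the answer above executes the script and then prints which functions called which:

python -m trace --trackcalls fib.py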