I am a beginner in TensorFlow and I have run into a problem: how do I manually change a Variable? More precisely, I want to add some noise to my weights tensor, see how well it does, and based on that, apply or ignore the change.
W = tf.Variable(tf.random_normal([xsize, ysize]))
TempW = W + tf.random_normal([xsize, ysize])
compute = x * TempW
# initialize, run the computation, etc.
# how can I make W = TempW now?
After kratenko pointed it out, I found that there are methods like
tf.Variable.assign(value)
tf.Variable.assign_add(delta)
tf.Variable.assign_sub(delta)
In my case, the usage was:
# initialisation
apply = W.assign(TempW)
# usage
sess.run(apply)
So if anyone else also skipped these in the docs, I hope this helps.
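For completeness, here is a minimal sketch of the full accept/reject loop. The shapes, the stand-in loss, and the feed data below are all made up for illustration; the only point is the assign-then-maybe-revert pattern. The candidate weights are generated in NumPy so the exact noise that was evaluated is the noise that gets kept:

import numpy as np
import tensorflow as tf

xsize, ysize = 4, 3                                  # illustrative shapes
x = tf.placeholder(tf.float32, [None, xsize])
W = tf.Variable(tf.random_normal([xsize, ysize]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, W)))    # stand-in loss

candidate = tf.placeholder(tf.float32, [xsize, ysize])
apply_candidate = W.assign(candidate)                # reusable assign op

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    feed = {x: np.random.rand(10, xsize).astype(np.float32)}
    current_w = sess.run(W)
    noisy_w = current_w + np.random.normal(scale=0.01, size=current_w.shape).astype(np.float32)
    old_loss = sess.run(loss, feed)
    sess.run(apply_candidate, feed_dict={candidate: noisy_w})
    if sess.run(loss, feed) > old_loss:
        # the noise made things worse: revert to the previous weights
        sess.run(apply_candidate, feed_dict={candidate: current_w})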
I'm quite a newbie with networkx and it seems that I'm having RAM issues when running a function that merges two different graphs. The function adds up the weights of edges that are common to both graphs.
I have a list of 8-9 graphs, each containing about 1000-2000 nodes, which I merge in this loop:
FinalGraph = nx.Graph()
while len(graphs_list) != 0:
    FinalGraph = merge_graphs(FinalGraph, graphs_list.pop())
using this function
def merge_graphs(graph1, graph2):
    edges1 = graph1.edges
    edges2 = graph2.edges
    diff1 = edges1 - edges2
    if diff1 == edges1:
        return nx.compose(graph1, graph2)
    else:
        common_edges = list(edges1 - diff1)
        for edges in common_edges:
            graph1[edges[0]][edges[1]]['weight'] += graph2[edges[0]][edges[1]]['weight']
        return nx.compose(graph2, graph1)
When running my script, my computer always freezes when it reaches this loop. Am I creating some kind of bad reference cycle or something? Am I missing something more effective in the networkx docs that could spare me this function for my purpose?
Thanks for reading, I hope I'm making sense.
There seems to be a lot of extra work going on here caused by trying to check whether the conditions allow you to use compose, and that may be contributing to the trouble. I think it would work better to just iterate through the edges and nodes of each graph. The following looks like a more direct way to do it (and it doesn't require creating as many intermediate variables, which might be contributing to the memory issues):
final_graph = nx.Graph()
for graph in graphs_list:
    final_graph.add_nodes_from(graph.nodes())
    # with data=True the third element is the edge's attribute dict, not the weight itself
    for u, v, data in graph.edges(data=True):
        if final_graph.has_edge(u, v):
            # edge exists in both graphs: accumulate the weights
            final_graph[u][v]['weight'] += data['weight']
        else:
            final_graph.add_edge(u, v, weight=data['weight'])
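As a quick sanity check, here is a self-contained toy example (the graphs and weights are made up) showing that a shared edge ends up with the summed weight:

import networkx as nx

g1 = nx.Graph()
g1.add_edge('a', 'b', weight=1.0)
g2 = nx.Graph()
g2.add_edge('a', 'b', weight=2.0)
g2.add_edge('b', 'c', weight=0.5)

final_graph = nx.Graph()
for graph in [g1, g2]:
    final_graph.add_nodes_from(graph.nodes())
    for u, v, data in graph.edges(data=True):
        if final_graph.has_edge(u, v):
            final_graph[u][v]['weight'] += data['weight']
        else:
            final_graph.add_edge(u, v, weight=data['weight'])

print(final_graph['a']['b']['weight'])  # 3.0: the shared edge's weights were summed
print(final_graph['b']['c']['weight'])  # 0.5: the unshared edge is copied as-is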
I'm working on a variational auto-encoder and I'd like the prior used in the KL-divergence regularization of the latent distribution to have its loc (mean) and scale (stddev) updated.
The snippet below is a contrived minimal example demonstrating what I'm trying to achieve. It starts to work but then just freezes after some random number of epochs (sometimes 1, sometimes 200, but usually around 7 or 8), with no error message or anything.
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

tfb = tfp.bijectors
tfd = tfp.distributions
tfkl = tf.keras.layers
tfpl = tfp.layers

# ndim, ds, and N_EPOCHS are defined elsewhere in the gist
loc = tf.Variable(tf.random.normal([ndim], stddev=0.1, dtype=tf.float32))
scale = tfp.util.TransformedVariable(
    tf.random.normal([ndim], mean=1.0, stddev=0.1, dtype=tf.float32),
    bijector=tfb.Chain([tfb.Shift(1e-5), tfb.Softplus(), tfb.Shift(0.5413)]))
prior = tfd.Independent(tfd.Normal(loc=loc, scale=scale), reinterpreted_batch_ndims=1)

_input = tfkl.Input(shape=(1,))
_loc = tfkl.Dense(ndim, name="loc_params")(_input)
_scale = tfkl.Dense(ndim, name="untransformed_scale_params")(_input)
_scale = tf.math.softplus(_scale + np.log(np.exp(1) - 1)) + 1e-5
_output = tfpl.DistributionLambda(
    make_distribution_fn=lambda t: tfd.Independent(tfd.Normal(loc=t[0], scale=t[1])),
    activity_regularizer=tfpl.KLDivergenceRegularizer(prior, use_exact_kl=True, weight=0.1)
)([_loc, _scale])

model = tf.keras.Model(_input, _output)
model.compile(optimizer='adam', loss=lambda y_true, model_out: -model_out.log_prob(y_true))
hist = model.fit(ds, epochs=N_EPOCHS, verbose=2)
I have a runnable gist here.
A more concrete example, and an architecture close to what I'm trying to update and simplify, is the tfp example for disentangled_vae. In its manual training loop, a new tfd.MultivariateNormalDiag is instantiated on every iteration, though it is parameterized using persistent tf.Variables; roughly the pattern sketched below. I'm trying my best to avoid manual training loops, and I'm also trying to move to more Keras-like syntax, so I'd rather not do a direct port of this example.
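For illustration, that pattern looks roughly like this (a hedged sketch, not the example's actual code; the latent size and loop length are made up):

import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

ndim = 8  # illustrative latent size
loc = tf.Variable(tf.zeros([ndim]))
raw_scale = tf.Variable(tf.zeros([ndim]))

for step in range(3):  # stand-in for the example's training loop
    # the tf.Variables persist across steps; only the distribution object is rebuilt
    prior = tfd.MultivariateNormalDiag(loc=loc, scale_diag=tf.nn.softplus(raw_scale))
    # ... the real loop computes a KL term against `prior` and applies gradients to loc/raw_scale ...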
Any advice is greatly appreciated. Thanks!
Edit: The activity_regularizer seems to work fine when attached to a latent (bottleneck) distribution. I have a more complete example in this Colab notebook. As this works in my architecture, I'm no longer in need of an answer.
However, I highly doubt that having model fitting freeze with no error is desirable behaviour, so this remains a problem.
As the machinery works in most circumstances, just not in the contrived freezing example above, I no longer consider this a question that needs an answer.
I have reported the errorless freezing behaviour via the tensorflow-probability repository issues page. See here.
For a school assignment, I have to write my own fit function using the least squares method. The problem is that I don't know how to do that; specifically, I don't know how to minimize my function to calculate the fit parameters. The catch is that my fit function is not linear, so my book says I have to guess some starting values for the fit parameters and then minimize. But I still don't know how to do that. The code below is what I have right now; I got it from somebody, but I don't understand what it does :).
Thanks in advance!
from scipy.optimize import minimize

def fit(x, mu, gamma, back, A):
    # Lorentzian peak with amplitude A, centre mu, width gamma, plus a flat background
    return A * (gamma / ((x - mu)**2 + gamma**2)) + back

def Ls_rechte(y):
    # chi-squared: squared residuals weighted by the measurement errors FoutI
    Ls = 0
    for i in range(len(Positie)):
        Ls = Ls + (Intensiteit[i] - fit(Positie[i], y[0], y[1], y[2], y[3]))**2 / (FoutI[i]**2)
    return Ls

nu = len(Positie) - 4  # degrees of freedom: data points minus the 4 fitted parameters
mini = minimize(Ls_rechte, (150, 0, 100, 1))  # starting guesses for (mu, gamma, back, A)
display(mini)
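If the minimization succeeds, the fitted parameters and the minimized chi-squared can be read off the result object; a short sketch, assuming the code above has run:

# mini.x holds the best-fit parameters in the order they were passed in
mu_fit, gamma_fit, back_fit, A_fit = mini.x
chi2 = mini.fun          # the minimized chi-squared value
chi2_red = chi2 / nu     # reduced chi-squared, using the nu degrees of freedom above
print(mini.success)      # False usually means the starting guesses need work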
I'm trying to run my Python program. It seems that it should run smoothly, but I encounter an error I haven't seen before:
free(): invalid pointer
Aborted (core dumped)
I'm not sure how to try and fix the error, since it doesn't give me much information about the problem itself.
At first I thought it was a problem with the sizes of the tensors in my network, but they are completely fine. I've googled the problem a little and found that it indicates memory being freed where it shouldn't be, but I don't know how to fix that.
My code is divided into two different files, and I use two libraries to be able to use the Sinkhorn loss function and to randomly sample a mesh.
import argparse
import point_cloud_utils as pcu
import time
import numpy as np
import torch
import torch.nn as nn
from fml.nn import SinkhornLoss
import common
def main():
    # x is a tensor of shape [n, 3] containing the positions of the vertices
    x = torch.from_numpy(common.loadpointcloud("sphere.txt"))
    # t is a tensor of shape [n, 3] containing a set of nicely distributed samples in the unit cube
    v, f = common.unit_cube()
    t = torch.from_numpy(pcu.lloyd(v, f, x.shape[0]).astype(np.float32))  # sample a point cloud (cube for now?)
    # The model is a simple fully connected network mapping a 3D parameter point to 3D
    phi = common.MLP(in_dim=3, out_dim=3)
    # Eps is 1/lambda and max_iters is the maximum number of Sinkhorn iterations to do
    emd_loss_fun = SinkhornLoss(eps=1e-3, max_iters=20,
                                stop_thresh=1e-3, return_transport_matrix=True)
    mse_loss_fun = torch.nn.MSELoss()
    # Adam optimizer at first
    optimizer = torch.optim.Adam(phi.parameters(), lr=10e-3)
    fit_start_time = time.time()
    for epoch in range(100):
        optimizer.zero_grad()
        # Do the forward pass of the neural net, evaluating the function at the parametric points
        y = phi(t)
        # Compute the Sinkhorn divergence between the reconstruction (using the fml library) and the target
        # NOTE: The Sinkhorn function expects a batch of b point sets (i.e. tensors of shape [b, n, 3]);
        # since we only have 1, we unsqueeze so x and y have dimension [1, n, 3]
        with torch.no_grad():
            _, P = emd_loss_fun(phi(t).unsqueeze(0), x.unsqueeze(0))
        # Project the transport matrix onto the space of permutation matrices and compute the L2 loss
        # between the permuted points
        loss = mse_loss_fun(y[P.squeeze().max(0)[1], :], x)
        # loss = mse_loss_fun(P.squeeze() @ y, x)  # Use the transport matrix directly
        # Take an optimizer step
        loss.backward()
        optimizer.step()
        print("Epoch %d, loss = %f" % (epoch, loss.item()))
    fit_end_time = time.time()
    print("Total time = %f" % (fit_end_time - fit_start_time))
    # Plot the ground truth, reconstructed points, and a mesh representing the fitted function, phi
    common.visualitation(x, t, phi)

if __name__ == "__main__":
    main()
The error message is:
free(): invalid pointer
Aborted (core dumped)
That again doesn't help me much. I'd appreciate it a lot if someone has any idea what is happening or knows more about this error.
Edit: The cause is actually known. The recommended solution is to build both packages from source.
There is a known issue with importing both open3d and PyTorch; the cause was unknown when this answer was first written. https://github.com/pytorch/pytorch/issues/19739
A few possible workarounds exist:
(1) Some people have found that changing the order in which you import the two packages can resolve the issue, though in my personal testing both ways crash.
(2) Other people have found compiling both packages from source to help.
(3) Still others have found that moving the open3d and PyTorch calls into separate scripts resolves the issue (see the sketch below).
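As an illustration of workaround (3), a minimal sketch: keep the open3d (or point_cloud_utils) work in its own script, here given the hypothetical name "mesh_step.py", and launch it as a subprocess so the two libraries never share a process:

import subprocess

# run the mesh-sampling step in a separate interpreter, then continue with PyTorch here
subprocess.run(["python", "mesh_step.py", "sphere.txt"], check=True)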
Note for future readers: This bug was filed as issue #21018.
This is not a problem in your Python code. It is a bug in PyTorch (probably) or in Python itself (unlikely, but possible).
free(3) is a C function that releases dynamically allocated memory when it is no longer needed. You cannot (easily) call it from Python, because memory management is a low-level implementation detail normally handled by the Python interpreter. However, you are also using PyTorch, which is written in C++ and C, and does have the ability to directly allocate and free memory.
In this case, some C code has tried to release a block of memory, but the block of memory it tried to release was not dynamically allocated in the first place, which is an error. You should report this behavior to the PyTorch developers. Include as much detail as possible, including the shortest code you can find that reproduces the problem, and the complete output of that program.
I am trying to create a script that would help me automate the creation of a spine rig, but I am running into a problem. I am following the tutorial provided here and I am working on the step where you skin the curve to the IK joints.
However, when I try to use mc.bindSkin(), I keep getting an error:
Error: RuntimeError: file[directory]/maya/2016.5/scripts\createRigSpine.py line 200: Maya command error)
It's too late right now for me to do much experimenting, but I was hoping someone could help me, or tell me if I'm using the wrong commands.
mc.select(crvSpine, jntIkMidSpine, jntIkChest)
mc.bindSkin(crvSpine, jntIkMidSpine, jntIkChest, tsb=True)
(I have also tried mc.bindSkin() and mc.bindSkin(tsb=True))
Ideally, I want the settings to be:
Bind To: Selected Joints
Bind Method: Closest Distance
Skinning Method: Classic Linear
Normalize Weights: Interactive
Edit: I wanted to use skinCluster, not bindSkin.
You should use the skinCluster command to bind your curve to the joints - and you can actually do it without selecting anything!
Try this:
import maya.cmds as mc

influences = [jntIkMidSpine, jntIkChest]
scls = mc.skinCluster(influences, crvSpine, name='spine_skinCluster',
                      toSelectedBones=True, bindMethod=0, skinMethod=0,
                      normalizeWeights=1)[0]

# alternatively, if you don't want such a long line of code:
#
influences = [jntIkMidSpine, jntIkChest]
kwargs = {
    'name': 'spine_skinCluster',  # or whatever you want to call it...
    'toSelectedBones': True,
    'bindMethod': 0,
    'skinMethod': 0,
    'normalizeWeights': 1
}
scls = mc.skinCluster(influences, crvSpine, **kwargs)[0]

# OR just use the short names for the kwargs...
#
influences = [jntIkMidSpine, jntIkChest]
scls = mc.skinCluster(influences, crvSpine, n='spine_skinCluster', tsb=True, bm=0, sm=0, nw=1)[0]
If you wanted to, you could also explicitly set the weights for each CV of the curve. You could use the skinPercent command, or even just use setAttr on the various weight attrs in the skinCluster (that's a little more difficult, but not by much).
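For example, a quick skinPercent sketch, assuming the scls and influence variables from above, and that the curve has at least four CVs (cv[3] and the 75/25 split are made up for illustration):

import maya.cmds as mc

# weight cv[3] 75/25 between the skinCluster's two influences
mc.skinPercent(scls, '%s.cv[3]' % crvSpine,
               transformValue=[(jntIkMidSpine, 0.75), (jntIkChest, 0.25)])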
The cmds.bindSkin() command is made for binding bones to geometry. It's not suitable for binding to IKs only, so you need to specify which joints you want to bind to.
For example:
import maya.cmds as mc
mc.select('ikHandle1','nurbsCircle1','joint5')
mc.bindSkin('ikHandle1','nurbsCircle1','joint5')
# the order of selection is vital
For constraining selected objects, use commands like this:
mc.pointConstraint('ikHandle1','nurbsCircle1', weight=5.0)
To find out what constraints are available to you, use Rigging module – Constrain menu – Parent, Point, Orient, Scale, Aim, Pole Vector.
I was using the wrong command. mc.skinCluster is what I wanted to use, not mc.bindSkin.