Graph Tool's edge_gradient property - python

I would like to use the edge_gradient property on Graph Tool's gt.graph_draw() in order to better visualize the direction of connections in plots which are too crowded for markers such as arrows.
From the description in the docs, it seems this is what this property should do. Currently, however, it only lets me set the edges to a solid color.
I am using the property like so:
egradient = g.new_edge_property('vector<double>')
g.edge_properties['egradient'] = egradient
e = g.add_edge(v1, v2)
egradient[e] = (0.9, 0.329,0.282,0.478,1)
...
gt.graph_draw(g, ... edge_gradient=g.edge_properties["egradient"])
The appearance remains unchanged if I modify the first value in (0.9, 0.329,0.282,0.478,1) - and if I try to pass it a list of tuples I get this from the graph tool internals:
TypeError: float() argument must be a string or a number
How can I achieve what I am looking for in graph tool? If I can't, then what else is the first value in the edge gradient 5-tuple actually good for?

edge_gradient actually expects one flat list of numbers, not a list of tuples. I made the same mistake at first.
Example: if you want to go from white to black, your `edge_gradient` parameter should look like this:
#               o  r  g  b  a  o  r  g  b  a
edge_gradient=[ 0, 1, 1, 1, 1, 1, 0, 0, 0, 1]
That's what the docs mean by, "Each group of 5 elements is interpreted as [o, r, g, b, a] where o is the offset in the range [0, 1] and the remaining values specify the colors."
It gets a little tough to read, so I separate my stop points and format them like this:
#               offset  r  g  b  a
edge_gradient=[ 0,      1, 1, 1, 1,
                0.5,    0, 0, 0, 1,
                1,      1, 0, 0, 1]
This fades from white to black to red... in theory, at least. I have had trouble getting edge_gradient to work with more than two gradient stops: I always end up with some edges coloured like the list I pass to the edge_gradient property, and the rest behaving strangely, e.g. showing the final colour in the middle.
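Coming back to the snippet in the question: the per-edge property value should be one flat list with five numbers per stop. A minimal sketch of a two-stop gradient (the second colour is just an arbitrary example, not from the original post):
egradient = g.new_edge_property('vector<double>')
g.edge_properties['egradient'] = egradient

e = g.add_edge(v1, v2)
# two stops: the colour from the question at the source end (offset 0),
# fading to an arbitrary red at the target end (offset 1)
egradient[e] = [0, 0.329, 0.282, 0.478, 1,
                1, 0.8,   0.1,   0.1,   1]

gt.graph_draw(g, edge_gradient=g.edge_properties['egradient'])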

import numpy as np
import graph_tool.all as gt

# Set the gradients. Each gradient must have the same length
# (here grad_length = 15, i.e. 3 stops x 5 values), so the stacked array is not ragged.
num_edges = 2
grad_length = 15
## 3 stops: red to grey to blue
egrad_1 = np.asarray([0,   1,   0,   0,   1,
                      0.5, 0.8, 0.8, 0.8, 1,
                      1,   0,   0,   1,   1])
## 3 stops: grey to grey to grey
egrad_2 = np.asarray([0,   0.8, 0.8, 0.8, 1,
                      0.5, 0.8, 0.8, 0.8, 1,
                      1,   0.8, 0.8, 0.8, 1])
# Place into array of shape (num_edges, grad_length)
gradient_list = np.asarray([egrad_1, egrad_2])
# Create graph and add vertices and edges
g1 = gt.Graph(directed=False)
g1.ep.edge_gradient = g1.new_edge_property("vector<double>")
g1v1 = g1.add_vertex()
g1v2 = g1.add_vertex()
e1 = g1.add_edge(g1v1, g1v2)
e2 = g1.add_edge(g1v1, g1v1)
# Set property map
g1.ep.edge_gradient.set_2d_array(np.transpose(gradient_list))
# Draw the graph
gt.graph_draw(g1, edge_gradient=g1.ep.edge_gradient)
[Image: graph result]
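As a side note, set_2d_array is just one way to fill the property map; assigning the per-edge vectors directly should work as well (a small sketch using the names above):
# equivalent to the set_2d_array call: assign each edge its own gradient vector
for e, grad in zip([e1, e2], gradient_list):
    g1.ep.edge_gradient[e] = grad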

Related

Can you translate a flashing light into morse code?

I have made a morse code translator and I want it to be able to record a flashing light and make it into morse code. I think I will need OpenCV or a light sensor, but I don't know how to use either of them. I haven't got any code for it yet, as I couldn't find any solutions anywhere else.
The following is just a concept of what you could try. Yes, you could also train a neural network for this, but if your setup is simple enough, some engineering will do.
We first create a "toy-video" to work with:
import numpy as np
import matplotlib.pyplot as plt
# Create a toy "video"
image = np.asarray([
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 1, 2, 2, 1],
[0, 0, 2, 4, 4, 2],
[0, 0, 2, 4, 4, 2],
[0, 0, 1, 2, 2, 1],
])
signal = np.asarray([0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0])
x = list(range(len(signal)))
signal = np.interp(np.linspace(0, len(signal), 100), x, signal)[..., None]
frames = np.einsum('tk,xy->txyk', signal, image)[..., 0]
Plot a few frames:
fig, axes = plt.subplots(1, 12, sharex='all', sharey='all')
for i, ax in enumerate(axes):
    ax.matshow(frames[i], vmin=0, vmax=1)
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    ax.set_title(i)
plt.show()
Now that you have this kind of toy video, it's pretty straightforward to convert it back to some sort of binary signal. You simply compute the average brightness of each frame:
reconstructed = frames.mean(1).mean(1)
reconstructed_bin = reconstructed > 0.5
plt.plot(reconstructed, label='original')
plt.plot(reconstructed_bin, label='binary')
plt.title('Reconstructed Signal')
plt.legend()
plt.show()
From here we only have to determine the length of each flash.
# This is ugly, I know. Just for understanding though:
# 1. Splits the binary signal on zero-values
# 2. Filters out the garbage (accept only lists where len(e) > 1)
# 3. Gets the length of the remaining list == the duration of each flash
tmp = np.split(reconstructed_bin, np.where(reconstructed_bin == 0)[0][1:])
flashes = list(map(len, filter(lambda e: len(e) > 1, tmp)))
We can now take a look at how long flashes take:
print(flashes)
gives us
[5, 5, 5, 10, 9, 9, 5, 5, 5]
So: "short" flashes seem to take about 5 frames, "long" ones around 10. With this we can classify each flash as either "long" or "short" by defining a sensible threshold of 7, like so:
# Classify each flash-duration
flashes_classified = list(map(lambda f: 'long' if f > 7 else 'short', flashes))
And let's repeat for pauses
# Repeat for pauses
tmp = np.split(reconstructed_bin, np.where(reconstructed_bin != False)[0][1:])
pauses = list(map(len, filter(lambda e: len(e) > 1, tmp)))
pauses_classified = np.asarray(list(map(lambda f: 'w' if f > 6 else 'c', pauses)))
pauses_indices, = np.where(np.asarray(pauses_classified) == 'w')
Now we can visualize the results.
fig = plt.figure()
ax = fig.gca()
ax.bar(range(len(flashes)), flashes, label='Flash duration')
ax.set_xticks(list(range(len(flashes_classified))))
ax.set_xticklabels(flashes_classified)
[ax.axvline(idx-0.5, ls='--', c='r', label='Pause' if i == 0 else None) for i, idx in enumerate(pauses_indices)]
plt.legend()
plt.show()
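From here, a minimal sketch (not part of the original approach) of how the classified flashes and pauses could be stitched into dots and dashes; it assumes 'short' means a dot, 'long' a dash, and that pauses[i] is the pause following flashes[i], with 'w' marking a letter/word boundary:
symbols = []
for i, flash in enumerate(flashes_classified):
    symbols.append('.' if flash == 'short' else '-')
    # insert a separator when the following pause was classified as 'w'
    if i < len(pauses_classified) and pauses_classified[i] == 'w':
        symbols.append(' ')
print(''.join(symbols))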
It somewhat depends on your environment. You might try it inexpensively with a Raspberry Pi Zero (£9), or even a Pico (£4) or an Arduino, with an attached LDR (light-dependent resistor, about £1) rather than a £100 USB camera.
Your program would then come down to repeatedly measuring the resistance (which depends on the light intensity) and making it into long and short pulses.
This has the benefit of being cheap and not requiring you to learn OpenCV, but Stefan's idea is far more fun and has my vote!
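To give an idea of the LDR approach, here is a rough MicroPython sketch for a Pico; the wiring (an LDR voltage divider on GP26) and the threshold are assumptions you would have to adapt:
from machine import ADC
import time

ldr = ADC(26)          # ADC0 on GP26 (assumed wiring: LDR voltage divider)
THRESHOLD = 30000      # tune for your lighting; read_u16() returns 0..65535

def measure_pulses(duration_ms=10000):
    """Record (is_light_on, duration_in_ms) pairs for duration_ms milliseconds."""
    pulses = []
    state = ldr.read_u16() > THRESHOLD
    start = last_change = time.ticks_ms()
    while time.ticks_diff(time.ticks_ms(), start) < duration_ms:
        now_state = ldr.read_u16() > THRESHOLD
        if now_state != state:
            now = time.ticks_ms()
            pulses.append((state, time.ticks_diff(now, last_change)))
            state, last_change = now_state, now
        time.sleep_ms(2)
    return pulses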

Is there a way to make multiple IndexedVertexLists refer to the same vertices while having different lists of indices

I want to have two (or more) IndexedVertexLists which refer to the same vertices while having different lists of indices. The problem I am having is that, if I were to create two (or more) IndexedVertexLists with the same vertices, it would take up twice the amount of the GPU's memory compared to what it actually needs.
What I mean:
import pyglet
vertices = [
0, 0,
0, 0.5,
0.5, 0,
0.5, 0.5
]
indices1 = [0, 1, 2]
indices2 = [0, 2, 3]
vertex_list_indexed_1 = pyglet.graphics.vertex_list_indexed(4, indices1, ('v2f', vertices))
vertex_list_indexed_2 = pyglet.graphics.vertex_list_indexed(4, indices2, ('v2f', vertices))
What I want would be something like this (this does not work, obviously):
import pyglet
vertices = [
0, 0,
0, 0.5,
0.5, 0,
0.5, 0.5
]
indices1 = [0, 1, 2]
indices2 = [0, 2, 3]
vertex_list = pyglet.graphics.vertex_list(4, ('v2f', vertices))
vertex_list_indexed_1 = pyglet.graphics.vertex_list_indexed(4, indices1, vertex_list)
vertex_list_indexed_2 = pyglet.graphics.vertex_list_indexed(4, indices2, vertex_list)
I couldn't find anything in the pyglet documentation that would solve my problem.

Crop empty arrays (padding) from a volume

What I want to do is crop a volume to remove all irrelevant data. For example, say I have a 100x100x100 volume filled with zeros, except for a 50x50x50 volume within that is filled with ones.
How do I obtain the cropped 50x50x50 volume from the original ?
Here's the naive method I came up with.
import numpy as np
import tensorflow as tf
test=np.zeros((100,100,100)) # create an empty 100x100x100 volume
rand=np.random.rand(66,25,34) # create a 66x25x34 filled volume
test[10:76, 20:45, 30:64] = rand # partially fill the empty volume
# initialize the cropping coordinates
minx=miny=minz=0
maxx=maxy=maxz=0
maxx,maxy,maxz=np.subtract(test.shape,1)
# compute the optimal cropping coordinates
dimensions=test.shape
while tf.reduce_max(test[minx, :, :]) == 0:  # check for empty slices along the x axis
    minx += 1
while tf.reduce_max(test[:, miny, :]) == 0:  # check for empty slices along the y axis
    miny += 1
while tf.reduce_max(test[:, :, minz]) == 0:  # check for empty slices along the z axis
    minz += 1
while tf.reduce_max(test[maxx, :, :]) == 0:
    maxx -= 1
while tf.reduce_max(test[:, maxy, :]) == 0:
    maxy -= 1
while tf.reduce_max(test[:, :, maxz]) == 0:
    maxz -= 1
maxx,maxy,maxz=np.add((maxx,maxy,maxz),1)
crop = test[minx:maxx,miny:maxy,minz:maxz]
print(minx,miny,minz,maxx,maxy,maxz)
print(rand.shape)
print(crop.shape)
This prints:
10 20 30 76 45 64
(66, 25, 34)
(66, 25, 34)
which is correct. However, it takes too long and is probably suboptimal. I'm looking for better ways to achieve the same thing.
NB:
The subvolume wouldn't necessarily be a cuboid, it could be any shape.
I want to keep gaps within the subvolume, only remove what's "outside" the shape to be cropped.
(Edit)
Oops, I hadn't seen the comment about keeping the so-called "gaps" between elements! This should be the one, finally.
def get_nonzero_sub(arr):
    arr_slices = tuple(np.s_[curr_arr.min():curr_arr.max() + 1] for curr_arr in arr.nonzero())
    return arr[arr_slices]
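Applied to the test volume from the question, this gives the expected crop (a quick usage check):
crop = get_nonzero_sub(test)
print(crop.shape)  # (66, 25, 34), matching rand.shape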
While you wait for a sensible response (I would guess this is a builtin function in an image processing library somewhere), here's a way
y, x = np.where(np.any(test, 0))
z, _ = np.where(np.any(test, 1))
test[min(z):max(z)+1, min(y):max(y)+1, min(x):max(x)+1]
I think leaving tf out of this should up your performance.
Explanation (based on 2D array)
test = np.array([
[0, 0, 0, 0, 0, ],
[0, 0, 1, 2, 0, ],
[0, 0, 3, 0, 0, ],
[0, 0, 0, 0, 0, ],
[0, 0, 0, 0, 0, ],
])
We want to crop it to get
[[1, 2]
[3, 0]]
np.any(..., 0) reduces along axis 0 and returns True wherever any element along that axis is truthy. For the 2D example this checks each column; I show the result in the comment here:
np.array([
    [0, 0, 0, 0, 0, ],
    [0, 0, 1, 2, 0, ],
    [0, 0, 3, 0, 0, ],
    [0, 0, 0, 0, 0, ],
    [0, 0, 0, 0, 0, ],
    # False False True True False
])
i.e. it returns np.array([False, False, True, True, False])
np.any(..., 1) does the same, but along axis 1 instead of axis 0, i.e. it checks each row:
np.array([
    [0, 0, 0, 0, 0, ],  # False
    [0, 0, 1, 2, 0, ],  # True
    [0, 0, 3, 0, 0, ],  # True
    [0, 0, 0, 0, 0, ],  # False
    [0, 0, 0, 0, 0, ],  # False
])
i.e. it returns np.array([False, True, True, False, False])
Note that in the case of a 3D array, these steps return 2D arrays
(x,) = np.where(...) this returns the index values of the truthy values in an array. So np.where([False, True, True, False, False]) returns (array([1, 2]),). Note that this is a tuple so in the 2D case we would need to call (x,) = ... so x is just the array array([1, 2]). The syntax is nicer in the 2D case as we can use tuple-unpacking i.e x, y = ...
Note that in the 3D case, np.where can give us the value for 2 axes at a time. I chose to do x-y in one go and then z-? in the second go. The ? is either x or y, I can't be bothered to work out which and since we don't need it I throw it away in a variable named _ which by convention is a reasonable place to store junk output you don't actually want. Note I need to do z, _ = as I want the tuple-unpacking and not just z = otherwise z become the tuple with both arrays.
The last step is pretty much the same as what you did at the end of your question, so I assume you understand it: simple slicing in each dimension, from the first index with a value in that dimension to the last. You need the + 1 because Python slices are not inclusive of the index after the :.
Hopefully that's clear?
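As an aside, the "builtin function in an image processing library" guessed at the top of this answer does exist; a sketch with scipy.ndimage, assuming SciPy is available:
from scipy import ndimage

# treat every nonzero voxel as label 1, so find_objects returns one bounding box
bbox = ndimage.find_objects((test > 0).astype(int))[0]
crop = test[bbox]
print(crop.shape)  # (66, 25, 34)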

Semantic Segmentation to Bounding Boxes

Suppose you are performing semantic segmentation. For simplicity, let's assume this is 1D segmentation rather than 2D (i.e. we only care about finding objects with width).
So the desired output of our model might be something like:
[
[0, 0, 0, 0, 1, 1, 1], # label channel 1
[1, 1, 1, 0, 0, 1, 1], # label channel 2
[0, 0, 0, 1, 1, 1, 0], # label channel 3
#...
]
However, our trained imperfect model might be more like
[
[0.1, 0.1, 0.1, 0.4, 0.91, 0.81, 0.84], # label channel 1
[0.81, 0.79, 0.85, 0.1, 0.2, 0.61, 0.91], # label channel 2
[0.3, 0.1, 0.24, 0.87, 0.62, 1, 0 ], # label channel 3
#...
]
What would be a performant way, using Python, to get the boundaries (or bounding boxes) of the labels?
e.g. (zero-indexed)
[
[[4, 6]], # "objects" of label 1
[[0, 2], [5, 6]] # "objects" of label 2
[[3, 5]], # "objects" of label 3
]
if it helps, perhaps transforming it to a binary mask would be of more use?
def binarize(arr, cutoff=0.5):
    return (arr > cutoff).astype(int)
With a binary mask we just need to find runs of consecutive indices among the nonzero values:
def consecutive(data, stepsize=1):
    return np.split(data, np.where(np.diff(data) != stepsize)[0] + 1)
find "runs" of labels:
def binary_boundaries(labels, cutoff=0.5):
    return [consecutive(channel.nonzero()[0]) for channel in binarize(labels, cutoff)]
name objects according to channel name:
def binary_objects(labels, cutoff=0.5, channel_names=None):
    if channel_names is None:
        channel_names = ['channel {}'.format(i) for i in range(labels.shape[0])]
    return dict(zip(channel_names, binary_boundaries(labels, cutoff)))
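Putting it together on the example predictions from the question (each object is returned as the run of indices it covers; the [start, end] pairs are then just the first and last element of each run):
import numpy as np

preds = np.asarray([
    [0.1,  0.1,  0.1,  0.4,  0.91, 0.81, 0.84],
    [0.81, 0.79, 0.85, 0.1,  0.2,  0.61, 0.91],
    [0.3,  0.1,  0.24, 0.87, 0.62, 1,    0],
])
print(binary_objects(preds))
# {'channel 0': [array([4, 5, 6])],
#  'channel 1': [array([0, 1, 2]), array([5, 6])],
#  'channel 2': [array([3, 4, 5])]}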
Your trained model returns a float image, not the int image you were looking for (and it isn't "imperfect" just because the values are decimals), and yes, you do need to threshold it to get a binary image.
Once you have the binary image, let's do some work with skimage:
from skimage import measure

label_mask = measure.label(mask)
props = measure.regionprops(label_mask)
Here mask is your binary image, and props holds the properties of all detected regions, which are your detected objects.
Among these properties is the bounding box (props[i].bbox)!
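For a concrete 2D illustration of that last point, a small sketch; region.bbox gives (min_row, min_col, max_row, max_col) with the max values exclusive:
import numpy as np
from skimage import measure

mask = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 1]])
label_mask = measure.label(mask)
for region in measure.regionprops(label_mask):
    print(region.bbox)  # (0, 0, 2, 2) then (2, 3, 3, 4)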

Superimpose objects on a video stream using Python and POVRAY

I am using Vapory, a Python wrapper library for POV-Ray. It lets you drive typical POV-Ray operations from Python functions.
I want to superimpose 3D models onto every frame of my video stream. The way to do this in Vapory is the following:
from vapory import *
from moviepy.editor import VideoFileClip  # needed for VideoFileClip below (moviepy 1.x)
from moviepy.video.io.ffmpeg_writer import ffmpeg_write_image
light = LightSource([10, 15, -20], [1.3, 1.3, 1.3])
wall = Plane([0, 0, 1], 20, Texture(Pigment('color', [1, 1, 1])))
ground = Plane([0, 1, 0], 0,
               Texture(Pigment('color', [1, 1, 1]),
                       Finish('phong', 0.1,
                              'reflection', 0.4,
                              'metallic', 0.3)))
sphere1 = Sphere([-4, 2, 2], 2.0, Pigment('color', [0, 0, 1]),
                 Finish('phong', 0.8,
                        'reflection', 0.5))
sphere2 = Sphere([4, 1, 0], 1.0, Texture('T_Ruby_Glass'),
                 Interior('ior', 2))
scene = Scene(Camera("location", [0, 5, -10], "look_at", [1, 3, 0]),
              objects=[ground, wall, sphere1, sphere2, light],
              included=["glass.inc"])
def embed_in_scene(image):
    ffmpeg_write_image("__temp__.png", image)
    image_ratio = 1.0 * image.shape[1] / image.shape[0]
    screen = Box([0, 0, 0], [1, 1, 0],
                 Texture(Pigment(ImageMap('png', '"__temp__.png"', 'once')),
                         Finish('ambient', 1.2)),
                 'scale', [10, 10 / image_ratio, 1],
                 'rotate', [0, 20, 0],
                 'translate', [-3, 1, 3])
    new_scene = scene.add_objects([screen])
    return new_scene.render(width=800, height=480, antialiasing=0.001)
clip = (VideoFileClip("bunny.mp4")   # file containing the original video
        .subclip(23, 47)             # cut between t=23 and t=47 seconds
        .fl_image(embed_in_scene)    # <= the magic happens
        .fadein(1).fadeout(1)
        .audio_fadein(1).audio_fadeout(1))
clip.write_videofile("bunny2.mp4", bitrate='8000k')
which results in a video stream like the following:
What I want, however, is for the movie box to fill the whole scene, with the spheres remaining where they are. My first thought was to remove the rotation from the code, and that did work, but I still cannot stretch the movie frame to the corners of the actual scene.
Any thoughts?
EDIT: I was able to move the camera and get the object to the centre. However, I still could not get the movie full screen. This is because the camera object is told to look towards specific coordinates, and I don't know what coordinates the camera should be directed at in order to get the picture full screen. See:
