I'm working on a manim challenge from this video: https://youtu.be/HKPm8FZYaqI?t=700. The challenge is to code the animation which starts at 11:40 and ends at 11:49.
I got up to the point where the triangles are rotated and copied over to the second square, but for some reason the ones that I had to rotate are ever so slightly off, while the ones I didn't have to rotate seem to be perfect.
Look at this image:
The triangles fit perfectly inside the square on the left. But in the right square, the ones that were rotated (1 and 4) do not. Below is a closeup of what I mean for triangle number 1:
Of course, this is how I want it to look:
The dimensions of the shapes and maybe the colours are a little different, but that is because this is the solution of the author of the video, and the previous was my attempt. I don't care about that, I only care about why the triangles don't fit perfectly in my attempt like they do here.
Zooming in on this picture, we see that the triangles do indeed fit perfectly:
Any insight into why this is happening would be very much appreciated!
The source code for my animation is this:
from manim import *  # assumed import; use `from manimlib.imports import *` for older 3b1b-style manim
import math

class Pythagoras(Scene):
    def construct(self):
        title = TextMobject("Pythagorean Theorem")
        title.to_edge(UL)
        pre_square = Polygon(
            [-2, 2, 0],
            [2, 2, 0],
            [2, -2, 0],
            [-2, -2, 0],
            color=WHITE
        )
        self.wait()
        square2 = Polygon(
            [-1.41, 1.41, 0],
            [1.41, 1.41, 0],
            [1.41, -1.41, 0],
            [-1.41, -1.41, 0]
        )
        square2.rotate(PI/6)
        triangle1 = Polygon(
            [-2, 2, 0],
            [-2 + math.sqrt(6), 2, 0],
            [-2, 2 - math.sqrt(2), 0],
            color=YELLOW
        )
        triangle2 = Polygon(
            [2, 2, 0],
            [-2 + math.sqrt(6), 2, 0],
            [2, 2 - math.sqrt(6), 0],
            color=YELLOW
        )
        triangle3 = Polygon(
            [2, 2 - math.sqrt(6), 0],
            [2, -2, 0],
            [2 - math.sqrt(6), -2, 0],
            color=YELLOW
        )
        triangle4 = Polygon(
            [-2, 2 - math.sqrt(2), 0],
            [-2, -2, 0],
            [2 - math.sqrt(6), -2, 0],
            color=YELLOW
        )
        triangles = [triangle1, triangle2, triangle3, triangle4]
        for triangle in triangles:
            triangle.set_fill(YELLOW, 0.6)
        self.play(Write(title), ShowCreation(pre_square), ShowCreation(triangle1), ShowCreation(triangle2), ShowCreation(triangle3), ShowCreation(triangle4))
        self.wait()
        group = VGroup(pre_square, triangle1, triangle2, triangle3, triangle4)
        self.play(ApplyMethod(group.to_edge, LEFT, {"buff": 1.6}))
        self.wait()
        square3 = pre_square.copy()
        self.play(ApplyMethod(square3.shift, RIGHT * 7))
        triangle2.generate_target()
        triangle2.target.shift(RIGHT * (7 - math.sqrt(6)))
        triangle1.generate_target()
        triangle1.target = triangle2.target.copy().rotate(PI)
        triangle3.generate_target()
        triangle3.target.shift(RIGHT * 7)
        triangle4.generate_target()
        triangle4.target = triangle3.target.copy().rotate(PI)
        self.play(MoveToTarget(triangle1.copy()), MoveToTarget(triangle2.copy()), MoveToTarget(triangle3.copy()), MoveToTarget(triangle4.copy()))
        self.wait()
The problem is the stroke width of the VMobjects; by default it is 4. If you change it to 2 or 1 (in the solution I give below it is 1.5), those corners are removed. Add this in your for loop:
for triangle in triangles:
    triangle.set_fill(YELLOW, 0.6)
    triangle.set_stroke(None, 1.5)
    # or
    # triangle.set_stroke(width=1.5)
    # it is the same
There is actually nothing wrong with the code, just with how these triangles are drawn. The border around each triangle has a width, and that width causes the artifact. If you remove the border, or treat the border as part of the triangle's extent, the problem will go away.
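If you would rather drop the outline entirely (the "remove the border" option above), a minimal sketch using the same set_fill/set_stroke calls might look like this:

for triangle in triangles:
    # fully opaque fill and zero stroke width: no border is drawn,
    # so there is nothing left to poke out past the square's edge
    triangle.set_fill(YELLOW, 1)
    triangle.set_stroke(width=0)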
I have made a morse code translator and I want it to be able to record a flashing light and make it into morse code. I think I will need OpenCV or a light sensor, but I don't know how to use either of them. I haven't got any code for it yet, as I couldn't find any solutions anywhere else.
The following is just a concept of what you could try. Yes, you could also train a neural network for this but if your setup is simple enough, some engineering will do.
We first create a "toy-video" to work with:
import numpy as np
import matplotlib.pyplot as plt
# Create a toy "video"
image = np.asarray([
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 1, 2, 2, 1],
[0, 0, 2, 4, 4, 2],
[0, 0, 2, 4, 4, 2],
[0, 0, 1, 2, 2, 1],
])
# A toy on/off signal (1 = light on, 0 = light off)
signal = np.asarray([0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0])
x = list(range(len(signal)))
# Upsample the signal to 100 frames
signal = np.interp(np.linspace(0, len(signal), 100), x, signal)[..., None]
# Each frame is the blob image scaled by the signal value at that time step
frames = np.einsum('tk,xy->txyk', signal, image)[..., 0]
Plot a few frames:
fig, axes = plt.subplots(1, 12, sharex='all', sharey='all')
for i, ax in enumerate(axes):
ax.matshow(frames[i], vmin=0, vmax=1)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
ax.set_title(i)
plt.show()
Now that you have this kind of toy video, it's pretty straightforward to convert it back to some sort of binary signal. You'd simply compute the average brightness of each frame:
reconstructed = frames.mean(1).mean(1)
reconstructed_bin = reconstructed > 0.5
plt.plot(reconstructed, label='original')
plt.plot(reconstructed_bin, label='binary')
plt.title('Reconstructed Signal')
plt.legend()
plt.show()
From here we only have to determine the length of each flash.
# This is ugly, I know. Just for understanding though:
# 1. Splits the binary signal on zero-values
# 2. Filters out the garbage (accept only lists where len(e) > 1)
# 3. Gets the length of the remaining list == the duration of each flash
tmp = np.split(reconstructed_bin, np.where(reconstructed_bin == 0)[0][1:])
flashes = list(map(len, filter(lambda e: len(e) > 1, tmp)))
We can now take a look at how long flashes take:
print(flashes)
gives us
[5, 5, 5, 10, 9, 9, 5, 5, 5]
So "short" flashes seem to take around 5 frames, "long" ones around 10. With this we can classify each flash as either "long" or "short" by defining a sensible threshold of 7, like so:
# Classify each flash-duration
flashes_classified = list(map(lambda f: 'long' if f > 7 else 'short', flashes))
And let's repeat for pauses
# Repeat for pauses
tmp = np.split(reconstructed_bin, np.where(reconstructed_bin != False)[0][1:])
pauses = list(map(len, filter(lambda e: len(e) > 1, tmp)))
pauses_classified = np.asarray(list(map(lambda f: 'w' if f > 6 else 'c', pauses)))
pauses_indices, = np.where(np.asarray(pauses_classified) == 'w')
Now we can visualize the results.
fig = plt.figure()
ax = fig.gca()
ax.bar(range(len(flashes)), flashes, label='Flash duration')
ax.set_xticks(list(range(len(flashes_classified))))
ax.set_xticklabels(flashes_classified)
[ax.axvline(idx-0.5, ls='--', c='r', label='Pause' if i == 0 else None) for i, idx in enumerate(pauses_indices)]
plt.legend()
plt.show()
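To actually get Morse text out of this, one possible last step is to map the classified flashes to dots and dashes and split the symbol stream at the long pauses. This is only a sketch built on the flashes_classified and pauses_indices variables above: the exact alignment of pauses to flashes depends on your recording, and the MORSE table here only covers the letters needed for the toy signal.

# Hypothetical decoding step, reusing flashes_classified / pauses_indices from above.
MORSE = {'...': 'S', '---': 'O'}  # extend with the full Morse alphabet for real use

symbols = ['.' if f == 'short' else '-' for f in flashes_classified]

# Split the dot/dash stream wherever a long pause was detected.
letters, start = [], 0
for idx in sorted(pauses_indices):
    letters.append(''.join(symbols[start:idx]))
    start = idx
letters.append(''.join(symbols[start:]))

print(letters)                                        # e.g. ['...', '---', '...']
print(''.join(MORSE.get(l, '?') for l in letters))    # e.g. 'SOS'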
It somewhat depends on your environment. You might try inexpensively with a Raspberry Pi Zero (£9), or even a Pico (£4) or Arduino, with an attached LDR (Light Dependent Resistor) for around £1, rather than a £100 USB camera.
Your program would then come down to repeatedly measuring the resistance (which depends on the light intensity) and making it into long and short pulses.
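As a rough illustration of what that loop could look like on a Pi with the gpiozero library (a hedged sketch, not tested hardware code; the GPIO pin number and the 0.5 s long/short threshold are assumptions):

from time import monotonic
from gpiozero import LightSensor  # LDR + capacitor circuit on a GPIO pin (pin 4 assumed here)

ldr = LightSensor(4)
pulses = []

while True:
    ldr.wait_for_light()             # block until the lamp turns on
    on_at = monotonic()
    ldr.wait_for_dark()              # block until it turns off again
    duration = monotonic() - on_at
    # Classify the pulse; the 0.5 s cut-off is an arbitrary example threshold
    pulses.append('long' if duration > 0.5 else 'short')
    print(pulses[-1], round(duration, 2))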
This has the benefit of being cheap and not requiring you to learn OpenCV, but Stefan's idea is far more fun and has my vote!
I'm using Manim CE 0.8.0, and I'm trying to fade in the axes behind the existing objects in the scene; I found no way to accomplish that. Here's a POC:
from manim import *
class FadeBehind(Scene):
    def construct(self):
        myDot = Dot(
            point=[0, 0, 0],
            radius=3,
            color=RED,
        )
        self.play(
            FadeIn(myDot),
        )
        myLine = Line(
            start=[-5, 0, 0],
            end=[5, 0, 0],
            stroke_color=BLUE,
            stroke_width=30,
        )
        myLine.z_index = myDot.z_index - 1
        self.play(
            FadeIn(myLine)  # works as expected (the blue line is shown behind the dot)
        )
        self.wait()
        ax = Axes(
            x_range=[-7, 7, 1],
            y_range=[-5, 5, 1],
        )
        ax.z_index = myLine.z_index - 1
        self.play(
            FadeIn(ax)  # doesn't work as expected (the axes are overlaid on top of everything in the scene)
        )
The problem is that the default z_index is 0: print(myDot.z_index) gives 0.
And the z_index can't be negative.
Here is the script that works:
from manim import *

class FadeBehind(Scene):
    def construct(self):
        myDot = Dot(
            point=[0, 0, 0],
            radius=2,
            color=RED,
        )
        self.play(
            FadeIn(myDot),
        )
        myDot.z_index = 1
        myLine = Line(
            start=[-5, 0, 0],
            end=[5, 0, 0],
            stroke_color=BLUE,
            stroke_width=30,
        )
        myLine.z_index = 0
        self.play(
            FadeIn(myLine)  # works as expected (the blue line is shown behind the dot)
        )
        ax = Axes(
            x_range=[-7, 7, 1],
            y_range=[-5, 5, 1],
        )
        ax.z_index = 0
        self.play(
            FadeIn(ax)  # now works as expected since lower z_index
        )
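Depending on your Manim CE version, the same ordering can also be expressed with the set_z_index helper instead of assigning the attribute directly (a small sketch, not part of the original answer):

# Equivalent idea using the set_z_index() method (available in recent Manim CE releases)
myDot.set_z_index(2)
myLine.set_z_index(1)
ax.set_z_index(0)  # the lowest value is drawn first, i.e. behind everything else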
I am using the following example from :
from scipy import spatial
import numpy as np

x, y = np.mgrid[0:5, 2:8]
tree = spatial.KDTree(list(zip(x.ravel(), y.ravel())))
pts = np.array([[0, 0], [2.1, 2.9]])
idx = tree.query(pts)[1]
data = tree.data[??????????]
If I input two arbitrary points (see variable pts), I am looking to return all pairs of coordinates that lie within the rectangle defined by the two points (KDTree finds the closest neighbour). So in this case:
array([[0, 0],
[0, 1],
[0, 2],
[1, 0],
[1, 1],
[1, 2],
[2, 0],
[2, 1],
[2, 2]])
How can I achieve that from the tree data?
It seems that I found a solution:
from scipy import spatial
import numpy as np
x, y = np.mgrid[0:5, 0:5]
tree = spatial.KDTree(list(zip(x.ravel(), y.ravel())))
pts = np.array([[0, 0], [2.1, 2.2]])
idx = tree.query(pts)[1]
data = tree.data[[idx[0], idx[1]]]
rectangle = tree.data[np.where(
    (tree.data[:, 0] >= min(data[:, 0])) & (tree.data[:, 0] <= max(data[:, 0])) &
    (tree.data[:, 1] >= min(data[:, 1])) & (tree.data[:, 1] <= max(data[:, 1]))
)]
However, I would love to see a solution using the query option!
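One way to lean on the query machinery instead of boolean masking is query_ball_point with the Chebyshev norm. This is only a sketch under the assumption that the snapped corners span a square region, as in the example above; for a genuinely rectangular box you would still need to mask one axis afterwards.

from scipy import spatial
import numpy as np

x, y = np.mgrid[0:5, 0:5]
tree = spatial.KDTree(list(zip(x.ravel(), y.ravel())))

pts = np.array([[0, 0], [2.1, 2.2]])
# snap the two query points to their nearest neighbours in the tree
corners = tree.data[tree.query(pts)[1]]
center = corners.mean(axis=0)
half_extent = (corners.max(axis=0) - corners.min(axis=0)) / 2

# p=np.inf makes query_ball_point search an axis-aligned square of
# half-width r around `center`, which matches this example because the
# snapped corners span a square
idx = tree.query_ball_point(center, r=half_extent.max(), p=np.inf)
rectangle = tree.data[sorted(idx)]
print(rectangle)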
I am using Vapory, which is a Python wrapper library for Povray. It allows using Python functions to drive typical Povray operations.
I want to superimpose 3D models in every frame of my video stream. The way to do this in Vapory is the following:
from vapory import *
from moviepy.editor import VideoFileClip
from moviepy.video.io.ffmpeg_writer import ffmpeg_write_image

light = LightSource([10, 15, -20], [1.3, 1.3, 1.3])
wall = Plane([0, 0, 1], 20, Texture(Pigment('color', [1, 1, 1])))
ground = Plane([0, 1, 0], 0,
               Texture(Pigment('color', [1, 1, 1]),
                       Finish('phong', 0.1,
                              'reflection', 0.4,
                              'metallic', 0.3)))
sphere1 = Sphere([-4, 2, 2], 2.0, Pigment('color', [0, 0, 1]),
                 Finish('phong', 0.8,
                        'reflection', 0.5))
sphere2 = Sphere([4, 1, 0], 1.0, Texture('T_Ruby_Glass'),
                 Interior('ior', 2))
scene = Scene(Camera("location", [0, 5, -10], "look_at", [1, 3, 0]),
              objects=[ground, wall, sphere1, sphere2, light],
              included=["glass.inc"])

def embed_in_scene(image):
    ffmpeg_write_image("__temp__.png", image)
    image_ratio = 1.0 * image.shape[1] / image.shape[0]
    screen = Box([0, 0, 0], [1, 1, 0],
                 Texture(Pigment(ImageMap('png', '"__temp__.png"', 'once')),
                         Finish('ambient', 1.2)),
                 'scale', [10, 10 / image_ratio, 1],
                 'rotate', [0, 20, 0],
                 'translate', [-3, 1, 3])
    new_scene = scene.add_objects([screen])
    return new_scene.render(width=800, height=480, antialiasing=0.001)

clip = (VideoFileClip("bunny.mp4")   # File containing the original video
        .subclip(23, 47)             # cut between t=23 and 47 seconds
        .fl_image(embed_in_scene)    # <= The magic happens
        .fadein(1).fadeout(1)
        .audio_fadein(1).audio_fadeout(1))
clip.write_videofile("bunny2.mp4", bitrate='8000k')
which results in a video stream like the following:
What I want, however, is for the movie box to be the whole scene, with the spheres remaining where they are. My first thought was to remove the rotation from the code, and that did work; however, I still cannot stretch the movie frame to the corners of the actual scene.
Any thoughts?
EDIT: So I was able to move the camera and get the object to the center. However, I still could not get the movie full screen, because the camera is told to look towards specific coordinates, and I don't know what coordinates the camera should be directed at in order to get the picture in full screen. See:
This is the MATLAB code I want to replicate in OpenCV:
[~, threshold] = edge(I, 'sobel');
fudgeFactor = .5;
BWs = edge(I,'sobel', threshold * fudgeFactor);
figure, imshow(BWs), title('binary gradient mask');
This is my test image:
Cell image
I have tried things like
blurred_gray = cv2.GaussianBlur(gray_image, (3, 3), 0)
sobelx = cv2.Sobel(blurred_gray, cv2.CV_8U, 1, 0, ksize=3)
sobely = cv2.Sobel(blurred_gray, cv2.CV_8U, 0, 1, ksize=3)
And the output I got is:
sobelx
sobely
I tried adding sobelx and sobely because I read they're partial derivatives, but the resulting image looks the same as the above, and varying the ksize didn't help.
This is the output I need:
edge image
Could someone please tell me what I'm doing wrong and what I should do to get the same result image?
The MATLAB implementation of the Sobel edge detection isn't visible, so we can only guess exactly what is happening. The only hint we get is the documentation on edge, which states that when the 'sobel' option is used it
Finds edges at those points where the gradient of the image I is maximum, using the Sobel approximation to the derivative.
It's not stated, but taking the maximum of the gradient is more involved than simply taking the local maxima in the image. Instead we want to find local maxima with respect to the gradient direction. Unfortunately the actual code used by MATLAB for this operation is hidden.
Looking at the code that is available in edge, it appears that they use 4*mean(magnitude) for the threshold in the thinning operation, so I'm using this combined with your fudge factor. The orientated_non_max_suppression function is far from optimal, but I wrote it for readability over performance.
import cv2
import numpy as np
import scipy.ndimage.filters
gray_image = cv2.imread('cell.png', cv2.IMREAD_GRAYSCALE).astype(dtype=np.float32)
def orientated_non_max_suppression(mag, ang):
    ang_quant = np.round(ang / (np.pi/4)) % 4
    winE = np.array([[0, 0, 0],
                     [1, 1, 1],
                     [0, 0, 0]])
    winSE = np.array([[1, 0, 0],
                      [0, 1, 0],
                      [0, 0, 1]])
    winS = np.array([[0, 1, 0],
                     [0, 1, 0],
                     [0, 1, 0]])
    winSW = np.array([[0, 0, 1],
                      [0, 1, 0],
                      [1, 0, 0]])

    magE = non_max_suppression(mag, winE)
    magSE = non_max_suppression(mag, winSE)
    magS = non_max_suppression(mag, winS)
    magSW = non_max_suppression(mag, winSW)

    mag[ang_quant == 0] = magE[ang_quant == 0]
    mag[ang_quant == 1] = magSE[ang_quant == 1]
    mag[ang_quant == 2] = magS[ang_quant == 2]
    mag[ang_quant == 3] = magSW[ang_quant == 3]
    return mag

def non_max_suppression(data, win):
    data_max = scipy.ndimage.filters.maximum_filter(data, footprint=win, mode='constant')
    data_max[data != data_max] = 0
    return data_max
# compute sobel response
sobelx = cv2.Sobel(gray_image, cv2.CV_32F, 1, 0, ksize=3)
sobely = cv2.Sobel(gray_image, cv2.CV_32F, 0, 1, ksize=3)
mag = np.hypot(sobelx, sobely)
ang = np.arctan2(sobely, sobelx)
# threshold
fudgefactor = 0.5
threshold = 4 * fudgefactor * np.mean(mag)
mag[mag < threshold] = 0
# non-maximal suppression
mag = orientated_non_max_suppression(mag, ang)
# alternative but doesn't consider gradient direction
# mag = skimage.morphology.thin(mag.astype(np.bool)).astype(np.float32)
# create mask
mag[mag > 0] = 255
mag = mag.astype(np.uint8)
Results on the Cell
Python
MATLAB
Results on MATLAB's peppers.png (built-in)
Python
MATLAB
The MATLAB implementation must use something a little different but it looks like this gets pretty close.