Consecutive update figure matplotlib - python

I am plotting pairs of images from two different folders. For each pair I plot one image on top of the other using plt.imshow() and save the figure at the end of each loop iteration.
To speed this up I want to use .set_data() so that I only update the artists instead of redrawing everything. Since I have to do it for two artists, it seems that only the last one is used and no update happens on the first call. I am doing something like this:
import numpy as np
import matplotlib.pyplot as plt

data = np.arange(9).reshape(3, 3) * 10
im1 = plt.imshow([data] * 4)  # simulating an RGB image that I read from file
mask = np.ma.array(np.arange(9).reshape(3, 3) * 10, mask=np.eye(3))  # 1D map
im2 = plt.imshow(mask, vmin=2, vmax=7)

for i in range(10):
    data = np.arange(9).reshape(3, 3) * 10 + np.random.randint(0, 100, size=(3, 3))
    im1.set_data([data] * 4)  # simulating another RGB image read from file
    mask = np.ma.array(np.arange(9).reshape(3, 3) * 10 + np.random.randint(0, 100, size=(3, 3)), mask=np.eye(3))  # 1D map
    im2.set_data(mask)
    plt.savefig("{}.png".format(i))
With this code, only the mask is updated, not the background image. Is there a way to make matplotlib pick up both updates of the data?
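Not from the original thread, but for reference, a minimal sketch of the two-artist set_data pattern, assuming the inputs are proper NumPy arrays (a float RGB array with values in [0, 1] plus a masked 2D overlay); make_rgb and make_mask are hypothetical stand-ins for reading the real files:

import numpy as np
import matplotlib.pyplot as plt

def make_rgb(i):
    # hypothetical stand-in for reading an RGB image from disk
    base = np.random.rand(3, 3)
    return np.dstack([base, base, base])  # shape (3, 3, 3), values in [0, 1]

def make_mask(i):
    # hypothetical stand-in for the 1D map
    return np.ma.array(np.random.rand(3, 3), mask=np.eye(3))

fig, ax = plt.subplots()
im1 = ax.imshow(make_rgb(0))                   # background RGB image
im2 = ax.imshow(make_mask(0), vmin=0, vmax=1)  # overlay drawn on top, masked cells are transparent

for i in range(10):
    im1.set_data(make_rgb(i))   # update the background artist
    im2.set_data(make_mask(i))  # update the overlay artist
    fig.canvas.draw_idle()      # make sure both artists are re-rendered
    fig.savefig("{}.png".format(i))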

How to display 2 intensity histograms of images side-by-side in Python

I am creating 2 histograms and trying to display them side by side. I have tried displaying them next to each other as is, tried converting the histograms into PNG files and concatenating them that way, and I've tried using plt.savefig(), but I don't think I've implemented any of these approaches correctly; I've had no luck. This is what the relevant functions look like:
from skimage import io  # assuming scikit-image is the source of io.imread
import matplotlib.pyplot as plt

def intensity_histogram(img):
    img = io.imread(img)
    ax = plt.hist(img.ravel(), bins=256, histtype='step', color='b')
    plt.title('Histogram')
    # plt.show()

def main():
    img1 = intensity_histogram('html/images/lenna.png')
    img2 = intensity_histogram('html/images/lenna_gray.png')
Maybe I need a for loop in my intensity_histogram? But I wouldn't know what that would look like (I'm not very familiar with Python). If I uncomment plt.show(), my code displays them, but it first shows the histogram for img1, then I have to close that window before it automatically opens the histogram for the second one. Please help me display the two histograms side by side instead. If there's also a way to give them separate titles, please let me know as well.
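Not from the original post, but one way to get side-by-side panels with separate titles is to create both axes up front with plt.subplots(1, 2) and pass the target axis into the helper (image paths reused from the question):

from skimage import io
import matplotlib.pyplot as plt

def intensity_histogram(path, ax, title):
    img = io.imread(path)
    ax.hist(img.ravel(), bins=256, histtype='step', color='b')
    ax.set_title(title)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
intensity_histogram('html/images/lenna.png', ax1, 'Original')
intensity_histogram('html/images/lenna_gray.png', ax2, 'Grayscale')
plt.show()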

Programming a picture maker template in Python possible?

I'm looking for a library that makes it possible to "create pictures" (or even videos) with the following functions:
Accepting picture inputs
Resizing said inputs to fit given template / scheme
Positioning the pictures in pre-set up layers or coordinates
A rather schematic way to look at this: the red spots in the layout sketch are supposed to represent e.g. text, picture (or, if possible, video) elements.
The end goal is to give the .py script multiple input pictures and have it produce a finished composition like the one described above.
I looked into Python PIL, but I wasn't able to find what I was looking for.
Yes, it is possible to do this with Python.
The library you are looking for is OpenCV (https://opencv.org/).
Some basic OpenCV Python tutorials: https://docs.opencv.org/master/d9/df8/tutorial_root.html
1) You can use the imread() function to read images from files.
2) You can use the resize() function to resize the images.
3) You can create an empty "master" NumPy array matching the size and depth (color depth) of the black rectangle in the figure you have shown, resize your image, and copy its contents into the empty array starting at the position you want.
Below is a sample piece of code which does something close to what you might need; you can modify it to suit your actual needs. (Since your requirements are not entirely clear, I have written it so that it can at least guide you.)
import numpy as np
import cv2
import matplotlib.pyplot as plt
# You can store most of these values in another file and load them.
# You can modify this to set the dimensions of the background image.
BG_IMAGE_WIDTH = 100
BG_IMAGE_HEIGHT = 100
BG_IMAGE_COLOR_DEPTH = 3
# This will act as the black bounding box you have shown in your figure.
# You can also load another image instead of creating empty background image.
empty_background_image = np.zeros(
    (BG_IMAGE_HEIGHT, BG_IMAGE_WIDTH, BG_IMAGE_COLOR_DEPTH),
    dtype=np.uint8  # np.int has been removed from recent NumPy; uint8 suits 8-bit images
)
# Loading an image.
# This will be copied later into one of those red boxes you have shown.
IMAGE_PATH = "./image1.jpg"
foreground_image = cv2.imread(IMAGE_PATH)
# OpenCV loads images in BGR order; convert to RGB so matplotlib displays the colors correctly.
foreground_image = cv2.cvtColor(foreground_image, cv2.COLOR_BGR2RGB)
# Setting the resize target and top left position with respect to bg image.
X_POS = 4
Y_POS = 10
RESIZE_TARGET_WIDTH = 30
RESIZE_TARGET_HEIGHT = 30
# Resizing
foreground_image = cv2.resize(
    src=foreground_image,
    dsize=(RESIZE_TARGET_WIDTH, RESIZE_TARGET_HEIGHT),
)
# Copying this into background image
empty_background_image[
    Y_POS: Y_POS + RESIZE_TARGET_HEIGHT,
    X_POS: X_POS + RESIZE_TARGET_WIDTH
] = foreground_image
plt.imshow(empty_background_image)
plt.show()

Turn samples into Mesh

I want to load a mesh file (.obj), use the trimesh.sample.sample_surface_even() function to get some points on the surface, turn the resulting points back into a mesh, and save that as an .obj file.
My problem is that I don't know how to turn the samples back into a mesh that can be saved. Can somebody tell me, step by step, what I should do to achieve that?
Here is my code so far:
import numpy as np
import trimesh
mesh = trimesh.load_mesh('mesh10.obj')
sampledmesh = trimesh.sample.sample_surface_even(mesh, 500)
#? How to turn sampledmesh back into a mesh?
sampledmesh.export('mesh10_export.obj')
You can use the submesh function on the sampled face indices, which is the second element in the returned tuple:
sampledmesh = trimesh.sample.sample_surface_even(mesh,500)
sampled_submesh = mesh.submesh([sampledmesh[1]])[0]
submesh returns an array of meshes, but here we just have one, so we take the first mesh.
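Putting the answer together with the original script (a sketch; sample_surface_even returns a (points, face_indices) tuple, and the resulting submesh exports just like any other trimesh mesh):

import trimesh

mesh = trimesh.load_mesh('mesh10.obj')

# sample_surface_even returns (sampled_points, face_indices)
points, face_indices = trimesh.sample.sample_surface_even(mesh, 500)

# Build a submesh from the faces hit by the sampling; submesh returns a list, take the first entry.
sampled_submesh = mesh.submesh([face_indices])[0]
sampled_submesh.export('mesh10_export.obj')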

How to read saved image and locate it in coordinates without any distortions?

I can't overcome what is probably a very simple obstacle. First, I do some spatial operations with shapefiles, plot the results, and save the image:
# read different shapefiles, overlay them, sjoin them
...
# plotting results:
fig, ax = plt.subplots(figsize=[10, 10])
ax.set_xlim(left=9686238.14, right=9727068.02)
ax.set_ylim(bottom=7070076.66, top=7152463.12)
# various plotting calls like objects.plot(ax=ax, column='NAME', cmap='Pastel2', k=6, legend=False) and many others
plt.axis('equal')
plt.savefig('back11.png', dpi=300)
plt.close()
This gives me a nice picture, back11.png.
Second, I read that picture back and (using the same coordinates) want to get an absolutely identical image, map11.png:
fig, ax = plt.subplots(figsize=[10, 10])
ax.set_xlim(left=9686238.14, right=9727068.02)
ax.set_ylim(bottom=7070076.66, top=7152463.12)
back = plt.imread('back11.png')
ax.imshow(back, extent=[9686238.14, 9727068.02, 7070076.66, 7152463.12])
plt.axis('equal')
plt.savefig('map11.png', dpi=300)
plt.close()
But instead I get something else (map11.png).
What is the origin of this strange mismatch?
When matplotlib shows an image with plt.imshow, it automatically adds axes and white space around it (regardless of the image content). Your image happens to be another plot, which already contains axes and white space itself. To solve that problem, use
plt.subplots_adjust(0, 0, 1, 1)
plt.axis('off')
which should output nothing but the image.
On the other hand, you have to specify plt.figure(figsize=..., dpi=...) correctly in order to get exactly the stored image back (correct size, no interpolation or re-sampling). If you simply want to look at the image from Python (and you are in a Jupyter notebook), you can use Pillow: converting the image to a PIL.Image object makes it displayable by the Jupyter REPL on its own.
If you are not inside Jupyter, you can also just open the image in the OS image viewer. That is at least more convenient than matplotlib for displaying the "exact" image.
By the way, when displaying the image, the original plot parameters no longer apply (it is just an image now, and all of those parameters are baked into its content). So there is no need (and it is wrong) to repeat all of those magic numbers. Also, if you want to save the image without the white border and axes, use the code above before calling plt.savefig.
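A sketch of the full round trip under those constraints (the figure size, DPI, coordinate limits, and file names are taken from the question; the essential parts are the zero-margin subplots_adjust(0, 0, 1, 1), axis('off'), and using the same figsize/dpi when re-reading so the image comes back at its original pixel size):

import matplotlib.pyplot as plt

# --- First pass: save the plot itself with no axes and no white margins ---
fig, ax = plt.subplots(figsize=[10, 10], dpi=300)
# ... plot the shapefile layers on ax here ...
ax.set_xlim(left=9686238.14, right=9727068.02)
ax.set_ylim(bottom=7070076.66, top=7152463.12)
plt.subplots_adjust(0, 0, 1, 1)   # remove the white border
plt.axis('off')                   # remove the axes
plt.savefig('back11.png', dpi=300)
plt.close()

# --- Second pass: read it back with the same figsize/dpi, again borderless ---
fig, ax = plt.subplots(figsize=[10, 10], dpi=300)
back = plt.imread('back11.png')
ax.imshow(back)                   # no extent or limits needed, they are baked into the pixels
plt.subplots_adjust(0, 0, 1, 1)
plt.axis('off')
plt.savefig('map11.png', dpi=300)
plt.close()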

How to present numpy array into pygame surface?

I'm writing code, part of which reads an image from a source file and displays it on the screen for the user to interact with. I also need the sharpened image data. I use the following to read the data and display it in pygame:
import pygame
from scipy import misc, ndimage  # assuming these are the modules providing imread and gaussian_filter

def image_and_sharpen_array(file_name):
    # read the image data and return it, with the sharpened image
    image = misc.imread(file_name)
    blurred = ndimage.gaussian_filter(image, 3)
    edge = ndimage.gaussian_filter(blurred, 1)
    alpha = 20
    out = blurred + alpha * (blurred - edge)
    return image, out

# get image data
scan, sharpen = image_and_sharpen_array('foo.jpg')
w, h, c = scan.shape

# setting up pygame
pygame.init()
screen = pygame.display.set_mode((w, h))
pygame.surfarray.blit_array(screen, scan)
pygame.display.update()
The image is displayed on the screen, only rotated and inverted. Is this due to differences between misc.imread and pygame, or is something wrong in my code?
Is there another way to do this? The majority of solutions I have read involve saving the figure and then reading it back with pygame.
I often use the numpy swapaxes() method.
In this case we only need to swap the x and y axes (axis numbers 0 and 1) before displaying the array:
return image.swapaxes(0,1),out
I thought technico provided a good solution - just a little lean on info. Assuming get_arr() is a function that returns the pixel array:
pixl_arr = get_arr()
pixl_arr = numpy.swapaxes(pixl_arr, 0, 1)
new_surf = pygame.pixelcopy.make_surface(pixl_arr)
screen.blit(new_surf, (dest_x, dest_y))
Alternatively, if you know that the image will always be of the same dimensions (as in iterating through frames of a video or gif file), it would be more efficient to reuse the same surface:
pixl_arr = get_arr()
pixl_arr = numpy.swapaxes(pixl_arr, 0, 1)
pygame.pixelcopy.array_to_surface(old_surf, pixl_arr)
screen.blit(old_surf, (dest_x, dest_y))
YMMV, but so far this is working well for me.
Every library has its own way of interpreting image arrays. By 'rotated' I suppose you mean transposed; that is how pygame lays out numpy arrays. There are many ways to make it look 'correct', and in fact many ways even to display the array, which give you full control over channel representation and so on. In pygame version 1.9.2, this is the fastest array rendering I could ever achieve (note that for earlier versions this will not work).
This function will fill the surface with the array:
def put_array(surface, myarr):  # put array into surface
    bv = surface.get_view("0")
    bv.write(myarr.tostring())
If that does not work, use this; it should work everywhere:
# put array data into a pygame surface
def put_arr(surface, myarr):
    bv = surface.get_buffer()
    bv.write(myarr.tostring(), 0)
You will probably still not get exactly what you want: the image may come out transposed or with swapped color channels. The idea is to keep your arrays in the form that suits this surface buffer. To find out the correct channel order and axis order, use the OpenCV library (cv2.imread(filename)). OpenCV opens images in BGR order by default and has a lot of conversion functions. If I remember correctly, when writing directly to a surface buffer, BGR is the correct order for a 24-bit surface and BGRA for a 32-bit surface. So you can try to put the image array that you read from a file with this function and blit it to the screen.
There are other ways to draw arrays, e.g. here is a whole set of helper functions: http://www.pygame.org/docs/ref/surfarray.html
But I would not recommend using them, since surfaces are not meant for direct pixel manipulation; you will probably get lost in references.
Small tip: to do a 'signalling test', use a distinctive, asymmetric test picture, so you will immediately see if something is wrong: just load it as an array and try to render it.
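As a concrete illustration of getting the channel and axis order right, a sketch that loads with cv2 and builds the surface with the surfarray helpers mentioned above (the file name is a placeholder):

import cv2
import numpy as np
import pygame

IMAGE_PATH = "image1.jpg"  # placeholder path

arr = cv2.imread(IMAGE_PATH)                # shape (height, width, 3), BGR order
arr = cv2.cvtColor(arr, cv2.COLOR_BGR2RGB)  # pygame expects RGB channel order
arr = np.swapaxes(arr, 0, 1)                # pygame expects (width, height, 3)

pygame.init()
screen = pygame.display.set_mode((arr.shape[0], arr.shape[1]))
surf = pygame.surfarray.make_surface(arr)
screen.blit(surf, (0, 0))
pygame.display.update()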
My suggestion is to use the pygame.transform module. It has flip and rotate functions which you can use to apply whatever transformation you need; look up the docs on this.
My recommendation is to render the output image to a new Surface, apply the transformations there, and then blit that to the display.
temp_surf = pygame.Surface((w, h))
pygame.surfarray.blit_array(temp_surf, scan)
# ... transform temp_surf here, e.g. with pygame.transform ...
screen.blit(temp_surf, (0, 0))
I have no idea why this is. It probably has something to do with the order in which the axes are transferred from a 2D array to a pygame Surface.
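The transform step might look like this (a sketch; which flip or rotation is actually needed depends on how the array was produced):

# Flip left/right and/or rotate, as needed, before blitting to the display.
temp_surf = pygame.transform.flip(temp_surf, True, False)  # horizontal flip
temp_surf = pygame.transform.rotate(temp_surf, 90)         # rotate 90 degrees counter-clockwise
screen.blit(temp_surf, (0, 0))
pygame.display.update()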
