PyQt freezing while plotting many graphs - python

def initPlots(self):
    print("init_Plots")
    for S in range(self.sensor_num):
        globals()["self.plot{0}".format(S)] = pg.PlotWidget()
        globals()["self.plot{0}".format(S)].setYRange(-30, 30)
        self.layout.addWidget(globals()["self.plot{0}".format(S)], S // 5, S % 5)

def showPlot(self, number, x_axis):
    print("init_showPlot")
    '''x_axis : time(s) / y_axis : data(rgb)'''
    globals()["self.plot{0}".format(number)].clear()
    globals()["self.plot{0}".format(number)].plot(x=x_axis, y=self.y_domain_r[number],
        pen=pg.mkPen(width=2, color='r'), name="sensor_" + str(number))  # R
    globals()["self.plot{0}".format(number)].plot(x=x_axis, y=self.y_domain_g[number],
        pen=pg.mkPen(width=2, color='g'), name="sensor_" + str(number))  # G
    globals()["self.plot{0}".format(number)].plot(x=x_axis, y=self.y_domain_b[number],
        pen=pg.mkPen(width=2, color='b'), name="sensor_" + str(number))  # B

# QtCore.pyqtSlot(np.ndarray)
def run(self, arr):
    print("init_run(plot)")
    self.time += 1
    self.data = arr
    self.t_domain.append(self.time)
    for S in range(self.sensor_num):
        self.y_domain_r[S].append(self.data[S][0])
        self.y_domain_g[S].append(self.data[S][1])
        self.y_domain_b[S].append(self.data[S][2])
        self.showPlot(S, self.t_domain)
This is a program that receives the RGB change values of an image every frame and displays a graph of those values. The np.ndarray of delta RGBs is passed from a signal to a slot.
The program often shuts down when I press run. How can I reduce memory usage?
Is it okay that I'm using so many globals()?
Is this an inevitable flaw in Python?
Help me, guys~

For anyone who sees this: I deleted my globals() and made a list of plots instead.
It worked for me.
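The list-based fix described above can be sketched roughly like this. A placeholder class stands in for pg.PlotWidget so the sketch runs without a Qt display; with pyqtgraph installed you would construct pg.PlotWidget() instead, and the rest of the pattern stays the same:

```python
class FakePlotWidget:
    """Stand-in for pg.PlotWidget so the sketch runs headlessly."""
    def __init__(self):
        self.curves = []
    def clear(self):
        self.curves = []
    def plot(self, x, y, **kwargs):
        self.curves.append((list(x), list(y)))

class Plotter:
    def __init__(self, sensor_num):
        self.sensor_num = sensor_num
        # one widget per sensor, held in a plain list -- no globals()
        self.plots = [FakePlotWidget() for _ in range(sensor_num)]

    def show_plot(self, number, x_axis, y_r, y_g, y_b):
        w = self.plots[number]   # simple index lookup instead of a name lookup
        w.clear()
        w.plot(x=x_axis, y=y_r)  # R
        w.plot(x=x_axis, y=y_g)  # G
        w.plot(x=x_axis, y=y_b)  # B

plotter = Plotter(sensor_num=3)
plotter.show_plot(0, [1, 2], [10, 11], [20, 21], [30, 31])
print(len(plotter.plots[0].curves))  # 3 curves: R, G, B
```

Besides being easier to read, the list avoids the name churn of `globals()["self.plot{0}".format(S)]`, which never actually created attributes on `self` in the first place.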

Related

How to get return value from thread in Python?

I do some computationally expensive tasks in Python and found the threading module for parallelization. I have a function that does the computation and returns an ndarray as the result. Now I want to know how I can parallelize my function and get the calculated arrays back from each thread.
The following example is strongly simplified, with lightweight functions and calculations.
import threading
import numpy as np

def calculate_result(input):
    a = np.linspace(1.0, 1000.0, num=10000)  # just an example
    result = input * a
    return result

input = [1, 2, 3, 4]
for i in range(len(input)):
    t = threading.Thread(target=calculate_result, args=(input[i],))
    t.start()
    # Here I want to receive the return value from the thread
I am looking for a way to get the return value from the thread / function for each thread, because in my task each thread calculates different values.
I found another question (how to get the return value from a thread in python?) where someone has a similar problem (without ndarrays), which is handled with ThreadPool and async...
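Since the linked question mentions pools: one standard-library way to collect each thread's return value (a sketch, not from the original post) is concurrent.futures.ThreadPoolExecutor, whose submit() returns a Future that hands the function's return value back via result():

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def calculate_result(value):
    a = np.linspace(1.0, 1000.0, num=10000)  # just an example
    return value * a

inputs = [1, 2, 3, 4]
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(calculate_result, v) for v in inputs]
    results = [f.result() for f in futures]  # the ndarray returned by each thread

print(len(results), results[0].shape)  # 4 (10000,)
```

`result()` blocks until that thread's call finishes, and the results list keeps the same order as the inputs.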
-------------------------------------------------------------------------------
Thanks for your answers!
Thanks to your help, I am now looking for a way to solve my problem with the multiprocessing module. To give you a better understanding of what I do, see the following explanation.
Explanation:
- My 'input_data' is an ndarray with 282240 elements of type uint32.
- In the 'calculation_function()' I use a for loop to calculate a result from every 12 bits and put it into the 'output_data'.
- Because this is very slow, I split my input_data into e.g. 4 or 8 parts and calculate each part in the calculation_function().
- Now I am looking for a way to parallelize the 4 or 8 function calls.
- The order of the data is essential, because the data is an image and each pixel has to be at the correct position. So function call no. 1 calculates the first pixel and the last function call the last pixel of the image.
- The calculations work fine and the image can be completely rebuilt by my algorithm, but I need the parallelization to speed it up for time-critical aspects.
Summary:
One input ndarray is divided into 4 or 8 parts. Each part holds 70560 or 35280 uint32 values. From every 12 bits I calculate one pixel, using 4 or 8 function calls. Each function returns one ndarray with 188160 or 94080 pixels. All return values are concatenated in a row and reshaped into an image.
What already works:
Calculations are already working and I can reconstruct my image.
Problem:
The function calls are done serially, one after another, and each image reconstruction is very slow.
Main goal:
Speed up the function calls by parallelizing them.
Code:
def decompress(payload, WIDTH, HEIGHT):
    # INPUTS / OUTPUTS
    n_threads = 4
    img_input = np.frombuffer(payload, dtype='uint32')  # np.fromstring is deprecated
    img_output = np.zeros((WIDTH * HEIGHT), dtype=np.uint32)
    n_elements_part = int(len(img_input) / n_threads)
    input_part = np.zeros((n_threads, n_elements_part)).astype(np.uint32)
    output_part = np.zeros((n_threads, int(n_elements_part / 3 * 8))).astype(np.uint32)
    # DEFINE PARTS (here 4 different ones)
    start = np.zeros(n_threads).astype(int)
    end = np.zeros(n_threads).astype(int)
    for i in range(n_threads):
        start[i] = i * n_elements_part
        end[i] = (i + 1) * n_elements_part - 1
    # COPY IMAGE DATA
    for idx in range(n_threads):
        input_part[idx, :] = img_input[start[idx]:end[idx] + 1]
    for idx in range(n_threads):  # the following line is the function call that should be parallelized
        output_part[idx, :] = decompress_part2(input_part[idx], output_part[idx])
    # COPY PARTS INTO THE IMAGE
    img_output[0:188160] = output_part[0, :]
    img_output[188160:376320] = output_part[1, :]
    img_output[376320:564480] = output_part[2, :]
    img_output[564480:752640] = output_part[3, :]
    # RESHAPE IMAGE
    img_output = np.reshape(img_output, (HEIGHT, WIDTH))
    return img_output
Please don't mind my beginner programming style :)
I'm just looking for a way to parallelize the function calls with the multiprocessing module and get the returned ndarrays back.
Thank you so much for your help!
You can use a process pool from the multiprocessing module:
from multiprocessing.dummy import Pool

def test(a):
    return a

p = Pool(3)
a = p.starmap(test, zip([1, 2, 3]))
print(a)
p.close()
p.join()
kar's answer works; however, keep in mind that it uses the .dummy module, which is backed by threads and may be limited by the GIL. Here's more info on it:
multiprocessing.dummy in Python is not utilising 100% cpu

How to speed up string splitting in Python

I am receiving data from an Arduino with a Tkinter GUI and need to receive 8 different values at 20 samples per second and graph them. I am plotting 4 on one graph and 4 on another graph. The code on the Arduino side works fine and is sending at the correct rate using the following format.
Serial.println(String(val1) + "," + String(val2) + ...
On the Python side I am receiving and graphing like this:
def update_graph(self, i):
    self.xdata.append(i)
    while self.arduinoData.inWaiting() == 0:
        pass
    x = self.arduinoData.readline()
    split_data = x.split(",")
    print split_data[1]
    self.ydata1.append(int(split_data[0]))
    self.ydata2.append(int(split_data[1]))
    self.ydata3.append(int(split_data[2]))
    self.ydata4.append(int(split_data[3]))
    self.ydata5.append(int(split_data[4]))
    self.ydata6.append(int(split_data[5]))
    self.ydata7.append(int(split_data[6]))
    self.ydata8.append(int(split_data[7]))
    self.line1.set_data(self.xdata, self.ydata1)
    self.line2.set_data(self.xdata, self.ydata2)
    self.line3.set_data(self.xdata, self.ydata3)
    self.line4.set_data(self.xdata, self.ydata4)
    self.ax1.set_ylim(min(self.ydata1), max(self.ydata4))
    self.ax1.set_xlim(min(self.xdata), max(self.xdata))
    self.line5.set_data(self.xdata, self.ydata5)
    self.line6.set_data(self.xdata, self.ydata6)
    self.line7.set_data(self.xdata, self.ydata7)
    self.line8.set_data(self.xdata, self.ydata8)
    self.ax2.set_ylim(min(self.ydata5), max(self.ydata8))
    self.ax2.set_xlim(min(self.xdata), max(self.xdata))
    if i >= self.points - 1:
        self.running = False
        self.ani = None
    return (self.line1, self.line2, self.line3, self.line4,
            self.line5, self.line6, self.line7, self.line8)
This has proved to be way too slow to keep up with the incoming data. Is there a faster way to receive and parse the data?
I agree with @gre_gor that the parsing is not the slowest part. A while back I was doing a similar project and found that setting the Arduino to a higher serial speed did the trick.
void setup() {
    Serial.begin(115200);
}
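To back up the point that splitting isn't the bottleneck, here is a quick stdlib timing sketch (the sample line is made up, mimicking the Arduino comma-separated format):

```python
import timeit

line = "101,202,303,404,505,606,707,808"  # made-up sample in the Arduino format

def parse(s=line):
    return [int(v) for v in s.split(",")]

n = 10000
seconds = timeit.timeit(parse, number=n)
print(parse())                                    # [101, 202, ..., 808]
print("per call: %.1f us" % (seconds / n * 1e6))  # a few microseconds per line
```

At 20 samples per second, microsecond-scale parsing leaves the serial link itself as the place to look for speed.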

How to trigger callback with "on_change" on some list variable in Bokeh?

I am training a neural network that gives me the number of correctly identified items. Simply put, I have a list ("res") that gets a new integer appended every 10 s.
I want to visualise this interactively using Bokeh, but my callback function never gets run. Here is a simple snippet:
p = figure()
r = p.line(x=[], y=[], line_width=2)
ds = r.data_source
# this is where I keep my data that are being updated
s = ColumnDataSource(data=dict(x=res, y=res))

def callback(attr, old, new):
    global i
    ds.data['x'].append(res[i])
    ds.data['y'].append(res[i])
    ds.trigger('data', ds.data, ds.data)
    i += 1

s.on_change('data', callback)  # run callback if anything changes in s
curdoc().add_root(p)
Any ideas?
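A hedged guess at why the callback never fires: on_change watches assignment to the .data property, while appending to the lists inside ds.data mutates the dict in place without any assignment, so no change event is generated (note also that the handler is attached to s while the updates go to ds). The distinction can be illustrated with a plain Python property standing in for Bokeh's machinery:

```python
class Source:
    """Toy stand-in for a Bokeh ColumnDataSource's change detection."""
    def __init__(self, data):
        self._data = data
        self.callbacks = []

    @property
    def data(self):
        return self._data

    @data.setter
    def data(self, new):
        old, self._data = self._data, new
        for cb in self.callbacks:   # fires only on assignment to .data
            cb('data', old, new)

fired = []
s = Source({'x': [], 'y': []})
s.callbacks.append(lambda attr, old, new: fired.append(new))

s.data['x'].append(1)            # in-place mutation: setter never runs
print(len(fired))                # 0

s.data = {'x': [1], 'y': [2]}    # assignment: setter runs, callback fires
print(len(fired))                # 1
```

So the pattern that works in Bokeh is to assign a fresh dict to .data (or use the source's streaming API) rather than appending to the lists it already holds.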

How do you make a custom audio filter using moviepy?

I am trying to write my own custom audio filter for moviepy.
I am looking at audio_fadein as an example, but I am having trouble understanding the expected type of the input variable t.
Could anyone explain what the expected type of t is? Or where in the moviepy code I can look to see what libraries this t comes from or is used by? Thank you for any help, it is greatly appreciated.
@audio_video_fx
def audio_fadein(clip, duration):
    """Return an audio (or video) clip that is first mute, then the
    sound arrives progressively over ``duration`` seconds."""
    def fading(gf, t):
        gft = gf(t)
        if np.isscalar(t):
            factor = min(1.0 * t / duration, 1)
            factor = np.array([factor, factor])
        else:
            factor = np.minimum(1.0 * t / duration, 1)
            factor = np.vstack([factor, factor]).T
        return factor * gft
    return clip.fl(fading, keep_duration=True)
In my case t arrived as an array of 2000 equidistant values.
These values are all times since the start of the clip; the 2000 samples form a window of times, presumably so that the audio processing can look at more values than just the current 'moment'.
The array case is handled by the 'else' clause. I am not sure when this function receives t as a simple scalar value.
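The two branches can be exercised directly with numpy to show what fading computes for each shape of t (the duration and sample times below are made-up values, not from moviepy):

```python
import numpy as np

duration = 2.0  # made-up fade-in length in seconds

def fade_factor(t):
    # mirrors the factor computation inside audio_fadein's fading()
    if np.isscalar(t):
        f = min(1.0 * t / duration, 1)
        return np.array([f, f])      # one (left, right) stereo gain pair
    f = np.minimum(1.0 * t / duration, 1)
    return np.vstack([f, f]).T       # one stereo gain pair per time sample

print(fade_factor(1.0))              # scalar t -> shape (2,): [0.5 0.5]
t = np.linspace(0.0, 4.0, num=2000)  # the array case: a window of sample times
print(fade_factor(t).shape)          # (2000, 2)
```

So whether t is one time or a window of times, the function returns per-channel gains of matching shape, which is why `factor * gft` broadcasts cleanly against the audio frame(s).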

depth-first algorithm in python does not work

I have a project which I decided to do in Python. In brief: I have a list of lists. Each of them also contains lists, sometimes with one element, sometimes more. It looks like this:
rules = [
    [[1], [2], [3, 4, 5], [4], [5], [7]],
    [[1], [8], [3, 7, 8], [3], [45], [12]],
    [[31], [12], [43, 24, 57], [47], [2], [43]],
]
The point is to compare values from a numpy array to the values in these rules (the elements of the rules table). We compare some [x][y] point to the first element (e.g. 1 in the first rule); then, if that matches, the value [x-1][y] from the array to the second element, and so on. The first five comparisons must all be true to change the value of the [x][y] point. I've written something like this (the main function is SimulateLoop; the order is switched because the simulate2 function was written after the second one):
def simulate2(self, i, j, w, rule):
    data = Data(rule)
    if w.world[i][j] in data.c:
        if w.world[i-1][j] in data.n:
            if w.world[i][j+1] in data.e:
                if w.world[i+1][j] in data.s:
                    if w.world[i][j-1] in data.w:
                        w.world[i][j] = data.cc[0]

def SimulateLoop(self, w):
    for z in range(w.steps):
        for i in range(2, w.x-1):
            for j in range(2, w.y-1):
                for rule in w.rules:
                    self.simulate2(i, j, w, rule)
Data class:
class Data:
    def __init__(self, rule):
        self.c = rule[0]
        self.n = rule[1]
        self.e = rule[2]
        self.s = rule[3]
        self.w = rule[4]
        self.cc = rule[5]
The NumPy array is an object of the World class. Rules is the list described above, parsed by a function obtained from another program (GPL license).
To be honest it looks like it should work, but it doesn't. I've tried other possibilities without luck. It runs and the interpreter doesn't report any errors, but somehow the values in the array change incorrectly. The rules themselves are good, because they were provided by the same program from which I obtained the parser.
Maybe this is helpful: it is Perrier's loop, a modified Langton's loop (artificial life).
I'll be very thankful for any help!
I am not familiar with Perrier's loop, but if you are coding something like the famous Game of Life, you may have made a simple mistake: storing the next generation in the same array, thus corrupting it while you are still reading from it.
Normally you store the next generation in a temporary array and copy/swap after the sweep, as in this sketch:
import numpy as np

def do_step_in_game_life(world):
    next_gen = np.zeros(world.shape)  # <<< temporary array here
    Nx, Ny = world.shape
    for i in range(1, Nx-1):
        for j in range(1, Ny-1):
            neighbours = np.sum(world[i-1:i+2, j-1:j+2]) - world[i, j]
            if neighbours < 2 or neighbours > 3:
                next_gen[i, j] = 0            # under-/over-population
            elif neighbours == 3:
                next_gen[i, j] = 1            # birth (or survival)
            else:
                next_gen[i, j] = world[i, j]  # survives unchanged with 2 neighbours
    world[:, :] = next_gen[:, :]  # <<< saving computed next generation
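The temporary-array idea can be checked end-to-end with a blinker, the smallest Game-of-Life oscillator. This is a self-contained sketch (standard rules, independent of the code above) showing that writing into a separate next_gen array gives the expected period-2 behaviour:

```python
import numpy as np

def step(world):
    """One Game-of-Life step using a temporary array for the next generation."""
    next_gen = np.zeros_like(world)
    Nx, Ny = world.shape
    for i in range(1, Nx - 1):
        for j in range(1, Ny - 1):
            neighbours = np.sum(world[i-1:i+2, j-1:j+2]) - world[i, j]
            if world[i, j] == 1 and neighbours in (2, 3):
                next_gen[i, j] = 1  # survival
            elif world[i, j] == 0 and neighbours == 3:
                next_gen[i, j] = 1  # birth
    return next_gen

world = np.zeros((5, 5), dtype=int)
world[2, 1:4] = 1                  # horizontal blinker
after_one = step(world)
after_two = step(after_one)
print(after_one[1:4, 2])           # vertical: [1 1 1]
print((after_two == world).all())  # True: back to horizontal after two steps
```

If `step` instead wrote into `world` while scanning it, the early writes would feed into the later neighbour counts and the oscillation would break, which is exactly the symptom described in the question.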
