I'm doing a project on a Raspberry Pi and have come across a problem that I can't solve. I'm using multiprocessing to calculate wind speed with an anemometer, and a servo-based platform to move a PV panel around. In the main function I declare the processes and objects:
def main():
ane = Anemometer(pin=38, radius=65, anamometer_factor=1, action_time=5,
measurment_fraquency=0.01, max_wind_speed=40, mesurment_time=30)
prb = PhotoresistorBase(TR=1, TL=3, BR=0, BL=2, signifficant_diff=40)
BS = Servo(pin=12, min_val=10, max_val=50)
US = Servo(pin=18, min_val=29, max_val=43)
pv_based = Pv(6)
pv_servo = Pv(5)
...
calculate_wind_vel_p = Process(target=calculate_wind_vel, args=(ane, BS, US))
calculate_power_p = Process(target=calculate_power, args=(pv_based, pv_servo))
turn_platform_p = Process(target=turn_platform, args=(prb, BS, US))
calculate_wind_vel_p.start()
calculate_power_p.start()
turn_platform_p.start()
calculate_wind_vel_p.join()
calculate_power_p.join()
turn_platform_p.join()
The calculate_power_p process works great; the problem appears in the two other processes.
The calculate_wind_vel_p process targets the function calculate_wind_vel, which looks like this:
def calculate_wind_vel(ane, BS, US):
while True:
wind_speed = ane.calculate_mean_wind_velocity()
...
data = (datetime.datetime.now(), wind_speed)
insert_vel_data(conn, data)
time.sleep(1)
and the Anemometer class, which contains calculate_mean_wind_velocity(), looks like this:
class Anemometer:
def __init__(self, pin, radius, anamometer_factor, action_time,
measurment_fraquency, max_wind_speed, mesurment_time):
...
def calculate_wind_velocity(self) -> float:
rotations = 0
count = 0
endtime = time.time() + self.action_time
sensorstart = GPIO.input(self.pin)
circumference = (self.radius * 2 / 1000) * math.pi
while time.time() < endtime:
if GPIO.event_detected(self.pin):
count = count + 1
                print(count)
if rotations==1 and sensorstart==1:
rotations = 0
rots_per_second = float(count/3/self.action_time)
wind_vel = float((rots_per_second)*circumference*self.anamometer_factor)
        print("Wind vel: " + str(wind_vel))
return wind_vel
def calculate_mean_wind_velocity(self) -> float:
sum = 0
measure_count = int(self.mesurment_time/self.action_time)
for _ in range(measure_count):
sum += self.calculate_wind_velocity()
return float(sum/measure_count)
The Problem:
When the process executes calculate_wind_vel(), it jumps to ane.calculate_mean_wind_velocity(). That function executes calculate_wind_velocity() a few times (e.g. 4) and calculates the mean wind speed.
When calculate_wind_velocity() starts, the program prints count only once (the print(count) call) and afterwards count does not increment even though the event is triggered. It then calculates the wind speed from that single counted interrupt. Later, when calculate_wind_velocity() is executed again (the process target has while True:), count is not incremented at all!
The weirdest thing is that when I run this code without a process, just pasting this into main():
ane = Anemometer(pin=38, radius=65, anamometer_factor=1, action_time=5,
measurment_fraquency=0.01, max_wind_speed=40, mesurment_time=30)
while True:
ret = ane.calculate_mean_wind_velocity()
print(ret)
time.sleep(2)
It works perfectly - it counts every single GPIO.event_detected(self.pin) and calculates wind_speed, so it must have something to do with the processes!
There is a similar problem with the moving platform - the process executes functions from the servo class, but the platform does not move; the functions don't calculate or change state.
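For reference, a minimal sketch of one possible restructuring, assuming RPi.GPIO edge detection has to be registered inside the same process that later calls GPIO.event_detected() (the pin number comes from the code above; the pull-up setting and everything else here are untested assumptions):

import time
import RPi.GPIO as GPIO
from multiprocessing import Process

def calculate_wind_vel(ane, BS, US):
    # Assumption: do the GPIO setup and event registration in the child
    # process itself, because a detection thread started in the parent
    # does not survive the fork into this process.
    GPIO.setmode(GPIO.BOARD)
    GPIO.setup(ane.pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)
    GPIO.add_event_detect(ane.pin, GPIO.FALLING)
    while True:
        wind_speed = ane.calculate_mean_wind_velocity()
        time.sleep(1)

# started exactly as before:
# calculate_wind_vel_p = Process(target=calculate_wind_vel, args=(ane, BS, US))
# calculate_wind_vel_p.start()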
Related
I know similar questions have been asked before. In fact, I tried to use the following as a basis for what I am trying to do, which I found here: How to catch the return value of function that is scheduled using python schedule package.
import schedule as sd
import time
def dothis(h):
h-=1
print("Hour left : "+str(h))
return h
h=24
d="14:00"
sd.every().day.at(d).do(dothis,h)
a = dothis(h)
while True:
sd.run_pending()
return_value = a
time.sleep(1)
I tested the above code after changing sd.every().day.at(d).do(dothis,h) to sd.every(15).seconds.do(dothis, h), thinking that the value of h would decrease every time dothis is invoked. The output is:
Hour left : 23
Hour left : 23
Hour left : 23
Hour left : 23
Hour left : 23
I have been unable to find a way to return and access an object which will likely be different each time the scheduled job is run.
Any help would be appreciated. Thank you.
Currently, every time dothis is called, it is passed the value 24 as an argument - Python does not pass variables by reference, so rebinding the parameter h inside the function has no effect on the global h.
One way to fix this is to use an object to hold the state, and call the bound method of the object. (In the below example, I modified it to run every second for ease of testing.)
import schedule as sd
import time
class State:
def __init__(self):
self.h = 24
def dothis(self):
self.h -= 1
print("Hour left : " + str(self.h))
state = State()
d = "14:00"
sd.every(1).seconds.do(state.dothis)
state.dothis()
while True:
sd.run_pending()
time.sleep(1)
Alternatively, if you want to leave your dothis function unchanged, you could make a more general class that takes a function and an initial value as arguments, and feeds the return value back into the function after every call:
import schedule as sd
import time
class CaptureReturn:
def __init__(self, func, initial_state):
self.func = func
self.state = initial_state
def call_func(self):
self.state = self.func(self.state)
def dothis(h):
h -= 1
print("Hour left : " + str(h))
return h
h = 24
capture_return = CaptureReturn(dothis, h)
sd.every(1).seconds.do(capture_return.call_func)
capture_return.call_func()
while True:
sd.run_pending()
time.sleep(1)
You can also modify the above example to use decorators for extra style points, e.g.:
import schedule as sd
import time
def capture_return(initial_state):
state = initial_state
def decorator(f):
def _wrapped():
nonlocal state
state = f(state)
return state
return _wrapped
return decorator
@capture_return(initial_state=24)
def dothis(h):
h -= 1
print("Hour left : " + str(h))
return h
sd.every(1).seconds.do(dothis)
print('manually calling:', dothis())
while True:
sd.run_pending()
time.sleep(1)
I have a problem that involves collecting data continuously from multiple sources.
My setup, as it currently stands, writes each data entry from each source to a MySQL db and then, with another Python program, does SELECTs that bring all the data together. I need to make INSERTs at roughly 1000/second, and as it is my SELECTs can take 15-20 seconds each.
The whole process takes so long that the data is obsolete before I get to do anything useful with it.
I have created a toy example to try and demonstrate what I am looking for.
program 1 'generateClasses':
import time
import random
from datetime import datetime
class Race:
def __init__(self,name):
hist = {}
now = datetime.now()
self.name = name
self.now = now
hist[now] = 0
self.v = 0
self.hist = hist # example variable's.
def update(self,name,v):
now = datetime.now()
hist = self.hist
hist[now] = v
self.v = v
        self.now = now
self.hist = hist
class Looper:
def __init__(self,name):
self.a = Race(name)
def loop(self,name):
# simulating the streaming API
while True:
v = self.a.v
v += 1
self.a.update(name,v)
print(a,datetime.now(),v) # can i access this stream using the location displayed with the print(a)?
time.sleep(0.1) # this should be more like time.sleep(0.001)
def pickData(self,name):
v = self.v
self.loop(name)
print('The state at {} {} = '.format(self.now,self.v))
return self.hist
if __name__ == "__main__":
x = 'Some_ID'
a = Looper(x)
a.loop(x)
program 2:
from generateClasses import Looper
from datetime import datetime
import time
start_time = int((datetime.now() - datetime(1970, 1, 1)).total_seconds())
print(start_time)
x = 'Some_other_ID'
a = Looper(x)
print('this will print')
a.loop(x)
print('this wont ever print')
a.pickData(x)
# this last section is the functionality i am looking for in this program, but, as it is, it will never run.
x = 'Some_ID'
while True:
now_time = int((datetime.now() - datetime(1970, 1, 1)).total_seconds())
print(start_time)
if int(now_time-start_time) == 10:
a.pickData(x)
# b.pickData(x)
# c.pickData(x)
# d.pickData(x)
# make further actions.....
What happens currently in my example is that program 2 just creates its own loop using the class structure from the first program.
What I want is to call the pickData() method from program 2 at intervals of my choosing, against the loop that is running in the other program.
Is my best option an in-memory database and a faster computer?
Maybe something can be done with the object location shown when you print the instance name?
I have uploaded it to GitHub if anybody fancies a look.
I would be grateful for any suggestions.
Recommendations for further reading would also be appreciated.
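For reference, a minimal sketch of one possible arrangement, assuming a background thread is acceptable: program 2 runs Looper.loop() in a daemon thread and samples the shared state at intervals (this also assumes the stray print(a, ...) inside loop() is changed to print(self.a, ...) so the class can be imported and run from another module):

import threading
import time
from datetime import datetime

from generateClasses import Looper

x = 'Some_ID'
looper = Looper(x)

# Run the streaming loop in a daemon thread so this program keeps control.
stream = threading.Thread(target=looper.loop, args=(x,))
stream.daemon = True
stream.start()

while True:
    time.sleep(10)  # sample every 10 seconds
    # looper.a is the Race instance; read its latest value directly.
    print('The state at {} {} = '.format(datetime.now(), looper.a.v))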
I have a huge list that I need to process, which takes some time, so I divide it into 4 pieces and multiprocess each piece with some function. It still takes a bit of time to run with 4 cores, so I figured I would add a progress bar to the function, so that it could tell me where each process is in working through its part of the list.
My dream was to have something like this:
erasing close atoms, cpu0 [######..............................] 13%
erasing close atoms, cpu1 [#######.............................] 15%
erasing close atoms, cpu2 [######..............................] 13%
erasing close atoms, cpu3 [######..............................] 14%
with each bar moving as the loop in the function progresses. But instead, I get a continuous flow:
etc, filling my terminal window.
Here is the main python script that calls the function:
from eraseCloseAtoms import *
from readPDB import *
import multiprocessing as mp
from vectorCalc import *
prot, cell = readPDB('file')
atoms = vectorCalc(cell)
output = mp.Queue()
# setup mp to erase grid atoms that are too close to the protein (dmin = 2.5A)
cpuNum = 4
tasks = len(atoms)
rangeSet = [tasks / cpuNum for i in range(cpuNum)]
for i in range(tasks % cpuNum):
rangeSet[i] += 1
rangeSet = np.array(rangeSet)
processes = []
for c in range(cpuNum):
na, nb = (int(np.sum(rangeSet[:c] + 1)), int(np.sum(rangeSet[:c + 1])))
processes.append(mp.Process(target=eraseCloseAtoms, args=(prot, atoms[na:nb], cell, 2.7, 2.5, output)))
for p in processes:
p.start()
results = [output.get() for p in processes]
for p in processes:
p.join()
atomsNew = results[0] + results[1] + results[2] + results[3]
Below is the function eraseCloseAtoms():
import numpy as np
import click
def eraseCloseAtoms(protein, atoms, cell, spacing=2, dmin=1.4, output=None):
print 'just need to erase close atoms'
if dmin > spacing:
print 'the spacing needs to be larger than dmin'
return
grid = [int(cell[0] / spacing), int(cell[1] / spacing), int(cell[2] / spacing)]
selected = list(atoms)
with click.progressbar(length=len(atoms), label='erasing close atoms') as bar:
for i, atom in enumerate(atoms):
bar.update(i)
erased = False
coord = np.array(atom[6])
for ix in [-1, 0, 1]:
if erased:
break
for iy in [-1, 0, 1]:
if erased:
break
for iz in [-1, 0, 1]:
if erased:
break
for j in protein:
protCoord = np.array(protein[int(j)][6])
trueDist = getMinDist(protCoord, coord, cell, vectors)
if trueDist <= dmin:
selected.remove(atom)
erased = True
break
if output is None:
return selected
else:
output.put(selected)
The accepted answer says it's impossible with click and that it would require a 'non trivial amount of code to make it work'.
While that is true, there is another module with this functionality out of the box: tqdm (https://github.com/tqdm/tqdm), which does exactly what you need.
Nested progress bars are covered in the docs (https://github.com/tqdm/tqdm#nested-progress-bars), among other things.
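A minimal sketch of how that could look here, assuming each worker process draws its own bar pinned to a separate line via the position argument (the worker function and chunking below are placeholders, not the code from the question; on some terminals you may additionally need tqdm's set_lock for clean output from multiple processes):

import time
from multiprocessing import Process
from tqdm import tqdm

def erase_close_atoms(chunk, worker_id):
    # Each worker draws its own bar; position keeps the bars on separate lines.
    for _ in tqdm(chunk, desc='erasing close atoms, cpu%d' % worker_id,
                  position=worker_id):
        time.sleep(0.01)  # stand-in for the real per-atom work

if __name__ == '__main__':
    chunks = [range(100)] * 4          # stand-in for the four list pieces
    workers = [Process(target=erase_close_atoms, args=(chunk, i))
               for i, chunk in enumerate(chunks)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()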
I see two issues in your code.
The first one explains why your progress bars are often showing 100% rather than their real progress. You're calling bar.update(i) which advances the bar's progress by i steps, when I think you want to be updating by one step. A better approach would be to pass the iterable to the progressbar function and let it do the updating automatically:
with click.progressbar(atoms, label='erasing close atoms') as bar:
for atom in bar:
erased = False
coord = np.array(atom[6])
# ...
However, this still won't work with multiple processes iterating at once, each with its own progress bar due to the second issue with your code. The click.progressbar documentation states the following limitation:
No printing must happen or the progress bar will be unintentionally destroyed.
This means that whenever one of your progress bars updates itself, it will break all of the other active progress bars.
I don't think there is an easy fix for this. It's very hard to interactively update a multiple-line console output (you basically need to be using curses or a similar "console GUI" library with support from your OS). The click module does not have that capability, it can only update the current line. Your best hope would probably be to extend the click.progressbar design to output multiple bars in columns, like:
CPU1: [###### ] 52% CPU2: [### ] 30% CPU3: [######## ] 84%
This would require a non-trivial amount of code to make it work (especially when the updates are coming from multiple processes), but it's not completely impractical.
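As a rough illustration of that direction (only a sketch, and not click internals): the workers could report their counts over a multiprocessing.Queue and the parent could redraw one combined line, for example:

import sys
import time
from multiprocessing import Process, Queue

def worker(worker_id, total, queue):
    for i in range(total):
        time.sleep(0.01)               # stand-in for the per-item work
        queue.put((worker_id, i + 1))  # report progress to the parent

def render(counts, total):
    # Draw all bars on one line, overwriting it with a carriage return.
    parts = []
    for cpu, done in enumerate(counts):
        filled = int(10 * done / total)
        parts.append('CPU%d: [%-10s] %3d%%' % (cpu + 1, '#' * filled,
                                               100 * done // total))
    sys.stdout.write('\r' + '  '.join(parts))
    sys.stdout.flush()

if __name__ == '__main__':
    total, n = 200, 3
    queue = Queue()
    procs = [Process(target=worker, args=(i, total, queue)) for i in range(n)]
    for p in procs:
        p.start()
    counts = [0] * n
    while any(c < total for c in counts):
        cpu, done = queue.get()
        counts[cpu] = done
        render(counts, total)
    for p in procs:
        p.join()
    sys.stdout.write('\n')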
For anybody coming to this later: I created the following, which seems to work okay. It overrides click.ProgressBar fairly minimally, although I had to override an entire method just to change a few lines at the bottom of it. It uses \x1b[1A\x1b[2K to clear the progress bars before rewriting them, so it may be environment dependent.
#!/usr/bin/env python
import time
from typing import Dict
import click
from click._termui_impl import ProgressBar as ClickProgressBar, BEFORE_BAR
from click._compat import term_len
class ProgressBar(ClickProgressBar):
def render_progress(self, in_collection=False):
# This is basically a copy of the default render_progress with the addition of in_collection
# param which is only used at the very bottom to determine how to echo the bar
from click.termui import get_terminal_size
if self.is_hidden:
return
buf = []
# Update width in case the terminal has been resized
if self.autowidth:
old_width = self.width
self.width = 0
clutter_length = term_len(self.format_progress_line())
new_width = max(0, get_terminal_size()[0] - clutter_length)
if new_width < old_width:
buf.append(BEFORE_BAR)
buf.append(" " * self.max_width)
self.max_width = new_width
self.width = new_width
clear_width = self.width
if self.max_width is not None:
clear_width = self.max_width
buf.append(BEFORE_BAR)
line = self.format_progress_line()
line_len = term_len(line)
if self.max_width is None or self.max_width < line_len:
self.max_width = line_len
buf.append(line)
buf.append(" " * (clear_width - line_len))
line = "".join(buf)
# Render the line only if it changed.
if line != self._last_line and not self.is_fast():
self._last_line = line
click.echo(line, file=self.file, color=self.color, nl=in_collection)
self.file.flush()
elif in_collection:
click.echo(self._last_line, file=self.file, color=self.color, nl=in_collection)
self.file.flush()
class ProgressBarCollection(object):
def __init__(self, bars: Dict[str, ProgressBar], bar_template=None, width=None):
self.bars = bars
if bar_template or width:
for bar in self.bars.values():
if bar_template:
bar.bar_template = bar_template
if width:
bar.width = width
def __enter__(self):
self.render_progress()
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.render_finish()
def render_progress(self, clear=False):
if clear:
self._clear_bars()
for bar in self.bars.values():
bar.render_progress(in_collection=True)
def render_finish(self):
for bar in self.bars.values():
bar.render_finish()
def update(self, bar_name: str, n_steps: int):
self.bars[bar_name].make_step(n_steps)
self.render_progress(clear=True)
def _clear_bars(self):
for _ in range(0, len(self.bars)):
click.echo('\x1b[1A\x1b[2K', nl=False)
def progressbar_collection(bars: Dict[str, ProgressBar]):
return ProgressBarCollection(bars, bar_template="%(label)s [%(bar)s] %(info)s", width=36)
@click.command()
def cli():
with click.progressbar(length=10, label='bar 0') as bar:
for i in range(0, 10):
time.sleep(1)
bar.update(1)
click.echo('------')
with ProgressBar(iterable=None, length=10, label='bar 1', bar_template="%(label)s [%(bar)s] %(info)s") as bar:
for i in range(0, 10):
time.sleep(1)
bar.update(1)
click.echo('------')
bar2 = ProgressBar(iterable=None, length=10, label='bar 2')
bar3 = ProgressBar(iterable=None, length=10, label='bar 3')
with progressbar_collection({'bar2': bar2, 'bar3': bar3}) as bar_collection:
for i in range(0, 10):
time.sleep(1)
bar_collection.update('bar2', 1)
for i in range(0, 10):
time.sleep(1)
bar_collection.update('bar3', 1)
if __name__ == "__main__":
cli()
It may not be the same as your dream, but you can use imap_unordered with click.progressbar to integrate with multiprocessing.
import multiprocessing as mp
import click
import time
def proc(arg):
time.sleep(arg)
return True
def main():
p = mp.Pool(4)
args = range(4)
results = p.imap_unordered(proc, args)
with click.progressbar(results, length=len(args)) as bar:
for result in bar:
pass
if __name__ == '__main__':
main()
Something like this will work if you are okay with having one progress bar:
import click
import threading
import numpy as np
reallybiglist = []
numthreads = 4
def myfunc(listportion, bar):
for item in listportion:
# do a thing
bar.update(1)
with click.progressbar(length=len(reallybiglist), show_pos=True) as bar:
threads = []
for listportion in np.split(reallybiglist, numthreads):
thread = threading.Thread(target=myfunc, args=(listportion, bar))
thread.start()
threads.append(thread)
for thread in threads:
thread.join()
So I've been at this for a while now. My Swing skills are not bad, but right now I seem to be missing something. I've been experimenting with Jython recently and have been using the Swing package from within a Jython script.
Let me start with this: my goal is to make a JPanel slide across the JFrame. Keeping to what I know, I tried something like this:
x = 0
while panel.getX() < frame.getWidth():
print "panel.getX(): %i" % panel.getX()
panel.setLocation(x,0)
x += 5
time.sleep(0.01)
But here's the gist of my confusion... I ran this in my code and it did exactly what I wanted. The JPanel slid across the JFrame and I could see it do so:
from javax.swing import *
from java.awt import *
from java.awt.event import *
import time
f = JFrame()
p = JPanel()
p.setPreferredSize(Dimension(300,300))
def slide():
x = 0
while p.getX() < f.getWidth():
print "p.getX(): %i" % p.getX()
p.setLocation(x,0)
x += 5
time.sleep(0.5)
p.add(JLabel("hi"))
f.getContentPane().add(p)
f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE)
f.setVisible(True)
f.pack()
slide()
BUT, when I add a tad more complexity with events, there is no reaction at all. No updating, repainting, or anything:
from javax.swing import *
from java.awt import *
from java.awt.event import *
import time
f = JFrame()
p = JPanel()
p.setPreferredSize(Dimension(300,300))
def slide(event):
x = 0
while p.getX() < f.getWidth():
print "p.getX(): %i" % p.getX()
p.setLocation(x,0)
x += 5
time.sleep(0.5)
b = JButton(actionPerformed=slide)
p.add(JLabel("hi"))
f.getContentPane().setLayout(BoxLayout(f.getContentPane(), BoxLayout.Y_AXIS))
f.getContentPane().add(p)
f.getContentPane().add(b)
f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE)
f.setVisible(True)
f.pack()
Any ideas???
Thanks,
Dave
The loop blocks the event dispatch thread, so no events or drawing can be processed. Use a Swing Timer for the sliding instead. The documentation is for Java but hopefully not too difficult to translate to Python.
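A rough Jython sketch of that approach, assuming the p and f objects from the question are in scope and the button is wired to slide exactly as before (the 15 ms delay and 5-pixel step are arbitrary):

from java.awt.event import ActionListener
from javax.swing import Timer

class SlideStep(ActionListener):
    # Moves the panel a little on every Timer tick, on the EDT.
    def __init__(self, panel, frame, step=5):
        self.panel = panel
        self.frame = frame
        self.step = step
        self.x = 0

    def actionPerformed(self, event):
        if self.panel.getX() >= self.frame.getWidth():
            event.getSource().stop()   # done sliding, stop the Timer
            return
        self.panel.setLocation(self.x, 0)
        self.x += self.step

def slide(event):
    # Each tick runs briefly on the EDT and returns, so repainting and
    # other events are processed between ticks.
    Timer(15, SlideStep(p, f)).start()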
I am trying to do an animation of a Particle Swarm Optimization using Python and Mayavi2.
The animation is working fine; my problem is that it is not possible to interact with the plot while it is animating the movement. Specifically, I would like to rotate the graph and zoom. Maybe someone has experience doing animations?
The way I do it is to first calculate the positions of the particles and store them. After the calculation is finished, I plot the positions of the particles at the first instant of time with points3d(), and then I iterate through time, updating the data using the set() method.
Is there a way to make it possible to rotate the graph? I have heard about something with threads and disabling the rendering, but I could not figure out how to do it in my code. Besides lots of other stuff, I have read:
http://code.enthought.com/projects/mayavi//docs/development/html/mayavi/mlab_animating.html
http://code.enthought.com/projects/mayavi//docs/development/html/mayavi/tips.html#acceleration-mayavi-scripts
but I can't see how to apply it.
Any suggestions?
Here is my code:
#!/usr/bin/env python
'''
#author rt
'''
import pylab as plt
from numpy import *
from mayavi import mlab
from threading import Thread # making plotting faster?
import ackley as ac
class Swarm(Thread, object):
'''
constructor for the swarm
initializes all instance variables
'''
def __init__(self,objective_function):
Thread.__init__(self)
# optimization options
self.omega = 0.9 # inertial constant
self.c1 = 0.06 # cognitive/private constant
self.c2 = 0.06 # social constant
self.objective = objective_function # function object
self.max_iteration = 100 # maximal number of iterations
# Swarm stuff
self.number = 0
self.best = [] # gbest; the global best position
self.particles = [] # empty list for particles
# temporary
self.min = self.objective.min
self.max = self.objective.max
self.best_evolution = []
# self.dimensions = 2 # dimensions NB!
'''
add particles to the swarm
find the best position of particle in swarm to set global best
'''
def add_particles(self, n):
for i in range(n):
particle = Particle(self)
if i == 0: # initialize self.best
self.best = particle.position
if particle.eval() < self._eval(): # check if there is a better and if, set it
self.best = copy(particle.position)
self.particles.append(particle) # append the particle to the swarm
def _eval(self):
return self.objective.evaluate(self.best)
def plot(self):
for i in range(self.max_iteration):
pos_x = []
pos_y = []
pos_z = []
#print pos_x
for particle in self.particles:
[x,y,z] = particle.trail[i]
pos_x.append(x)
pos_y.append(y)
pos_z.append(z)
#print pos_x
if i ==0:
g = mlab.points3d(pos_x, pos_y,pos_z, scale_factor=0.5)
ms =g.mlab_source
ms.anti_aliasing_frames = 0
ms.set(x=pos_x, y = pos_y, z = pos_z,scale_factor=0.5) #updating y value
#print pos_y
#ms.set(x=pos_x) # update x values
#ms.set(y=pos_y) #updating y value
#ms.set(z=pos_z) #updating y value
#for p in self.particles:
#p.plot()
def plot_objective(self):
delta = 0.1
v = mgrid[self.min:self.max:delta,self.min:self.max:delta]
z = self.objective.evaluate(v)
#mlab.mesh(v[0],v[1],z)
mlab.surf(v[0],v[1],z) # surf creates a more efficient data structure than mesh
mlab.xlabel('x-axis', object=None)
mlab.ylabel('y-axis', object=None)
mlab.zlabel('z-axis', object=None)
def _info(self):
self.plot()
print '----------------------------'
print 'The best result is:'
print 'Coordinates:', self.best
print 'Value: ', self._eval()
#print 'with ', nreval, 'evaluations'
print 'nr of particles: ', len(self.particles)
print '----------------------------'
def run(self):
self.plot_objective()
self.best = self.particles[0].get_position()
iteration = 0
while iteration < self.max_iteration:
#if iteration!= 0: obj.scene.disable_render = True
#disable_render = True
for particle in self.particles:
rnd_c1 = array([random.uniform(0,1),random.uniform(0,1)])
rnd_c2 = array([random.uniform(0,1),random.uniform(0,1)])
particle.velocity = self.omega * array(particle.velocity) + \
self.c1 * rnd_c1 * (array(particle.best) - array(particle.position)) + \
self.c2 * rnd_c2 * (array(self.best) - array(particle.position)) # TODO: change so independent rnd for components
particle.position = array(particle.position) + particle.velocity
if particle.eval() < particle.best_eval():
particle.best = copy(particle.position)
if particle.eval() < self._eval():
self.best = copy(particle.position)
particle.update() # add the point to the trail
iteration +=1
self.best_evolution.append(self._eval())
#obj.scene.disable_render = False
print 'finished: ', iteration
self._info()
'''
Class modeling particle
'''
class Particle():
def __init__(self, swarm):
self.swarm = swarm
x_rand = random.uniform(self.swarm.min,self.swarm.max)
y_rand = random.uniform(self.swarm.min,self.swarm.max)
self.position = array([x_rand,y_rand])
v_x_rand = random.uniform(self.swarm.min,self.swarm.max)
v_y_rand = random.uniform(self.swarm.min,self.swarm.max)
self.velocity = array([v_x_rand, v_y_rand])
self.size = 0.5
self.best = self.position
# visualization
self.trail = []
def plot(self):
[x,y] = self.position
z = self.eval()
mlab.points3d(x,y,z,scale_factor=self.size)
def eval(self):
return self.swarm.objective.evaluate(self.position)
def best_eval(self):
return self.swarm.objective.evaluate(self.best)
def get_position(self):
return self.position
def update(self):
[x,y] = self.position
z = self.eval()
#print [x,y,z]
self.trail.append([x,y,z])
def plot_trail(self,index):
[x,y,z] = self.trail[index]
mlab.points3d(x,y,z,scale_factor=self.size)
# Make the animation
mlab.figure(1, bgcolor=(0, 0, 0), size=(1300, 700)) # create a new figure with black background and size 1300x700
objective = ac.Ackley() # make an objective function
swarm = Swarm(objective) # create a swarm
nr_of_particles = 25 # nr of particles in swarm
swarm.add_particles(nr_of_particles)
swarm.run()
#swarm.start()
mlab.show()
print '------------------------------------------------------'
print 'Particle Swarm Optimization'
#objective.info()
print 'Objective function to minimize has dimension = ', objective.get_dimension()
print '# of iterations = ', 1000
print '# of particles in swarm = ', nr_of_particles
print '------------------------------------------------------'
In my case, even though I was somewhat able to do what Brandon Rhodes suggested for a mock program (https://stackoverflow.com/questions/16617814/interacting-with-mlab-scene-while-it-is-being-drawn), I could not manage to convert my already existing larger program.
Then I found this link: http://wiki.wxpython.org/LongRunningTasks
So I just sprinkled a lot of wx.Yield() calls inside my loops. This way I did not need to change my program structure, and I am able to interact with the window. I think better ways are explained in the link.
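For illustration, a small self-contained sketch of that trick, assuming the wx backend; the random walk below is just a stand-in for the swarm update, and the only relevant addition is the wx.Yield() call between frames:

import numpy as np
import wx
from mayavi import mlab

x, y, z = np.random.rand(3, 25)
g = mlab.points3d(x, y, z, scale_factor=0.05)
ms = g.mlab_source

for frame in range(200):
    x = x + 0.01 * (np.random.rand(25) - 0.5)  # stand-in for a PSO step
    ms.set(x=x, y=y, z=z)
    # Hand control back to the wx event loop so mouse rotation and
    # zooming keep working between frames.
    wx.Yield()

mlab.show()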
Your problem is that the wx event loop, which runs the Mayavi GUI window and listens for mouse clicking and dragging and responds by moving the scene, is not getting any time to run during your animation because you are keeping Python captive in your loop without ever letting it return control.
Instead of keeping control of the program with a loop of your own, you need to create a wx.Timer class that advances the scene by one frame update, and that then returns control to the wx event loop after scheduling itself again. It will look something like this:
import wx
...
class Animator(wx.Timer):
def Notify(self):
"""When a wx.Timer goes off, it calls its Notify() method."""
if (...the animation is complete...):
return
# Otherwise, update all necessary data to advance one step
# in the animation; you might need to keep a counter or
# other state as an instance variable on `self`
# [DATA UPDATE GOES HERE]
# Schedule ourselves again, giving the wx event loop time to
# process any pending mouse motion.
self.Start(0, oneShot=True) # "in zero milliseconds, call me again!"
I played with slightly higher values like 1 for the number of milliseconds that wx gets to run the UI with, but could not really tell a difference between that and just choosing 0 and having control returned "immediately".
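For concreteness, here is a sketch of how that skeleton might be filled in for the swarm animation above, assuming the trails have already been computed by run() and that ms is the mlab_source returned by the initial points3d call (the frame counter and the 1 ms delay are arbitrary choices, and none of this is tested against the poster's code):

import wx
from mayavi import mlab

class SwarmAnimator(wx.Timer):
    def __init__(self, swarm, ms):
        wx.Timer.__init__(self)
        self.swarm = swarm   # finished Swarm with particle trails
        self.ms = ms         # mlab_source of the points3d glyphs
        self.frame = 0

    def Notify(self):
        if self.frame >= self.swarm.max_iteration:
            return           # animation complete; stop rescheduling
        xs = [p.trail[self.frame][0] for p in self.swarm.particles]
        ys = [p.trail[self.frame][1] for p in self.swarm.particles]
        zs = [p.trail[self.frame][2] for p in self.swarm.particles]
        self.ms.set(x=xs, y=ys, z=zs)
        self.frame += 1
        # Reschedule and return, so wx can process mouse interaction.
        self.Start(1, oneShot=True)

# Usage sketch: after swarm.run() has filled the trails and the first
# frame has been drawn with g = mlab.points3d(...):
# animator = SwarmAnimator(swarm, g.mlab_source)
# animator.Start(1, oneShot=True)
# mlab.show()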