I created a robot that runs on Python. For its autonomous program I need it to drive a certain distance (say 10 feet). Currently I am using time to make it cover the distance, but is there any way to measure distance in the code to make it more exact? Thank you.
This is code from an old robotics competition I did, and I want to learn by improving it. I used these libraries:
import sys
import wpilib
import logging
from time import time
This is the code:
def autonomous_straight(self):
    '''Called when autonomous mode is enabled'''
    t0 = time()
    slow_forward = 0.25
    t_forward_time = 6.5
    t_spin_time = 11
    t_shoot_time = 11.5
    while self.isAutonomous() and self.isEnabled():
        t = time() - t0
        if t < t_forward_time:
            self.motor_left.set(slow_forward)
            self.motor_right.set(-slow_forward)
            self.motor_gobbler.set(1.0)
        elif t < t_spin_time:
            self.motor_left.set(2 * slow_forward)
            self.motor_right.set(2 * slow_forward)
            self.motor_shooter.set(-1.0)
        elif t < t_shoot_time:
            self.motor_mooover.set(.5)
        else:
            self.full_stop()
            self.motor_gear.set(-1.0)
        wpilib.Timer.delay(0.01)
    self.full_stop()
It looks like you're trying to drive a set distance based on time. While this can work over short distances at known speeds, it's generally much better to drive based on sensor feedback. What you'll want to look at are encoders, more specifically rotary encoders. Encoders are simply counters that keep track of 'ticks', where each 'tick' represents a fixed fraction of a rotation of the drive shaft. The distance traveled is then
d = (cir / res) * num

where d is distance traveled, cir is the wheel circumference, res is the encoder 'resolution' (number of ticks per rotation), and num is the current tick count (read off the encoder). For example, with 6" wheels and a 512-tick encoder, a count of 1000 ticks gives d = (pi * 6 / 512) * 1000, or roughly 36.8 inches.
# Example implementation
import wpilib
import math

# Some dummy speed controllers
leftSpeed = wpilib.Spark(0)
rightSpeed = wpilib.Spark(1)

# Create a new encoder linked to DIO pins 0 & 1
encoder = wpilib.Encoder(0, 1)

# Transform ticks into distance traveled.
# Assuming 6" wheels, 512 ticks per rotation.
def travelDist(ticks):
    return ((math.pi * 6) / 512) * ticks

# Define auto, takes travel distance in inches
def drive_for(dist=10):
    encoder.reset()  # start the tick count from zero
    while travelDist(encoder.get()) < dist:
        rightSpeed.set(1)
        leftSpeed.set(-1)  # negative because one controller must be inverted
    # Stop once the target distance has been covered
    rightSpeed.set(0)
    leftSpeed.set(0)
This simple implementation will let you call drive_for(dist) to travel a desired distance with a fair degree of accuracy. It does, however, have quite a few problems. We set a fixed motor output with no acceleration or feedback control, so error will build up over longer distances. The solution to this is PID control, and wpilib has constructs to simplify the math: take the difference between your setpoint and your current travel distance, and feed it into a PID controller as the error. The PID controller will spit out a new value to set your motor controllers to, and (with some tuning) it can account for acceleration, inertia, and overshoot.
docs on PID:
http://robotpy.readthedocs.io/projects/wpilib/en/latest/_modules/wpilib/pidcontroller.html?
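To make that idea concrete, here is a minimal proportional-only sketch (a full PID adds integral and derivative terms; the gain kP and the tolerance are made-up values you would tune on your robot):

# Proportional-only closed-loop driving, building on encoder/travelDist above.
def drive_for_closed_loop(dist=10, kP=0.05, tolerance=0.5):
    encoder.reset()
    while True:
        error = dist - travelDist(encoder.get())  # inches still to travel
        if abs(error) < tolerance:
            break
        output = max(min(kP * error, 1.0), -1.0)  # clamp to valid motor range
        rightSpeed.set(output)
        leftSpeed.set(-output)  # one side inverted, as before
        wpilib.Timer.delay(0.01)
    rightSpeed.set(0)
    leftSpeed.set(0)

A full PID controller replaces the kP * error line with the sum of proportional, integral, and derivative terms, which is exactly the bookkeeping wpilib's PIDController manages for you.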
I am trying to detect sudden loud noises in audio recordings. One way I have found to do this is by creating a spectrogram of the audio and adding the values of each column. By graphing the sum of the values in each column, one can see a spike every time there is a sudden loud noise. The problem is that, in my use case, I need to play a beep tone (with a frequency of 2350 Hz) while the audio is being recorded. The spectrogram of the beep looks like this:
As you can see, at the beginning and the end of this beep (which is a simple tone with a frequency of 2350 Hz), there are other frequencies present, which I have been unsuccessful in removing. These unwanted frequencies cause a spike when summing up the columns of the spectrogram, at the beginning and at the end of the beep. I want to avoid this because I don't want my beep to be detected as a sudden loud noise. See the spectrogram below for reference:
Here is the graph of the sum of each column in the spectrogram:
Obviously, I want to avoid false positives in my algorithm, so I need some way of getting rid of the spikes caused by the beginning and end of the beep. One idea I have had so far is to add random noise with a low decibel value above and/or below the 2350 Hz line in the beep spectrogram above. Ideally, this would create a tone that sounds very similar to the original, but instead of creating a spike when I add up all the values in a column, it would create more of a plateau. Is this idea a feasible solution to my problem? If so, how would I go about creating a beep sound with random noise like I described above using Python? Is there another, easier solution that I am overlooking? (A rough sketch of the noise idea follows the code below.)
Currently, I am using the following code to generate my beep sound:
import math
import wave
import struct

audio = []
sample_rate = 44100.0

def append_sinewave(
        freq=440.0,
        duration_milliseconds=500,
        volume=1.0):
    """
    The sine wave generated here is the standard beep. If you want something
    more aggressive you could try a square or sawtooth waveform, though there
    are some rather complicated issues with making high-quality square and
    sawtooth waves... which we won't address here :)
    """
    global audio  # using global variables isn't cool.

    num_samples = duration_milliseconds * (sample_rate / 1000.0)

    for x in range(int(num_samples)):
        audio.append(volume * math.sin(2 * math.pi * freq * (x / sample_rate)))

    return

def save_wav(file_name):
    # Open up a wav file
    wav_file = wave.open(file_name, "w")

    # wav params
    nchannels = 1
    sampwidth = 2

    # 44100 is the industry-standard sample rate - CD quality. If you need to
    # save on file size you can adjust it downwards. The standard for low
    # quality is 8000 or 8 kHz.
    nframes = len(audio)
    comptype = "NONE"
    compname = "not compressed"
    wav_file.setparams((nchannels, sampwidth, sample_rate, nframes, comptype, compname))

    # WAV files here are using short, 16-bit, signed integers for the
    # sample size. So we multiply the floating-point data we have by 32767, the
    # maximum value for a short integer. NOTE: it is theoretically possible to
    # use the floating-point -1.0 to 1.0 data directly in a WAV file, but it is
    # not obvious how to do that using the wave module in Python.
    for sample in audio:
        wav_file.writeframes(struct.pack('h', int(sample * 32767.0)))

    wav_file.close()

    return

append_sinewave(volume=1, freq=2350)
save_wav("output.wav")
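To illustrate the noise idea from above: the untested sketch below (the band width and noise volume are guesses) mixes a handful of quiet tones at random frequencies around the main 2350 Hz beep, reusing the audio/sample_rate globals and the math import from the code above:

import random

def append_noisy_sinewave(freq=2350.0, duration_milliseconds=500, volume=0.8,
                          noise_volume=0.01, n_noise_tones=20, band_hz=300.0):
    """Like append_sinewave, but mixes in quiet random tones near freq."""
    global audio
    num_samples = int(duration_milliseconds * (sample_rate / 1000.0))
    # Pick the random nearby frequencies once, so they are steady tones.
    noise_freqs = [freq + random.uniform(-band_hz, band_hz) for _ in range(n_noise_tones)]
    for x in range(num_samples):
        t = x / sample_rate
        sample = volume * math.sin(2 * math.pi * freq * t)
        for nf in noise_freqs:
            sample += noise_volume * math.sin(2 * math.pi * nf * t)
        audio.append(sample)

The volumes are chosen so that the summed sample stays within -1.0 to 1.0 before the 16-bit conversion in save_wav.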
Not really an answer - more of a question.
You're asking the speaker to go from stationary to a sine wave instantaneously - that is quite hard to do (though the frequencies aren't that high). If it does manage it, then the received signal should be the convolution of the top hat and the sine wave (sort of like what you are seeing, but without the data and the details of how you compute the spectrogram, it's hard to tell).
In either case you could check this by smoothing the start and end of your tone. Something like this for your tone generation:
tr = 0.05  # rise time, in seconds
tf = duration_milliseconds / 1000  # finish time of tone, in seconds

for x in range(int(num_samples)):
    t = x / sample_rate  # time of sample in seconds
    # Calculate a bump function
    bump_function = 1
    if 0 < t < tr:  # go smoothly from 0 to 1 at the start of the tone
        tp = 1 - t / tr
        bump_function = math.e * math.exp(1 / (tp**2 - 1))
    elif tf - tr < t < tf:  # go smoothly from 1 to 0 at the end of the tone
        tp = 1 + (t - tf) / tr
        bump_function = math.e * math.exp(1 / (tp**2 - 1))

    audio.append(volume * bump_function * math.sin(2 * math.pi * freq * t))
You might need to tune the rise time a bit. With this form of bump function you know that you have a full-volume tone from tr after the start until tr before the end. (The factor of math.e normalizes the bump: at tp = 0 the expression is e * e^(-1) = 1, and it falls smoothly to 0 as tp approaches 1.) Lots of other window functions exist, but if this smooths out the start/stop effects in your spectrogram, then you at least know why they are there. And prevention is generally better than trying to remove the effect in post-processing.
I'm trying to do the following:
1. Extract the melody of me asking a question (the word "Hey?" recorded to wav) so I get a melody pattern that I can apply to any other recorded/synthesized speech (basically, how F0 changes over time).
2. Use polynomial interpolation (Lagrange?) so I get a function that describes the melody (approximately, of course) - see the sketch after this list.
3. Apply the function to another recorded voice sample (e.g. the word "Hey." so it's transformed to the question "Hey?", or transform the end of a sentence to sound like a question [e.g. "Is it ok." => "Is it ok?"]). Voila, that's it.
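For step 2, something like this toy sketch is what I have in mind (the time/F0 points are made-up values; scipy ships a ready-made Lagrange interpolator):

import numpy as np
from scipy.interpolate import lagrange

t_points = np.array([0.0, 0.1, 0.2, 0.3])           # seconds (made-up values)
f0_points = np.array([180.0, 200.0, 240.0, 300.0])  # Hz (made-up values)

melody = lagrange(t_points, f0_points)  # polynomial through all the points
print(melody(0.15))                     # estimated F0 between the samples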
What have I done so far? Where am I?
Firstly, I dove into the math behind the FFT and the basics of signal processing. I want to do it programmatically, so I decided to use Python.
I performed the FFT on the entire "Hey?" voice sample and got data in the frequency domain (please don't mind the y-axis units, I haven't normalized them).
So far so good. Then I decided to divide my signal into chunks so I'd get clearer frequency information - peaks and so on. This was a blind shot, me trying to grasp the idea of manipulating the frequency and analyzing the audio data. It gets me nowhere, however; at least not in the direction I want.
Now, if I took those peaks, got an interpolated function from them, applied that function to another voice sample (a part of a voice sample that has also been FFT'd, of course) and performed the inverse FFT, I wouldn't get what I wanted, right?
I would only be changing the magnitude, so it wouldn't affect the melody itself (I think).
Then I used the spec and pyin methods from librosa to extract the actual F0-in-time - the melody of asking the question "Hey?". And as we would expect, we can clearly see an increase in frequency value:
And a non-question statement looks like this - let's say it's more or less constant.
The same applies to a longer speech sample:
Now, I assume that I have the blocks to build my algorithm/process, but I still don't know how to assemble them, because there are some blanks in my understanding of what's going on under the hood.
I think I need to find a way to map the F0-in-time curve from the spectrogram to the "pure" FFT data, get an interpolated function from it, and then apply that function to another voice sample.
Is there any elegant (inelegant would be ok too) way to do this? I need to be pointed in the right direction, because I can feel I'm close, but I'm basically stuck.
The code behind the above charts is taken straight from the librosa docs and other Stack Overflow questions; it's just a draft/POC, so please don't comment on style, if you could :)
fft in chunks:
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
import os
file = os.path.join("dir", "hej_n_nat.wav")
fs, signal = wavfile.read(file)
CHUNK = 1024
afft = np.abs(np.fft.fft(signal[0:CHUNK]))
freqs = np.linspace(0, fs, CHUNK)[0:int(fs / 2)]
spectrogram_chunk = freqs / np.amax(freqs * 1.0)
# Plot spectral analysis
plt.plot(freqs[0:250], afft[0:250])
plt.show()
spectrogram:
import librosa.display
import numpy as np
import matplotlib.pyplot as plt
import os
file = os.path.join("/path/to/dir", "hej_n_nat.wav")
y, sr = librosa.load(file, sr=44100)
f0, voiced_flag, voiced_probs = librosa.pyin(y, fmin=librosa.note_to_hz('C2'), fmax=librosa.note_to_hz('C7'))
times = librosa.times_like(f0)
D = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max)
fig, ax = plt.subplots()
img = librosa.display.specshow(D, x_axis='time', y_axis='log', ax=ax)
ax.set(title='pYIN fundamental frequency estimation')
fig.colorbar(img, ax=ax, format="%+2.f dB")
ax.plot(times, f0, label='f0', color='cyan', linewidth=2)
ax.legend(loc='upper right')
plt.show()
Hints, questions and comments much appreciated.
The problem was that I didn't know how to modify the fundamental frequency (F0). By modifying it I mean modifying F0 and its harmonics as well.
The spectrograms in question show, for each point in time, the power (dB) at each frequency.
Since I know which time bin holds which frequency from the melody (the green line below), I need to compute a function that represents that green line so I can apply it to other speech samples.
So I need to use some interpolation method that takes the sample F0 function points as parameters. One needs to remember that for an exact fit the degree of the polynomial should be one less than the number of points. The example doesn't do that, unfortunately, but the effect is acceptable for a prototype.
def _get_bin_nr(val, bins):
    the_bin_no = np.nan
    for b in range(0, bins.size - 1):
        if bins[b] <= val < bins[b + 1]:
            the_bin_no = b
        elif val > bins[bins.size - 1]:
            the_bin_no = bins.size - 1
    return the_bin_no

def calculate_pattern_poly_coeff(file_name):
    y_source, sr_source = librosa.load(os.path.join(ROOT_DIR, file_name), sr=sr)
    f0_source, voiced_flag, voiced_probs = librosa.pyin(y_source, fmin=librosa.note_to_hz('C2'),
                                                        fmax=librosa.note_to_hz('C7'), pad_mode='constant',
                                                        center=True, frame_length=4096, hop_length=512, sr=sr_source)
    all_freq_bins = librosa.core.fft_frequencies(sr=sr, n_fft=n_fft)
    f0_freq_bins = list(filter(lambda x: np.isfinite(x), map(lambda val: _get_bin_nr(val, all_freq_bins), f0_source)))
    return np.polynomial.polynomial.polyfit(np.arange(0, len(f0_freq_bins), 1), f0_freq_bins, 3)

def calculate_pattern_poly_func(coefficients):
    # polyfit returns coefficients lowest-order first; poly1d expects
    # highest-order first, so reverse them.
    return np.poly1d(coefficients[::-1])
The calculate_pattern_poly_coeff method calculates the polynomial coefficients.
Using NumPy's poly1d I can then build a function that can modify the speech. How to do that?
I just need to move all values up or down vertically at a certain point in time.
For instance, if I want to move all frequencies at the time bin of 0.75 seconds up by a factor of 3, the frequency there is increased and the melody at that point will sound higher.
Code:
def transform(sentence_audio_sample, mode=None, show_spectrograms=False, frames_from_end_to_transform=12):
    # cutting out silence
    y_trimmed, idx = librosa.effects.trim(sentence_audio_sample, top_db=60, frame_length=256, hop_length=64)

    stft_original = librosa.stft(y_trimmed, hop_length=hop_length, pad_mode='constant', center=True)

    stft_original_roll = stft_original.copy()
    rolled = stft_original_roll.copy()

    source_frames_count = np.shape(stft_original_roll)[1]
    sentence_ending_first_frame = source_frames_count - frames_from_end_to_transform
    sentence_len = np.shape(stft_original_roll)[1]

    for i in range(sentence_ending_first_frame + 1, sentence_len):
        if mode == 'question':
            by = int(_question_pattern(i) / 500)
        elif mode == 'exclamation':
            by = int(_exclamation_pattern(i) / 500)
        else:
            by = 0
        rolled = _roll_column(rolled, i, by)

    transformed_data = librosa.istft(rolled, hop_length=hop_length, center=True)

def _roll_column(two_d_array, column, shift):
    two_d_array[:, column] = np.roll(two_d_array[:, column], shift)
    return two_d_array
In this case I am simply rolling frequencies up or down for a given time bin.
This needs to be polished, as it doesn't take into consideration the actual state of the transformed sample; it just rolls it up/down according to the factor calculated from the polynomial function computed earlier.
You can check out the full code of my project on GitHub; the "audio" package contains the pattern calculator and the audio transform algorithm described above.
Feel free to ask if something's unclear :)
I attempted to use continuous action-space DDPG in order to solve the following control problem. The goal is to walk towards an initially unknown position within a bordered, two-dimensional area by being told how far one is from the target position at each step (similar to this children's game where the player is guided by "temperature" levels, hot and cold).
In this setup the target position is fixed, while the agent's starting position varies from episode to episode. The goal is to learn a policy for walking as quickly as possible towards the target position. The agent's observation consists only of its current position. For the reward design I took the Reacher environment as a model, since it involves a similar goal, and I similarly use a control reward and a distance reward (see code below). That is, getting closer to the target yields a greater reward, and the closer the agent gets, the more it should favor smaller actions.
For the implementation I used the openai/spinningup package. Concerning the network architecture, I figured that if the target position were known, the optimal action would be action = target - position, i.e. the policy pi(x) -> a could be modeled as a single dense layer and the target position would be learned in the form of the bias term: a = W @ x + b where, after convergence (ideally), W = -np.eye(2) and b = target. Since the environment imposes an action limit, such that the target position likely cannot be reached in a single step, I manually scale the computed actions as a = a / tf.norm(a) * action_limit. This preserves the direction towards the target and hence still resembles the optimal action. I used this custom architecture for the policy network, as well as a standard MLP architecture with 3 hidden layers (see code and results below).
Results
After running the algorithm for about 400 episodes in the MLP case and 700 episodes in the custom-policy case, with 1000 steps per episode, it didn't seem to have learned anything useful. During the test runs the average return didn't increase, and when I checked the behavior from three different starting positions, the agent always walks towards the (0, 1) corner of the area; even when it starts right next to the target position, it walks past it, heading for the (0, 1) corner. What I noticed is that the custom policy architecture resulted in a much smaller std. dev. of the test episode returns.
Question
I'd like to understand why the algorithm doesn't seem to learn anything in the given setup, and what needs to be changed to make it converge. I suspect a problem with the implementation or with the choice of hyper-parameters, as I can't spot any conceptual problem with learning a policy for this setup. However, I couldn't pinpoint the source of the problem, so I'd be happy if someone could help.
Average test return (custom policy architecture):
(vertical bars indicate std. dev. of test episode returns)
Average test return (MLP policy architecture):
Test cases (custom policy architecture):
Test cases (MLP policy architecture):
Code
import logging
import os

import gym
from gym.wrappers.time_limit import TimeLimit
import numpy as np
from spinup.algos.ddpg.ddpg import core, ddpg
import tensorflow as tf


class TestEnv(gym.Env):
    target = np.array([0.7, 0.8])
    action_limit = 0.01
    observation_space = gym.spaces.Box(low=np.zeros(2), high=np.ones(2), dtype=np.float32)
    action_space = gym.spaces.Box(-action_limit * np.ones(2), action_limit * np.ones(2), dtype=np.float32)

    def __init__(self):
        super().__init__()
        self.pos = np.empty(2, dtype=np.float32)
        self.reset()

    def step(self, action):
        self.pos += action
        self.pos = np.clip(self.pos, self.observation_space.low, self.observation_space.high)
        reward_ctrl = -np.square(action).sum() / self.action_limit**2
        reward_dist = -np.linalg.norm(self.pos - self.target)
        reward = reward_ctrl + reward_dist
        done = abs(reward_dist) < 1e-9
        logging.debug('Observation: %s', self.pos)
        logging.debug('Reward: %.6f (reward (ctrl): %.6f, reward (dist): %.6f)', reward, reward_ctrl, reward_dist)
        return self.pos, reward, done, {}

    def reset(self):
        self.pos[:] = np.random.uniform(self.observation_space.low, self.observation_space.high, size=2)
        logging.info(f'[Reset] New position: {self.pos}')
        return self.pos

    def render(self, *args, **kwargs):
        pass


def mlp_actor_critic(x, a, hidden_sizes, activation=tf.nn.relu, action_space=None):
    act_dim = a.shape.as_list()[-1]
    act_limit = action_space.high[0]
    with tf.variable_scope('pi'):
        # pi = core.mlp(x, list(hidden_sizes)+[act_dim], activation, output_activation=None)  # The standard way.
        pi = tf.layers.dense(x, act_dim, use_bias=True)  # Target position should be learned via the bias term.
        pi = pi / (tf.norm(pi) + 1e-9) * act_limit  # Prevent division by zero.
    with tf.variable_scope('q'):
        q = tf.squeeze(core.mlp(tf.concat([x, a], axis=-1), list(hidden_sizes)+[1], activation, None), axis=1)
    with tf.variable_scope('q', reuse=True):
        q_pi = tf.squeeze(core.mlp(tf.concat([x, pi], axis=-1), list(hidden_sizes)+[1], activation, None), axis=1)
    return pi, q, q_pi


if __name__ == '__main__':
    log_dir = 'spinup-ddpg'
    if not os.path.exists(log_dir):
        os.mkdir(log_dir)
    logging.basicConfig(level=logging.INFO)
    ep_length = 1000
    ddpg(
        lambda: TimeLimit(TestEnv(), ep_length),
        mlp_actor_critic,
        ac_kwargs=dict(hidden_sizes=(64, 64, 64)),
        steps_per_epoch=ep_length,
        epochs=1_000,
        replay_size=1_000_000,
        start_steps=10_000,
        act_noise=TestEnv.action_limit/2,
        gamma=0.99,  # Use a large gamma; because of the action limit it matters where we walk early in the episode.
        polyak=0.995,
        max_ep_len=ep_length,
        save_freq=10,
        logger_kwargs=dict(output_dir=log_dir)
    )
You are using a HUGE network (64x64x64) for a very small problem. That alone can be a big issue. You are also keeping 1M samples in your replay memory and, again, for a very simple problem this may be detrimental and slow convergence. Try a much simpler setup first (a 32x32 net and a 100,000-sample memory, or even a linear approximator with polynomial features). Also, how are you updating your target network? What is polyak? Finally, normalizing the action like that may not be a good idea; it's better to just clip it, or to use a tanh layer at the end.
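As a sketch of those suggestions against the question's code (same structure as the question's mlp_actor_critic; the sizes are starting points to tune, not tuned values):

def small_actor_critic(x, a, hidden_sizes=(32, 32), activation=tf.nn.relu, action_space=None):
    act_dim = a.shape.as_list()[-1]
    act_limit = action_space.high[0]
    with tf.variable_scope('pi'):
        # A tanh output layer bounds the action smoothly, replacing the
        # manual division by the norm.
        pi = act_limit * core.mlp(x, list(hidden_sizes) + [act_dim], activation,
                                  output_activation=tf.tanh)
    with tf.variable_scope('q'):
        q = tf.squeeze(core.mlp(tf.concat([x, a], axis=-1), list(hidden_sizes) + [1],
                                activation, None), axis=1)
    with tf.variable_scope('q', reuse=True):
        q_pi = tf.squeeze(core.mlp(tf.concat([x, pi], axis=-1), list(hidden_sizes) + [1],
                                   activation, None), axis=1)
    return pi, q, q_pi

Then pass ac_kwargs=dict(hidden_sizes=(32, 32)) and replay_size=100_000 to the ddpg call. (For reference, polyak is the soft target-network update coefficient: after each gradient step the target weights are updated as theta_targ <- polyak * theta_targ + (1 - polyak) * theta.)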
I'm new to ROS, and my task is to develop an algorithm that lets the robot move forward as long as it doesn't have an obstacle in front of it. However, it kept getting stuck on obstacles that I placed in front of it in the Gazebo simulation.
When I checked this in depth, I noticed that my robot seems to scan to the sides instead of in front. And when I checked the specs of the laser scanner, they said that the scan angles span at most -90 degrees to 90 degrees, and preferably much less than that. So it seems that I can't complete my mission due to "hardware" problems, but that seems strange to me.
Can anyone please help?
Here is my code:
#!/usr/bin/python
#
# stopper.py
#
# Created on:
# Author:
#
import rospy
import math
from geometry_msgs.msg import Twist
from sensor_msgs.msg import LaserScan


class Stopper(object):
    def __init__(self, forward_speed):
        self.forward_speed = forward_speed
        # Use float division; in Python 2, -10/180 would floor to -1.
        self.min_scan_angle = -10.0 / 180 * math.pi
        self.max_scan_angle = 10.0 / 180 * math.pi
        self.min_dist_from_obstacle = 0.5
        self.keep_moving = True
        self.command_pub = rospy.Publisher("/cmd_vel_mux/input/teleop", Twist, queue_size=10)
        self.laser_subscriber = rospy.Subscriber("scan", LaserScan, self.scan_callback, queue_size=1)

    def start_moving(self):
        rate = rospy.Rate(10)
        rospy.loginfo("Starting to move")
        while not rospy.is_shutdown() and self.keep_moving:
            self.move_forward()
            rate.sleep()

    def move_forward(self):
        move_msg = Twist()
        move_msg.linear.x = self.forward_speed
        self.command_pub.publish(move_msg)

    def scan_callback(self, scan_msg):
        for dist in scan_msg.ranges:
            if dist < self.min_dist_from_obstacle:
                self.keep_moving = False
                break
You should be able to select the angles you are interested in yourself. The -90 and +90 degrees are just the endpoints the laser scanner measures, so you get a dataset with a lot of distances at different angles. To detect obstacles in front of the robot you need to select one (or multiple) measurements in the middle of the dataset (my knowledge is rusty, but I assume the ranges are sorted from -90° to +90°, so 0° is in the middle of the array). So you may not want to loop through all distances in msg.ranges, but just a subset; see the sketch after the link below.
I found this tutorial that shows how to read out the data and access the values from different angles.
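A minimal sketch of that idea, reusing the question's min_scan_angle/max_scan_angle fields (the index math assumes ranges is ordered from angle_min upwards, which you can verify against the LaserScan message's own metadata):

    def scan_callback(self, scan_msg):
        # Convert the desired angular window into indices into ranges,
        # using the scan's own metadata instead of hard-coded positions.
        start = int((self.min_scan_angle - scan_msg.angle_min) / scan_msg.angle_increment)
        end = int((self.max_scan_angle - scan_msg.angle_min) / scan_msg.angle_increment)
        start = max(0, start)
        end = min(len(scan_msg.ranges) - 1, end)

        # Only the measurements roughly in front of the robot are checked.
        for dist in scan_msg.ranges[start:end + 1]:
            if dist < self.min_dist_from_obstacle:
                self.keep_moving = False
                break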
There are several threads asking for a way to simulate time-inhomogeneous Poisson processes in Python. The NeuroTools module offers a simple way to do so via the inh_poisson_generator() function. The help text for this function is reproduced at the bottom of this post. The function was originally designed to simulate spike trains, and uses the thinning method.
I would like to simulate a spike train over 2000 ms. The spike rate (in Hertz) changes every millisecond and lies between 20 spikes/second and 160 spikes/second. I've tried to simulate this using the following code:
import NeuroTools
import numpy as np
from NeuroTools import stgen
import matplotlib.pyplot as plt
import random

st_gen = stgen.StGen()
time = np.arange(0, 2000)
t_rate = []

for i in range(2000):
    t_rate.append(random.randrange(20, 161, 1))

t_rate = np.array(t_rate)

Psim = st_gen.inh_poisson_generator(rate=t_rate, t=time, t_stop=2000, array=True)
However, the code returns very few timestamps (e.g., array([397.55345905, 1208.79804513, 1478.03525045, 1982.63643262])), which doesn't make sense to me. I would appreciate any help with this.
inh_poisson_generator(self, rate, t, t_stop, array=False) method of NeuroTools.stgen.StGen instance
Returns a SpikeTrain whose spikes are a realization of an inhomogeneous
poisson process (dynamic rate). The implementation uses the thinning
method, as presented in the references.
Inputs:
rate - an array of the rates (Hz) where rate[i] is active on interval
[t[i],t[i+1]]
t - an array specifying the time bins (in milliseconds) at which to
specify the rate
t_stop - length of time to simulate process (in ms)
array - if True, a numpy array of sorted spikes is returned,
rather than a SpikeList object.
Note:
t_start=t[0]
References:
Eilif Muller, Lars Buesing, Johannes Schemmel, and Karlheinz Meier
Spike-Frequency Adapting Neural Ensembles: Beyond Mean Adaptation and Renewal Theories
Neural Comput. 2007 19: 2958-3010.
Devroye, L. (1986). Non-uniform random variate generation. New York: Springer-Verlag.
Examples:
>> time = arange(0,1000)
>> stgen.inh_poisson_generator(time,sin(time), 1000)
I don't really have an answer for you, but since this post helped me get started with NeuroTools, I thought I'd share my small example, which works fine.
For inh_poisson_generator() the rate input is in units of Hz and all times are in ms. I use an average rate of 1.6 spikes/ms over 2500 ms, so I expect to receive ~4000 events. The results confirm that just fine!
I guess it might be an issue that you are using a non-continuous rate. However, I barely know anything about the algorithm implemented in this function...
I hope my example can help you somehow!
import math

import numpy as np
import NeuroTools
from NeuroTools import stgen

v0 = 1.6        # spikes/ms
Amp = 1         # amplitude in spikes/ms
w = 4 / 1000.0  # periodic frequency in spikes/ms (float division!)

st_gen = stgen.StGen()
tstop = 2500.0
intervals = np.arange(0, tstop, 0.05)
rate = np.array([])

for tt in intervals:
    v_next = v0 + Amp * math.sin(2 * math.pi * w * tt)
    if v_next > 0.0:
        rate = np.append(rate, v_next * 1000)  # convert spikes/ms to Hz
    else:
        rate = np.append(rate, 0.0)

# Important to have rate in Hz and all other times in ms.
PSim = st_gen.inh_poisson_generator(rate=rate, t=intervals, t_stop=2500.0, array=True)

print(len(PSim))                     # number of generated spikes
print(np.mean(rate) / 1000 * tstop)  # expected number of spikes
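For reference, the thinning method itself is simple enough to sketch directly, in case the library's behaviour needs checking. This is a minimal NumPy version under the same conventions (rate in Hz, times in ms); the oversampling factor is a crude guess to make the candidate train long enough:

import numpy as np

def inh_poisson_thinning(rate, t, t_stop, rng=np.random):
    """rate[i] (Hz) applies on [t[i], t[i+1]); t and t_stop are in ms."""
    rate_max = np.max(rate)  # Hz
    # Candidate spikes: homogeneous Poisson process at the maximum rate.
    n_candidates = int(rate_max / 1000.0 * t_stop * 1.5) + 100
    isi = rng.exponential(1000.0 / rate_max, size=n_candidates)  # ISIs in ms
    spikes = np.cumsum(isi)
    spikes = spikes[spikes < t_stop]
    # Thinning: keep each candidate with probability rate(t) / rate_max.
    bin_idx = np.searchsorted(t, spikes, side='right') - 1
    keep = rng.uniform(size=spikes.size) < rate[bin_idx] / rate_max
    return spikes[keep]

With the question's inputs (rates between 20 and 160 Hz over 2000 ms, e.g. inh_poisson_thinning(t_rate, time, 2000)), this should give on the order of mean(rate)/1000 * 2000, roughly 180 spikes, which is a useful sanity check against the handful of timestamps reported above.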