I've been working on an animatronic shooting gallery using a Raspberry Pi B+ and an Arduino Mega. Overall, things are going well except for one detail. I'm having trouble keeping the motor movements in sync with sound.
For example, say I got this talking pig. You hit the target and the pig says "Hey, Kid! Watch where you're pointing that thing! You'll put your eye out!" or something like that. Problem is, the motors making the pig move can't keep up with the audio; they lag behind and drift out of sync, and the longer the routine, the further they fall behind. What's more, the lag isn't consistent. It seems to depend on what else the computer is doing at the time. For example, when I had Audacity running alongside my program there was noticeably more lag than when my program was the only one running. I also notice a slight difference in lag between running my program in IDLE and running it from the terminal.
Some of my targets are more sensitive to synchronization than others. For example, if I had a rocket that went up to a whooshing sound, it's not really an issue if the motor moves a little faster or slower, but mouth movements look terrible if they're off by more than maybe a tenth of a second.
To control my motors, I use a pickled list of tuples, each containing: (0) the time in ms after the beginning of the routine when the position signal should be sent, (1) the number of the motor the signal is sent to, and (2) the position of the motor. The main program loops through all the targets and gives each a chance to check whether it is time to send its next motor position command.
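For illustration, a routine file holds something like this (made-up values and file name; the real files come from the recording setup described further down):

import cPickle as pickle

# One frame per tuple: (ms after routine start, motor number, position in degrees)
routine = [(0, 1, 0),       # close the mouth right at the start
           (120, 1, 35),    # open it to 35 degrees at 120 ms
           (300, 2, 10)]    # nudge motor 2 at 300 ms

with open('Pig/0.pkl', 'wb') as outgoing:
    pickle.dump(routine, outgoing)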
Here is my current code:
import time
import pygame.mixer
import os
import cPickle as pickle
import RPi.GPIO as GPIO
from Adafruit_PWM_Servo_Driver import PWM
GPIO.setmode(GPIO.BOARD)
chan_list = [17,18]
GPIO.setup(chan_list, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.add_event_detect(17, GPIO.FALLING)
GPIO.add_event_detect(18, GPIO.FALLING)
pygame.mixer.init(channels=1, frequency=22050)
pygame.mixer.set_num_channels(10)
pwm = PWM(0x40)
pwm.setPWMFreq(60)
game_on = 1
class Bot:
    def __init__(self, folder, servopins):
        self.startExecute = 0
        self.finishExecute = 0
        self.folder = folder # name of folder containing routine sound and motor position data
        self.servoPins = servopins # list of servo motor pins for this bot
##        self.IRpin = IRpin
        self.startTime = 0 # start time of current routine in ms. Begins at 0 ms
        self.counter = 0 # next frame of routine to be executed
        self.isRunning = 0 # 1 if bot is running a routine, 0 if not. Target is inactive while 1
        self.routine = 0 # position of current routine in self.routines
##        self.numRoutines = 0
        self.sounds = [] # list containing sound objects for routines. One per routine
        self.routines = [] # list of lists of tuples containing time, motor number, and motor position. One per routine
        self.list_dir = os.listdir(self.folder) # names of all routine sound and motor position data files
        self.list_dir.sort()
        self.currentFrame = () # current routine frame (execution time, motor number, motor position) waiting to be executed
        # append all routine sound files into a list
        for filo in self.list_dir:
            if filo.endswith('.wav'):
                self.sounds.append(pygame.mixer.Sound(self.folder + filo))
                print filo
        # append all routine motor position files into a list
        for filo in self.list_dir:
            if filo.endswith('.pkl'):
                self.incoming = open(self.folder + filo)
                self.pklroutine = pickle.load(self.incoming)
                self.routines.append(self.pklroutine)
                self.incoming.close()
                print filo
#        self.sound = pygame.mixer.Sound(str(self.routine) + '.wav')
#        self.motorfile = open(str(self.routine) + '.pkl')

    # starts a routine running. Resets counter to first frame of routine.
    # Starts routine timer. Starts routine sound object playing. Loads first frame of routine
    def run(self):
        self.isRunning = 1
        self.startTime = round(time.clock() * 1000)
        self.sounds[self.routine].play()
        self.currentFrame = self.routines[self.routine][self.counter]

    def execute(self):
        if self.counter == 0:
            self.startExecute = round(time.clock() * 1000)
        if self.currentFrame[0] <= (round(time.clock() * 1000) - self.startTime):
            pwm.setPWM(self.servoPins[self.currentFrame[1] - 1], 0, int(round(150 + (self.currentFrame[2] * 2.6))))
            if self.counter < (len(self.routines[self.routine]) - 2):
                self.counter = self.counter + 1
                self.currentFrame = self.routines[self.routine][self.counter]
            else:
                print (round(time.clock() * 1000) - self.startTime)
                self.finishExecute = round(time.clock() * 1000)
                self.counter = 0
                self.isRunning = 0
                if self.routine < (len(self.routines) - 1):
                    self.routine = self.routine + 1
                else:
                    self.routine = 0

bot1 = Bot('Pig/', [0, 1])
bot2 = Bot('Goat/', [2, 3])

while 1:
    if game_on:
        if GPIO.event_detected(17):
            bot1.run()
        if GPIO.event_detected(18):
            bot2.run()
        if bot1.isRunning:
            bot1.execute()
        if bot2.isRunning == 1:
            bot2.execute()
The list of tuples is created using two other programs, one on the Arduino Mega and another on my PC in Python. The Arduino is connected to a couple of potentiometers wired between +5V and ground, with the signal voltage from the middle terminal going into an analog input. The Arduino converts this analog value to a motor position in degrees and sends it over the serial port to the PC, which saves it in a tuple along with the motor number and the time the byte was received, appends the tuple to a list, and at the end of the program saves the list to a .pkl file. I will provide that code if somebody really wants to see it, but I don't believe the problem lies in the data produced by this process.
If you have any suggestions about how I could fix this code to make it work, that would be the preferable solution, because the solution I'm contemplating now seems like it might just be one massive PITA.
I'm thinking that I could use one audio channel (say, the left channel) for my audio, then in the other channel, I could have sine waves of different frequencies and amplitudes with each frequency being assigned to a specific motor. Say for example I have this one motor that moves a mouth. This motor will have an assigned frequency of 1000 Hz. Whenever the mouth needs to be open, there will be a 1000 Hz sine wave in the motor control channel. I would use an Arduino Mega to detect that sine wave, and send the PWM signal to the motor to move to a certain position. The position may even be determined by the amplitude of the sine wave. To make things a little simpler, I could just have 3 positions for the mouth, closed (no signal), open a little (small amplitude signal), and open a little more (larger amplitude signal).
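On the Pi side, to make that idea concrete, here is roughly how I picture building such a stereo file offline (untested; it assumes numpy and scipy are available, and the file name and mouth-open intervals are made up):

import numpy as np
from scipy.io import wavfile

CTRL_FREQ = 1000.0                          # frequency assigned to the mouth motor
rate, speech = wavfile.read('pig_line.wav') # mono int16 speech clip (hypothetical file)
speech = speech.astype(np.float32)

# Mouth-open intervals in seconds: (start, end, amplitude 0..1) -- made-up values
mouth = [(0.10, 0.25, 0.5), (0.40, 0.60, 1.0)]

t = np.arange(len(speech)) / float(rate)
control = np.zeros_like(speech)
for start, end, amp in mouth:
    mask = (t >= start) & (t < end)
    control[mask] = amp * 0.8 * 32767 * np.sin(2 * np.pi * CTRL_FREQ * t[mask])

# Left channel = speech for the speaker, right channel = control tone for the Arduino
stereo = np.column_stack([speech, control]).astype(np.int16)
wavfile.write('pig_line_ctrl.wav', rate, stereo)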
All of this would be done using this FFT library: http://wiki.openmusiclabs.com/wiki/ArduinoFFT
I would need to build a circuit to convert the AC signal from the RPi to a 0-5V signal. I have found this circuit here that looks like it plausibly might work, assuming I can find a way to cleanly amplify the signal from the RPi to +/- 2.5V. Source
Other than that, I don't know how difficult or effective it would be to implement this solution. For one, I'm not sure if I can send PWM signals to the motors and run the FFT library at the same time, and I've never worked with this library, or had any experience with FFT at all.
My fallback solution is to design the shooting gallery so that it is less dependent on synchronization. This would involve compromises such as making the mouth open and close at a steady rate rather than trying to move it in sync with the audio, and keeping triggered routines short (<= 5 seconds).
Okay, problem defined, now on to specific questions:
What can I do to make the current version of the program work without having to resort to analyzing a bunch of sine waves using FFT?
Failing that, any suggestions on the input circuit? What could I use to cleanly amplify the RPi signal to +/- 2.5V, and will the circuit I linked to actually work to convert that signal to a 0V-5V signal readable by an Arduino analog input?
Will using the FFT library interfere with sending the PWM signals to the motors?
How much of a PITA will it be to extract usable data from a bunch of sine waves using FFT?
The documentation for the FFT library is a little lacking. How do I set the range of frequencies to analyze? I see that it starts at 0 Hz (DC), but what's the high end? Can I set that? I only need maybe 30 usable bins. What frequencies should I use to get the clearest signals? I'm guessing I don't want to use the highest and lowest, because the lowest would take longer to detect, and the highest would be more distorted due to the poor quality of the audio coming out of the RPi. Should I set it to 64 bins and use the middle 30? And again, how do I determine the center frequency of each bin?
Gosh, who will answer all these questions?
1. Your program is doing OK; the Raspberry Pi itself is responsible for your trouble.
The OS has a timer that interrupts every process, usually 100 times per second. The OS then does whatever it needs to do and chooses another process to run. After some number of milliseconds or microseconds, according to the OS's preference, that process is interrupted in turn, and so on. This means your program can pause for at least 0.01 s (best case). I don't know about the Pi B+, but on the B, kernel interrupts interfered with software PWM as well because, as far as I remember, the Pi B has only one hardware PWM pin. You have more than one servo, and each has to be updated every 20 ms with the correct duty cycle to hold the wanted position. Your idea of using the audio output to control servos is really not so bad.
For that you do not need an FFT; you can just generate the sine wave from the formula (using numpy):
sine = A*np.sin(2*np.pi*np.arange(how_long)*freq/fs)
where A is the amplitude, freq is the desired frequency of the sine, fs is the sample rate of the audio output, and how_long is the number of samples you want to generate. Anyway, PWM is not a sine, it is a square signal, so you would not even need that: just N samples of zeros followed by N samples of ones.
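For example, a 100 ms control burst at 1 kHz in either form (just to show the shapes; fs and the numbers are arbitrary):

import numpy as np

fs = 22050                     # audio sample rate
freq = 1000.0                  # frequency assigned to one motor
n = fs // 10                   # 100 ms worth of samples

# sine burst at half amplitude
sine = 0.5 * np.sin(2 * np.pi * np.arange(n) * freq / fs)

# square "PWM-like" burst: N zeros followed by N ones, repeated
N = int(fs / freq / 2)                                       # half a period in samples
square = np.tile(np.concatenate([np.zeros(N), np.ones(N)]), n // (2 * N))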
I can recommend implementing a feedback check from the Arduino: you tell a motor to start moving, and when it does, you play the sound. Slow down the motors so they can keep in step with the sound.
The best way of doing this would be to build your motor controller around some MCU and not use the Raspberry Pi's GPIO for it, then talk to the MCU over I2C or RS232. This works very well.
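For example, with pyserial on the Pi side you could just stream (motor, position) pairs and let the MCU worry about the PWM timing. A sketch; the port name, baud rate and two-byte protocol here are made up:

import serial

# Made-up two-byte protocol: one byte motor number, one byte position in degrees
link = serial.Serial('/dev/ttyACM0', 115200, timeout=1)

def set_position(motor, degrees):
    link.write(bytearray([motor & 0xFF, int(degrees) & 0xFF]))

set_position(1, 90)   # e.g. mouth half open; the MCU holds the PWM from here on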
If you have multiple motors, make sure the chosen MCU has enough hardware PWM pins, or choose one with a high oscillator frequency and add a demultiplexer.
Yes, if you use the FFT, or start another program, or anything else, the PWM will suffer badly.
I think numpy is included in Raspbian and its FFT works very well, but you would not need it anyway. Maybe, just maybe, try pyaudio or alsaaudio instead of pygame to see which responds faster. On PyPI there is a nice module, SWMixer, that turns pyaudio into something like pygame.mixer, so you would not need to do anything extra, although I think it will be quite slow because of the manual mixing. pyaudio, however, can open multiple streams which you can write to in parallel. Also, for this purpose it would be great to have a real-time OS.
There are some prepared images for the Pi. A few years back I found an image from Machinoid: a precompiled Xenomai with Debian. The trouble was that the audio driver did not work; it is possible that somebody has fixed that by now. On an RTOS you can specify the time within which an action must be performed, making it effectively atomic, i.e. the OS will do everything it can to complete it in the specified time. In that manner the OS's interrupts and process switching are governed by the timeout of the required action.
Choose the easiest solution for you. I would first see what I can do with the feedback connection; if that does not work, I would try the audio approach, and if that does not work either, I would use an external IC for the PWM.
The audio approach has a limit, though: you cannot make two pigs talk at the same time. :D
Enjoy yourself. I would! :D
I'm working on a project with a Raspberry Pi and a Parallax 360 servo with feedback, using the pigpio library. What I'm trying to accomplish is to determine programmatically the min and max pulse width I can send to the servo in both rotation directions.
These values are shown in the servo's datasheet, but I would like to code a "calibrate" function since these absolute values might vary a little from unit to unit, as I verified with my classmates'. (BTW, the minimum PW is 1280 µs for max speed in one direction, whereas the maximum PW is 1720 µs for max speed in the opposite direction, and around 1500 µs means 0 RPM.)
The feedback signal is also documented on said datasheet, and I am able to read it. It is also a PWM signal which will vary from 2.9% to 97.1% duty cycle depending on the shaft angle. I thought that if I'm able to gather samples of this signal in a given window of time, I would be able to calculate how quickly it's spinning at different control signals.
Assuming that the servo speed is constant for a constant control signal, changes in this "rate" or "slope" of the feedback duty-cycle signal will indicate that the servo is accelerating or decelerating. Cool. This is the procedure I thought of to find the limits:
Start below the absolute minimum (1280 µs), where the servo will spin at its maximum speed, and start increasing the pulse width of the control signal until I can detect a deceleration in the feedback signal, indicating that I have found the real minimum control PW.
Continue increasing the pulse width of the control signal until the "slope" is zero, which means the servo has stopped and I have found the next bound.
Continue increasing the pulse width of the control signal until I detect an acceleration in the feedback signal, which indicates the servo has started to move again (in the opposite direction, of course).
Continue increasing the pulse width of the control signal, observing the acceleration "slope", until this slope is zero, which means the servo has reached its maximum speed.
The problem... I don't really have a clue about the maths involved in calculating this. This is my code so far (it's a method that is part of the class I'm writing to control the servo):
Thanks #101 for pointing out the issue with the variable names; before this I was coding the class itself, hence the confusing naming. Please don't take that into consideration!
def calibrate(self):
    __PW_STEP = 10                        # Pulse width increasing step
    __MIN_PW = self.__MAX_CW_PW - 100.0   # self.__MAX_CW_PW = 1280 µs
    __MAX_PW = self.__MAX_CCW_PW + 100.0  # self.__MAX_CCW_PW = 1720 µs
    __PW = __MIN_PW                       # Start from shortest to longest!
    __SAMPLE_TIME_PER_PW = 0.5            # In seconds, time interval to change the servo speed
    __timeMilestone = time.time()
    while __PW <= __MAX_PW:
        if __PW != self.__pi.get_servo_pulsewidth(self.controlPin):  # Only apply changes when needed
            self.__pi.set_servo_pulsewidth(self.controlPin, __PW)
        # I guess I should gather feedback signal samples around here.
        # I have another class to do so; I mean, a method that returns the read of a PWM signal, and I successfully tested it working.
        if (time.time() - __timeMilestone >= __SAMPLE_TIME_PER_PW):  # Blink-without-delay.ino reference ;)
            __timeMilestone = time.time()
            __PW += __PW_STEP             # Increase PWM control signal for next iteration!
Any ideas on how to deal with this? I'm stuck here, yet I feel I am so close. The main problem, I think, is that I don't understand part of the concept well enough to translate the next step into code.
Should I gather as many samples as I can in that time window? And then what? Since the value will sweep from 2.9% to 97.1% several times (or maybe not even once, if the servo is slow enough), how can I measure a speed change from that?
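For what it's worth, this is the kind of thing I'm imagining for the sampling step (just a sketch, not tested; read_duty_cycle() stands in for the method of my other class that returns the feedback duty cycle in percent):

import time

def measure_speed(read_duty_cycle, window=0.5):
    # read_duty_cycle() is assumed to return the feedback duty cycle in percent,
    # 2.9 at 0 degrees rising to 97.1 just before it wraps back around.
    start = time.time()
    angles = []
    while time.time() - start < window:
        dc = read_duty_cycle()
        angles.append((dc - 2.9) / (97.1 - 2.9) * 360.0)  # map to 0..360 degrees

    # Unwrap the wrap-arounds so the angle keeps growing (or shrinking) monotonically.
    # Assumes the shaft turns less than half a revolution between two samples.
    total = 0.0
    for prev, cur in zip(angles, angles[1:]):
        delta = cur - prev
        if delta > 180.0:
            delta -= 360.0
        elif delta < -180.0:
            delta += 360.0
        total += delta

    elapsed = time.time() - start
    return (total / 360.0) / elapsed   # signed revolutions per second

The "slope" I'm after would then just be the change in this value between consecutive control pulse widths.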
Thanks in advance! Sorry for long question,
Cheers.
I wanted to find the initial position of a stepper motor, if such a thing exists, so I could always rotate it in 90-degree increments, i.e. 512 steps (2048 steps for a full rotation). I've put four cups on the stepper motor and I want to use 0 degrees for cup 1, 90 degrees for cup 2, and so on. I'm using it with a BeagleBone Black and Python. So far I've only managed to move the motor by giving it a number of steps. I'm using the Adafruit_BBIO library to control the GPIOs from the BeagleBone.
Is it possible to get the motor's initial position, or to move it to an initial position? I've never used a stepper motor before.
Thank you.
No - it is not possible to determine the exact position of a stepper motor without additional information (inputs). As you've noticed, you can only move a certain number of steps, but unless you know where you started, you won't know where you end up.
This is usually solved by using another input, typically a limit switch, at a known location such that the switch closes when the moving part is directly over that location. When you first start up, you rotate the stepper until the switch closes, at which point you know the current location. Once you have calibrated your initial position, THEN you can determine your exact position by counting steps (assuming your motor doesn't ever slip!)
You see this a lot with inkjet printers; when you first turn them on, the print head will slide all the way to one side (where there is almost certainly some sort of detector). That is the printer finding its zero point.
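In code, the homing step is only a few lines. For example (a rough sketch, not tested; the pin name and step_once() are placeholders for whatever pins and stepping code you already have with Adafruit_BBIO, and the switch is assumed to be wired so the pin reads high until it closes):

import Adafruit_BBIO.GPIO as GPIO

HOME_SWITCH = "P8_14"            # placeholder pin for the limit/home switch
GPIO.setup(HOME_SWITCH, GPIO.IN)

def step_once():
    pass                         # placeholder: pulse your existing stepper driver pins one step

def home():
    # Rotate until the switch closes, then treat that position as step 0.
    steps = 0
    while GPIO.input(HOME_SWITCH):
        step_once()
        steps += 1
        if steps > 2048:         # a full revolution with no switch hit: something is wrong
            raise RuntimeError("home switch never triggered")
    return 0                     # cups are then at 0, 512, 1024 and 1536 steps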
Some alternatives to a switch:
If you don't need full rotation, you can use a servo motor instead. These DO have internal position sensing.
Another hack solution using a stepper would be to place a mechanical block at one extremity that will prevent your mechanism from passing. Then just rotate the stepper one full revolution in a given direction. You know that at some point you will have hit the block and have stopped. This isn't great; you have to be careful that running into the stop won't damage anything or knock anything out of alignment. Due to the nature of steppers, your step count may also be off by up to 3 steps, so this won't be super high precision.
I am trying to write a simple audio function generator in Python, to be run on a Raspberry Pi (model 2). The code essentially does this:
Generate 1 second of the audio signal (say, a sine wave, or a square wave, etc)
Play it repeatedly in a loop
For example:
import pyaudio
from numpy import linspace, sin, pi, int16

def note(freq, len, amp=1, rate=44100):
    t = linspace(0, len, len * rate)
    data = sin(2 * pi * freq * t) * amp
    return data.astype(int16)  # two byte integers

RATE = 44100
FREQ = 261.6

pa = pyaudio.PyAudio()
s = pa.open(output=True,
            channels=2,
            rate=RATE,
            format=pyaudio.paInt16,
            output_device_index=2)

# generate 1 second of sound
tone = note(FREQ, 1, amp=10000, rate=RATE)

# play it forever
while True:
    s.write(tone)
The problem is that every iteration of the loop results in an audible "tick" in the audio, even when using an external USB sound card. Is there any way to avoid this, rather than trying to rewrite everything in C?
I tried using the pyaudio callback interface, but that actually sounded worse (like maybe my Pi was flatulent).
The generated audio needs to be short because it will ultimately be adjusted dynamically with an external control, and anything more than 1 second latency on control changes just feels awkward. Is there a better way to produce these signals from within Python code?
You're hearing a "tick" because there's a discontinuity in the audio you're sending. One second of 261.6 Hz contains 261.6 cycles, so you end up with about half a cycle left over at the end:
You'll need to either change the frequency so that there are a whole number of cycles per second (e.g, 262 Hz), change the duration such that it's long enough for a whole number of cycles, or generate a new audio clip every second that starts in the right phase to fit where the last chunk left off.
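A sketch of that last option, keeping a running phase so each chunk begins exactly where the previous one ended (untested; I've used one channel and left out output_device_index to keep it simple):

import numpy as np
import pyaudio

RATE = 44100
FREQ = 261.6
AMP = 10000
CHUNK = RATE                     # one second per chunk

pa = pyaudio.PyAudio()
s = pa.open(output=True, channels=1, rate=RATE, format=pyaudio.paInt16)

phase = 0.0
while True:
    t = np.arange(CHUNK)
    chunk = AMP * np.sin(phase + 2 * np.pi * FREQ * t / RATE)
    # carry the phase over so the next chunk starts exactly where this one ended
    phase = (phase + 2 * np.pi * FREQ * CHUNK / RATE) % (2 * np.pi)
    s.write(chunk.astype(np.int16).tobytes())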
I was looking for a similar question to yours, and found a variation that plays a pre-calculated length by concatenating a bunch of pre-calculated chunks.
http://milkandtang.com/blog/2013/02/16/making-noise-in-python/
Using a for loop with a 1-second pre-calculated chunk and a "play_tone" function seems to generate smooth-sounding output, but this is on a PC. If it doesn't work for you, it may be that the Raspberry Pi has a different back-end implementation that doesn't like successive writes.
Pure tones in Psychopy are ending with clicks. How can I remove these clicks?
Tones generated within psychopy and tones imported as .wav both have the same problem. I tried adding 0.025ms of fade out in the .wav tones that I generated using Audacity. But still while playing them in psychopy, they end with a click sound.
Now I am not sure how to go ahead with this. I need to perform a psychoacoustic experiment and it can not proceed with tone presentation like that.
Crackling sounds or clicks are, to my knowledge, often associated with buffering errors. Many years back, I experienced similar problems on Linux systems when an incorrect bitrate was set. So there could be at least two possible culprits at work here: the bitrate, and the buffer size.
You already applied both an onset and offset ramp to allow the membranes to swing in/out, so this should not be the issue. (By the way, I think you meant 0.025 seconds instead of ms? Otherwise, the ramps would be too short!)
PyGame initializes the sound system with the following settings:
initPygame(rate=22050, bits=16, stereo=True, buffer=1024)
Whereas Pyo initializes it the following way:
initPyo(rate=44100, stereo=True, buffer=128)
The documentation of psychopy.sound states:
For control of bitrate and buffer size you can call psychopy.sound.init before
creating your first Sound object:
from psychopy import sound
sound.init(rate=44100, stereo=True, buffer=128)
s1 = sound.Sound('ding.wav')
So, I would suggest you:
Try out both sound backends, Pyo and PyGame -- you can change which one to use in the PsychoPy preferences under General / audio library. Change the field to ['pyo'] to use Pyo only, or to ['pygame'] to use only PyGame.
Experiment with different settings for bitrate and buffer size with both backends (Pyo, PyGame).
If you want to get started with serious psychoacoustics, however, I would suggest you do not use either of the proposed solutions, and get some piece of professional sound hardware or a data-acquisition board with analog outputs, which will deliver undistorted sound with sub-millisecond precision, such as the devices produced by National Instruments or competitors. The NI boards can be controlled from Python via PyLibNIDAQmx.
Clicks at the beginning and end of sounds often occur because the sound is stopped midway, so that the waveform jumps abruptly from some value to zero. Such a jump can only be represented by high-amplitude, high-frequency components superimposed on the signal, i.e. a click. So the solution is to make the wave stop while it is at zero.
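For self-generated tones, that usually means multiplying the start and end by a short raised-cosine ramp, roughly like this (a sketch with numpy; the 25 ms ramp length is just an example):

import numpy as np

def ramped_tone(freq, duration, rate=44100, ramp=0.025, amp=0.8):
    # Sine tone whose first and last `ramp` seconds fade in/out,
    # so it starts and ends at zero and produces no click.
    t = np.arange(int(duration * rate)) / float(rate)
    tone = amp * np.sin(2 * np.pi * freq * t)

    n_ramp = int(ramp * rate)
    envelope = np.ones_like(tone)
    fade = 0.5 * (1 - np.cos(np.linspace(0, np.pi, n_ramp)))  # raised cosine, 0 -> 1
    envelope[:n_ramp] = fade
    envelope[-n_ramp:] = fade[::-1]

    return (tone * envelope * 32767).astype(np.int16)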
Are you using an old version of psychopy? If yes, then upgrade. Newer versions add a Hamming window (fade in/out) to self-generated tones which should avoid the click.
For the .wav files, try adding (extra) silence in the end, e.g. 50 ms. It might be that psychopy stops the sound prematurely.
I'm trying to read the speed of a manual treadmill (the York Pacer 2120 - manual: http://www.yorkfitness.com.au/uploaded/pdf_40Pacer%202120%20Treadmill_5500.pdf) by intercepting the wire that comes out of its speed sensor. The understanding I've garnered by taking apart as much of the treadmill as I can is that the speed sensor is basically a magnet, attached to a big disk driven by the treadmill belt, that generates a current pulse every time it passes a coil of wire.
The wire that comes out of the speed sensor ends in a 3.5mm jack. I plugged this into my laptop's microphone port and recorded the "sound" of me walking at both high and low speeds. I've attached images of the waveform recorded in Audacity for low and high speed respectively.
My aim is to measure the speed of the treadmill in real time so that I can pass it as input into my game engine and control the speed of a character in game. I'm not sure what the best method to do this is but at the moment I'm trying to measure the distance between the "beats" in python using PyAudio.
To do this I've copied the beat detection code from the answer to another question (Detect beat and play (wav) file in a syncronised manner), but that gave me an unusably high level of false positives.
Does anyone have any ideas about how else I could go about getting a usable speed out of this signal? If you do, a code example would be very much appreciated. Beyond that, how else would people go about trying to measure the speed of a manual treadmill? I've tried everything from using a camera to measure the distance between pieces of tape stuck to the treadmill belt to physically sticking a mouse to the treadmill to measure the speed of the belt.
The sound files are here:
https://www.dropbox.com/s/jbyl8c3ajv9e6xg/Fast_Raw.wav?dl=0
https://www.dropbox.com/s/0fp1mzuixhf5uju/Slow_Raw.wav?dl=0
And the audacity projects here:
https://www.dropbox.com/s/3cjvo3m2ln2ldet/AudacityFiles.zip?dl=0
I might look here: Convert multi-channel PyAudio into NumPy array
From looking at the audio, you just need a simple trigger for when the signal goes below 0. You can likely modify the callback method to detect when the amplitude was positive and has then been negative for N samples, and count the occurrences per second to retrieve the speed.
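Roughly like this, on a chunk of samples already converted to a NumPy array (a sketch; the threshold and the N-sample debounce are things you'd tune against your recordings):

import numpy as np

def pulses_per_second(samples, rate, threshold=0.0, min_neg_samples=50):
    # Count positive-to-negative transitions that stay negative for a while;
    # each one should correspond to a single magnet pass.
    pulses = 0
    negative_run = 0
    was_positive = False
    for s in samples:
        if s > threshold:
            was_positive = True
            negative_run = 0
        else:
            negative_run += 1
            if was_positive and negative_run >= min_neg_samples:
                pulses += 1
                was_positive = False
    return pulses / (len(samples) / float(rate))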
I did eventually solve this but I gave up on PyAudio and used a Raspberry Pi instead. I open sourced the code if anyone happens to be interested: https://bitbucket.org/grootteam/gpio-treadmill-speed/