Python: Get system audio in speech recognition instead of microphone

I am working on speech recognition in Python, but it only gets input from the microphone. How can I give the audio from the speakers as input to the speech recognition library?
The piece of code is given below:
import speech_recognition as sr
import time

r = sr.Recognizer()
with sr.Microphone() as source:  # using the microphone here; I would like to use the speaker output instead
    print("Talk Something...")
    audio = r.listen(source)
    print("Time Over...")

try:
    t1 = time.time()
    print("Text: " + r.recognize_google(audio))  # prints after converting speech to text
    t2 = time.time()
    print("T2-T1: ", t2 - t1)
except sr.UnknownValueError:
    print("Didn't understand the audio")
I have been struggling with this for a long time, and any help will be much appreciated. Thanks!

You can configure the device index as shown in the docs:
import speech_recognition as sr

for index, name in enumerate(sr.Microphone.list_microphone_names()):
    print("Microphone with name \"{1}\" found for `Microphone(device_index={0})`".format(index, name))
If LINEIN is not available as a separate input, you might just configure it as a recording source in the audio properties.
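On Windows, the system output often shows up as a capture device named something like "Stereo Mix" once it is enabled in the sound settings. Below is a minimal sketch of picking such a device by name; the name "Stereo Mix" is an assumption, so check your own device list first:
import speech_recognition as sr

r = sr.Recognizer()

# Look for a loopback-style capture device; "Stereo Mix" is a common name
# on Windows, but the exact name depends on your sound driver.
loopback_index = None
for index, name in enumerate(sr.Microphone.list_microphone_names()):
    if "stereo mix" in name.lower():
        loopback_index = index
        break

if loopback_index is None:
    raise RuntimeError("No loopback device found; enable 'Stereo Mix' in the sound settings")

with sr.Microphone(device_index=loopback_index) as source:
    print("Capturing system audio...")
    audio = r.listen(source)
print(r.recognize_google(audio))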

Related

How to get words timestamps using speech_recognition in Python?

I want to determine when exactly the speech in an audio file starts and ends. First, I am using the speech_recognition library to determine the speech content of the audio file:
import speech_recognition as sr

filename = './dir_name/1.wav'
r = sr.Recognizer()
with sr.AudioFile(filename) as source:
    audio_data = r.record(source)
text = r.recognize_google(audio_data, language="en-US")
print(text)
Running this code I get the correct output (the recognized speech content of the audio file). How can I determine where exactly the speech starts and ends? Let's assume the file contains ONLY speech. I have tried different methods, e.g. analysing the amplitude of the signal and its envelope, but I did not get good accuracy, so I started using the speech_recognition library in the hope it could be useful here.
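For what it's worth, speech_recognition does not expose segment timestamps through recognize_google, so a separate segmentation step is needed. One simple option is silence-based segmentation with pydub; a minimal sketch, assuming the threshold values below are tuned per file:
from pydub import AudioSegment
from pydub.silence import detect_nonsilent

seg = AudioSegment.from_wav('./dir_name/1.wav')

# Return [start_ms, end_ms] pairs for every non-silent stretch; the
# silence threshold is set relative to the file's average loudness.
ranges = detect_nonsilent(seg, min_silence_len=300, silence_thresh=seg.dBFS - 16)

for start_ms, end_ms in ranges:
    print("speech from {:.2f}s to {:.2f}s".format(start_ms / 1000, end_ms / 1000))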

Speech is not recognized via microphone in python speechRecognition module

When I run this program, the speech is recognized if I use a headset or an external microphone.
But if I use the laptop microphone (Microphone Array (Realtek(R) Audio)), the speech is not recognized. It's like the program hangs at the line audio = r.listen(source). If I say something and then plug in the headset, the program works.
The microphone in the laptop is working perfectly.
import speech_recognition as sr
import pyaudio

r = sr.Recognizer()
with sr.Microphone() as source:
    print("Listening......")
    audio = r.listen(source)

try:
    print("Recognizing...")
    query = r.recognize_google(audio, language='en-in')
    print(f"USER: {query}\n")
except Exception:
    print("Did not catch that")
Why is this happening? Can somebody please help me out?
Thank you.
r.adjust_for_ambient_noise(source)
I used this function and it's working now.
It calibrates the recognizer's energy threshold to the background noise level, so the recognizer can tell speech apart from silence.
Thank you everyone.
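For context, the call has to happen inside the with block and before listen(); a minimal sketch:
import speech_recognition as sr

r = sr.Recognizer()
with sr.Microphone() as source:
    # Sample the background noise for a second and set the energy
    # threshold accordingly, so listen() can tell speech from silence.
    r.adjust_for_ambient_noise(source, duration=1)
    print("Listening......")
    audio = r.listen(source)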
I will guess.
It probably uses the external microphone as the default device, and you have to set the other device manually.
In the documentation you can see
Microphone(device_index=None)
and
A device index is an integer between 0 and pyaudio.get_device_count() - 1
You can also see how to get a list of all available devices:
import speech_recognition as sr

for index, name in enumerate(sr.Microphone.list_microphone_names()):
    print("Microphone with name \"{1}\" found for `Microphone(device_index={0})`".format(index, name))
BTW: you can also read the Troubleshooting section - maybe it gives more ideas.
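Since the device index ultimately comes from PyAudio, you can also enumerate the devices at that level and filter for actual inputs; a sketch using PyAudio's standard device-query API:
import pyaudio

p = pyaudio.PyAudio()
for i in range(p.get_device_count()):
    info = p.get_device_info_by_index(i)
    # Only devices with input channels can be opened as a Microphone source.
    if info['maxInputChannels'] > 0:
        print(i, info['name'])
p.terminate()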

'Audio data must be audio data' error with google speech recognition in python

I am trying to load an audio file in Python and process it with Google speech recognition.
The problem is that, unlike in C++, Python doesn't show data types or classes, or give you access to memory to convert between one data type and another by creating a new object and repacking the data.
I don't understand how it's possible to convert from one data type to another in Python.
The code in question is below:
import speech_recognition as spr
import librosa

# librosa.load returns a float numpy array plus the sample rate
audio, sr = librosa.load('sample_data/metal.mp3')

# create a speech recognition object
r = spr.Recognizer()
r.recognize_google(audio)
The error is:
audio_data must be audio data
How do I convert the audio object so it can be used by Google speech recognition?
@Mich, I hope you have found a solution by now. If not, please try the below.
First, convert the .mp3 file to .wav format as a pre-processing step.
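For instance, pydub can do the conversion (it needs ffmpeg installed; the filenames just mirror the question):
from pydub import AudioSegment

# Decode the mp3 and re-export it as wav, which sr.AudioFile can read.
AudioSegment.from_mp3('sample_data/metal.mp3').export('sample_data/metal.wav', format='wav')
Then run the recognition on the converted file: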
import speech_recognition as sr

# Create an instance of the Recognizer class
recognizer = sr.Recognizer()

# Create an audio file instance from the converted file
audio_ex = sr.AudioFile('sample_data/metal.wav')
print(type(audio_ex))

# Create audio data
with audio_ex as source:
    audiodata = recognizer.record(source)
print(type(audiodata))

# Extract text
text = recognizer.recognize_google(audio_data=audiodata, language='en-US')
print(text)
You can select the speech language from https://cloud.google.com/speech-to-text/docs/languages
Additionally, you can set the minimum energy threshold for the loudness of the audio using the line below.
recognizer.energy_threshold = 300  # minimum audio energy to consider for recording
Librosa returns a numpy float array; you need to convert it back to raw 16-bit PCM bytes. Something like this:
raw_audio = np.int16(audio / np.max(np.abs(audio)) * 32767).tobytes()
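Those raw bytes then have to be wrapped in sr.AudioData together with the sample rate and sample width before recognize_google will accept them. A minimal sketch, assuming the filename from the question and a 16 kHz resample:
import numpy as np
import librosa
import speech_recognition as spr

# Ask librosa to resample to a fixed rate so we know what to tell AudioData.
audio, sample_rate = librosa.load('sample_data/metal.mp3', sr=16000)

# Scale the float waveform to 16-bit PCM and serialize it to bytes.
raw_audio = np.int16(audio / np.max(np.abs(audio)) * 32767).tobytes()

# AudioData needs the raw bytes, the sample rate, and the sample width
# in bytes (2 bytes = 16 bits).
audio_data = spr.AudioData(raw_audio, sample_rate, 2)

r = spr.Recognizer()
print(r.recognize_google(audio_data))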
You are probably better off loading the mp3 with an ffmpeg wrapper, without librosa: librosa does strange things to the audio (normalizes it, etc.). It's better to work with the raw data.
Try this with the speech recognizer:
import speech_recognition as spr

r = spr.Recognizer()
# WavFile is deprecated; AudioFile expects a wav/aiff/flac file, so
# convert the mp3 first.
with spr.AudioFile('sample_data/metal.wav') as source:
    audio = r.record(source)
print(r.recognize_google(audio))

Enabling Audio Input for Speech Recognition Library

How do I turn on audio input for all device indexes when using the Speech Recognition library? I want to pass in audio for testing, and there is a possibility that the library uses a different audio input device. How do I let it take the audio input from all the indexes?
You can use your microphone as the default audio input device; below is a code snippet:
import speech_recognition as sr

r = sr.Recognizer()  # the recognizer that will recognize our voice
with sr.Microphone() as source:  # use the microphone to record the voice command
    # speak.speak("What can i do for you!")  # requires a text-to-speech engine, not defined here
    print("Ask me Something!")  # prompt printed on the console
    audio = r.listen(source, timeout=60, phrase_time_limit=3)

data = ""
try:
    # recognize the recorded voice and echo back what was said
    data = r.recognize_google(audio, language="en-US")
    print("Dynamo thinks you said: " + data)  # print what the Google API recognized
except Exception:
    # if recognition fails, report it instead of crashing
    print("not able to hear you, or your microphone is not working well")
    exit()
First, you require the following things installed on your system:
1. Python
2. Speech Recognition Package
3. PyAudio
Now, you can run this code to check your version:
import speech_recognition as s_r
print(s_r.__version__)
Output
3.8.1
It will print the current version of your speech recognition package.
Then, set the microphone to accept sound:
my_mic = s_r.Microphone()
Here you may have to pass the parameter device_index=?
To recognize input from the microphone you have to use a Recognizer class. Let's just create one:
r = s_r.Recognizer()
Now, convert the speech into text in Python.
To convert using Google speech recognition, we can use the following line:
r.recognize_google(audio)
It will convert your voice to text and return it as a string.
You can simply print it using the below line:
print(r.recognize_google(audio))
Now the full program will look like this:
import speech_recognition as s_r

print(s_r.__version__)  # just to print the version, not required
r = s_r.Recognizer()
my_mic = s_r.Microphone(device_index=1)  # my device index is 1; you have to put your own device index

with my_mic as source:
    print("Say now!!!!")
    audio = r.listen(source)  # take voice input from the microphone

print(r.recognize_google(audio))  # print the voice input as text
If you run this, you should get an output.
If you don't get any output after waiting a few moments, check your internet connection.

Convert voice to text while talking in python

I made a program which allows me to speak and converts it to text. It converts my voice after I stop talking. What I want to do is convert my voice to text while I am talking.
https://www.youtube.com/watch?v=96AO6L9qp2U&t=2s&ab_channel=StormHack at min 2:31.
Pay attention to the top right corner of Tony's monitor: it converts his voice to text while he is talking. I want to do the same thing. Can it be done?
This is my whole program:
import speech_recognition as sr
import pyaudio

r = sr.Recognizer()
with sr.Microphone() as source:
    print("Listening...")
    audio = r.listen(source)

try:
    text = r.recognize_google(audio)
    print("You said : {}".format(text))
except:
    print("Sorry could not recognize what you said")
Solutions, tips, hints, or anything else would be greatly appreciated. Thank you in advance.
In order to do this, you will have to do what's called VAD: voice activity detection. A simple way to do this is to take a set of samples from the audio and measure their intensity; if it is above a certain threshold, you begin recording, and once the intensity falls below the threshold for a given period of time, you conclude the recording and send it off to the service. You can find an example of this here.
More complex systems use better heuristics to decide whether or not the user is speaking, such as the frequency content, and apply things like noise reduction. Other systems are also able to perform live speech-to-text as the user is speaking, like DeepSpeech 2.
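Here is a minimal sketch of that threshold idea with PyAudio and numpy; the frame size and RMS threshold are assumptions you would tune for your microphone:
import numpy as np
import pyaudio

RATE = 16000
FRAME = 1024      # samples per chunk (about 64 ms at 16 kHz)
THRESHOLD = 500   # RMS level that counts as speech; tune this

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                input=True, frames_per_buffer=FRAME)

speaking = False
while True:
    samples = np.frombuffer(stream.read(FRAME), dtype=np.int16)
    rms = np.sqrt(np.mean(samples.astype(np.float64) ** 2))
    if rms > THRESHOLD and not speaking:
        speaking = True
        print("speech started")
    elif rms <= THRESHOLD and speaking:
        speaking = False
        print("speech ended")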
To do what you want, you need to listen not for a complete sentence but for just a few words at a time. You then have to process the audio data and finally print the result. Here is a very basic implementation of it:
import speech_recognition as sr
import threading
import time
from queue import Queue

listen_recognizer = sr.Recognizer()
process_recognizer = sr.Recognizer()

audios_to_process = Queue()

def callback(recognizer, audio_data):
    # called from the background listener thread with each captured chunk
    if audio_data:
        audios_to_process.put(audio_data)

def listen():
    source = sr.Microphone()
    # listen in chunks of at most 3 seconds so results appear while talking
    stop_listening = listen_recognizer.listen_in_background(source, callback, 3)
    return stop_listening

def process_thread_func():
    while True:
        if audios_to_process.empty():
            time.sleep(2)
            continue
        audio = audios_to_process.get()
        if audio:
            try:
                text = process_recognizer.recognize_google(audio)
            except:
                pass
            else:
                print(text)

stop_listening = listen()
process_thread = threading.Thread(target=process_thread_func)
process_thread.start()

input()  # press Enter to stop
stop_listening()
As you can see, I use two recognizers, so one is always listening while the other processes the audio data.
The first one listens to data, adds the audio data to a queue, and listens again. At the same time, the other recognizer checks whether there is audio data in the queue to turn into text and print.
