I would like to know if it is possible to get all the possible transcripts that Google can generate from a given audio file. As you can see, it is only giving the transcript with the highest matching result.
from google.cloud import speech
import os
import io

os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = ''

# Creates google client
client = speech.SpeechClient()

# Full path of the audio file, Replace with your file name
file_name = os.path.join(os.path.dirname(__file__), "test2.wav")

# Loads the audio file into memory
with io.open(file_name, "rb") as audio_file:
    content = audio_file.read()

audio = speech.RecognitionAudio(content=content)

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    audio_channel_count=1,
    language_code="en-gb"
)

# Sends the request to google to transcribe the audio
response = client.recognize(request={"config": config, "audio": audio})

print(response.results)

# Reads the response
for result in response.results:
    print("Transcript: {}".format(result.alternatives[0].transcript))
On your RecognitionConfig(), set a value for max_alternatives. When this is set to a value greater than 1, the response will include the other possible transcriptions.
max_alternatives (int)
Maximum number of recognition hypotheses to be returned. Specifically, the maximum number of SpeechRecognitionAlternative messages within each SpeechRecognitionResult. The server may return fewer than max_alternatives. Valid values are 0-30. A value of 0 or 1 will return a maximum of one. If omitted, will return a maximum of one.
Update your RecognitionConfig() to the code below:
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    audio_channel_count=1,
    language_code="en-gb",
    max_alternatives=10  # place a value between 0 - 30
)
I tested this using the sample audio from the GitHub repo of the Speech API. I used the code below for testing:
from google.cloud import speech
import os
import io

# Creates google client
client = speech.SpeechClient()

# Full path of the audio file, Replace with your file name
file_name = os.path.join(os.path.dirname(__file__), "audio.raw")

# Loads the audio file into memory
with io.open(file_name, "rb") as audio_file:
    content = audio_file.read()

audio = speech.RecognitionAudio(content=content)

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    audio_channel_count=1,
    language_code="en-us",
    max_alternatives=10  # used 10 for testing
)

# Sends the request to google to transcribe the audio
response = client.recognize(request={"config": config, "audio": audio})

for result in response.results:
    print(result.alternatives)
Output:
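If you want just the text and confidence of each hypothesis rather than the raw SpeechRecognitionAlternative objects, you can loop over them, for example:

for result in response.results:
    for alternative in result.alternatives:
        print("Transcript: {}".format(alternative.transcript))
        print("Confidence: {}".format(alternative.confidence))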
Related
I have been using the Google Text-to-Speech API because of how great the voices are. The only problem is that I've been trying to work out how to make it user friendly. The biggest thing is that Google Text-to-Speech only accepts text files with 5000 or fewer characters. The main issue I keep running into is that currently all I can do is use a single text file and copy and paste my content into it before saving. Does anyone know how I can process a folder full of text files to make it quicker? And also save the MP3s instead of overwriting them?
# [START tts_ssml_address_imports]
from google.cloud import texttospeech
import os
import html

os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = ''
# [END tts_ssml_address_imports]


# [START tts_ssml_address_audio]
def ssml_to_audio(ssml_text, outfile):
    # Generates SSML text from plaintext.
    #
    # Given a string of SSML text and an output file name, this function
    # calls the Text-to-Speech API. The API returns a synthetic audio
    # version of the text, formatted according to the SSML commands. This
    # function saves the synthetic audio to the designated output file.
    #
    # Args:
    #   ssml_text: string of SSML text
    #   outfile: string name of file under which to save audio output
    #
    # Returns:
    #   nothing

    # Instantiates a client
    client = texttospeech.TextToSpeechClient()

    # Sets the text input to be synthesized
    synthesis_input = texttospeech.types.SynthesisInput(text=ssml_text)

    # Builds the voice request, selects the language code ("en-US") and
    # the SSML voice gender ("MALE")
    voice = texttospeech.types.VoiceSelectionParams(
        language_code='en-US',
        name="en-US-Wavenet-D",
        ssml_gender=texttospeech.enums.SsmlVoiceGender.MALE)

    # Selects the type of audio file to return
    audio_config = texttospeech.types.AudioConfig(
        audio_encoding="LINEAR16", pitch=0, speaking_rate=0.9)

    # Performs the text-to-speech request on the text input with the selected
    # voice parameters and audio file type
    response = client.synthesize_speech(synthesis_input, voice, audio_config)

    # Writes the synthetic audio to the output file.
    with open(outfile, 'wb') as out:
        out.write(response.audio_content)
        print('Audio content written to file ' + outfile)
# [END tts_ssml_address_audio]


def main():
    # test example address file
    file = 'input_text.txt'
    with open(file, 'r') as f:
        text = f.read()
    ssml_text = text
    ssml_to_audio(ssml_text, 'file_output_speech.mp3')
# [END tts_ssml_address_test]


if __name__ == '__main__':
    main()
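To process a whole folder of text files and keep every MP3 instead of overwriting a single output, one option is to loop over the directory and derive each output name from the input file name. This is just a sketch reusing the ssml_to_audio function above; the folder names input_texts and audio_output and the helper name batch_to_audio are made up for the example:

import os

def batch_to_audio(input_dir, output_dir):
    # Convert every .txt file in input_dir to its own MP3 in output_dir,
    # naming each output after its source file so nothing gets overwritten.
    os.makedirs(output_dir, exist_ok=True)
    for name in sorted(os.listdir(input_dir)):
        if not name.endswith('.txt'):
            continue
        with open(os.path.join(input_dir, name), 'r') as f:
            text = f.read()
        out_name = os.path.splitext(name)[0] + '.mp3'
        ssml_to_audio(text, os.path.join(output_dir, out_name))

batch_to_audio('input_texts', 'audio_output')

Files longer than 5000 characters would still need to be split before being sent to the API.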
I already tried this code to convert my large WAV file to text:
import speech_recognition as sr

r = sr.Recognizer()
hellow = sr.AudioFile('hello_world.wav')
with hellow as source:
    audio = r.record(source)
try:
    s = r.recognize_google(audio)
    print("Text: " + s)
except Exception as e:
    print("Exception: " + str(e))
But it is not converting it accurately; I feel the reason is the 'US' accent.
Please tell me how I can convert the whole large WAV file accurately.
Google's Speech-to-Text is very effective; try the link below:
https://cloud.google.com/speech-to-text/
You can choose the language (English US in your case) and also upload files.
As #bigdataolddriver commented, 100% accuracy is not possible yet, and would be worth millions.
Google Speech-to-Text has three types of APIs: synchronous, asynchronous and streaming. The asynchronous one lets you convert up to ~480 minutes of audio, while the others only allow ~1 minute. The following is sample code to do the conversion.
filepath = "~/audio_wav/" #Input audio file path
output_filepath = "~/Transcripts/" #Final transcript path
bucketname = "callsaudiofiles" #Name of the bucket created in the step before
# Import libraries
from pydub import AudioSegment
import io
import os
from google.cloud import speech
from google.cloud.speech import enums
from google.cloud.speech import types
import wave
from google.cloud import storage
Speech-to-Text supports WAV files with LINEAR16 or MULAW encoded audio.
Below is the code to get the frame rate and channel count.
def frame_rate_channel(audio_file_name):
    with wave.open(audio_file_name, "rb") as wave_file:
        frame_rate = wave_file.getframerate()
        channels = wave_file.getnchannels()
        return frame_rate, channels
and the code below does the asynchronous conversion.
def google_transcribe(audio_file_name):
    file_name = filepath + audio_file_name

    # The name of the audio file to transcribe
    frame_rate, channels = frame_rate_channel(file_name)

    if channels > 1:
        stereo_to_mono(file_name)

    bucket_name = bucketname
    source_file_name = filepath + audio_file_name
    destination_blob_name = audio_file_name

    upload_blob(bucket_name, source_file_name, destination_blob_name)

    gcs_uri = 'gs://' + bucketname + '/' + audio_file_name
    transcript = ''

    client = speech.SpeechClient()
    audio = types.RecognitionAudio(uri=gcs_uri)

    config = types.RecognitionConfig(
        encoding=enums.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=frame_rate,
        language_code='en-US')

    # Detects speech in the audio file
    operation = client.long_running_recognize(config, audio)
    response = operation.result(timeout=10000)

    for result in response.results:
        transcript += result.alternatives[0].transcript

    delete_blob(bucket_name, destination_blob_name)

    return transcript
and this is how you write them to file
def write_transcripts(transcript_filename, transcript):
    f = open(output_filepath + transcript_filename, "w+")
    f.write(transcript)
    f.close()
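The google_transcribe() function above also calls stereo_to_mono, upload_blob and delete_blob, which are not shown here. Minimal sketches of those helpers, relying on the pydub and google.cloud.storage imports already listed above, could look like this:

def stereo_to_mono(audio_file_name):
    # Re-export the wav file as mono in place using pydub
    sound = AudioSegment.from_wav(audio_file_name)
    sound = sound.set_channels(1)
    sound.export(audio_file_name, format="wav")

def upload_blob(bucket_name, source_file_name, destination_blob_name):
    # Upload the local audio file to the Cloud Storage bucket
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob(destination_blob_name)
    blob.upload_from_filename(source_file_name)

def delete_blob(bucket_name, blob_name):
    # Delete the uploaded audio from the bucket after transcription
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob(blob_name)
    blob.delete()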
Kindly let me know if you need any further clarifications.
I am using the Cloud Speech-to-Text API to convert an audio file to a text file. I am executing it using Python; below is the code.
import io
import os

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "D:\\Sentiment_Analysis\\My Project 59503-717155d6fb4a.json"

# Imports the Google Cloud client library
from google.cloud import speech
from google.cloud.speech import enums
from google.cloud.speech import types

# Instantiates a client
client = speech.SpeechClient()

# The name of the audio file to transcribe
file_name = os.path.join(
    os.path.dirname('D:\CallADoc_VoiceImplementation\audioclip154173607416598.amr'),
    'CallADoc_VoiceImplementation',
    'audioclip154173607416598.amr')

# Loads the audio into memory
with io.open(file_name, 'rb') as audio_file:
    content = audio_file.read()

audio = types.RecognitionAudio(content=content)
config = types.RecognitionConfig(
    encoding=enums.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code='en-IN')

# Detects speech in the audio file
response = client.recognize(config, audio)

for result in response.results:
    print('Transcript: {}'.format(result.alternatives[0].transcript))
When I execute the code with the sample/test audio file named "audio.raw", the audio is converted and the result is as below.
runfile('C:/Users/sandesh.p/CallADoc/GoogleSpeechtoText.py', wdir='C:/Users/sandesh.p/CallADoc')
Transcript: how old is the Brooklyn Bridge
But with the same code, when I record an audio file and try to convert it, it gives an empty result like below:
runfile('C:/Users/sandesh.p/CallADoc/GoogleSpeechtoText.py', wdir='C:/Users/sandesh.p/CallADoc')
I have been trying to fix this for the past 2 days; please help me resolve it.
Try following the troubleshooting steps to get your audio into the appropriate settings.
For instance, your audio file should have the following settings, which are required for better results:
Encoding: FLAC
Channels: 1 (16-bit)
Sample rate hertz: 16000 Hz
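If your recording does not already match those settings, you can convert it before sending it to the API. A sketch using pydub (which needs ffmpeg installed, and an ffmpeg build that can decode AMR); the output file name here is just an example:

from pydub import AudioSegment

# Convert the recorded AMR file to mono, 16 kHz FLAC before transcription
sound = AudioSegment.from_file("audioclip154173607416598.amr", format="amr")
sound = sound.set_channels(1).set_frame_rate(16000)
sound.export("audioclip_converted.flac", format="flac")

With the converted file you would then set encoding=enums.RecognitionConfig.AudioEncoding.FLAC and sample_rate_hertz=16000 in your RecognitionConfig.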
The following is my code (I made some slight changes to the original example code):
import io
import os

# Imports the Google Cloud client library
from google.cloud import speech
from google.cloud.speech import enums
from google.cloud.speech import types

# Instantiates a client
client = speech.SpeechClient()

# The name of the audio file to transcribe
file_name = os.path.join(
    os.path.dirname(__file__),
    'C:\\Users\\louie\\Desktop',
    'TOEFL2.mp3')

# Loads the audio into memory
with io.open(file_name, 'rb') as audio_file:
    content = audio_file.read()

audio = types.RecognitionAudio(content=content)
config = types.RecognitionConfig(
    encoding=enums.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code='en-US')

# Detects speech in the audio file
response = client.recognize(config, audio)

for result in response.results:
    print('Transcript: {}'.format(result.alternatives[0].transcript))

text_file = open("C:\\Users\\louie\\Desktop\\Output.txt", "w")
text_file.write('Transcript: {}'.format(result.alternatives[0].transcript))
text_file.close()
I can only run this code directly from the Windows command prompt, since otherwise the system cannot find the GOOGLE_APPLICATION_CREDENTIALS. However, when I run the code, nothing happens. I followed all the steps and I could see the request traffic change on my console, but I cannot see any transcript. Could someone help me out?
You are trying to decode the TOEFL2.mp3 file, which is encoded as MP3, while you specify LINEAR16 audio encoding with
encoding=enums.RecognitionConfig.AudioEncoding.LINEAR16
You have to convert the MP3 to WAV first; see the information about AudioEncoding.
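One way to do that conversion is with pydub (it also relies on ffmpeg); a sketch, with the output path just an example, writing a 16 kHz mono WAV that matches the sample_rate_hertz=16000 in the config above:

from pydub import AudioSegment

# Decode the MP3 and write it back out as 16 kHz, mono, 16-bit PCM wav
sound = AudioSegment.from_mp3('C:\\Users\\louie\\Desktop\\TOEFL2.mp3')
sound = sound.set_channels(1).set_frame_rate(16000)
sound.export('C:\\Users\\louie\\Desktop\\TOEFL2.wav', format='wav')

Then point file_name at the new .wav file.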
I'm using the Google Speech API, and since I'm using the long-running functions for WAV files, all in pt-BR, the results come back with content such as "voc\303\252 hoje boa noite cart\303\243o".
How can I convert this back to UTF-8?
I already tried the .encode function, and I already checked whether there's a parameter to send, but I cannot find anything.
# [START def_transcribe_gcs]
def transcribe_gcs(gcs_uri):
    """Asynchronously transcribes the audio file specified by the gcs_uri."""
    from google.cloud import speech
    from google.cloud.speech import enums
    from google.cloud.speech import types

    client = speech.SpeechClient()

    audio = types.RecognitionAudio(uri=gcs_uri)
    config = types.RecognitionConfig(
        encoding=enums.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=8000,
        language_code='pt-BR')

    operation = client.long_running_recognize(config, audio)

    print('Waiting for operation to complete...')
    response = operation.result(timeout=300)

    # Print the first alternative of all the consecutive results.
    for result in response.results:
        print('Transcript: {}'.format(result.alternatives[0].transcript))
        print('Confidence: {}'.format(result.alternatives[0].confidence))

    ## This part is mine, the rest of the code belongs to Google
    file = open("Test.txt", "wb")
    file.write(str(response.results))
    file.close()
# [END def_transcribe_gcs]
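One likely cause is that str(response.results) writes the protobuf repr, where non-ASCII characters show up as escaped byte sequences. Writing the transcript strings themselves, with the file opened explicitly as UTF-8, avoids that; a sketch of only the file-writing part at the end of the function:

import io

# Write the transcript text itself (not the repr of the results),
# with the file explicitly encoded as UTF-8 so accented characters survive
with io.open("Test.txt", "w", encoding="utf-8") as out:
    for result in response.results:
        out.write(result.alternatives[0].transcript + u"\n")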