GATT Characteristics with read-property not found by application - python

I am trying to develop an application that communicates with an external device over BLE. I have decided to use pygatt (Python) with BGAPI (using a BlueGiga dongle).
The device I am communicating with has a custom primary service with a set of characteristics. According to their specs they have 2 READ characteristics, 8 NOTIFY chars and 1 WRITE char. Initially, I want to read one of the two READ chars, but I am unable to do so. Their UUIDs are not recognized as characteristics. How can this be? I am 100% certain that they are entered correctly.
import pygatt
import bleconnect
import blelib
import logging

logging.basicConfig()
logging.getLogger('pygatt').setLevel(logging.DEBUG)

adapter = pygatt.BGAPIBackend(serial_port='/dev/tty.usbmodem1')
adapter.start()

# Find the device
result = adapter.scan(timeout=5)
for item in result:
    scan_name = item['name']
    scan_rssi = item['rssi']
    scan_address = item['address']
    if scan_name == bleconnect.TARGET_NAME:
        break

# Connect
device = adapter.connect(address=scan_address)
device.char_read(blelib.CHARACTERISTIC_DEVICE_FEATURES)
I can see in the debug messages that all the NOTIFY and WRITE characteristics are found, but not the two READ characteristics.
What am I missing?

This appears to be a shortcoming in the pygatt API. I managed to read the actual value only by using BGAPI directly.
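For reference, a minimal diagnostic sketch (not from the original answer) that lists what pygatt actually discovered on the connected device; this can help confirm whether the two READ UUIDs are missing from discovery entirely or just not matching the expected UUID strings:

# Assumes `device` is the connected pygatt BLEDevice from the question above.
for uuid, characteristic in device.discover_characteristics().items():
    print(uuid, hex(characteristic.handle))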


How to read data from ".data" section of a process?

So, as far as I understand from looking online, the .data section of the PE file contains the static and global variables of the program.
All I am trying to do is create a variable in Python, or write something in notepad.exe, and look it up in its memory using the Win32 API.
What I have done:
I looked into the PE file structure of notepad.exe and found the virtual address of its .data section.
I attached to the running process and found its base address, and then added the virtual address of the .data section to that (is this the right calculation? If not, I would appreciate any learning resources, as I couldn't find much).
I tried to read about 1000 bytes, and I found some data. But when looking for hello (in hex, of course), it didn't find anything.
For the sake of POC and simplicity, I am using the Pymem module to make the Win32 API calls.
Here is my code:
import pymem
import pefile

# Attach to the running Notepad process and take its main module
process = pymem.Pymem('Notepad.exe')
process_module = list(process.list_modules())[0]

# Parse the on-disk PE and locate the .data section header
pe = pefile.PE(process_module.filename)
data_section = [x for x in pe.sections if b'.data' in x.Name][0]

# In-memory address of .data = module base address + section RVA
section_address = process_module.lpBaseOfDll + data_section.VirtualAddress
section_size_to_scan = data_section.section_max_addr - data_section.section_min_addr
mem_dump = process.read_bytes(section_address, section_size_to_scan)

hello_in_hex = b'\x68\x65\x6c\x6c\x6f'
print(mem_dump)
print(hello_in_hex in mem_dump)
Obviously, the output is False.
Does anyone know where I am making a mistake?
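One detail worth checking, offered here only as a hedged diagnostic: Windows edit controls hold their text as UTF-16, so a search for the ASCII bytes of "hello" can miss the string even when it is inside the scanned range. Searching for both encodings narrows down whether the problem is the encoding or the location being scanned:

# Diagnostic sketch, assuming mem_dump from the snippet above.
needle_ascii = "hello".encode("ascii")       # 68 65 6c 6c 6f
needle_utf16 = "hello".encode("utf-16-le")   # 68 00 65 00 6c 00 6c 00 6f 00
print(needle_ascii in mem_dump, needle_utf16 in mem_dump)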

Custom phrases/words are ignored by Google Speech-To-Text

I am using python3 to transcribe an audio file with Google speech-to-text via the provided python packages (google-speech).
There is an option to define custom phrases which should be used for transcription as stated in the docs: https://cloud.google.com/speech-to-text/docs/speech-adaptation
For testing purposes I am using a small audio file with the contained text:
[..] in this lecture we'll talk about the Burrows wheeler transform and the FM index [..]
And I am giving the following phrases to see the effect if, for example, I want a specific name to be recognized with the correct spelling. In this example I want burrows to be transcribed as barrows:
config = speech.RecognitionConfig(dict(
    encoding=speech.RecognitionConfig.AudioEncoding.ENCODING_UNSPECIFIED,
    sample_rate_hertz=24000,
    language_code="en-US",
    enable_word_time_offsets=True,
    speech_contexts=[
        speech.SpeechContext(dict(
            phrases=["barrows", "barrows wheeler", "barrows wheeler transform"]
        ))
    ]
))
Unfortunately this does not seem to have any effect as the output is still the same as without the context phrases.
Am I using the phrases wrong, or is the model so confident that the word it hears is indeed burrows that it ignores my phrases?
PS: I also tried using the speech_v1p1beta1.AdaptationClient and speech_v1p1beta1.SpeechAdaptation instead of putting the phrases into the config but this only gives me an internal server error with no additional information on what is going wrong. https://cloud.google.com/speech-to-text/docs/adaptation
I have created an audio file to recreate your scenario and I was able to improve the recognition using model adaptation. To get started with this feature, I would suggest taking a look at this example and this post to better understand the adaptation model.
Now, to improve the recognition of your phrase, I performed the following:
I created a new audio file using the following page with the mentioned phrase.
in this lecture we'll talk about the Burrows wheeler transform and the FM index
My tests were based on this code sample. This code creates a PhraseSet and CustomClass that includes the word you would like to improve, in this case the word "barrows". You can also create/update/delete the phrase set and custom class using the Speech-To-Text GUI. Below is the code I used for the improvement.
from os import pathconf_names
from google.cloud import speech_v1p1beta1 as speech
import argparse


def transcribe_with_model_adaptation(
    project_id="[PROJECT-ID]", location="global", speech_file=None,
    custom_class_id="[CUSTOM-CLASS-ID]", phrase_set_id="[PHRASE-SET-ID]"
):
    """
    Create `PhraseSet` and `CustomClasses` to create custom lists of similar
    items that are likely to occur in your input data.
    """
    import io

    # Create the adaptation client
    adaptation_client = speech.AdaptationClient()

    # The parent resource where the custom class and phrase set will be created.
    parent = f"projects/{project_id}/locations/{location}"

    # Create the custom class resource
    adaptation_client.create_custom_class(
        {
            "parent": parent,
            "custom_class_id": custom_class_id,
            "custom_class": {
                "items": [
                    {"value": "barrows"}
                ]
            },
        }
    )
    custom_class_name = (
        f"projects/{project_id}/locations/{location}/customClasses/{custom_class_id}"
    )

    # Create the phrase set resource
    phrase_set_response = adaptation_client.create_phrase_set(
        {
            "parent": parent,
            "phrase_set_id": phrase_set_id,
            "phrase_set": {
                "boost": 0,
                "phrases": [
                    {"value": f"${{{custom_class_name}}}", "boost": 10},
                    {"value": f"talk about the ${{{custom_class_name}}} wheeler transform", "boost": 15}
                ],
            },
        }
    )
    phrase_set_name = phrase_set_response.name
    # print(u"Phrase set name: {}".format(phrase_set_name))

    # The next section shows how to use the newly created custom
    # class and phrase set to send a transcription request with speech adaptation

    # Speech adaptation configuration
    speech_adaptation = speech.SpeechAdaptation(
        phrase_set_references=[phrase_set_name])

    # Speech configuration object
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
        sample_rate_hertz=24000,
        language_code="en-US",
        adaptation=speech_adaptation,
        enable_word_time_offsets=True,
        model="phone_call",
        use_enhanced=True
    )

    # The name of the audio file to transcribe
    # storage_uri URI for audio file in Cloud Storage, e.g. gs://[BUCKET]/[FILE]
    with io.open(speech_file, "rb") as audio_file:
        content = audio_file.read()
    audio = speech.RecognitionAudio(content=content)
    # audio = speech.RecognitionAudio(uri="gs://biasing-resources-test-audio/call_me_fionity_and_ionity.wav")

    # Create the speech client
    speech_client = speech.SpeechClient()

    response = speech_client.recognize(config=config, audio=audio)

    for result in response.results:
        # The first alternative is the most likely one for this portion.
        print(u"Transcript: {}".format(result.alternatives[0].transcript))

    # [END speech_transcribe_with_model_adaptation]


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter
    )
    parser.add_argument("path", help="Path for audio file to be recognized")
    args = parser.parse_args()

    transcribe_with_model_adaptation(speech_file=args.path)
Once it runs, you will receive an improved recognition like the one below; however, note that the code tries to create a new custom class and a new phrase set each time it runs, so it may throw an "element already exists" error if you try to re-create the custom class and the phrase set.
Using the recognition without the adaptation
(python_speech2text) user@penguin:~/replication/python_speech2text$ python speech_model_adaptation_beta.py audio.flac
Transcript: in this lecture will talk about the Burrows wheeler transform and the FM index
Using the recognition with the adaptation
(python_speech2text) user@penguin:~/replication/python_speech2text$ python speech_model_adaptation_beta.py audio.flac
Transcript: in this lecture will talk about the barrows wheeler transform and the FM index
Finally, I would like to add some notes about the improvement and the code I used:
I have used a flac audio file as it is recommended for optimal results.
I have used model="phone_call" and use_enhanced=True as this was the model Cloud Speech-To-Text recognized for my own audio file. The enhanced model can also provide better results; see the documentation for more details. Note that this configuration might vary for your audio file.
Consider enabling data logging to Google to collect data from your audio transcription requests. Google then uses this data to improve its machine learning models used for speech recognition.
Once you have created the custom class and the phrase set, you can use the Speech-to-Text UI to update them and perform your tests quickly.
In the phrase set I have used the boost parameter. When you use boost, you assign a weighted value to phrase items in a PhraseSet resource. Speech-to-Text refers to this weighted value when selecting a possible transcription for words in your audio data: the higher the value, the higher the likelihood that Speech-to-Text chooses that word or phrase from the possible alternatives.
I hope this information helps you to improve your recognitions.

How to use a UHF RFID module with Python?

I'm trying to read and write data from an RFID tag using Python with this module:
https://es.aliexpress.com/item/32573423210.html
I can connect successfully over serial, but I don't know how to read any tag, because the datasheet for the PR9200 (the reader I am working with) uses this:
[Image: PR9200 operation packet format] It's a raw packet with only hex values that I need to send to the module for it to work.
My code in Python is this:
import serial

ser = serial.Serial(port="COM27", baudrate=115200, bytesize=8, parity='N', stopbits=1)

while ser.is_open == True:
    rfidtag = ''
    incomingByte = ser.read(21)
    print(incomingByte)
    for i in incomingByte:
        rfidtag = rfidtag + hex(i)
Some comments to jump-start your coding (a hedged sketch of these steps follows below):
- What you need to do is send a command to your device to ask it to start sending readings in auto mode. To do that you need to use ser.write(command). You can find a good template here.
- To prepare the command you just need to take the raw bytes (those hex values you mentioned) and put them together as, for instance, a bytearray.
- The only minor hurdle remaining is to calculate the CRC. There are some nice methods here on SO; just search for CRC-16 CCITT.
- Be aware that after writing you cannot start waiting for readings immediately; you have to wait first for the device to acknowledge the command. Hint: read 9 bytes.
- Lastly, take a new count of the bytes you will receive for each tag. I think they are 22 instead of 21.
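A minimal sketch of those steps, assuming a CRC-16/CCITT checksum appended big-endian; the actual header, command and payload bytes below are placeholders and must be taken from the PR9200 datasheet:

import serial

def crc16_ccitt(data, init=0xFFFF):
    # Bitwise CRC-16/CCITT (polynomial 0x1021); confirm the initial value and
    # the byte range the checksum covers against the PR9200 datasheet.
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if (crc & 0x8000) else (crc << 1)
            crc &= 0xFFFF
    return crc

# Placeholder frame: replace with the real command bytes from the datasheet.
command = bytearray([0xBB, 0x00, 0x22, 0x00, 0x00])
crc = crc16_ccitt(command)
frame = bytes(command) + bytes([(crc >> 8) & 0xFF, crc & 0xFF])

ser = serial.Serial(port="COM27", baudrate=115200, bytesize=8, parity='N', stopbits=1)
ser.write(frame)
ack = ser.read(9)        # the reader acknowledges the command first
tag_data = ser.read(22)  # then each tag reading arrives (22 bytes, per the note above)
print(ack.hex(), tag_data.hex())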
You can use the pyembedded Python library for this, which can give you the tag ID.
from pyembedded.rfid_module.rfid import RFID
rfid = RFID(port='COM3', baud_rate=9600)
print(rfid.get_id())
https://pypi.org/project/pyembedded/

Using Python hidapi to open device with multiple usages

I'm new to the Python hidapi although I've used the C version that it is based on before. The Python library is really different and I can't figure out how to use it from the one example that is provided. Does anyone know of any good documentation for this library?
If you're looking for a specific question, I'm trying to open an HID device that has multiple usages. My device has the following relevant characteristics:
vendor_id: 10618
product_id: 4
usage: 8
usage_page: 1
interface_number: 1
I have tried using hid_enumerate to select the dictionary that I want, but after instantiating the device object, the device will not open even though I know it's there (since it is listed in enumerate).
Although I would still like to find some decent documentation, after using the C hidapi header for reference I found an answer to my original question. In order to specify the usage, you must use open_path() instead of the regular open() method (see below):
import hid

# Get the list of all devices matching this vendor_id/product_id
vendor_id = 10618
product_id = 4
device_list = hid.enumerate(vendor_id, product_id)

# Find the device with the particular usage you want
device_dict = next(device for device in device_list if device['usage'] == 8)

device = hid.device()
device.open_path(device_dict['path'])  # Open from path
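As a hypothetical follow-up (not part of the original answer), once the device is opened via its path it behaves like any other hid.device, so a quick sanity check is a non-blocking read:

# Assumes `device` was opened with open_path() as above.
device.set_nonblocking(True)
data = device.read(64)   # up to 64 bytes; returns an empty list if nothing is pending
print(data)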

How to send hid data to device using python / pywinusb?

I'm trying to use pywinusb to send an output report to a PIC18F4550. The device can receive data, and I've tested it with a C# application, which worked fine. I can also read data from the device with pywinusb just fine, but I have a problem trying to send data.
Here's the code I'm running:
from pywinusb import hid

filter = hid.HidDeviceFilter(vendor_id=0x0777, product_id=0x0077)
devices = filter.get_devices()

if devices:
    device = devices[0]
    print "success"
    device.open()

    out_report = device.find_output_reports()[0]

    buffer = [0x00] * 65
    buffer[0] = 0x0
    buffer[1] = 0x01
    buffer[2] = 0x00
    buffer[3] = 0x01

    out_report.set_raw_data(buffer)
    out_report.send()
    device.close()
It produces this error:
success
Traceback (most recent call last):
File "C:\Users\7User\Desktop\USB PIC18\out.py", line 24, in <module>
out_report.send()
File "build\bdist.win32\egg\pywinusb\hid\core.py", line 1451, in send
self.__prepare_raw_data()
File "build\bdist.win32\egg\pywinusb\hid\core.py", line 1406, in __prepare_raw_data
byref(self.__raw_data), self.__raw_report_size) )
File "build\bdist.win32\egg\pywinusb\hid\winapi.py", line 382, in __init__
raise helpers.HIDError("hidP error: %s" % self.error_message_dict[error_code])
HIDError: hidP error: data index not found
Here is my code; it works with an MSP430F chip running TI's datapipe USB stack. This is basically HID input and output endpoints acting as a custom data pipe, letting me send 64 bytes in any format I want, with the exception that the first byte is an ID number (defined by TI, decimal 63) and the second byte is the number of pertinent or useful bytes in the packet (64-byte max packet, including the two bytes just described). It took me a while to figure this out, mostly because of the lack of documentation; the few examples that come with pywinusb are hard to learn from at best. Anyway, here is my code. It is working with my micro, so this should help you.
from pywinusb import hid

# Minimal handler for incoming reports (the original snippet referenced
# sample_handler without defining it).
def sample_handler(data):
    print(data)

filter = hid.HidDeviceFilter(vendor_id=0x2048, product_id=0x0302)
hid_device = filter.get_devices()
device = hid_device[0]
device.open()
print(hid_device)

target_usage = hid.get_full_usage_id(0x00, 0x3f)
device.set_raw_data_handler(sample_handler)
print(target_usage)

report = device.find_output_reports()
print(report)
print(report[0])

buffer = [0xFF] * 64
buffer[0] = 63  # TI datapipe ID byte
print(buffer)
report[0].set_raw_data(buffer)
report[0].send()
One area that may be screwing you up is here:
out_report = device.find_output_reports()[0]
Try using "out_report = device.find_output_reports()" without the "[0]" at the end.
Then use
out_report[0].set_raw_data(buffer)
and finally
out_report[0].send()
Hope this helps you out.
HID is really powerful, but hardly anybody uses proper HID enumeration; HID provides a very flexible (though not easy) schema for describing the format of its reports.
For a simple device I'd recommend using a simple byte-array usage to get started; this will give host applications context for your data items.
Anyway, raw reports here we go again...
Use starting_data = output_report.get_raw_data()[:] for any given output report, then change any 'raw' element directly.
Of course, ideally you'd have properly defined usage and you'd be able to change report items independently, without guessing bit widths and positions :-)
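A minimal sketch of that raw-report pattern, assuming an already-opened pywinusb device whose report layout matches your descriptor:

# Copy the current raw output report, tweak individual bytes, and send it back.
output_report = device.find_output_reports()[0]
starting_data = output_report.get_raw_data()[:]  # working copy of the raw report
starting_data[1] = 0x01                          # change whichever 'raw' element you need
output_report.set_raw_data(starting_data)
output_report.send()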
