Using asammdf in Python to decode CAN frames

I might be misunderstanding something about asammdf in general, but I'm not really sure how to go about decoding a CAN frame.
Supposing I have a dbc with relevant messages defined, I get that I can use mdf.extract_bus_logging() to parse the signals, but I'm not really sure what to do from there.
I'm sending a raw payload + CAN frame ID, and I think I could get away with using cantools to parse the raw data (and then passing that data into an asammdf Signal), but it feels like there's some degree of support for this within asammdf.
I have an example below along the lines of what I want to do (using the motohawk.dbc file from the cantools examples here: https://cantools.readthedocs.io/en/latest/).
from asammdf import MDF, Signal
import cantools

filename = "~/mdf_dbc_play/tmp.dbc"
db = cantools.database.load_file(filename)
msg_bytes = [0, 0, 0xff, 0x01, 0x02, 0x03, 0x04, 0x05]
# Temporary bogus timestamp
timestamps = [0.1]

# Use cantools to decode the raw bytes
msg = db.get_message_by_name("EEC1")
unpacked_signals = msg.decode(msg_bytes)

# Using version 3.10 just in case we have some versioning issues with old tooling.
with MDF(version='3.10') as mdf:
    # I assume there's some way of getting the actual value we care about from an unpacked message.
    samples = unpacked_signals.get_values()
    sig = Signal(name="EEC1", samples=samples, timestamps=timestamps)
    mdf.append(sig)
    mdf.save("~/mdf_dbc_play/output.mdf")

For logging CAN traffic to an MF4 file, you can try this code from python-can:
https://github.com/hardbyte/python-can/blob/mf4-io/can/io/mf4.py
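Beyond that, if you want the cantools-plus-asammdf route sketched in the question, something along these lines might work (a minimal sketch: the DBC path, frame ID and payload are placeholders, and cantools' Message.decode returns a dict mapping signal name to decoded value):

import numpy as np
import cantools
from asammdf import MDF, Signal

db = cantools.database.load_file("tmp.dbc")  # placeholder DBC path
frame_id = 0x100                             # placeholder CAN frame ID
payload = bytes([0, 0, 0xFF, 0x01, 0x02, 0x03, 0x04, 0x05])
timestamp = 0.1

msg = db.get_message_by_frame_id(frame_id)
decoded = msg.decode(payload)  # {signal name: decoded value}

# One asammdf Signal per decoded CAN signal, all sharing the frame's timestamp.
# Enum-valued (choice) signals may decode to labels and need special handling.
signals = [
    Signal(samples=np.array([value]), timestamps=np.array([timestamp]), name=name)
    for name, value in decoded.items()
]

with MDF() as mdf:
    mdf.append(signals, comment="decoded CAN frame")
    mdf.save("output.mf4", overwrite=True)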

Related

Extract data field from a byte data type (data obtained using paramiko in Python)

I'm reading NVRAM remotely from a Mac over SSH using the paramiko package (Python 3), and my readback data stdout.read() has bytes as its data type.
data_readback= stdout.read()
print(type(data_readback)) #print data type of read back data
print(data_readback[0:60]) #printing first 60 characters of read back data
Resultant output:
Question:
I would like to extract the data assigned to the boot-args field (i.e. debug=0x104c0c) from this. How do I accomplish this? Note that the value debug=0x104c0c may be different on each readback.
I did try converting the data to a string with the code below, but I don't know how to extract the field from there either; it may require a regex.
boot_args_readback= stdout.read().decode("utf8")
Does this work for you?
data_as_bytes = b'boot-args\tdebug=0x104c0c\nauto-boot\ttrue\nboot-volume\tEF57347C'
data_as_string = str(data_as_bytes, "ascii")
output = data_as_string.split()
entries = dict([(x, y) for x, y in zip(output[::2], output[1::2])])
print(entries['boot-args']) # debug=0x104c0c
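If you'd rather go the regex route mentioned in the question, here is a sketch on the same decoded string (assuming the nvram output keeps its key<TAB>value, one-entry-per-line layout):

import re

data_as_bytes = b'boot-args\tdebug=0x104c0c\nauto-boot\ttrue\nboot-volume\tEF57347C'
data_as_string = data_as_bytes.decode("ascii")

# Capture everything after "boot-args<TAB>" up to the end of that line.
match = re.search(r"^boot-args\t(.*)$", data_as_string, flags=re.MULTILINE)
if match:
    print(match.group(1))  # debug=0x104c0c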

How to extract raw data from a PCAP file of 6LoWPAN / IEEE 802.15.4 traffic using Python/Scapy?

I am trying to extract the data from a capture of a 6LoWPAN-type device, but I'm only able to retrieve the raw payload.
I tried using Scapy and pypcapkit. Both provide only src, dst, type and the raw data.
>>> from scapy.all import *
>>> pcap = rdpcap("/content/sample.pcap")
>>> pcap
<sample.pcap: TCP:0 UDP:0 ICMP:0 Other:252>
>>> pcap[0]
<Ether dst=xx:xx:xx:xx:xx:xx src=xx:xx:xx:xx:xx:xx type=0x809a |<Raw load='A\xd8\xbdxV\xff\xff\x8bRk\x02ece.......
I'm expecting to extract the decoded fields the way Wireshark presents them.
Update 1:
I tried jNetPcap and pcap4j, but neither helped.
When I tried pyshark, it was able to present the data.
But the problem I'm facing now is that packet.6lowpan is rejected by the Python parser, since an identifier cannot start with a digit.
Ex:
pcap = pyshark.FileCapture("xxxx//xx//xxxx.pcapng")
for pkt in pcap:
    print(pkt.6lowpan)
This raises a syntax error.
Update 2:
The pkt.6lowpan issue was resolved by using pkt["6lowpan"] instead (per the solution from the pyshark team).
Updating the content here in case it helps someone else.
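For reference, a minimal pyshark sketch of that working approach (the capture path is a placeholder, and whether a "6lowpan" layer is present depends on how tshark dissects the file):

import pyshark

pcap = pyshark.FileCapture("sample.pcapng")  # placeholder capture file
for pkt in pcap:
    try:
        lowpan = pkt["6lowpan"]  # pkt.6lowpan is a syntax error; item access works
    except KeyError:
        continue                 # packet has no 6LoWPAN layer
    print(lowpan.field_names)    # the fields tshark dissected for this layer
pcap.close()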

Is it possible to read a .csv from a remote server, using Paramiko and Dask's read_csv() method in conjunction?

Today I began using the Dask and Paramiko packages, partly as a learning exercise, and partly because I'm beginning a project that will require dealing with large datasets (10s of GB) that must be accessed from a remote VM only (i.e. cannot store locally).
The following piece of code belongs to a short, helper program that will make a dask dataframe of a large csv file hosted on the VM. I want to later pass its output (reference to the dask dataframe) to a second function that will perform some overview analysis on it.
import dask.dataframe as dd
import paramiko as pm
import pandas as pd
import sys

def remote_file_to_dask_dataframe(remote_path):
    if isinstance(remote_path, (str)):
        try:
            client = pm.SSHClient()
            client.load_system_host_keys()
            client.connect('#myserver', username='my_username', password='my_password')
            sftp_client = client.open_sftp()
            remote_file = sftp_client.open(remote_path)
            df = dd.read_csv(remote_file)
            remote_file.close()
            sftp_client.close()
            return df
        except:
            print("An error occurred.")
            sftp_client.close()
            remote_file.close()
    else:
        raise ValueError("Path to remote file as string required")
The code is neither nice nor complete, and I will replace username and password with SSH keys in time, but this is not the issue. In a Jupyter notebook, I've previously opened the SFTP connection with a path to a file on the server and read it into a dataframe with a regular Pandas read_csv call. However, here the equivalent line using Dask is the source of the problem: df = dd.read_csv(remote_file).
I've looked at the documentation online (here), but I can't tell whether what I'm trying above is possible. It seems that for networked options, Dask wants a URL. The parameter-passing options for, e.g., S3 appear to depend on that infrastructure's backend. I unfortunately cannot make any sense of the dask-ssh documentation (here).
I've poked around with print statements, and the only line that fails to execute is the one stated. The error raised is: raise TypeError('url type not understood: %s' % urlpath)
TypeError: url type not understood:
Can anybody point me in the right direction for achieving what I'm trying to do? I'd expected Dask's read_csv to behave like Pandas', since it's built on top of it.
I'd appreciate any help, thanks.
p.s. I'm aware of Pandas' read_csv chunksize option, but I would like to achieve this through Dask, if possible.
In the master version of Dask, file-system operations now use fsspec, which, along with the previous implementations (s3, gcs, hdfs), supports some additional file systems; see the mapping of protocol identifiers in fsspec.registry.known_implementations.
In short, using a URL like "sftp://user:pw@host:port/path" should now work for you, if you install fsspec and Dask from master.
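A sketch of what that could look like (the host, credentials and path are placeholders; storage_options is handed through to the SFTP file system, which uses paramiko underneath):

import dask.dataframe as dd

# Credentials can go in storage_options instead of being embedded in the URL.
df = dd.read_csv(
    "sftp://myserver/remote/path/data.csv",
    storage_options={"username": "my_username", "password": "my_password"},
)
print(df.head())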
It seems that you would have to implement their "file system" interface.
I'm not sure what the minimal set of methods you need to implement to allow read_csv is, but you definitely have to implement open.
class SftpFileSystem(object):
    def open(self, path, mode='rb', **kwargs):
        return sftp_client.open(path, mode)

dask.bytes.core._filesystems['sftp'] = SftpFileSystem
df = dd.read_csv('sftp://remote/path/file.csv')

Read Binary string in Python, zlib

I want to store a large JSON (dict) from Python in DynamoDB.
After some investigation, it seems that zlib is the way to go to get a good level of compression. Using the code below, I'm able to encode the dict.
ranking_compressed = zlib.compress(simplejson.dumps(response["Item"]["ranking"]).encode('utf-8'))
The (string?) then looks like this: b'x\x9c\xc5Z\xdfo\xd3....
I can directly decompress this and get the dict back with:
ranking_decompressed = simplejson.loads(str(zlib.decompress(ranking_compressed).decode('utf-8')))
All good so far. However, when I put this in DynamoDB and then read it back, using the same decompress code as above, the (string?) now looks like this:
Binary(b'x\x9c\xc5Z\xdf...
The error I get is:
bytes-like object is required, not 'Binary'
I've tried accessing the Binary with e.g. .data, but I can't reach it.
Any help is appreciated.
Boto3 Binary objects have a value property.
# in general...
binary_obj.value
# for your specific case...
ranking_decompressed = simplejson.loads(str(zlib.decompress(response["Item"]["ranking_compressed"].value).decode('utf-8')))
Oddly, this seems to be documented nowhere except the source code for the Binary class here.
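For completeness, a minimal round trip along those lines (the table name and key are hypothetical):

import zlib
import boto3
import simplejson

table = boto3.resource("dynamodb").Table("rankings")  # hypothetical table

ranking = {"scores": [1, 2, 3]}
compressed = zlib.compress(simplejson.dumps(ranking).encode("utf-8"))
table.put_item(Item={"id": "abc", "ranking": compressed})  # stored as a Binary attribute

item = table.get_item(Key={"id": "abc"})["Item"]
# .value unwraps boto3's Binary wrapper back to plain bytes before decompressing
restored = simplejson.loads(zlib.decompress(item["ranking"].value).decode("utf-8"))
assert restored == ranking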

How do I store data from the Bloomberg API into a Pandas dataframe?

I recently started using Python so I could interact with the Bloomberg API, and I'm having some trouble storing the data into a Pandas dataframe (or a panel). I can get the output in the command prompt just fine, so that's not an issue.
A very similar question was asked here:
Pandas wrapper for Bloomberg api?
The referenced code in the accepted answer for that question is for the old API, however, and it doesn't work for the new open API. Apparently the user who asked the question was able to easily modify that code to work with the new API, but I'm used to having my hand held in R, and this is my first endeavor with Python.
Could some benevolent user show me how to get this data into Pandas? There is an example in the Python API (available here: http://www.openbloomberg.com/open-api/) called SimpleHistoryExample.py that I've been working with and have included below. I believe I'll need to modify mostly around the 'while(True)' loop toward the end of the 'main()' function, but everything I've tried so far has had issues.
Thanks in advance, and I hope this can be of help to anyone using Pandas for finance.
# SimpleHistoryExample.py
import blpapi
from optparse import OptionParser

def parseCmdLine():
    parser = OptionParser(description="Retrieve reference data.")
    parser.add_option("-a",
                      "--ip",
                      dest="host",
                      help="server name or IP (default: %default)",
                      metavar="ipAddress",
                      default="localhost")
    parser.add_option("-p",
                      dest="port",
                      type="int",
                      help="server port (default: %default)",
                      metavar="tcpPort",
                      default=8194)
    (options, args) = parser.parse_args()
    return options

def main():
    options = parseCmdLine()

    # Fill SessionOptions
    sessionOptions = blpapi.SessionOptions()
    sessionOptions.setServerHost(options.host)
    sessionOptions.setServerPort(options.port)

    print "Connecting to %s:%s" % (options.host, options.port)

    # Create a Session
    session = blpapi.Session(sessionOptions)

    # Start a Session
    if not session.start():
        print "Failed to start session."
        return

    try:
        # Open service to get historical data from
        if not session.openService("//blp/refdata"):
            print "Failed to open //blp/refdata"
            return

        # Obtain previously opened service
        refDataService = session.getService("//blp/refdata")

        # Create and fill the request for the historical data
        request = refDataService.createRequest("HistoricalDataRequest")
        request.getElement("securities").appendValue("IBM US Equity")
        request.getElement("securities").appendValue("MSFT US Equity")
        request.getElement("fields").appendValue("PX_LAST")
        request.getElement("fields").appendValue("OPEN")
        request.set("periodicityAdjustment", "ACTUAL")
        request.set("periodicitySelection", "DAILY")
        request.set("startDate", "20061227")
        request.set("endDate", "20061231")
        request.set("maxDataPoints", 100)

        print "Sending Request:", request

        # Send the request
        session.sendRequest(request)

        # Process received events
        while(True):
            # We provide timeout to give the chance for Ctrl+C handling:
            ev = session.nextEvent(500)
            for msg in ev:
                print msg
            if ev.eventType() == blpapi.Event.RESPONSE:
                # Response completely received, so we could exit
                break
    finally:
        # Stop the session
        session.stop()

if __name__ == "__main__":
    print "SimpleHistoryExample"
    try:
        main()
    except KeyboardInterrupt:
        print "Ctrl+C pressed. Stopping..."
I use tia (https://github.com/bpsmith/tia/blob/master/examples/datamgr.ipynb)
It already downloads data as a pandas DataFrame from Bloomberg.
You can download history for multiple tickers in a single call and even download some Bloomberg reference data (central bank meeting dates, holidays for a certain country, etc.).
And you just install it with pip.
This link is full of examples, but downloading historical data is as easy as:
import pandas as pd
import tia.bbg.datamgr as dm
mgr = dm.BbgDataManager()
sids = mgr['MSFT US EQUITY', 'IBM US EQUITY', 'CSCO US EQUITY']
df = sids.get_historical('PX_LAST', '1/1/2014', '11/12/2014')
and df is a pandas dataframe.
Hope it helps
You can also use pdblp for this (disclaimer: I'm the author). There is a tutorial showing similar functionality available here: https://matthewgilbert.github.io/pdblp/tutorial.html. The functionality could be achieved using something like:
import pdblp
con = pdblp.BCon()
con.start()
con.bdh(['IBM US Equity', 'MSFT US Equity'], ['PX_LAST', 'OPEN'],
        '20061227', '20061231', elms=[("periodicityAdjustment", "ACTUAL")])
I've just published this which might help
http://github.com/alex314159/blpapiwrapper
Unpacking the message is basically not very intuitive, but this is what works for me, where strData is a list of Bloomberg fields, for instance ['PX_LAST', 'PX_OPEN']:
fieldDataArray = msg.getElement('securityData').getElement('fieldData')
size = fieldDataArray.numValues()
fieldDataList = [fieldDataArray.getValueAsElement(i) for i in range(0, size)]
outDates = [x.getElementAsDatetime('date') for x in fieldDataList]
output = pandas.DataFrame(index=outDates, columns=strData)
for strD in strData:
    outData = [x.getElementAsFloat(strD) for x in fieldDataList]
    output[strD] = outData
output.replace('#N/A History', pandas.np.nan, inplace=True)
output.index = output.index.to_datetime()
return output
I've been using pybbg to do this sort of stuff. You can get it here:
https://github.com/bpsmith/pybbg
Import the package and you can then do (this is in the source code, bbg.py file):
banner('ReferenceDataRequest: single security, single field, frame response')
req = ReferenceDataRequest('msft us equity', 'px_last', response_type='frame')
print req.execute().response
The advantages:
Easy to use; minimal boilerplate, and parses indices and dates for you.
It's blocking. Since you mention R, I assume you are using this in some type of an interactive environment, like IPython. So this is what you want, rather than having to mess around with callbacks.
It can also do historical (i.e. price series), intraday and bulk data request (no tick data yet).
Disadvantages:
Only works in Windows, as far as I know (you must have the BB workstation installed and running).
Following on the above, it depends on the 32-bit OLE API for Python. It only works with the 32-bit version, so you will need 32-bit Python and 32-bit OLE bindings.
There are some bugs. In my experience, when retrieving data for a number of instruments, it tends to hang IPython. Not sure what causes this.
Based on the last point, I would suggest that if you are getting large amounts of data, you retrieve and store these in an Excel workbook (one instrument per sheet), and then import these. read_excel isn't efficient for doing this; you need to use the ExcelFile object and then iterate over the sheets. Otherwise, read_excel will reopen the file each time you read a sheet; this can take ages.
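A sketch of that pattern, assuming a hypothetical workbook with one sheet per instrument:

import pandas as pd

# Parse the workbook once and reuse the handle for every sheet, instead of
# calling read_excel(path, sheet_name=...) repeatedly and re-reading the file.
xls = pd.ExcelFile("bloomberg_dump.xlsx")  # hypothetical file name
frames = {name: xls.parse(name) for name in xls.sheet_names}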
Tia https://github.com/bpsmith/tia is the best I've found, and I've tried them all... It allows you to do:
import pandas as pd
import datetime
import tia.bbg.datamgr as dm
mgr = dm.BbgDataManager()
sids = mgr['BAC US EQUITY', 'JPM US EQUITY']
df = sids.get_historical(['BEST_PX_BPS_RATIO','BEST_ROE'],
                         datetime.date(2013,1,1),
                         datetime.date(2013,2,1),
                         BEST_FPERIOD_OVERRIDE="1GY",
                         non_trading_day_fill_option="ALL_CALENDAR_DAYS",
                         non_trading_day_fill_method="PREVIOUS_VALUE")
print df
#and you'll probably want to carry on with something like this
df1=df.unstack(level=0).reset_index()
df1.columns = ('ticker','field','date','value')
df1.pivot_table(index=['date','ticker'],values='value',columns='field')
df1.pivot_table(index=['date','field'],values='value',columns='ticker')
The caching is nice too.
Both https://github.com/alex314159/blpapiwrapper and https://github.com/kyuni22/pybbg do the basic job (thanks guys!) but have trouble with multiple securities/fields as well as overrides which you will inevitably need.
The one thing this https://github.com/kyuni22/pybbg has that tia doesn't have is bds(security, field).
A proper Bloomberg API for python now exists which does not use COM. It has all of the hooks to allow you to replicate the functionality of the Excel addin, with the obvious advantage of a proper programming language endpoint. The request and response objects are fairly poorly documented, and are quite obtuse. Still, the examples in the API are good, and some playing around using the inspect module and printing of response messages should get you up to speed. Sadly, the standard terminal licence only works on Windows. For *nix you will need a server licence (even more expensive). I have used it quite extensively.
https://www.bloomberg.com/professional/support/api-library/
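To tie this back to the original question, here is a sketch of collecting a HistoricalDataRequest response into pandas DataFrames; the element names follow the //blp/refdata response schema used in the snippets above, and securityError handling and partial responses are glossed over:

import blpapi
import pandas as pd

def history_to_frames(session, fields):
    """Drain events for a sent HistoricalDataRequest into {ticker: DataFrame}."""
    frames = {}
    while True:
        ev = session.nextEvent(500)
        for msg in ev:
            if not msg.hasElement("securityData"):
                continue
            sec_data = msg.getElement("securityData")
            ticker = sec_data.getElementAsString("security")
            field_data = sec_data.getElement("fieldData")
            rows = [field_data.getValueAsElement(i) for i in range(field_data.numValues())]
            dates = [row.getElementAsDatetime("date") for row in rows]
            values = {f: [row.getElementAsFloat(f) for row in rows] for f in fields}
            frames[ticker] = pd.DataFrame(values, index=pd.to_datetime(dates))
        if ev.eventType() == blpapi.Event.RESPONSE:
            return frames

# Usage: after session.sendRequest(request) in SimpleHistoryExample's main(),
# replace the print loop with something like:
#     frames = history_to_frames(session, ["PX_LAST", "OPEN"])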
