pysnmp Command Responder - handling managed object value classes

I'm developing a command responder with pysnmp, based on
http://pysnmp.sourceforge.net/examples/current/v3arch/agent/cmdrsp/v2c-custom-scalar-mib-objects.html
My intention is to answer GET requests for my managed objects by reading the SNMP data from a text file (updated over time).
I'm polling the responder using snmpB, drawing a graph of the polled object value evolution.
I've successfully modified the example to export my first managed object, adding it with mibBuilder.exportSymbols() and retrieving the values from the text file in the overridden getValue method. I'm able to poll this object successfully; it's a Counter32 type object.
The next step is to handle other objects with a value type different from the "supported" classes like Integer32, Counter32, and OctetString.
I need to handle floating point values and other specific data formats defined within MIB files, because snmpB expects these specific formats to plot the graph correctly.
Unfortunately I can't figure out a way to do this.
Hope someone can help,
Mark
EDIT 1
The textual-convention I need to implement is the Float32TC defined in FLOAT-TC-MIB from RFC6340:
Float32TC ::= TEXTUAL-CONVENTION
STATUS current
DESCRIPTION "This type represents a 32-bit (4-octet) IEEE
floating-point number in binary interchange format."
REFERENCE "IEEE Standard for Floating-Point Arithmetic,
Standard 754-2008"
SYNTAX OCTET STRING (SIZE(4))

There is no native floating point type in SNMP, and you can't add radically new types to the protocol. But you can put additional constraints on existing types or modify the value representation via a TEXTUAL-CONVENTION.
To represent floating point numbers you have two options:
encode the floating point number into an octet stream and pass it as the OCTET STRING type (RFC 6340)
use INTEGER type along with some TEXTUAL-CONVENTION to represent integer as float
Whatever values are defined in a MIB, they are always based on some built-in SNMP type.
You could automatically generate pysnmp MibScalar classes from your ASN.1 MIB with the pysmi tool, then manually add MibScalarInstance classes with some system-specific code, thus linking pysnmp to your data sources (such as text files).
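Following the RFC 6340 option, the 4-octet IEEE-754 encoding can be produced with the stdlib struct module. A minimal sketch (the function names are my own, not part of pysnmp):

```python
import struct

def float32_tc_encode(value):
    """Encode a Python float as the 4-octet big-endian IEEE-754
    binary32 string required by Float32TC (RFC 6340)."""
    return struct.pack('>f', value)

def float32_tc_decode(octets):
    """Decode the 4-octet Float32TC representation back to a float."""
    return struct.unpack('>f', octets)[0]
```

The resulting 4 bytes are what the OCTET STRING (SIZE(4)) scalar should carry; on the pysnmp side you would wrap them in an OctetString in your getValue override. Note that binary32 has less precision than a Python float, so round-tripping arbitrary values can lose digits.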

Related

Implementing Typed Arrays with cbor2

Is it possible to implement Typed Arrays using the cbor2 Python library? In the documentation I only saw that you can define a custom encoder to serialize custom objects, but instead I would like to implement Typed Arrays to reduce the number of bytes I send, like it's explained in this specification: https://datatracker.ietf.org/doc/html/rfc8746
For example, I would like to be able to say that I will send an array of 32-bit unsigned integers using a single tag that indicates the type of the array, instead of repeating the type information for each value inside the array.
Are there other cbor libraries that can do that, if cbor2 can't?
I received an answer on their Github: https://github.com/agronholm/cbor2/issues/128
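For illustration, the RFC 8746 framing can be hand-rolled with only the stdlib, without relying on cbor2 support. This is a sketch under the assumption that the payload stays under 24 bytes (so the one-byte CBOR byte-string header suffices); tag 66 is the RFC 8746 tag for a big-endian uint32 typed array:

```python
import struct

def encode_uint32_typed_array(values):
    """Sketch of RFC 8746 typed-array framing, hand-rolled without cbor2.

    0xD8 0x42 introduces CBOR tag 66 (big-endian uint32 typed array);
    its content is a CBOR byte string holding the packed values.
    The short-form byte-string header (0x40 | length) only covers
    payloads shorter than 24 bytes.
    """
    payload = struct.pack('>%dI' % len(values), *values)
    if len(payload) >= 24:
        raise ValueError("short-form header only; extend for longer arrays")
    return bytes([0xD8, 66]) + bytes([0x40 | len(payload)]) + payload
```

This shows the wire format the question is after: one tag for the whole array, then raw packed integers with no per-element type bytes.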

Fail on any usage of floating point

My python program manipulates bitcoin amounts precise to 8 decimal places. My intention is to use decimal.Decimal types everywhere, to avoid any floating point precision issues -- but I'm not sure I got every usage.
For quality assurance, I'd like to raise an error if there's any floats constructed anywhere in the program. Is this possible in python 3.5?
(I cannot use integers because I'm interfacing via JSON with other programs that expect decimal values.)
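For the JSON-interfacing part specifically, the stdlib json module can decode every real number directly as a Decimal, so no float object is ever constructed during parsing. A minimal sketch (the helper name is my own):

```python
import json
from decimal import Decimal

def load_amounts(text):
    # parse_float routes every JSON real number through Decimal,
    # so json.loads never constructs a float while decoding
    return json.loads(text, parse_float=Decimal)
```

This only covers decoding; catching every float constructed anywhere in a program (literals, division, third-party code) has no direct built-in support, and json.dumps will also need help serializing Decimal (e.g. a default= hook).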

What does struct.pack() do to network packets

The python official document mentions that 'struct' module is used to convert between Python and binary data structures.
What do "binary data structures" refer to here? As I understand it, does the term refer to packet structures as defined in network-related C functions?
Does struct.pack(fmt, v1, v2) build the C-equivalent structure of the fields v1, v2 in format fmt? For example, if I am building an IP packet, is my fmt the IP header and are the values the IP header fields?
I am referring to this example while understanding how network packets can be built.
"Binary data structures" refers to the layout of the data in memory. Python's objects are far more complicated than a simple C struct; there is a significant amount of header data in Python objects that makes common tasks simpler for the Python interpreter.
Your interpretation is largely correct. The other important thing to note is that we specify a particular byte order, which may or may not be the same byte order used by a standard C structure (it depends on your machine architecture).
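As a sketch of the idea: struct.pack can lay out the 20 fixed bytes of an IPv4 header, with '!' forcing network (big-endian) byte order regardless of host architecture. This is only an illustration of the byte layout (checksum left zero, addresses made up); real packets need the checksum computed and typically a raw socket to send:

```python
import struct

# B = unsigned 8-bit, H = unsigned 16-bit, 4s = 4 raw bytes
version_ihl = (4 << 4) | 5   # IPv4, header length = 5 32-bit words
header = struct.pack(
    '!BBHHHBBH4s4s',
    version_ihl,              # version + IHL
    0,                        # DSCP/ECN
    20,                       # total length (header only here)
    0, 0,                     # identification, flags+fragment offset
    64,                       # TTL
    17,                       # protocol (17 = UDP)
    0,                        # header checksum (left zero in this sketch)
    bytes([192, 168, 0, 1]),  # source address
    bytes([192, 168, 0, 2]),  # destination address
)
```

Each format character corresponds to one header field, in the exact order and width the wire format demands, which is the sense in which fmt "is" the IP header.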

What model field type should I be using to store a number of type long?

I want to store longitude and latitude values using Django's ORM. I've tried using a BigInt type, but it's cutting off a lot of the decimal places. Should I be using the Decimal type here?
Looking at the GeoDjango docs, you should use a FloatField.
Actually, the GeoDjango docs say PointField. I worked on a project involving geospatial computations and used PointField in the models.
FloatField presumably uses a floating point representation of the coordinates, which might introduce floating point errors. PointField is initialized with a string representation of the lat/long values.
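For reference, a sketch of the two options discussed; this assumes GeoDjango (django.contrib.gis) with a spatial database backend for the PointField variant, and the model/field names are illustrative:

```python
from django.contrib.gis.db import models

class Place(models.Model):
    # Option 1: two plain float columns (binary floats, possible rounding)
    latitude = models.FloatField()
    longitude = models.FloatField()

    # Option 2: a single geometry column storing (longitude, latitude);
    # it accepts string input such as 'POINT(-0.1276 51.5072)'
    location = models.PointField()
```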

Storing and replaying binary network data with python

I have a Python application which sends 556 bytes of data across the network at a rate of 50 Hz. The binary data is generated using struct.pack(), which returns a byte string that is subsequently written to a UDP socket.
As well as transmitting this data, I would like to save this data to file as space-efficiently as possible, including a timestamp for each message, so that I can replay the data at a later time. What would be the best way of doing this using Python?
I have mulled over using a logging object, but have not yet found out whether Python can read in log files so that I can replay the data. Also, I don't know whether the logging object can handle binary data.
Any tips would be much appreciated! Although Wireshark would be an option, I'd rather store the data using my application so that I can automatically start new data files each time I run the program.
Python's logging system is intended to process human-readable strings, and it's intended to be easy to enable or disable depending on whether it's you (the developer) or someone else running your program. Don't use it for something that your application always needs to output.
The simplest way to store the data is to just write the same 556-byte string that you send over the socket out to a file. If you want to have timestamps, you could precede each 556-byte message with the time of sending, converted to an integer, and packed into 4 or 8 bytes using struct.pack(). The exact method would depend on your specific requirements, e.g. how precise you need the time to be, and whether you need absolute time or just relative to some reference point.
One possibility for a compact timestamp for replay purposes: take the time as a floating point number of seconds since the epoch with time.time(), multiply by 50 since you said you're sending 50 times a second (the resulting unit, one fiftieth of a second, is sometimes called a "jiffy"), truncate to int, and subtract the similar jiffy count that you measured at the start of your program. Then struct.pack the result into an unsigned int with however many bytes you need for the intended duration -- for example, with 2 bytes per timestamp you could represent runs of about 1200 seconds (20 minutes), but if you plan longer runs you'd need 4 bytes (3 bytes is just too unwieldy IMHO;-).
Not all operating systems have time.time() returning decent precision, so you may need more devious means if you need to run on such unfortunately limited OSs. (That's VERY os-dependent, of course). What OSs do you need to support...?
Anyway: for even more compactness, use a higher multiplier than 50 (say 10000) for more accuracy, and store, each time, the difference from the previous timestamp. Since that difference should be close to one jiffy (if I understand your spec correctly), it should be about 200 of these "ten-thousandths of a second" units, so you can store it in a single unsigned byte (and have no limit on the duration of the runs you store for future replay). This depends even more on accurate returns from time.time(), of course.
If your 556-byte binary data is highly compressible, it will be worth your while to use gzip to store the stream of timestamp-then-data in compressed form; this is best assessed empirically on your actual data, though.
