ThingsBoard IoT Gateway - Timestamp mapping timeseries in release 1.4.0 - python

I've just upgraded my ThingsBoard IoT Gateway to release 1.4.0, and I saw from the repository that it is now possible to map the published telemetry with a client-side timestamp. From my understanding, this feature was previously only available when publishing directly to the ThingsBoard embedded MQTT broker, not through the Gateway.
From the repository I found that the previous mapping class (rep. branch 1.2) was the following:
public class KVMapping {
    private String key;
    private DataTypeMapping type;
    private String value;
}
While the new release (rep. branch 1.4) has the following class:
public class KVMapping {
    private String key;
    private DataTypeMapping type;
    private String value;
    private String ts;
    private String tsFormat;
}
From my understanding, the timestamp (and its format) has been added to the message mapping.
My problem is that I'm unable to map the timestamp in the message I publish to ThingsBoard. The platform still receives the correct key and value, but stores the data with the server-side timestamp.
This is a snippet of the Python code I use to publish the packet to the external MQTT broker, which shows how my JSON packet is structured:
timeStamp = "1488273476000"
data = {
"about": "Devices",
"properties": [
{
"about": "Device1",
"iotStateObservation": [
{
"phenomenonTime": timeStamp,
"value": "1"
}
]
},
{
"about": "Device2",
"iotStateObservation": [
{
"phenomenonTime": timeStamp,
"value": "174468"
}
]
},
{
"about": "Device3",
"iotStateObservation": [
{
"phenomenonTime": timeStamp,
"value": "12"
}
]
}
]
}
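For completeness, here is a minimal sketch of how such a payload could be published with paho-mqtt; the broker address is an assumption, and the topic matches the topicFilter from the gateway configuration below:

import json
import paho.mqtt.client as mqtt

# Assumed address of the external MQTT broker the gateway is connected to.
BROKER_HOST = "127.0.0.1"
BROKER_PORT = 1883

client = mqtt.Client()
client.connect(BROKER_HOST, BROKER_PORT)
# "sensors" is the topic the gateway's topicFilter subscribes to.
client.publish("sensors", json.dumps(data))
client.disconnect()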
This is a snippet from my ThingsBoard IoT Gateway mapping file (mqtt-config.json), where all the desired mappings are configured:
{
  "topicFilter": "sensors",
  "converter": {
    "type": "json",
    "filterExpression": "$.properties[*]",
    "deviceNameJsonExpression": "${$.about}",
    "timeseries": [
      {
        "type": "double",
        "ts": "${$.iotStateObservation[0].phenomenonTime}",
        "key": "${$.about}",
        "value": "${$.iotStateObservation[0].value}"
      }
    ]
  }
}
Am I making some mistake in this procedure, or is it simply not yet possible to map the data with a client-side timestamp?

OK, so after taking a closer look at the ThingsBoard gateway code, I found out that for some reason it is still not possible to map the client-side timestamp for a timeseries using MQTT. This functionality may be possible using HTTP, but I didn't test that. So to add this feature, I forked the repository and slightly changed the MQTT mapping routine. If anyone is interested, you can find the modified code on my repo.
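For reference, what the converter ultimately has to produce is a telemetry entry that carries the timestamp alongside the values. A minimal sketch in Python of that target format, built from the question's payload (the helper below is illustrative only, not the actual code from my fork):

def to_tb_telemetry(device_json):
    # Convert one "properties" entry from the published JSON into a
    # ThingsBoard telemetry entry with a client-side timestamp.
    observation = device_json["iotStateObservation"][0]
    return {
        "ts": int(observation["phenomenonTime"]),  # client-side timestamp in milliseconds
        "values": {device_json["about"]: float(observation["value"])}
    }

# Example: to_tb_telemetry(data["properties"][0])
# -> {"ts": 1488273476000, "values": {"Device1": 1.0}}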

Related

How to resolve the error: "columns[11]: custom variable cannot be found" when running an SA360 API call?

Heyy!!
Hope everyone is doing well,
I'm pulling data from SA360 (also known as DS3, DoubleClick Search), but I receive this error when I try to download the report:
"columns[11]: A custom variable named 'DDA Product Sign Ups' with
platform source 'FLOODLIGHT' cannot be found.">
I know:
That the conversion exists in the platform UI (second result)
That my script works, because when I remove the conversion field I can deploy my function with no problem.
My Script (more or less):
conversion_name = "DDA Product Sign Ups"
request = ds3_manager.reports().request(body =
{
"reportScope": {
"agencyId": agency_id,
"advertiserId" : advertiser_id },
"reportType": "adGroup",
"columns": [
{ "columnName": "date"},
{ "columnName": "accountType"},
{ "columnName": "account" },
{ "columnName": "cost" },
{ "columnName": "impr" },
{ "columnName": "clicks" },
{
"customMetricName" : conversion_name,
"platformSource": "floodlight"
}
],
"timeRange": {
"startDate": start_date,
"endDate": end_date
},
"downloadFormat": "csv",
"maxRowsPerFile": 6000000,
"statisticsCurrency": "agency"
}
)
When I google the issue I land on this web result: Set up custom Floodlight metrics and dimensions. But I don't understand; to me it is already set up, as I can already add it to my reports in the UI or in my web queries... So I'm not sure why it is not picked up by the script.
If anyone has an idea that would be greatly appreciated :D.
Best,
Alex
In order to include custom columns in a report request, you first need to find out how that custom column is defined (check this link). You can then include the custom column in the request (check this link).
You will find useful information on this topic in the "Optional: Request for Specific User-defined Column" section of this article.
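As a rough sketch only (assuming the 'DDA Product Sign Ups' conversion is exposed as a saved, user-defined column on the advertiser rather than a raw Floodlight custom metric; ds3_manager, agency_id, etc. are the variables from the question), the column would then be referenced by savedColumnName instead of customMetricName:

conversion_name = "DDA Product Sign Ups"

request = ds3_manager.reports().request(body={
    "reportScope": {
        "agencyId": agency_id,
        "advertiserId": advertiser_id
    },
    "reportType": "adGroup",
    "columns": [
        {"columnName": "date"},
        {"columnName": "account"},
        {"columnName": "cost"},
        # User-defined (saved) column, referenced by its exact name in the UI.
        {"savedColumnName": conversion_name}
    ],
    "timeRange": {
        "startDate": start_date,
        "endDate": end_date
    },
    "downloadFormat": "csv"
})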

EventGrid-triggered Python Azure Function "ClientOtherError" and "AuthorizationError", how to troubleshoot?

For some reason, today my Python Azure Function is not firing.
Setup:
Trigger: Blob upload to storage account
Method: EventGrid
Auth: Uses System-assigned Managed Identity to auth to Storage Account
Advanced Filters:
Subject ends with .csv, .json
data.api contains "FlushWithClose"
Issue:
Upload a .csv file
No EventGrid triggered
New "ClientOtherError" and "AuthorizationError"s shown in logs
Question:
These are NEW errors and this is NEW behavior of an otherwise working Function. No changes have been recently made.
What do these errors mean?
How do I troubleshoot them?
The way I troubleshot the Function was to:
Remove ALL ADVANCED FILTERS from the EventGrid trigger
Attempt upload
Upload successful
Look at EventGrid message
The culprit (though unclear why ClientOtherError and AuthorizationError are generated here!) seems to be:
Files pushed to Azure Storage via Azure Data Factory use the FlushWithClose api.
These are the only ones I want to grab
Our automations all use ADF, and if you don't have the FlushWithClose filter in place, your Functions will run twice (ADF causes two events on the storage account, but only one of them, FlushWithClose, is the actual blob write).
{
  "id": "redact",
  "data": {
    "api": "FlushWithClose",
    "requestId": "redact",
    "eTag": "redact",
    "contentType": "application/octet-stream",
    "contentLength": 87731520,
    "contentOffset": 0,
    "blobType": "BlockBlob",
    "blobUrl": "https://mything.blob.core.windows.net/mything/20201209/yep.csv",
    "url": "https://mything.dfs.core.windows.net/mything/20201209/yep.csv",
    "sequencer": "0000000000000000000000000000701b0000000000008177",
    "identity": "redact",
    "storageDiagnostics": {
      "batchId": "redact"
    }
  },
  "topic": "/subscriptions/redact/resourceGroups/redact/providers/Microsoft.Storage/storageAccounts/redact",
  "subject": "/blobServices/default/containers/mything/blobs/20201209/yep.csv",
  "event_type": "Microsoft.Storage.BlobCreated"
}
Files pushed to Azure Storage via Azure Storage Explorer (and via Azure Portal) use the PutBlob api.
{
  "id": "redact",
  "data": {
    "api": "PutBlob",
    "clientRequestId": "redact",
    "requestId": "redact",
    "eTag": "redact",
    "contentType": "application/vnd.ms-excel",
    "contentLength": 1889042,
    "blobType": "BlockBlob",
    "blobUrl": "https://mything.blob.core.windows.net/thing/yep.csv",
    "url": "https://mything.blob.core.windows.net/thing/yep.csv",
    "sequencer": "0000000000000000000000000000761d0000000000000b6e",
    "storageDiagnostics": {
      "batchId": "redact"
    }
  },
  "topic": "/subscriptions/redact/resourceGroups/redact/providers/Microsoft.Storage/storageAccounts/redact",
  "subject": "/blobServices/default/containers/thing/blobs/yep.csv",
  "event_type": "Microsoft.Storage.BlobCreated"
}
I was testing locally with Azure Storage Explorer (ASE) instead of using our ADF automations, so the uploads used PutBlob and the advanced filter on data.api stopped EventGrid from triggering the Function.
OK... but what about the errors?
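As a side note, one way to keep the advanced filters out of the way while testing is to do the data.api check inside the function itself. A minimal sketch, assuming the Python v1 programming model (the function body is illustrative, not the original code):

import logging
import azure.functions as func

def main(event: func.EventGridEvent):
    data = event.get_json()  # the "data" section of the EventGrid event

    # Only process the real blob write coming from ADF; ignore the companion
    # event (and PutBlob uploads from Storage Explorer / the Portal).
    if data.get("api") != "FlushWithClose":
        logging.info("Ignoring event with api=%s for %s", data.get("api"), event.subject)
        return

    logging.info("Processing blob %s", data.get("url"))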

ElasticSearch updating documents in an automated way

I don't really know how to phrase my current problem, which is probably why I couldn't find the right answer, so please let me explain it.
I am trying to build a simple internet port scanner using Zmap and Zgrab, like Shodan.io, Censys.io, etc.
I need to store the data in Elasticsearch (because I want to learn how to use it).
For this, I have created a JSON document structure to use in Elasticsearch, such as:
{
  "value": "192.168.0.1",
  "port": [
    {
      "value": 80,
      "HTMLbody": "BODY OF THE WEB PAGE",
      "status": 200,
      "headers": {
        "content_type": "html/png",
        "content_length": 23123,
        "...": "..."
      },
      "certificate": {
        "company_names": [
          "example.com",
          "acme.io"
        ]
      }
    }
  ]
}
There will be approximately 4 billion IP addresses in Elasticsearch, with different ports open. My problem begins here: after the initial scan, I need to update the existing IP addresses.
For example:
IP: 192.168.0.1
port: 80 open
In the second scan, I scan port 443 and it will probably be open too. Then I need to update my Elasticsearch document depending on the open ports.
What I found so far
I found the POST /<index>/_update/<_id> endpoint, but it updates a single document. I need to update at least 100,000 documents in one scan, and it should happen automatically. How do I know the document id for an IP address so I can update it?
Secondly, I also found:
POST <index>/_update_by_query
I thought about searching for the IP address with a query, getting its document id, and then updating the document as follows:
{
  "value": "192.168.0.1",
  "port": [
    {
      "value": 80,
      "HTMLbody": "BODY OF THE WEB PAGE",
      "status": 200,
      "headers": {
        "content_type": "html/png",
        "content_length": 23123,
        "...": "..."
      },
      "certificate": {
        "company_names": [
          "example.com",
          "acme.io"
        ]
      }
    },
    {
      "value": 443,
      "HTMLbody": "BODY OF THE SSL WEB PAGE",
      "status": 200
    }
  ]
}
In theory I could do this, but in practice I couldn't write the code efficiently enough, because a single scan produces a JSON file of at least 6 GB and it takes very long to process the whole file and update Elasticsearch.
Is there any way to solve this problem efficiently?
Please look at the image below.
To fully answer the question: first, use the IP address as the _id to make sure you update the appropriate document.
You need to add an entry to the port array, so use the _update API like this:
POST <index>/_update/192.168.0.1
{
  "script": {
    "source": "ctx._source.port.add(params.tag)",
    "lang": "painless",
    "params": {
      "tag": {
        "value": 443,
        "HTMLbody": "BODY OF THE SSL WEB PAGE",
        "status": 200
      }
    }
  }
}
This is an update query that adds an entry to an array with a small script.
To use the bulk API, simply batch all of your requests as explained here.
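As a minimal sketch of that bulk approach (assuming the elasticsearch Python client and an index named scan-results, both of which are illustrative), the same scripted update can be sent for many IPs at once with the bulk helper:

from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch("http://localhost:9200")  # assumed cluster address

def port_update_actions(scan_results, index="scan-results"):
    # scan_results: iterable of (ip, port_entry) tuples, e.g.
    # ("192.168.0.1", {"value": 443, "HTMLbody": "...", "status": 200})
    for ip, port_entry in scan_results:
        yield {
            "_op_type": "update",
            "_index": index,
            "_id": ip,  # IP address used as the document id
            "script": {
                "source": "ctx._source.port.add(params.tag)",
                "lang": "painless",
                "params": {"tag": port_entry}
            },
            # Create the document if this IP has never been seen before.
            "upsert": {"value": ip, "port": [port_entry]}
        }

results = [("192.168.0.1", {"value": 443, "HTMLbody": "BODY OF THE SSL WEB PAGE", "status": 200})]
bulk(es, port_update_actions(results))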

Mapping two float fields containing geo longitude and latitude into geo_point in Kibana using the AWS Elasticsearch Service instance

I'm building a solution that processes data from Lambda (Python 2.7) through a Kinesis stream and Firehose to an Elasticsearch domain. The data is stored in a Python dictionary and dumped as JSON to Kinesis:
dataDictionary = {
    "precipitationType": precipitationType,
    "location": location,
    "humidity": humidity,
    "groundTemp": groundTemp,
    "airTemp": airTemp,
    "windSpeed": windSpeed,
    "windDirection": windDirection,
    "measureDate": parsedMeasureDate,
    "systemDate": systemDate,
    "stationGeoLatitude": stationGeoLatitude,
    "stationGeoLongitude": stationGeoLongitude
}

# Push data to AWS Kinesis Stream
res = kinesis.put_record(StreamName=LocalStreamName, Data=json.dumps(dataDictionary), PartitionKey='systemDate')
The process is successful, but when I want to display the results on a map in Kibana I only have two float fields and no geo_point/geohash fields.
I cannot figure out how to map them in the AWS Elasticsearch Service. I found some documentation about mapping, but I have no idea how to use it inside AWS. Maybe I should pass this data in another way in the Python code?
You have to use mappings and tell Elasticsearch to map your two fields as a geo_point location:
https://www.elastic.co/guide/en/elasticsearch/reference/current/geo-point.html
You will have to reindex your data, but specify your mappings first.
You can do it using the Python client, or post the JSON mapping manually:
PUT your_index
{
  "mappings": {
    "your_type": {
      "properties": {
        "location": {
          "type": "geo_point"
        }
      }
    }
  }
}
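On the producer side, the Lambda code then has to emit a field that matches whichever field is mapped as geo_point. A minimal sketch (the stationLocation field name is illustrative; geo_point accepts an object with lat and lon keys), adapting the dictionary from the question:

dataDictionary = {
    # ... the other measurements from the original dictionary ...
    "stationGeoLatitude": stationGeoLatitude,
    "stationGeoLongitude": stationGeoLongitude,
    # Single field matching the geo_point mapping, so Kibana can plot it on a map.
    "stationLocation": {
        "lat": float(stationGeoLatitude),
        "lon": float(stationGeoLongitude)
    }
}

res = kinesis.put_record(StreamName=LocalStreamName,
                         Data=json.dumps(dataDictionary),
                         PartitionKey='systemDate')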

Localhost request from angular

Hello, I have created a mini endpoint in Flask (Python), and it's working well. When I access it from Postman:
http://127.0.0.1:5000/retrieve_data
it retrieves the data as JSON.
But when I call the endpoint from Angular:
export class HomeComponent implements OnInit {
  title = 'angular-demo';
  data = this.getAnnounce();

  getAnnounce() {
    return this.http.get('http://127.0.0.1:5000/retrieve_data');
  }

  constructor(private http: HttpClient) {
  }
}
The actual result is:
{
  "_isScalar": false,
  "source": {
    "_isScalar": false,
    "source": {
      "_isScalar": false,
      "source": {
        "_isScalar": true,
        "value": {
          "url": "http://127.0.0.1:5000/retrieve_data",
          "body": null,
          "reportProgress": false,
          "withCredentials": false,
          "responseType": "json",
          "method": "GET",
          "headers": {
            "normalizedNames": {},
            "lazyUpdate": null,
            "headers": {}
          },
          "params": {
            "updates": null,
            "cloneFrom": null,
            "encoder": {},
            "map": null
          },
          "urlWithParams": "http://127.0.0.1:5000/retrieve_data"
        }
      },
      "operator": {
        "concurrent": 1
      }
    },
    "operator": {}
  },
  "operator": {}
}
PS: I am a total noob at Angular.
According to the Angular HttpClient documentation, the return type of http.get() is Observable.
So if you want the data that your server responds with, you must subscribe to it like this:
data = {};

getAnnounce() {
  this.http.get('http://127.0.0.1:5000/retrieve_data')
    .subscribe(result => this.data = result);
}
Great question, and as some others have suggested it would seem that you're missing a .subscribe() on your getAnnounce() method. Angular Docs has a guide on observables that can help you get a better understanding.
I have a few more suggestions for improving your experience with Angular. Take advantage of services to group together similar functionality. For example, in your provided code, you could move getAnnounce() into a new service you create, something like AnnouncementService. Then you can use it in many places in your code. It also helps with testing and finding bugs later on, since your code is more separated.
Another thing you can do is move your server API address into the environment files. If you built your Angular project using the Angular CLI, you'll find in your src folder an environments folder with two files by default: environment.ts and environment.prod.ts. You can access this object anywhere within your .ts files, and when you build your code for production vs. locally it switches to whatever values you set. In your case you would put your local API address in there: http://127.0.0.1:5000.
export const environment = {
  production: false,
  api: 'http://127.0.0.1:5000'
};
Now you can access this easily and have one spot to change if you ever change your port number or have your api in a real server.
import { environment } from './environments/environment';
/* remember to import the non prod version of the environment file, angular knows how to switch them automatically when building for production or not. */
Hopefully this helps you with your coding, good luck!
You have to subscribe to your getAnnounce function.
For instance, you can try:
data = this.getAnnounce().subscribe();
Or directly in your constructor / ngOnInit:
ngOnInit() {
  this.getAnnounce().subscribe(
    data => this.data = data
  );
}
