Add nftables map element using libnftables-json API from Python

I am trying to dynamically add a map element using the nftables JSON API from python. In my firewall I have the following map in the router table in the ip family:
map port_forwards {
    type inet_service : ipv4_addr . inet_service
}
Here is a minimal example of what I am trying to do:
import nftables

nft_cmd = {"nftables": [
    {"add": {"element": {
        "family": "ip",
        "table": "router",
        "name": "port_forwards",
        "elem": {"map": {
            "key": "80",
            "data": "172.16.0.1 . 80"
        }}
    }}}
]}

nft = nftables.Nftables()
nft.json_validate(nft_cmd)
rc, _output, error = nft.json_cmd(nft_cmd)
if rc != 0:
    raise RuntimeError(f"Error running nftables command: {error}")
This results in the following error:
RuntimeError: Error running nftables command: internal:0:0-0: Error: Unexpected JSON type object for immediate value.
internal:0:0-0: Error: Invalid set.
internal:0:0-0: Error: Parsing command array at index 0 failed.
I assume I am misunderstanding the spec somehow (https://manpages.debian.org/unstable/libnftables1/libnftables-json.5.en.html), but I can't figure out the correct usage.
UPDATE: I have discovered that nft can echo your command in JSON format. This is the command:
sudo nft -e -j add element ip router port_forwards '{80 : 172.16.0.1 . 8080 }'
and the response pretty-printed:
{"nftables": [
    {"add": {"element": {
        "family": "ip",
        "table": "router",
        "name": "port_forwards",
        "elem": {"set": [[
            80,
            {"concat": ["172.16.0.1", 8080]}
        ]]}
    }}}
]}
Unfortunately, copying this into the above Python code still results in the same error.

It turns out that the "elem" property takes the array directly, rather than wrapping it in a "set" object. This was hinted at by the error:
Unexpected JSON type object for immediate value.
The working code is shown below:
import nftables

nft_cmd = {"nftables": [
    {"add": {"element": {
        "family": "ip",
        "table": "router",
        "name": "port_forwards",
        "elem": [[
            80,
            {"concat": ["172.16.0.1", 8080]}
        ]]
    }}}
]}

nft = nftables.Nftables()
nft.json_validate(nft_cmd)
rc, _output, error = nft.json_cmd(nft_cmd)
if rc != 0:
    raise RuntimeError(f"Error running nftables command: {error}")
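
For multiple map entries, the "elem" array can be built programmatically before being dropped into the command. A minimal sketch; the ports and addresses here are illustrative, not from the question:

```python
# Build the "elem" payload for several port forwards at once.
# Each entry is [key, {"concat": [addr, port]}], matching the echoed JSON
# format that nft -e -j produces.
forwards = {
    80: ("172.16.0.1", 8080),
    443: ("172.16.0.1", 8443),
}

elem = [
    [ext_port, {"concat": [addr, int_port]}]
    for ext_port, (addr, int_port) in sorted(forwards.items())
]
```

This list slots directly into the "elem" key of the add/element command.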


I am getting the following error: 21/10/06 21:15:55 ERROR ScalaDriverLocal: User Code Stack Trace: java.lang.Exception: max value not updated

I am trying to extract data from NetSuite and load it into Azure Databricks by scripting a JSON config and running it through an Azure Data Factory pipeline. I get the error mentioned above:
ERROR ScalaDriverLocal: User Code Stack Trace:
java.lang.Exception: max value not updated
Could this be related to an error updating the checkpoint table?
I am providing the JSON script I used below. I hope someone can help me figure out the error. Thanks.
{
  "parallelism": 1,
  "onJobFailure": "Fail",
  "onEmptyDF": "Fail",
  "ignoreInvalidRows": true,
  "cleanColumnNames": true,
  "jobs": [
    {
      "name": "GenericPassThroughBatchJob.CURRENCY_EXCHANGE_RATE_L1",
      "description": "Extract CURRENCY_EXCHANGE_RATE_L1 data from NetSuite",
      "ignoreInvalidRows": true,
      "cleanColumnNames": true,
      "jdbcInputs": [
        {
          "dataFrameName": "CURRENCY_EXCHANGE_RATE_L1",
          "driver": "com.netsuite.jdbc.openaccess.OpenAccessDriver",
          "flavor": "oracle",
          "url": "${spark.wsgc.jdbcUrl}",
          "keyVaultAuth": {
            "keyVaultParams": {
              "clientId": "${spark.wsgc.clientId}",
              "usernameKey": "${spark.wsgc.usernamekey}",
              "passwordKey": "${spark.wsgc.passwordkey}",
              "clientKey": "${spark.wsgc.clientkey}",
              "vaultBaseUrl": "${spark.wsgc.vaultbaseurl}"
            }
          },
          "incrementalParams": {
            "checkpointTablePath": "dbfs:/mnt/data/governed/l1/audit/log/checkpoint_log/",
            "extractId": "NETSUITE_CURRENCY_EXCHANGE_RATE",
            "incrementalSql": "(select b.NAME as BASE_CURRENCY_CD, c.NAME as CURRENCY_CD, a.EXCHANGE_RATE, a.DATE_EFFECTIVE from Administrator.CURRENCY_EXCHANGE_RATES a left join Administrator.CURRENCIES b on a.BASE_CURRENCY_ID = b.CURRENCY_ID left join Administrator.CURRENCIES c on a.CURRENCY_ID = c.CURRENCY_ID) a1",
            "maxCheckPoint1": "(select to_char(max(DATE_EFFECTIVE), 'DD-MM-YYYY HH24:MI:SS') from Administrator.CURRENCY_EXCHANGE_RATES where DATE_EFFECTIVE > to_date('%%{CHECKPOINT_VALUE_1}', 'YYYY-MM-DD HH24:MI:SS'))"
          }
        }
      ],
      "fileOutputs": [
        {
          "dataFrameName": "CURRENCY_EXCHANGE_RATE_L1",
          "format": "PARQUET",
          "path": "dbfs:/mnt/data/governed/l1/global_netsuite/CurrencyExchangeRate/table/inbound/All_Currency_Exchange_Rate/",
          "saveMode": "Overwrite"
        },
        {
          "dataFrameName": "CURRENCY_EXCHANGE_RATE_L1",
          "format": "DELTA",
          "path": "dbfs:/mnt/data/governed/l1/global_netsuite/CurrencyExchangeRate/table/inbound_archive/All_Currency_Exchange_Rate/",
          "saveMode": "Append"
        }
      ]
    }
  ]
}
This was an issue with the checkpoint table update. Once I rectified the checkpoint value, the error was resolved. I had to set the checkpoint value to min(DATE_LAST_MODIFIED) from the records in the table.
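
When debugging checkpoint problems like this, it is also worth checking that the checkpoint value round-trips through a single date format: the maxCheckPoint1 query above writes the checkpoint with 'DD-MM-YYYY HH24:MI:SS' but re-parses %%{CHECKPOINT_VALUE_1} with 'YYYY-MM-DD HH24:MI:SS'. A small Python sketch of why such a mismatch fails (the sample timestamp is illustrative):

```python
from datetime import datetime

sample = datetime(2021, 10, 6, 21, 15, 55)

# The checkpoint is written as DD-MM-YYYY ...
written = sample.strftime("%d-%m-%Y %H:%M:%S")  # "06-10-2021 21:15:55"

# ... but re-parsed as YYYY-MM-DD, which cannot round-trip:
try:
    datetime.strptime(written, "%Y-%m-%d %H:%M:%S")
    round_trip_ok = True
except ValueError:
    round_trip_ok = False  # format mismatch detected
```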

Passing the answer of Watson Assistant to a variable in Python

I am trying to get the output of Watson Assistant into a variable. As far as I have searched, I need to get the "output" and "text" parts of the JSON (at first it is a dict, but then we parse it to JSON), but I cannot seem to get it.
I have already searched these two questions: this one for Watson and this one for parsing the JSON.
The code is really simple: accessing my bot and inputting "trips". I've taken out the API key and workspace ID, but I have them (obviously).
import json
import watson_developer_cloud

if __name__ == '__main__':
    assistant = watson_developer_cloud.AssistantV1(
        iam_apikey='{YOUR API HERE}',
        version='2018-09-20',
        url='https://gateway-syd.watsonplatform.net/assistant/api'
    )

    response = assistant.message(
        workspace_id='{YOUR WORKSPACE HERE}',
        input={
            'text': 'trips'
        }
    ).get_result()

    fullResponse = json.dumps(response, indent=2)
    print(fullResponse)

    print("testing to print the output: ")
    respuesta = json.dumps(response, indent=2)
    # print(respuesta['output'][0]['text'])
    print(respuesta['output']['text'])
And the output:
{
  "intents": [
    {
      "intent": "trips",
      "confidence": 1
    }
  ],
  "entities": [],
  "input": {
    "text": "trips"
  },
  "output": {
    "generic": [
      {
        "response_type": "text",
        "text": "We got trips to different countries! Type continents to know more!"
      }
    ],
    "text": [
      "We got trips to different countries! Type continents to know more!"
    ],
    "nodes_visited": [
      "node_2_1544696932582"
    ],
    "log_messages": []
  },
  "context": {
    "conversation_id": "{took it out for privacy}",
    "system": {
      "initialized": true,
      "dialog_stack": [
        {
          "dialog_node": "root"
        }
      ],
      "dialog_turn_counter": 1,
      "dialog_request_counter": 1,
      "_node_output_map": {
        "node_2_1544696932582": {
          "0": [
            0
          ]
        }
      },
      "branch_exited": true,
      "branch_exited_reason": "completed"
    }
  }
}
testing to print the output:
Traceback (most recent call last):
  File "C:/Users/.PyCharmCE2018.3/config/scratches/pruebaMain.py", line 105, in <module>
    print(respuesta['output']['text'])
TypeError: string indices must be integers

Process finished with exit code 1
So I want to get the answer "We got trips to different countries! Type continents to know more!". I have read the documentation of the Python API and some more info (https://github.com/IBM-Cloud/watson-conversation-variables) but can't seem to find anything. I also tried accessing the JSON variable with $ but that did not work.
You don't have to use json.dumps here; json.dumps turns the dict into a plain string, which is why indexing it raises the TypeError. You can use the response dict returned from the service directly, as shown in the code snippet below:
import watson_developer_cloud

if __name__ == '__main__':
    assistant = watson_developer_cloud.AssistantV1(
        iam_apikey='APIKEY',
        version='2018-09-20',
        url='https://gateway.watsonplatform.net/assistant/api'
    )

    response = assistant.message(
        workspace_id='WORKSPACE_ID',
        input={
            'text': 'trips'
        }
    ).get_result()

    print(response)
    print(response['output']['text'][0])
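
The failure mode can be reproduced without calling the service at all; this sketch uses a hard-coded stand-in for the response dict, keeping only the relevant keys:

```python
import json

# Stand-in for the dict returned by get_result().
response = {
    "output": {
        "text": ["We got trips to different countries! Type continents to know more!"]
    }
}

serialized = json.dumps(response, indent=2)
# serialized is a str, so serialized['output'] raises
# "TypeError: string indices must be integers".

# Indexing the dict itself works; "text" holds a list, hence the [0].
answer = response["output"]["text"][0]
```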

scan python helper in elasticsearch with slice

I have the following code:
from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan

client = Elasticsearch(hosts=['host'], port=9200)
scan_args = {'query': {'slice': {'max': 1, 'id': 0}}, 'preference': '_shards:0', 'index': u'my_index'}

for hit in scan(client, **scan_args):
    pass  # do something with hit
and I get the following error
RequestError: TransportError(400, u'parsing_exception', u'[slice] failed to parse field [max]')
How should the slice parameter be passed in the scan function?
"max" needs to be at least 2 in my experience; I saw the same error when using "max": 1. The raw error from the HTTP API says max must be greater than 1:
{
  "error": {
    "root_cause": [
      {
        "type": "x_content_parse_exception",
        "reason": "[3:20] [slice] failed to parse field [max]"
      }
    ],
    "type": "x_content_parse_exception",
    "reason": "[3:20] [slice] failed to parse field [max]",
    "caused_by": {
      "type": "illegal_argument_exception",
      "reason": "max must be greater than 1"
    }
  },
  "status": 400
}
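
To actually run a sliced scan, each slice id from 0 to max-1 gets its own query body with the same max. A sketch of that fan-out; the sliced_scan_queries helper is hypothetical, not part of the elasticsearch library:

```python
def sliced_scan_queries(base_query, max_slices):
    """Return one query body per slice; Elasticsearch requires max > 1."""
    if max_slices < 2:
        raise ValueError("max must be greater than 1")
    return [
        dict(base_query, slice={"id": slice_id, "max": max_slices})
        for slice_id in range(max_slices)
    ]

queries = sliced_scan_queries({"query": {"match_all": {}}}, 2)
# Each body would then be passed to scan(client, query=body, index='my_index'),
# typically one slice per worker process.
```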

How do I properly construct a query using the elasticsearch python API?

I've got some code that looks like this:
from elasticsearch import Elasticsearch

es_client = Elasticsearch(hosts=[myhost])

try:
    results = es_client.search(
        body={
            'query': {
                'bool': {
                    'must': {
                        'term': {
                            'foo': 'bar',
                            'hello': 'world'
                        }
                    }
                }
            }
        },
        index='index_A,index_B',
        size=10,
        from_=0
    )
except Exception as e:
    # my code stops here, as there is an exception
    import pdb
    pdb.set_trace()
Examining the exception
SearchPhaseExecutionException[Failed to execute phase [query], all shards failed;
And further down
Parse Failure [Failed to parse source [{"query": {"bool": {"must": {"term": {"foo": "bar", "hello": "world"}}}}}]]]; nested: QueryParsingException[[index_A] [bool] query does not support [must]];
The stack trace was huge, so I just grabbed snippets of it, but the main error appears to be that "must" is not supported, at least the way I have constructed my query.
I was using this and this for guidance on constructing the query.
I can post a more complete stack trace, but I was hoping someone could spot a very obvious error I have made inside the "body" parameter of the "search" method.
Can anyone see anything I have clearly done wrong in constructing the query body for the Python API?
The syntax of the query doesn't look correct to me: a single "term" cannot match two fields at once, and the clauses belong in a list under "must". Try this:
results = es_client.search(
    body={
        "query": {
            "bool": {
                "must": [
                    {"term": {"foo": {"value": "bar"}}},
                    {"term": {"hello": {"value": "world"}}}
                ]
            }
        }
    },
    index='index_A,index_B',
    size=10,
    from_=0
)
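
The same shape can be generated from a plain {field: value} mapping, which avoids hand-writing the nesting. A sketch; the bool_must_terms helper is hypothetical, not part of the elasticsearch library:

```python
def bool_must_terms(filters):
    """Build a bool/must query body from {field: value} pairs.

    Each field becomes its own term clause, since a single term
    clause cannot hold two fields.
    """
    return {
        "query": {
            "bool": {
                "must": [
                    {"term": {field: {"value": value}}}
                    for field, value in filters.items()
                ]
            }
        }
    }

body = bool_must_terms({"foo": "bar", "hello": "world"})
# body can then be passed to es_client.search(body=body, ...)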

how to rewrite mongo shell code to regular python file?

The code below lives in a Python file that I run at the CLI as 'python test.py'...
import pymongo
from pymongo import Connection

connection = Connection('localhost', 27017)
db = connection.school

data = db.students.aggregate(
    { $match : { 'scores.type': 'homework' },
    { $project: { id : $_id,
                  name : $name,
                  scores : $scores
    },
    { $unwind: "$scores" },
    { $group : {
        _id : "$id",
        minScore: { $min : "$scores.score" },
        maxScore: { $max: "$scores.score" }
    }
});

for _id in data:
    print _id

# NOTE: this can only be done ONCE...first find lowest_id then
# remove it by uncommenting the line below...then recomment line.
# db.students.remove(data)
When I run this code, I get this error:
  File "test.py", line 11
    { $match : { 'scores.type': 'homework' },
      ^
SyntaxError: invalid syntax
How do I rewrite this code so it works correctly from inside my test.py python file?
You have a few syntax issues.
First, the pipeline is an array (a list in Python), but you are passing the pipeline stages as separate parameters.
Second, you need to quote the pipeline operators, such as $match: '$match'. The same goes for field references like $_id and $scores.score, which must be strings in Python.
Here is a page with some nice examples:
http://blog.pythonisito.com/2012/06/using-mongodbs-new-aggregation.html
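
Putting those fixes together, a sketch of the corrected pipeline (structure only; field names are taken from the question, and running it still needs a MongoDB instance):

```python
# The whole pipeline is ONE list of stage dicts, and every operator
# ($match, $project, ...) and field reference ($_id, $scores.score, ...)
# is quoted as a string.
pipeline = [
    {"$match": {"scores.type": "homework"}},
    {"$project": {"id": "$_id", "name": "$name", "scores": "$scores"}},
    {"$unwind": "$scores"},
    {"$group": {
        "_id": "$id",
        "minScore": {"$min": "$scores.score"},
        "maxScore": {"$max": "$scores.score"},
    }},
]

# Usage (modern pymongo; Connection was removed in favor of MongoClient):
#   client = pymongo.MongoClient('localhost', 27017)
#   for doc in client.school.students.aggregate(pipeline):
#       print(doc)
```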
