Unable to send deserialized data to a Web API - Python

I am trying to use an API that I have previously used for various jobs to query and retrieve relevant data. Lately, however, the request fails with an unusual exception that I honestly have no idea about.
The CODE:
import SIEMAuth
import requests

alert_id = '144116287822364672|12101929'
query_params = {"id": {"value": alert_id}, "format": {"format": 0}}

response = requests.post(
    SIEMAuth.url + 'ipsGetAlertPacket',
    json=query_params,
    headers=SIEMAuth.session_headers,
    verify=False,
)
print(response.text)
Querying this returns the following exception/traceback:
Can not construct instance of com.mcafee.siem.api.data.alert.EsmPacketFormat: no suitable constructor found, can not deserialize from Object value (missing default constructor or creator, or perhaps need to add/enable type information?)
at [Source: java.io.StringReader#1a15fbf; line: 1, column: 2]
Process finished with exit code 0
Searching the web for this exception mostly turns up results about the Jackson JSON parser in the Java programming environment, which is not something I am working with or familiar with.
If anybody could help, I'd be extremely grateful.

Unfortunately it's as I suggested: one way or another, it's broken. The response from their support is as follows.
I have reached out to my development team about this question and got the response below.
That particular get is not meant to be used in the external API. It should only be used from the interface, and it has been removed since the version of the ESM you are on. If you want to use it externally, you need to submit it as a per.
I hope this clears up your questions.
Edit: This has actually been expanded on in a thread on their support forums. You need a login to see the original thread.
Name notwithstanding, this API does not return the actual data packet associated with an event. In fact, when aggregation is enabled, not all of the packets associated with a given event are available on the ESM. Raw packet data can be retrieved from the ELM through the UI, but unfortunately there currently is not a way to do that programmatically.

Related

How do I send webhook data and turn it into a CSV file?

I'm a little new to coding, but I'd like to know how I can get webhook data from, say, TradingView and turn the info into a CSV. I use Python.
Edit: OK, I've figured out how I want to build my app. I'm stuck again, but with a more descriptive image.
I'd like to figure out how to turn my current request into a CSV file that I can save to a path of my choice on my computer. Below is my current code, but I'll be editing it.
My current code:
Something to note: I am using AWS Chalice as a REST API to receive webhook requests. My goal is to turn that webhook data into a CSV file.
example JSON:
{
    "name": "kaishi",
    "lotsize": "0.03",
    "Signal": "op_sell"
}
These signals change, so the request is always different.
In the image I posted I get an error in my PowerShell that I don't understand because, as I've said, I am quite new to coding.
raise TypeError('Object of type %s is not JSON serializable'
TypeError: Object of type DataFrame is not JSON serializable

raise ValueError("If using all scalar values, you must pass an index")
ValueError: If using all scalar values, you must pass an index
Those are just two of the errors; they vary depending on how I'm testing my app.
Any info would be appreciated.
Webhooks are requests sent from the remote server (in this case the TradingView server) to your server, so you'll need to either find an existing package that handles TradingView webhooks and deploy it accordingly, or build your own handler with one of the many web service frameworks available in Python. A good place to start is exploring some of the open-source options on GitHub: https://github.com/search?l=Python&q=tradingview&type=Repositories
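Since the question already mentions AWS Chalice, here is a minimal sketch of what such a route could look like, assuming a Chalice app. The route name, CSV path and field handling are all illustrative, and Lambda only allows writes under /tmp, so anything persistent would go to S3 or a database instead.
import csv
import os
from chalice import Chalice

app = Chalice(app_name='tradingview-webhook')
CSV_PATH = '/tmp/webhook.csv'  # Lambda functions can only write under /tmp

@app.route('/webhook', methods=['POST'])
def receive_webhook():
    payload = app.current_request.json_body or {}
    # Append the payload as one CSV row; csv.DictWriter takes a plain dict,
    # which avoids the DataFrame / "all scalar values" errors from the question.
    write_header = not os.path.exists(CSV_PATH)
    with open(CSV_PATH, 'a', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=sorted(payload.keys()))
        if write_header:
            writer.writeheader()
        writer.writerow(payload)
    return {'status': 'ok'}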

How to send JSON into Snowplow using an Iglu webhook in Python

I have a JSON file full of event data that I need to send into Snowplow from Python using an Iglu webhook, but I'm having trouble finding any solid guidance on this. Most of the documentation I've been able to find relates to tracking specific events as they happen, whereas I need to backfill historical data in the same manner I'll send forward-looking data, hence having to send a large JSON of activity history at the outset.
Is this possible using Snowplow/Python/Iglu, or am I approaching the problem incorrectly?
This question is getting old and OP may have moved on, but I'll leave an answer for anyone else who might stumble upon it.
A Snowplow collector (e.g. the stream-collector) receives data over HTTP. Any method of sending an HTTP request should work in theory; however, there are specific SDKs that address the common use cases. For Python specifically, there is the snowplow-python-tracker. You can refer to the full documentation here: Snowplow Python Tracker Docs.
You do not need to be using an Iglu webhook. You can point your Python tracker instance directly at your collector via the existing request paths, which are documented here. Yes, one of these paths is for requests via the Iglu webhook adapter, but that is meant for specific situations where you don't control the environment in which the tracker is instantiated, e.g. third-party vendor systems.
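As a rough illustration, a backfill loop with the Python tracker could look something like the sketch below. The collector endpoint, schema URI and event shape are placeholders, and constructor signatures differ between tracker versions, so treat it as a shape to adapt against the linked docs rather than copy-paste code.
from snowplow_tracker import Emitter, Tracker, SelfDescribingJson

emitter = Emitter("collector.example.com")  # placeholder; point at your own collector
tracker = Tracker(emitter)

# Historical events loaded from your large JSON file (shape is illustrative).
historical_events = [
    {"activity": "login", "occurred_at": "2020-01-01T00:00:00Z"},
]

for event in historical_events:
    tracker.track_self_describing_event(
        SelfDescribingJson(
            "iglu:com.example/activity/jsonschema/1-0-0",  # hypothetical schema
            event,
        )
    )
tracker.flush()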

In Python, what is a response object returned from a website?

I'm trying to use the Etsy API, and I was finally able to get it running from source. I gave it my key, and it returned the following when printed out.
<etsy._v2.EtsyV2 object at 0xb7284ccc>
However, I have no idea what to do with it. The GitHub repo doesn't have much documentation, and the method that is supposed to come next doesn't work. I read the Etsy API docs and didn't find the getFrontFeaturedListings call that the GitHub page mentions.
I've had this issue before with an HTTP response object, and I was told to use response.content to find out more about the object. That didn't work for this object, so I'm wondering whether there is a simple way to inspect any generic object, or at least see what this object contains.
When in doubt you can always use the dir() built-in function on an arbitrary Python object. It will show you the methods and fields attached to the object. https://docs.python.org/3/library/functions.html#dir
Anyway, sorry to hear about the poor documentation of the library. Last time I used Etsy's API I just created a little class that used requests. It wasn't much work since Etsy lays out all of the URIs + documentation nicely on their developer site. https://www.etsy.com/developers/documentation/reference/favoritelisting
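For example, assuming the object from the question is bound to a variable named api (a hypothetical name), the usual introspection helpers look like this:
print(dir(api))    # every attribute and method name on the object
print(vars(api))   # the instance's __dict__, when it has one
help(api)          # class and method docstrings, if the library provides any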

How can I directly retrieve only part of a JSON document, rather than extracting what I need from the whole thing?

I want to make a JSON request with the Python library requests in which I only obtain certain JSON objects.
I know that it is really easy to process the JSON object obtained and focus only on the needed information, but that would hurt the request's efficiency (in case it is done repeatedly).
As said, I know this is a possibility:
url = 'http://www.foo.com'
r = requests.get(url).json()
# Do something with r[3]['data4'], the only part that is going to be used.
But how could I obtain only r[3]['data4'] directly from the request?
Short Answer
To answer your question: no, you can't. But to understand why, you need to know what is happening behind the scenes.
Behind the scenes
When you make a request such as r = requests.get('http://www.foo.bar'), you are asking the server for the whole resource, and r.json() simply parses whatever the server sends back. This means that you cannot get just r[3]['data4'] unless the server only sends r[3]['data4'], because you are parsing what the server sends to you. It may be possible to filter out everything else during response processing, but I am unaware of how to do it.
You can't if the server does not allow it. If the target server lets you specify the fields you want, then you can send that field list in your request and the server will return only those fields in the JSON. Otherwise you will have to parse the full JSON response and extract your desired fields.
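For example, some APIs accept a field-selection query parameter. The parameter name "fields" and the URL below are made up for illustration and only work if the server actually supports this kind of filtering:
import requests

url = 'https://www.foo.com/api/items'

# Only possible when the server itself supports field selection.
r = requests.get(url, params={'fields': 'data4'})
subset = r.json()

# Otherwise the whole document comes back and you pick out what you need locally.
r = requests.get(url)
data4 = r.json()[3]['data4']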

Distinguish data received via POST and GET in Cherrypy

The variable cherrypy.request.params, as described in the API, contains the query-string and POST variables in a single dictionary. Combing over it, however, it seems to contain every variable received after the full request URI has been processed to pull out the GET data, so those values become indistinguishable from POST data in the dictionary.
There seems to be no way to tell the difference, or perhaps I am wrong.
Can someone please enlighten me on how to use purely POSTed data and ignore any data in the query string beyond the request URI? Yes, I am aware that I can find out whether it was a POST or GET request, but that does not stop forgery when a POST request is sent to a URI that also contains GET data.
>http://localhost:8080/testURL/part2?test=1
>POST: username = test
cherrypy.request.params then has two variables:
test = 1
username = test
The docs aren't very clear on this point, but starting in CherryPy 3.2 you can reference request.body.params to obtain just the POST/PUT params. In earlier versions, use request.body_params (still present but deprecated in 3.2). See http://docs.cherrypy.org/dev/refman/_cprequest.html#cherrypy._cprequest.Request.body_params
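A minimal sketch, assuming CherryPy 3.2 or newer (where request.body.params exists); the handler and return value are only for illustration and mirror the URL from the question:
import cherrypy

class Root:
    @cherrypy.expose
    def testURL(self, part2=None, **params):
        # params mixes query-string and body values together.
        query_string = cherrypy.request.query_string   # e.g. "test=1"
        body_only = cherrypy.request.body.params       # only values sent in the request body
        return "query: %s, body: %r" % (query_string, body_only)

if __name__ == '__main__':
    cherrypy.quickstart(Root())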
