I have written sample Python code to post log data to the Fluentd endpoint of an EFK stack. When I send 400 logs at a time, the status code is 200 and I can see all the logs on the Kibana dashboard, but when I send 500 logs at a time, the status code is 414 (URI Too Long).
Here is the sample python code:
import sys
import json
from datetime import datetime
import random
import requests
f = open('/etc/td-agent/data2.json')  # per-tenant account details
data = json.load(f)
input_file = open(sys.argv[-1])  # one JSON record per line
actions = []
url = ''
u_name = ''
p_word = ''
for line in input_file:
    temp = json.loads(line)
    tenantid = temp['HTTP_FLUENT_TAG']
    message = temp['message']
    message_json = json.loads(message)
    h_name = data['account_details'][tenantid]['hostname']
    u_name = data['account_details'][tenantid]['username']
    p_word = data['account_details'][tenantid]['password']
    url = 'https://' + h_name
    for element in message_json:
        temp = str(element['date'])
        url = url + '?time=' + temp
        action = {
            "msg": element['log'],
            "id": element['ID']
        }
        actions.append(action)
r = requests.post(url, auth=(u_name, p_word), json=actions)
print(r.status_code)
f.close()
Can anyone please help with how to send a large load to the Fluentd endpoint in one go?
For an Elasticsearch endpoint we can use the Elasticsearch API, which has a bulk feature that helps to send large amounts of data in a single request. I am looking for a similar option for a Fluentd endpoint.
There are two ways this can be done:
1. Create a JSON file of request-body JSON objects and compress it, so they are all sent in a single request.
2. Call the API multiple times, each call carrying the maximum number of JSON objects that worked for you (400, as you mentioned above).
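Note also that HTTP 414 means "URI Too Long", and the inner loop in the question appends '?time=...' to the URL on every record, so the URL itself grows with the batch size; keeping the timestamp inside the JSON body avoids that. If the endpoint still caps the batch size, the second option can be sketched like this (post_in_batches and the 400-record limit are assumptions taken from the question, not Fluentd features):

```python
import requests

def chunked(items, size):
    """Yield successive slices of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def post_in_batches(url, auth, actions, batch_size=400):
    """POST the accumulated actions in batches small enough to stay
    under the size that worked (400 records per request)."""
    codes = []
    for batch in chunked(actions, batch_size):
        r = requests.post(url, auth=auth, json=batch)
        codes.append(r.status_code)
    return codes
```
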
I am making an API call to the ECB and trying to get back a URL for a CSV file, but I always get an XML file. Any help would be appreciated.
My reference: https://www.datacareer.de/blog/accessing-ecb-exchange-rate-data-in-python/
My code:
import requests # 2.18.4
import pandas as pd # 0.23.0
import io
from datetime import datetime, timedelta
# Building blocks for the URL
entrypoint = 'https://sdw-wsrest.ecb.europa.eu/service/' # Using protocol 'https'
resource = 'data' # The resource for data queries is always 'data'
flowRef ='EXR' # Dataflow describing the data that needs to be returned, exchange rates in this case
key = 'D.CHF.EUR.SP00.A' # Defining the dimension values, explained below
# Define the parameters
parameters = {
    'startPeriod': (datetime.now()-timedelta(1)).strftime("%Y-%m-%d"),  # Start date of the time series
    'endPeriod': datetime.now().strftime("%Y-%m-%d")  # End of the time series
}
# Construct the URL: https://sdw-wsrest.ecb.europa.eu/service/data/EXR/D.CHF.EUR.SP00.A
request_url = entrypoint + resource + '/' + flowRef + '/' + key
# Make the HTTP request again, now requesting for CSV format
response = requests.get(request_url, params=parameters, headers={'Accept': 'text/csv'})
# Response successful? (status code 200)
print(response)
print(response.url)
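Whether CSV actually came back can be checked from the response's Content-Type header before parsing; if the service ignored the Accept header you will see an XML media type there. A small sketch for turning a text/csv body into a DataFrame (csv_to_frame is a hypothetical helper; with the request above you would pass it response.text):

```python
import io
import pandas as pd

def csv_to_frame(body_text):
    """Parse a text/csv response body into a DataFrame."""
    return pd.read_csv(io.StringIO(body_text))

# Shape check on a tiny sample in the SDW CSV layout (columns assumed):
sample = "TIME_PERIOD,OBS_VALUE\n2020-01-02,1.0850\n"
print(csv_to_frame(sample).shape)  # (1, 2)
```
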
I am trying to request from Twitch's API a list of live streamers.
First, I need to check all streamers from a specific Twitch Team, which the API gives me their IDs. Then, I need to check which are live right now. For this, I have written the following code:
import requests
import json
from concurrent.futures import ThreadPoolExecutor
response = requests.get("https://api.twitch.tv/kraken/teams/rhynoesports",
                        headers={'Accept': 'application/vnd.twitchtv.v5+json',
                                 'Client-ID': 'KEY'})
ids = []
with ThreadPoolExecutor(max_workers=5) as executor:
    for i in response.json()["users"]:
        uid = i["_id"]
        ids.append(uid)
parameters = {
    "channel": ids
}
response_live = requests.get("https://api.twitch.tv/kraken/streams/",
                             params=parameters,
                             headers={'Accept': 'application/vnd.twitchtv.v5+json',
                                      'Client-ID': 'KEY'})
status = []
with ThreadPoolExecutor(max_workers=5) as executor:
    for s in response_live.json()["streams"]:
        sid = s["channel"]["display_name"]
        sviewer = s["viewers"]
        sgame = s["preview"]["medium"]
        status.append(sid)
        status.append(sviewer)
        status.append(sgame)
print(status)
The first request to the API appends the following to ids:
['151725719', '45737168', '156113210', '89293605', '650627666', '136014647',
'99060924', '246849290', '61610474', '602283265', '204979621', '507115885',
'49251436', '265876002', '155784200']
How can I use the stored ids as the parameters for the channel request?
You can request several streams in one call by repeating the user_id query parameter, e.g. https://api.twitch.tv/helix/streams?user_id=123&user_id=456
You can use this API to see whether each stream ID has a payload; you'll get results back only for live streams.
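The requests library does this encoding for you: a list value in params is serialised as a repeated query parameter, which is exactly the user_id=123&user_id=456 form. A quick sketch using a prepared request (no network call is made here):

```python
import requests

# A list value in `params` becomes a repeated query parameter.
req = requests.Request(
    'GET',
    'https://api.twitch.tv/helix/streams',
    params={'user_id': ['123', '456']},
).prepare()
print(req.url)  # https://api.twitch.tv/helix/streams?user_id=123&user_id=456
```
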
Writing a bot for a personal project, and the Bittrex API refuses to validate my content hash. I've tried everything I can think of and all the suggestions from similar questions, but nothing has worked so far: hashing 'None', a blank string, the currency symbol, the whole URI, the command and balance, and a few other things. I've also reformatted the request a few times (bytes/string/dict), still nothing.
The documentation says to hash the request body (which seems synonymous with the payload in similar questions about making transactions through the API), but this is a simple get/check-balance request with no payload.
The problem is, I get a 'BITTREX ERROR: INVALID CONTENT HASH' response when I run it.
Any help would be greatly appreciated; this feels like a simple problem, but it's been frustrating the hell out of me. I am very new to Python, but the rest of the bot went very well, which makes it extra frustrating that I can't hook it up to my account :/
import hashlib
import hmac
import json
import os
import time
import requests
import sys
# Base Variables
Base_Url = 'https://api.bittrex.com/v3'
APIkey = os.environ.get('B_Key')
secret = os.environ.get('S_B_Key')
timestamp = str(int(time.time() * 1000))
command = 'balances'
method = 'GET'
currency = 'USD'
uri = Base_Url + '/' + command + '/' + currency
payload = ''
print(payload) # Payload Check
# Hashes Payload
content = json.dumps(payload, separators=(',', ':'))
content_hash = hashlib.sha512(bytes(json.dumps(content), "utf-8")).hexdigest()
print(content_hash)
# Presign
presign = (timestamp + uri + method + str(content_hash) + '')
print(presign)
# Create Signature
message = f'{timestamp}{uri}{method}{content_hash}'
sign = hmac.new(secret.encode('utf-8'), message.encode('utf-8'),
                hashlib.sha512).hexdigest()
print(sign)
headers = {
    'Api-Key': APIkey,
    'Api-Timestamp': timestamp,
    'Api-Signature': sign,
    'Api-Content-Hash': content_hash
}
print(headers)
req = requests.get(uri, json=payload, headers=headers)
tracker_1 = "Tracker 1: Response =" + str(req)
print(tracker_1)
res = req.json()
if not req.ok:
    print('bullshit error #1')
    print("Bittrex response: %s" % res['code'], file=sys.stderr)
I can see two main problems:
You are serialising/encoding the payload separately for the hash (with json.dumps and then bytes) and for the request (with the json=payload parameter to requests.get). You don't have any way of knowing how the requests library will format your data, and if even one byte is different you will get a different hash. It is better to convert your data to bytes first, and then use the same bytes for the hash and for the request body.
GET requests do not normally have a body (see this answer for more details), so it might be that the API is ignoring the payload you are sending. You should check the API docs to see if you really need to send a request body with GET requests.
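Concretely, the first point can look like this: serialise once, send exactly those bytes with data= (not json=), and hash the same bytes. For a body-less GET the hash should be over the empty string, not over a JSON-encoded '' (json.dumps('') yields the two characters "", which hash differently). body_and_hash is a hypothetical helper, not part of any Bittrex SDK:

```python
import hashlib
import json

def body_and_hash(payload):
    """Serialise once; reuse the same bytes for the request body
    (data=body) and for the Api-Content-Hash header."""
    if payload is None:
        body = b''  # body-less GET: hash the empty string
    else:
        body = json.dumps(payload, separators=(',', ':')).encode('utf-8')
    return body, hashlib.sha512(body).hexdigest()

body, digest = body_and_hash(None)
print(digest[:8])  # cf83e135 (the well-known SHA-512 of the empty string)
```
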
I'm not sure why I am receiving this error. There is a decoder.py file in my python folder.
import requests
import json
import common
session = requests.Session()
uri = "http://www.missingkids.com"
json_srv_uri = uri + "/missingkids/servlet/JSONDataServlet"
search_uri = "?action=publicSearch"
child_detail_uri = "?action=childDetail"
session.get(json_srv_uri + search_uri + "&searchLang=en_US&search=new&subjToSearch=child&missState=CA&missCountry=US") #Change missState=All for all states
response = session.get(json_srv_uri + search_uri + "&searchLang=en_US&goToPage=1")
dct = json.loads(response.text)
pgs = int(dct["totalPages"])
print("found {} pages".format(pgs))
missing_persons = {}
The URL http://www.missingkids.com/missingkids/servlet/ returns a 404 error, so there is no JSON data for requests to return. Fixing the URL so that it points to a valid destination will let requests return page content.
To make a search for a missing child registered in that website's database, try this URL: http://www.missingkids.com/gethelpnow/search
After every HTTP call you need to check the status code.
Example
import requests
r = requests.get('my_url')
# status code 'OK' is very popular and its numeric value is 200
# note that there are other status codes as well
if r.status_code == requests.codes.ok:
    pass  # do your thing
else:
    pass  # we have a problem
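In the same spirit, separating the status check from the JSON parsing makes the failure mode obvious: the decoder.py in the traceback is most likely just the json module choking on a non-JSON body, such as a 404 HTML page. A small sketch (parse_json_or_explain is a hypothetical helper):

```python
import json

def parse_json_or_explain(status_code, text):
    """Only try to decode JSON after a successful status, so a 404
    page raises a clear error instead of a bare JSONDecodeError."""
    if status_code != 200:
        raise ValueError('HTTP %d: no JSON body to parse' % status_code)
    try:
        return json.loads(text)
    except json.JSONDecodeError as exc:
        raise ValueError('body is not JSON: %s' % exc)

print(parse_json_or_explain(200, '{"totalPages": "3"}'))  # {'totalPages': '3'}
```
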
I only manage to use the Emotion API subscription key for pictures, never for videos. It makes no difference whether I use the API Testing Console or call the Emotion API from Python 2.7. In both cases I get a response status 202 Accepted; however, when opening the Operation-Location it says
{ "error": { "code": "Unauthorized", "message": "Access denied due to
invalid subscription key. Make sure you are subscribed to an API you are
trying to call and provide the right key." } }
On the Emotion API explanatory page it says that Response 202 means that
The service has accepted the request and will start the process later.
In the response, there is a "Operation-Location" header. Client side should further query the operation status from the URL specified in this header.
Then there is response 401, which is exactly what my Operation-Location contains. I do not understand why I'm getting a response 202 that looks like a response 401.
I have tried to call the API with Python using at least three code versions found on the Internet, all of which amount to the same thing. I found the code here:
Microsoft Emotion API for Python - upload video from memory
import httplib
import urllib
import base64
import json
import pandas as pd
import numpy as np
import requests
_url = 'https://api.projectoxford.ai/emotion/v1.0/recognizeInVideo'
_key = '**********************'
_maxNumRetries = 10
paramsPost = urllib.urlencode({'outputStyle': 'perFrame',
                               'file': 'C:/path/to/file/file.mp4'})
headersPost = dict()
headersPost['Ocp-Apim-Subscription-Key'] = _key
headersPost['content-type'] = 'application/octet-stream'
jsonGet = {}
headersGet = dict()
headersGet['Ocp-Apim-Subscription-Key'] = _key
paramsGet = urllib.urlencode({})
responsePost = requests.request('post', _url + "?" + paramsPost,
                                data=open('C:/path/to/file/file.mp4', 'rb').read(),
                                headers=headersPost)
print responsePost.status_code
videoIDLocation = responsePost.headers['Operation-Location']
print videoIDLocation
Note that changing _url = 'https://api.projectoxford.ai/emotion/v1.0/recognizeInVideo' to _url = 'https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognizeInVideo' doesn't help.
However, afterwards I wait and run every half an hour:
getResponse = requests.request('get', videoIDLocation, json=jsonGet,
                               data=None, headers=headersGet, params=paramsGet)
print json.loads(getResponse.text)['status']
The outcome has been 'Running' for hours and my video is only about half an hour long.
Here is what my Testing Console looks like (screenshot: Testing Console for Emotion API, Emotion Recognition in Video).
Here I used another video, about 5 minutes long and available on the internet. I found it in a different usage example:
https://benheubl.github.io/data%20analysis/fr/
which uses very similar code. Again I get a response status 202 Accepted, and when opening the Operation-Location the subscription key is reported as invalid.
Here the code:
import httplib
import urllib
import base64
import json
import pandas as pd
import numpy as np
import requests
# you have to sign up for an API key, which has some allowances. Check the
# API documentation for further details:
_url = 'https://api.projectoxford.ai/emotion/v1.0/recognizeinvideo'
_key = '*********************'  # Here you have to paste your primary key
_maxNumRetries = 10
# URL direction: I hosted this on my domain
urlVideo = 'http://datacandy.co.uk/blog2.mp4'
# Computer Vision parameters
paramsPost = {'outputStyle': 'perFrame'}
headersPost = dict()
headersPost['Ocp-Apim-Subscription-Key'] = _key
headersPost['Content-Type'] = 'application/json'
jsonPost = {'url': urlVideo}
responsePost = requests.request('post', _url, json=jsonPost, data=None,
                                headers=headersPost, params=paramsPost)
if responsePost.status_code == 202:  # everything went well!
    videoIDLocation = responsePost.headers['Operation-Location']
    print videoIDLocation
There are further examples on the internet and they all seem to work but replicating any of them never worked for me. Does anyone have any idea what could be wrong?
The Video feature of the Emotion API retires October 30th, so you may want to switch your procedure to screenshots anyway.
But to your question: the API returns a URL where your results are accessible. You cannot open this URL in your browser; that gives you the "invalid key" notice. Instead, you need to call this URL from Python again, including your key.
I will post my code for getting the score. I am using Python 3, so some adjustments may be necessary. The only "tricky" point is getting the operation ID, which is just the ID at the end of the URL (= location in my case) that leads to your request. The rest of the parameters, like the subscription key, are as before.
import http.client  # needed for the connection below

# extract operation ID from location-string
OID = location[67:]
bod = ""
try:
    conn = http.client.HTTPSConnection('westus.api.cognitive.microsoft.com')
    conn.request("GET", "/emotion/v1.0/operations/" + OID + "?%s" % params, bod, headers)
    response = conn.getresponse()
    data = response.read()
    print(data)
    conn.close()
except Exception as e:
    print("[Errno {0}] {1}".format(e.errno, e.strerror))
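Two possible hardenings of the snippet above (both are assumptions about your setup, not part of the official client): the fixed location[67:] slice breaks if the host or path length ever changes, and the same poll can be done with requests, passing the subscription key as a header. Opening the Operation-Location in a browser sends no key, which is why the browser shows the 401 body.

```python
import requests

def operation_id(location):
    """Take the trailing segment of the Operation-Location URL
    instead of relying on a fixed character slice."""
    return location.rstrip('/').rsplit('/', 1)[-1]

def poll_status(location, key):
    """GET the operation URL with the subscription key header,
    returning the 'status' field of the JSON result."""
    r = requests.get(location, headers={'Ocp-Apim-Subscription-Key': key})
    return r.json().get('status')
```
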
Did you verify your API call works using curl? Always prototype calls with curl first. If it works in curl but not in Python, use Fiddler to observe the API request and response.
I also found an answer at the following link, where all the steps are explained:
https://gigaom.com/2017/04/10/discover-your-customers-deepest-feelings-using-microsoft-facial-recognition/