I am trying to pass a query from Python (Eclipse IDE) to extract data from a specific dashboard on Splunk Enterprise. I can get data printed to my console by passing the required queries; however, I am not able to extract data for a specific time interval (for example, the last 1 hour, 1 day, 1 week, or 1 month).
I have tried modifiers like 'earliest' and 'latest' along with my query, but every time it throws an error: "raise HTTPError(response) splunklib.binding.HTTPError: HTTP 400 Bad Request -- Search Factory: Unknown search command 'earliest'"
Here is my code
import splunklib.client as client
import splunklib.results as results
HOST = "my hostname"
PORT = 8089
USERNAME = "my username"
PASSWORD = "my password"
service = client.connect(
    host=HOST,
    port=PORT,
    username=USERNAME,
    password=PASSWORD)
rr = results.ResultsReader(service.jobs.export("search index=ccmjimmie | stats count(eval(resCode!=00200)) AS errored | chart sum(errored)|earliest=-1d"))
for result in rr:
    if isinstance(result, results.Message):
        # Diagnostic messages might be returned in the results
        print(result.type, result.message)
    elif isinstance(result, dict):
        # Normal events are returned as dicts
        print(result)
assert rr.is_preview == False
Output I am getting without using time query
OrderedDict([('sum(errored)', '1566')])
OrderedDict([('sum(errored)', '4404')])
OrderedDict([('sum(errored)', '6655')])
OrderedDict([('sum(errored)', '8992')])
etc...
This output is the same as expected, but it is not bounded by time. I want the same output for a given time interval, and the time interval should be passed in the search query given to "service.jobs.export()" in the Python code above.
Please let me know how to pass a 'time' constraint along with my query.
Any help is most appreciated! Thanks in advance!
You have to put the earliest modifier at the beginning of your search. Example for the last day (-1d until now):
"search index=ccmjimmie earliest=-1d | stats count(eval(resCode!=00200)) AS errored | chart sum(errored)"
For details, see: https://docs.splunk.com/Documentation/Splunk/7.2.4/SearchReference/SearchTimeModifiers
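A minimal sketch of passing that corrected query to jobs.export (hostname and credentials are the placeholders from the question; the live connection is gated behind a flag so the query construction can be seen on its own):

```python
# Time modifiers go in the initial search clause, before any pipe --
# 'earliest' is not a search command of its own.
QUERY = ("search index=ccmjimmie earliest=-1d latest=now "
         "| stats count(eval(resCode!=00200)) AS errored "
         "| chart sum(errored)")

RUN_LIVE = False  # flip to True once real connection details are filled in
if RUN_LIVE:
    import splunklib.client as client
    import splunklib.results as results

    # Placeholder connection details from the question -- replace with yours.
    service = client.connect(host="my hostname", port=8089,
                             username="my username", password="my password")
    for result in results.ResultsReader(service.jobs.export(QUERY)):
        print(result)
```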
I may have used a different process to run Splunk queries from Python and get search results as JSON; however, passing 'time' is very convenient this way. Here I pass the earliest and latest time variables in the body of the POST request.
post_data = {
    'id': unique_id,
    'search': search_query,
    'earliest_time': '1',
    'latest_time': 'now',
}
You can find the complete details here:
https://stackoverflow.com/a/66747167/9297984
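For reference, a sketch of that approach with plain requests against Splunk's REST export endpoint (the host and credentials are placeholders; the /services/search/jobs/export path is the standard REST search endpoint):

```python
BASE = "https://my-splunk-host:8089"  # placeholder management host/port

# earliest_time / latest_time go in the request body, not the search string.
post_data = {
    "search": "search index=ccmjimmie | stats count AS errored",
    "earliest_time": "-1d",   # relative time modifier
    "latest_time": "now",
    "output_mode": "json",
}

RUN_LIVE = False  # flip to True with real credentials
if RUN_LIVE:
    import requests
    r = requests.post(BASE + "/services/search/jobs/export",
                      data=post_data, auth=("user", "pass"), verify=False)
    print(r.text)
```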
I've got a Python Flask app whose job is to work with the Twitter v2.0 API. I ended up using Tweepy in my app because I was having difficulty hand-coding the 3-legged auth flow. Anyway, since I got that working, I'm now running into difficulties executing some basic queries, like get_me() and get_user().
This is my code:
client = tweepy.Client(
    consumer_key=private.API_KEY,
    consumer_secret=private.API_KEY_SECRET,
    access_token=access_token,
    access_token_secret=access_token_secret)
user = client.get_me(expansions='author_id', user_fields=['username','created_at','location'])
print(user)
return('success')
And this is invariably the error:
tweepy.errors.BadRequest: 400 Bad Request
The expansions query parameter value [author_id] is not one of [pinned_tweet_id]
Per the Twitter docs for this endpoint, this should certainly work... I fail to understand why the 'pinned_tweet_id' expansion is the particular issue.
I'm left wondering if I'm missing something basic here, or if Tweepy is just a POS and I should consider rolling my own queries like I originally intended.
Tweet Author ID
You may have read the Twitter Docs incorrectly: for this endpoint the expansions parameter accepts only pinned_tweet_id, while the tweet fields parameter has the author_id value you're looking for. Here is a screenshot for better clarification:
The code would look like:
client = tweepy.Client(
    consumer_key=private.API_KEY,
    consumer_secret=private.API_KEY_SECRET,
    access_token=access_token,
    access_token_secret=access_token_secret)

user = client.get_me(tweet_fields=['author_id'],
                     user_fields=['username', 'created_at', 'location'])
print(user)
return('success')
User ID
If you're looking for the user id then try omitting tweet_fields and add id in the user_fields also shown in the Twitter Docs.
The code would look like:
client = tweepy.Client(
    consumer_key=private.API_KEY,
    consumer_secret=private.API_KEY_SECRET,
    access_token=access_token,
    access_token_secret=access_token_secret)
user = client.get_me(user_fields=['id', 'username', 'created_at', 'location'])
print(user)
return('success')
You can obtain the user id with user.data.id.
The solution is to drop the 'expansions' kwarg and leave 'user_fields' as is. I was further confused by the fact that printing the returned user object does not show the requested user_fields. You have to access them explicitly through the data attribute, as below.
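A sketch of that access pattern (show_profile is a hypothetical helper, and client is assumed to be an already-authenticated tweepy.Client):

```python
def show_profile(client):
    # `client` is assumed to be an authenticated tweepy.Client.
    # Note: no `expansions` kwarg -- just user_fields.
    user = client.get_me(user_fields=["username", "created_at", "location"])
    # print(user) does not display these extra fields; read them off .data.
    return {
        "username": user.data.username,
        "created_at": user.data.created_at,
        "location": user.data.location,
    }
```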
I'm trying to make an API call for the first time. I've made my app, but it says I have to do a local authentication, with instructions here:
Link to TDAmeritrade authentication
It says I have to go to https://auth.tdameritrade.com/auth?response_type=code&redirect_uri={URLENCODED REDIRECT URI}&client_id={URLENCODED Consumer Key}%40AMER.OAUTHAP, where I plug in the URL-encoded redirect URI and URL-encoded consumer key, and I don't know how to get the URI. Say I'm using localhost port 1111: do I just plug in "localhost:1111"? Because that didn't work.
Perhaps that doesn't even matter, because I was writing the following:
import requests
from config import consumer_key
#daily prices generator
endpoint = "https://api.tdameritrade.com/v1/marketdata/{}/pricehistory".format("AAPL")
#parameters
import time
timeStamp=time.time()
timeStamp=int(timeStamp)
parameters = {'api_key': consumer_key,
              'periodType': 'day',
              'frequencyType': "minute",
              'frequency': '5',
              'period': '1',
              'endDate': str(timeStamp + 86400),
              'startDate': str(timeStamp),
              'extendedHourData': 'true'}
#caller
stuff = requests.get(url = endpoint, params = parameters)
#reformater
lister = stuff.json()
lister
which returned "{'error': 'The API key in request query param is either null or blank or invalid.'}"
TDA has some rules:
timeStamp needs to be in milliseconds
You can only get the past 31 days in minute format
There are also some format constraints:
frequencyType=minute --> use periodType=day
frequencyType=daily --> use periodType=month
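Based on those rules, a hedged rework of the parameters from the question (assumptions worth double-checking against the TDA docs: the query parameter is spelled apikey rather than api_key, timestamps are epoch milliseconds, and the extended-hours flag is needExtendedHoursData):

```python
import time

now_ms = int(time.time() * 1000)   # TDA wants epoch *milliseconds*
day_ms = 24 * 60 * 60 * 1000

parameters = {
    "apikey": "CONSUMER_KEY_HERE",      # placeholder consumer key
    "periodType": "day",                # minute frequency pairs with day
    "frequencyType": "minute",
    "frequency": "5",
    "startDate": str(now_ms - day_ms),  # last 24 hours, within the 31-day limit
    "endDate": str(now_ms),
    "needExtendedHoursData": "true",
}

RUN_LIVE = False  # flip to True with a real consumer key
if RUN_LIVE:
    import requests
    url = "https://api.tdameritrade.com/v1/marketdata/AAPL/pricehistory"
    print(requests.get(url, params=parameters).json())
```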
I'm trying to get search queries for my site.
import argparse
import sys
from googleapiclient import sample_tools
def execute_request(service, property_uri, request):
    """Executes a searchAnalytics.query request.

    Args:
        service: The webmasters service to use when executing the query.
        property_uri: The site or app URI to request data for.
        request: The request to be executed.

    Returns:
        An array of response rows.
    """
    return service.searchanalytics().query(
        siteUrl=property_uri, body=request).execute()
# Declare command-line flags.
argparser = argparse.ArgumentParser(add_help=False)
argparser.add_argument('property_uri', type=str,
                       help=('Site or app URI to query data for (including '
                             'trailing slash).'))
argparser.add_argument('start_date', type=str,
                       help=('Start date of the requested date range in '
                             'YYYY-MM-DD format.'))
argparser.add_argument('end_date', type=str,
                       help=('End date of the requested date range in '
                             'YYYY-MM-DD format.'))

service, flags = sample_tools.init(
    sys.argv, 'webmasters', 'v3', __doc__, 'client_secrets.json', parents=[argparser],
    scope='https://www.googleapis.com/auth/webmasters.readonly')
# First run a query to learn which dates we have data for. You should always
# check which days in a date range have data before running your main query.
# This query shows data for the entire range, grouped and sorted by day,
# descending; any days without data will be missing from the results.
request = {
    'startDate': flags.start_date,
    'endDate': flags.end_date,
    'dimensions': ['date']
}
response = execute_request(service, flags.property_uri, request)
print(response)
When I run the program:
python googleapisearch.py property_uri=http://enquetemaken.be/ start_date=2018-06-12 end_date=2018-06-13
I get the following error:
googleapiclient.errors.HttpError: https://www.googleapis.com/webmasters/v3/sites/property_uri%3Dhttp%3A%2F%2Fenquetemaken.be%2F/searchAnalytics/query?alt=json
returned "'property_uri=http://enquetemaken.be/' is not a valid Search
Console site URL.">
I can't understand what's wrong.
In the dashboard, my url is exactly the same as I enter:
What am I doing wrong?
The three arguments are positional, so pass them as plain values without name= prefixes. Run the program as follows:
python googleapisearch.py 'http://enquetemaken.be/' '2018-06-12' '2018-06-13'
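Once the arguments are passed positionally, the same execute_request helper can return the actual search queries by grouping on the query dimension. A sketch of just the request body (rowLimit is an optional cap):

```python
# Request body for searchanalytics.query, grouped by search query.
request = {
    "startDate": "2018-06-12",
    "endDate": "2018-06-13",
    "dimensions": ["query"],   # one row per search query
    "rowLimit": 25,            # optional cap on returned rows
}
```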
I have a Google App Engine API using Python and NDB working except for HTTP response code/error checking. I put in some code to handle 406 (to only accept json requests) and 400 errors (to prevent a user from leaving a required field blank) to the post function for one of my entities but now it seems to have broken my code. This is the code with the error checking included:
class Task_action(webapp2.RequestHandler):
    def post(self):
        # Only allows JSON; if not, then error
        if 'application/json' not in self.request.accept:
            self.response.status = 406
            self.response.status_message = "Not Acceptable, API only supports application/json MIME type"
            return
        new_task = Task(parent=PARENT_KEY,
                        name=self.request.get("task_name"),
                        hours=int(self.request.get("task_hours")),
                        id=self.request.get("task_name"))
        # has error code, since name and hours are required
        if name:
            new_task.name = name
        else:
            self.response.status = 400
            self.response.status_message = "Invalid request, task name is Required."
        if hours:
            new_task.hours = hours
        else:
            self.response.status = 400
            self.response.status_message = "Invalid request, task hours is Required."
        key = new_task.put()
        out = new_task.to_dict()
        self.response.write(json.dumps(out))
I am using curl to test it:
curl --data-urlencode "name=clean" -H "Accept: application/json" http://localhost:15080/task
I know the problem is in the error checking code (all the if/else statements), because when I take it out the curl test works fine and the object is added to the ndb database correctly. However, with the error checking code included, my curl test does not add the object as it should. Does anyone have an idea why the error checking code breaks my post method? Is there a better way to return HTTP error response codes?
You had some uninitialized variables in the code (name, hours, maybe PARENT_KEY), and you also didn't return after preparing the error response, so execution flowed into code that couldn't work.
I'd suggest re-organizing the error checking for minimal impact on the functional code: checks should be done as early as possible, to simplify the remaining functional code. Also, I prefer the more concise webapp2.abort() function (which doesn't need a return statement).
Something along these lines:
class Task_action(webapp2.RequestHandler):
    def post(self):
        # Only allows JSON; if not, then error
        if 'application/json' not in self.request.accept:
            webapp2.abort(406, details="Not Acceptable, API only supports application/json MIME type")

        # request must contain a valid task name
        name = self.request.get("task_name")
        if not name:
            webapp2.abort(400, details="Invalid request, task name is Required.")

        # request must contain a valid task hours
        try:
            hours = int(self.request.get("task_hours"))
        except Exception:
            hours = 0
        if not hours:
            webapp2.abort(400, details="Invalid request, task hours is Required.")

        new_task = Task(parent=PARENT_KEY, name=name, hours=hours, id=hours)
        new_task.name = name    # isn't this done by Task() above?
        new_task.hours = hours  # isn't this done by Task() above?
        key = new_task.put()
        out = new_task.to_dict()
        self.response.write(json.dumps(out))
Another note: you're specifying the id parameter in the Task() call, which doesn't work unless you know each Task() entity has a unique hours value. You may want to let the datastore assign IDs automatically.
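The early-return checks can also be seen in isolation. validate_task below is a hypothetical, framework-free helper mirroring the same logic (it is not part of webapp2; it just shows why aborting early keeps the happy path simple):

```python
def validate_task(form):
    """Return (status, payload) for a task-creation form dict."""
    name = form.get("task_name", "")
    if not name:
        return 400, "Invalid request, task name is Required."
    try:
        hours = int(form.get("task_hours", ""))
    except ValueError:
        hours = 0
    if not hours:
        return 400, "Invalid request, task hours is Required."
    # Only reached when every check passed -- no fall-through states.
    return 200, {"name": name, "hours": hours}
```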
I'm trying to read facebook conversations of a page using a python script. With this code
import facebook
at = "page access token"
pid = "page id"
api = facebook.GraphAPI( at )
p = api.get_object( 'me/conversations')
print p
I get a dictionary containing the following
{'paging': {'next': 'https://graph.facebook.com/v2.5/1745249635693902/conversations?access_token=<my_access_token>&limit=25&until=1454344040&__paging_token=<my_access_token>', 'previous': 'https://graph.facebook.com/v2.5/1745249635693902/conversations?access_token=<my_access_token>&limit=25&since=1454344040&__paging_token=<my_access_token>'}, 'data': [{'link': '/Python-1745249635693902/manager/messages/?mercurythreadid=user%3A100000386799941&threadid=mid.1454344039847%3A2e3ac25e0302042916&folder=inbox', 'id': 't_mid.1454344039847:2e3ac25e0302042916', 'updated_time': '2016-02-01T16:27:20+0000'}]}
What are those fields? How can I get the text of the message?
Edit: I tried asking for the "messages" field by adding
msg = api.get_object( p['data'][0]['id']+'/messages')
print msg
but it just returns the same fields. I've searched in the API docs for a while, but I didn't find anything helpful. Is it even possible to read the message content of a facebook page's conversation using python?
I managed to find the answer myself; the question was not well posed and did not match what I was exactly looking for.
I wanted to get the content of the messages of facebook conversations of a page. Following the facebook graph API documentation, this can be achieved by asking for the conversations ({page-id}/conversations), then the messages in said conversations ({conversation-id}/messages, https://developers.facebook.com/docs/graph-api/reference/v2.5/conversation/messages), and finally asking for the message itself should return a dict with all the fields, content included (/{message-id}, https://developers.facebook.com/docs/graph-api/reference/v2.5/message).
At least this is how I believed it should work; however, the last request returned only the fields 'created_time' and 'id'.
What I was really trying to ask was a way to fetch the 'message' (content) field. I was assuming the function graph.get_object() from the official Python Facebook SDK should have returned all the fields in any case, since it has only one documented argument (http://facebook-sdk.readthedocs.org/en/latest/api.html) - the graph path for the requested object - and adding an additional field request is not allowed.
The answer I was looking for was in this other question, Request fields in Python Facebook SDK.
Apparently, it's possible to ask for specific fields ( that are not returned otherwise ) by passing an **args dict with such fields along with the path requested.
In a GET request to the Facebook graph that would be the equivalent of adding
?fields=<requested fieds>
to the object path.
This is the working code:
#!/usr/bin/env python
import facebook

at = <my access token>
pid = <my page id>

api = facebook.GraphAPI( at )
args = {'fields': 'message'}  # requested fields

conv = api.get_object('me/conversations')
msg = api.get_object(conv['data'][0]['id'] + '/messages')
for el in msg['data']:
    content = api.get_object(el['id'], **args)  # adding the field request
    print content