How to add vars in response data from API - python

I have a function that accepts variables, and I need to pass a test that requires every variable used in a formula to be found in the response data from the API.
So how can I add variables to the API response? I searched the local files and looked through the API documentation, but I found no function for adding variables.
test function:
def test_variables_and_functions_for_formulas_exist(connector):
    """Test variables and functions for formulas.

    Make sure all the vars for the formulas can be found in response data
    from the API.
    """
    formulas = {
        field["id"]: field["formula"]
        for field in field_lists[connector]
        if field.get("formula") is not None
    }
    field_response_names = {
        field.get("upstream_api_response_name", field["id"])
        for field in field_lists[connector]
    }
    # A field's upstream API name in a formula can be combined with a table
    # (endpoint) name if the former is unique not per field list but per table.
    table_field_response_names = {
        f"{field.get('table')}_{field.get('upstream_api_response_name', field['id'])}"
        for field in field_lists[connector]
        if field.get("table") is not None
    }
    formula_parser = py_expression_eval.Parser()
    formula_parser.functions = dict(formula_parser.functions, **FORMULA_FUNCTIONS)
    for field, formula_definition in formulas.items():
        formula = formula_parser.parse(formula_definition)
        formula_vars = formula.variables()
        non_existent_vars = list(
            set(formula_vars) - field_response_names - table_field_response_names
        )
        assert len(non_existent_vars) == 0, (
            f"Names {non_existent_vars} for field {field} are missing from "
            "the lists of upstream_api_response_name's and available functions"
        )
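Note that the test derives the set of allowed names from the connector's field list itself (via upstream_api_response_name or the field id), not from a live API call. So one way to satisfy it is to make sure every variable a formula uses appears as a field in the same list. A minimal sketch with hypothetical field names:

field_lists = {
    "my_connector": [
        {"id": "clicks"},
        {"id": "impressions"},
        # "ctr" passes the test: both "clicks" and "impressions" resolve
        # through the field ids above (no upstream_api_response_name needed)
        {"id": "ctr", "formula": "clicks / impressions"},
    ]
}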

Related

How to add a local variable to pymongo.find if it's not None?

I run a web service with an API function that uses a method I created to interact with MongoDB via pymongo.
The JSON data that comes with the POST may or may not include a field, firm. I don't want to create a separate method for posts that do not include a firm field.
So I want to use firm in the pymongo find if it exists, or just skip it if it doesn't. How can I do this with a single API function and a single pymongo method?
API function:
@app.route(f'/{API_PREFIX}/wordcloud', methods=['POST'])
def generate_wc():
    request_ = request.get_json()
    # note: request_.get("firm") returns None when the field is absent,
    # so calling .lower() on it directly raises AttributeError
    firm = request_.get("firm").lower()
    source = request_["source"]
    since = datetime.strptime(request_["since"], "%Y-%m-%d")
    until = datetime.strptime(request_["until"], "%Y-%m-%d")
    items = mongo.get_tweets(firm, since, until)
    ...
The pymongo method:
def get_tweets(self, firm: str, since: datetime, until: datetime):
    tweets = self.DB.tweets.find(
        {
            # use firm here if it exists (i.e. is not None),
            # else just get items by date
            'date': {'$gte': since, '$lte': until}
        })
    ...
The comment line inside find in the second snippet marks where this should happen.
Thanks.
Since this involves two different queries, {date: ...} and {date: ..., firm: ...}, depending on whether firm is present in the input, you have to check whether firm is not None in get_tweets and build the proper query.
For example:
def get_tweets(self, since, until, firm=None):
    query = {'date': {'$gte': since, '$lte': until}}
    if firm is not None:
        query['firm'] = firm
    tweets = self.DB.tweets.find(query)
    ...
Note that since firm has a default value, it needs to be last in the get_tweets parameter list.
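On the calling side, generate_wc needs two matching changes: guard the .lower() call, since request_.get("firm") returns None when the field is absent, and pass firm as a keyword argument to match the reordered signature. A minimal sketch of just the affected lines:

firm = request_.get("firm")
if firm is not None:
    firm = firm.lower()
items = mongo.get_tweets(since, until, firm=firm)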

Python Django equals in function arguments. Setting the variable dynamically

I think these are called "positional arguments", or their opposite, keyword arguments. I have written a script in Python with Django and REST framework that has to take parameters from the user and feed them to the Amazon API. Here is the code:
page = request.query_params.get('page')
search = request.query_params.get('search')
search_field = request.query_params.get('search_field')
search_index = request.query_params.get('search_index')
response = amazon.ItemSearch(
    Keywords=search,
    SearchIndex=search_index,
    ResponseGroup="ItemAttributes,Offers,Images,EditorialReview,Reviews")
In this code the Keywords=search part varies: it could be Actor=search, Title=search, or Publisher=search. I am not sure how to make the Keywords part dynamic so that it changes with the user's input, such as Actor, Title, or Publisher.
I think you should build up a dictionary of your kwargs and then use the ** syntax to expand it for the function call. I'll give an example that presumes the search_field query param determines which field is searched:
search = request.query_params.get('search')
search_field = request.query_params.get('search_field')
search_index = request.query_params.get('search_index')
kwargs = {
    'SearchIndex': search_index,
    'ResponseGroup': 'ItemAttributes,Offers,Images,EditorialReview,Reviews'
}
# presuming search_field is the key, i.e. "Keywords" or "Actor",
# this ends up as e.g. kwargs['Keywords'] = search
kwargs[search_field] = search
response = amazon.ItemSearch(**kwargs)
I used a dictionary instead. This is how I did it:
page = request.query_params.get('page')
search = request.query_params.get('search')
search_field = request.query_params.get('search_field')
search_index = request.query_params.get('search_index')
# use a name other than "dict" to avoid shadowing the built-in
params = {'SearchIndex': search_index,
          'ResponseGroup': 'ItemAttributes,Offers,Images,EditorialReview,Reviews'}
params[search_field] = search
response = amazon.ItemSearch(**params)
I just passed a dictionary with my dynamic variable as a key.
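One caveat that applies to both versions: expanding a user-supplied query param directly into keyword arguments lets a client pass any parameter name through to amazon.ItemSearch. A small guard with an allowlist is cheap insurance; the set of field names below is an assumption, so adjust it to the fields you actually support:

# hypothetical allowlist of search fields this endpoint will forward
ALLOWED_SEARCH_FIELDS = {'Keywords', 'Actor', 'Title', 'Publisher'}

if search_field not in ALLOWED_SEARCH_FIELDS:
    raise ValueError(f"Unsupported search field: {search_field!r}")
params[search_field] = search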

How to set group = true in couchdb

I am trying to use map/reduce to find duplicated data in CouchDB.
The map function is like this:
function(doc) {
    if (doc.coordinates) {
        emit({
            twitter_id: doc.id_str,
            text: doc.text,
            coordinates: doc.coordinates
        }, 1);
    }
}
and the reduce function is:
function(keys,values,rereduce){return sum(values)}
I want the sum of the values for each distinct key, but it just adds everything together and I get one result:
<Row key=None, value=1035>
Is that a grouping problem? How can I set group to true?
Assuming you're using the couchdb package from PyPI, you'll need to pass all of the options you require to the view method.
For example:
import couchdb

# the design doc and view name of the view you want to use
ddoc = "my_design_document"
view_name = "my_view"

# your server
server = couchdb.Server("http://localhost:5984")
db = server["aCouchDatabase"]

# naming convention when passing a ddoc and view to the view method
view_string = ddoc + "/" + view_name

# query options
view_options = {"reduce": True,
                "group": True,
                "group_level": 2}

# call the view; the options are expanded into query-string parameters
results = db.view(view_string, **view_options)
for row in results:
    # do something
    pass
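Two side notes, assuming the couchdb-python client above: the options can also be passed as plain keyword arguments, and since the map function emits 1 for every row, CouchDB's built-in _count reduce would give the same result as the JavaScript sum(values) with less overhead.

# equivalent call with keyword arguments instead of an options dict
for row in db.view("my_design_document/my_view", group=True):
    print(row.key, row.value)  # one row per distinct key; value is the count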

Write results to permanent table in bigquery

I am using named parameters in BigQuery SQL and want to write the results to a permanent table. I have two functions: one for using named query parameters and one for writing query results to a table. How do I combine the two so that the results of a query with named parameters are written to a table?
This is the function using parameterized queries:
def sync_query_named_params(column_name, min_word_count, value):
    query = """with lsq_results as
    (select "%s" = @min_word_count)
    replace (%s AS %s)
    from lsq.lsq_results
    """ % (min_word_count, value, column_name)
    client = bigquery.Client()
    query_results = client.run_sync_query(
        query,
        query_parameters=(
            bigquery.ScalarQueryParameter('column_name', 'STRING', column_name),
            bigquery.ScalarQueryParameter('min_word_count', 'STRING', min_word_count),
            bigquery.ScalarQueryParameter('value', 'INT64', value)))
    query_results.use_legacy_sql = False
    query_results.run()
The function for writing to a permanent table:
class BigQueryClient(object):

    def __init__(self, bq_service, project_id, swallow_results=True):
        self.bigquery = bq_service
        self.project_id = project_id
        self.swallow_results = swallow_results
        self.cache = {}

    def write_to_table(
            self,
            query,
            dataset=None,
            table=None,
            external_udf_uris=None,
            allow_large_results=None,
            use_query_cache=None,
            priority=None,
            create_disposition=None,
            write_disposition=None,
            use_legacy_sql=None,
            maximum_billing_tier=None,
            flatten=None):
        configuration = {
            "query": query,
        }
        if dataset and table:
            configuration['destinationTable'] = {
                "projectId": self.project_id,
                "tableId": table,
                "datasetId": dataset
            }
        if allow_large_results is not None:
            configuration['allowLargeResults'] = allow_large_results
        if flatten is not None:
            configuration['flattenResults'] = flatten
        if maximum_billing_tier is not None:
            configuration['maximumBillingTier'] = maximum_billing_tier
        if use_query_cache is not None:
            configuration['useQueryCache'] = use_query_cache
        if use_legacy_sql is not None:
            configuration['useLegacySql'] = use_legacy_sql
        if priority:
            configuration['priority'] = priority
        if create_disposition:
            configuration['createDisposition'] = create_disposition
        if write_disposition:
            configuration['writeDisposition'] = write_disposition
        if external_udf_uris:
            configuration['userDefinedFunctionResources'] = \
                [{'resourceUri': u} for u in external_udf_uris]
        body = {
            "configuration": {
                'query': configuration
            }
        }
        logger.info("Creating write to table job %s" % body)
        job_resource = self._insert_job(body)
        self._raise_insert_exception_if_error(job_resource)
        return job_resource
How do I combine the two functions so that the parameterized query has its results written to a permanent table? Or, if there is a simpler way, please suggest it.
You appear to be using two different client libraries.
Your first code sample uses a beta version of the BigQuery client library, but for the time being I would recommend against using it, since it needs substantial revision before it is considered generally available. (And if you do use it, I would recommend using run_async_query() to create a job using all available parameters, and then call results() to get the QueryResults object.)
Your second code sample is creating a job resource directly, which is a lower-level interface. When using this approach, you can specify the configuration.query.queryParameters field on your query configuration directly. This is the approach I'd recommend right now.
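To illustrate the second approach, here is a minimal sketch of the configuration.query section such a job could use; the field names follow the BigQuery REST API's query-job schema, while the project, dataset, and table IDs (and the query itself) are hypothetical placeholders:

configuration = {
    "query": "select * from lsq.lsq_results where word_count >= @min_word_count",
    "useLegacySql": False,
    # results are written to this table permanently
    "destinationTable": {
        "projectId": "my-project",   # placeholder
        "datasetId": "my_dataset",   # placeholder
        "tableId": "my_table",       # placeholder
    },
    "writeDisposition": "WRITE_TRUNCATE",
    # named parameters, referenced as @min_word_count in the SQL
    "queryParameters": [{
        "name": "min_word_count",
        "parameterType": {"type": "INT64"},
        "parameterValue": {"value": "250"},
    }],
}

This dict would take the place of the configuration built inside write_to_table above, nested under body["configuration"]["query"] before the job is inserted.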

Python: organizing default values as objects

I have a Django application that uses a third-party API and needs to receive several arguments such as client_id, user_id, etc. I currently have these values defined at the top of my file as variables, but I'd like to store them in an object instead.
My current set up looks something like this:
user_id = 'ID HERE'
client_id = 'ID HERE'
api_key = 'ID HERE'

class Social(LayoutView, TemplateView):

    def grab_data(self):
        authenticate_user = AuthenticateService(client_id, user_id)
I want the default values set up as an object
SERVICE_CONFIG = {
    'user_id': 'ID HERE',
    'client_id': 'ID HERE'
}
So that I can access them in my classes like so:
authenticate_user = AuthenticateService(SERVICE_CONFIG.client_id, SERVICE_CONFIG.user_id)
I've tried SERVICE_CONFIG.client_id and SERVICE_CONFIG['client_id'], as well as setting up the values as a mixin, but I can't figure out how to access them any other way.
Python is not JavaScript. That's a dictionary, not an object. You access dictionaries using item syntax, not attribute syntax:
AuthenticateService(SERVICE_CONFIG['client_id'], SERVICE_CONFIG['user_id'])
You can use a class, an instance, or a function object to store data as attributes:
class ServiceConfig:
    user_id = 1
    client_id = 2

ServiceConfig.user_id  # => 1

service_config = ServiceConfig()
service_config.user_id  # => 1

service_config = lambda: 0
service_config.user_id = 1
service_config.client_id = 2
service_config.user_id  # => 1
Normally a dict is the simplest way to store data, but in some cases the higher readability of attribute access is preferred; then you can use the examples above. Using a lambda is the easiest but most confusing for someone reading your code, so the first two approaches are preferable.
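A standard-library middle ground not mentioned above is types.SimpleNamespace, which gives attribute access without defining a class; the values here are placeholders:

from types import SimpleNamespace

SERVICE_CONFIG = SimpleNamespace(user_id='ID HERE', client_id='ID HERE')
SERVICE_CONFIG.user_id  # => 'ID HERE'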
