I'm trying to register a domain for customers using the WHMCS API with Python.
The issue I'm having is with the additional domain fields, in this case for the .eu TLD:
import base64
import phpserialize
import pycurl
from urllib.parse import urlencode

client_id = 1           # placeholder client ID
domain = 'example.eu'   # placeholder domain

c = pycurl.Curl()
c.setopt(c.URL, 'https://example.com/includes/api.php')  # placeholder WHMCS API endpoint

# PHP-serialize the additional domain fields, then base64-encode them
additional_domainfields = base64.b64encode(
    phpserialize.dumps({'Entity Type': 'INDIVIDUAL', 'EU Country of Citizenship': 'IT'})
).decode('ascii')  # decode so urlencode gets a str, not bytes

post_data = {
    'identifier': 'xxx',
    'secret': 'xxx',
    'action': 'AddOrder',
    'clientid': client_id,
    'domain': domain,
    'domaintype': "register",
    'billingcycle': "annually",
    'regperiod': 1,
    'noinvoice': True,
    'noinvoiceemail': True,
    'noemail': True,
    'paymentmethod': "stripe",
    'domainpriceoverride': 0,
    'domainrenewoverride': 0,
    'domainfields': additional_domainfields,
    'responsetype': 'json'
}
postfields = urlencode(post_data)
c.setopt(c.POSTFIELDS, postfields)  # type: ignore
c.perform()
c.reset()
I can't find what to use for the array of serialized data; so far all of my tests have come to nothing. I have tried every combination of fields on the line:
additional_domainfields = base64.b64encode(phpserialize.dumps({'Entity Type': 'INDIVIDUAL', 'EU Country of Citizenship': 'IT'}))
I also tried checking the PHP overrides for the field names, so far without success.
Any help would be immensely appreciated.
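One sketch worth trying, based on one plausible reading of "the array of serialized data" (this nesting is an assumption, not verified, and the field names must match your installation's additional-fields configuration): WHMCS may expect domainfields to be a base64-encoded, PHP-serialized array containing one serialized field array per domain in the order.

import base64
import phpserialize

fields = {'Entity Type': 'INDIVIDUAL', 'EU Country of Citizenship': 'IT'}
serialized_fields = phpserialize.dumps(fields)  # serialize the per-domain fields
additional_domainfields = base64.b64encode(
    phpserialize.dumps([serialized_fields])     # assumed wrapping: one entry per domain
).decode('ascii')                               # urlencode needs a str, not bytes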
I am trying to write a script to get the revision history of biographies (the goal is to investigate how a biography changes over time). I have read most of the related articles here and the documentation for the revisions module, but I can't get the results I want. I post my code below; most of it is copied (partially or completely) from the documentation. I changed the value of the titles parameter.
Moreover, I found the allrevisions submodule. I got it to return revisions for a specific biography, but what I get isn't related to the revision history that one finds on the page.
Code related to "revisions"
import requests

S = requests.Session()

URL = "https://www.mediawiki.org/w/api.php"

PARAMS = {
    "action": "query",
    "prop": "revisions",
    "titles": "Albert Einstein",
    "rvprop": "timestamp|user|content",
    "rvslots": "main",
    "formatversion": "2",
    "format": "json"
}

R = S.get(url=URL, params=PARAMS)
DATA = R.json()
print(DATA)
Code related to "allrevisions"
URL = "https://www.mediawiki.org/w/api.php"

PARAMS = {
    "action": "query",
    "list": "allrevisions",
    "titles": "Albert Einstein",
    "arvprop": "user|timestamp|content",
    "arvslots": "main",
    "arvstart": "2020-11-12T12:06:00Z",
    "formatversion": "2",
    "format": "json"
}

R = S.get(url=URL, params=PARAMS)
DATA = R.json()
print(DATA)
Any suggestions to make it work properly? Most importantly, why does the code related to "revisions" not return anything?
As suggested, I want to get the full revision history for a specific page.
prop modules return information about a specific page (or set of pages) you provide. list modules return information about a list of pages where you only provide some abstract criteria; finding the pages matching those criteria is part of the work the API does (as such, titles in your second example will essentially be ignored).
You don't explain clearly what you are trying to do, but I'm guessing you want to get the full page history for a specific title, so your first example is mostly right, except that you should set a higher rvlimit.
See also the (unfortunately not very good) documentation on continuing queries, since many pages have a history that is too long to return in a single request.
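A minimal sketch of both points, with a higher rvlimit plus the standard continuation loop. Note the endpoint: biographies such as Albert Einstein live on en.wikipedia.org, not www.mediawiki.org, which is likely why your first query returned nothing.

import requests

S = requests.Session()
URL = "https://en.wikipedia.org/w/api.php"  # biographies live on Wikipedia, not mediawiki.org

PARAMS = {
    "action": "query",
    "prop": "revisions",
    "titles": "Albert Einstein",
    "rvprop": "timestamp|user|content",
    "rvslots": "main",
    "rvlimit": "50",  # when rvprop includes content, the per-request maximum is 50
    "formatversion": "2",
    "format": "json",
}

revisions = []
while True:
    data = S.get(url=URL, params=PARAMS).json()
    page = data["query"]["pages"][0]
    revisions.extend(page.get("revisions", []))
    if "continue" not in data:
        break
    PARAMS.update(data["continue"])  # feed the continuation tokens into the next request

print(f"fetched {len(revisions)} revisions")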
I've been trying to update custom_fields per the latest version of Asana's API, very similarly to this post but with a later version of the API (e.g. I need to use the update_task method). I can update fields at the top level of a task, but the custom_fields object is proving much more challenging to update. For example, I have many custom fields, and am trying to update a test field called "Update" and just set its text_value to "Hello"...
import asana

asanaPAT = 'myToken'
client = asana.Client.access_token(asanaPAT)
tasks = client.tasks.get_tasks({'project': 'myProjectID'}, opt_pretty=True)

for index, task in enumerate(tasks):
    complete_task = client.tasks.find_by_id(task["gid"])
    task_name = complete_task['name']
    task_id = complete_task['gid']
    custom_fields = complete_task['custom_fields']

    # I can easily update top-level fields like 'name' and 'completed'...
    # result = client.tasks.update_task(task_id, {'name': task_name + ' (new)'}, opt_pretty=True)
    # result = client.tasks.update_task(task_id, {'completed': False}, opt_pretty=True)

    for custom_fieldsRow in custom_fields:
        if custom_fieldsRow['name'] == "Updated":
            # custom_fieldsRow['text_value'] = 'Hello'
            pass

    # Finished looping through the individual custom fields, so update at the task level...
    # client.tasks.update_task(task_id, {custom_fields}, opt_pretty=True)
    manualCustomField = {'data': {'custom_fields': {'gid': 'theGIDOfCustomField', 'text_value': 'Hello'}}}
    resultFromUpdate = client.tasks.update_task(task_id, manualCustomField, opt_pretty=True)
As you can see above, I started off trying to loop through the custom_fields and change the specific field before updating later. Now I'm even trying to set the custom_field data manually (last line of my code), but it does nothing (no error, but it doesn't change my task). I'm completely out of ideas for troubleshooting this, so I'd appreciate any feedback on where I'm going wrong.
Apologies, I figured out my mistake. I just needed my penultimate line to read...
manualCustomField = { 'custom_fields': {'theGIDOfCustomField': 'Hello'} }
Kind of a strange way to do it in the API (not explicitly stating which field you'll update or which kind of ID you're using) if you ask me, but now it finally works.
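For completeness, a minimal sketch of the working call in context ('1234567890' stands in for the custom field's gid):

# Map the custom field's gid straight to its new value: no 'data' wrapper,
# no 'text_value' key ('1234567890' is a placeholder gid).
manualCustomField = {'custom_fields': {'1234567890': 'Hello'}}
client.tasks.update_task(task_id, manualCustomField, opt_pretty=True)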
I use dynamic mapping to load my JSON file into Elasticsearch, like this:
import json
from elasticsearch import Elasticsearch

es = Elasticsearch([{'host': 'localhost', 'port': 9200}])

def extract():
    # Load the movie dictionary from a local JSON file
    with open('tmdb.json') as f:
        return json.loads(f.read())

movieDict = extract()

def index(movieDict={}):
    for id, body in movieDict.items():
        es.index(index='tmdb', id=id, doc_type='movie', body=body)

index(movieDict)
How can I update the mapping for a single field? I have a field, title, to which I want to add a different analyzer.
title_settings = {"properties": {"title": {"type": "text", "analyzer": "english"}}}
es.indices.put_mapping(index='tmdb', body=title_settings)
This fails.
I know that I cannot update an already existing index, but what is the proper way to reindex a mapping generated from a JSON file? My file has a lot of fields, and creating the mapping/settings manually would be very troublesome.
I am able to specify an analyzer for a query, like this:
query = {
    "query": {
        "multi_match": {
            "query": userSearch,
            "analyzer": "english",
            "fields": ['title^10', 'overview']
        }
    }
}
How do I specify it for an index or a field?
I am also able to add an analyzer to the settings after closing and reopening the index:
analysis = {'settings': {'analysis': {'analyzer': 'english'}}}
es.indices.close(index='tmdb')
es.indices.put_settings(index='tmdb', body=analysis)
es.indices.open(index='tmdb')
Copying the exact settings for the English analyzer doesn't 'activate' it for my data.
https://www.elastic.co/guide/en/elasticsearch/reference/7.6/analysis-lang-analyzer.html#english-analyzer
By 'activate' I mean that search results are not processed by the English analyzer, i.e. there are still stopwords.
Solved it with a massive amount of googling...
You cannot change the analyzer on already indexed data. This includes closing/reopening the index. You can create a new index with a new mapping and load your data into it (the quickest way).
Specifying an analyzer for the whole index isn't a good solution, as the 'english' analyzer is specific to 'text' fields. It's better to specify the analyzer per field.
If analyzers are specified per field, you also need to specify the field's type.
You need to remember that analyzers can be used at index time, at search time, or both. Reference: Specifying analyzers.
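To illustrate that last point, a sketch of a per-field mapping where the index-time and search-time analyzers differ (whether you want them to differ is up to your use case):

title_mapping = {
    'properties': {
        'title': {
            'type': 'text',                # analyzers apply to 'text' fields
            'analyzer': 'english',         # used at index time (and at search time by default)
            'search_analyzer': 'standard'  # optional override for query-time analysis
        }
    }
}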
Code:
import time

def create_index(movieDict={}, mapping={}):
    # Create the new index with the edited mapping, then load the data into it
    es.indices.create(index='test_index', body=mapping)
    start = time.time()
    for id, body in movieDict.items():
        es.index(index='test_index', id=id, doc_type='movie', body=body)
    print("--- %s seconds ---" % (time.time() - start))
Now, I've got the mapping from the dynamic mapping of my JSON file. I just saved it back to a JSON file for ease of editing. That's because I have over 40 fields to map; doing it by hand would be tiresome.
mapping = es.indices.get_mapping(index='tmdb')
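A sketch of that save-for-editing step (mapping.json is just an arbitrary file name):

import json

with open('mapping.json', 'w') as f:
    json.dump(mapping, f, indent=2)  # pretty-print so the 40+ fields are easy to edit by hand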
This is an example of how the title key should be specified to use the English analyzer:
'title': {'type': 'text', 'analyzer': 'english','fields': {'keyword': {'type': 'keyword', 'ignore_above': 256}}}
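Putting it together, a sketch assuming the edited mapping was saved to mapping.json as above:

import json

with open('mapping.json') as f:
    # get_mapping() wraps the body under the index name, so unwrap it first
    mapping = json.load(f)['tmdb']

create_index(movieDict, mapping)  # create 'test_index' with the edited mapping and reload the data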
My problem:
I'm writing an NLP program in Python and I need to get the entity IDs for properties and lexemes. What I basically want is, e.g., for the input word/property "father", to get the return value "P22" (the property number for father). I already know some methods for getting the Q-number (see below).
from requests import get

def get_qnumber(wikiarticle, wikisite):
    resp = get('https://www.wikidata.org/w/api.php', {
        'action': 'wbgetentities',
        'titles': wikiarticle,
        'sites': wikisite,
        'props': '',
        'format': 'json'
    }).json()
    return list(resp['entities'])[0]

print(get_qnumber(wikiarticle="Andromeda Galaxy", wikisite="enwiki"))
I thought getting the P- and L-numbers would look similar, but finding the lexeme and property numbers seems to be much trickier.
What I've tried:
The closest thing I've found is manually searching for ID numbers with https://www.wikidata.org/wiki/Special:Search and putting "P:" or "L:" in the search string.
I also found some code for SPARQL, but it was slow and I don't know how to refine the search to exclude unrelated results.
query = """
SELECT ?item
WHERE
{
  ?item rdfs:label "father"@en
}
"""
I'm a total noob at this and haven't found any info on Google. Am I approaching this completely wrong, or am I missing something really obvious?
Use action=wbsearchentities with type=property or type=lexeme:
import requests

params = dict(
    action='wbsearchentities',
    format='json',
    language='en',
    uselang='en',
    type='property',
    search='father'
)

response = requests.get('https://www.wikidata.org/w/api.php', params).json()
print(response.get('search')[0]['id'])
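The same call with type='lexeme' returns L-numbers:

# Switching the entity type to lexemes gives L-numbers instead
params['type'] = 'lexeme'
response = requests.get('https://www.wikidata.org/w/api.php', params).json()
print(response.get('search')[0]['id'])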
I would like to query the number of conversions per click in a Google AdWords report using the SOAP API. Unfortunately, the following query (Python),
# Create report definition.
report = {
    'reportName': 'Last 30 days ADGROUP_PERFORMANCE_REPORT',
    'dateRangeType': 'LAST_30_DAYS',
    'reportType': 'ADGROUP_PERFORMANCE_REPORT',
    'downloadFormat': 'CSV',
    'selector': {
        'fields': ['CampaignId', 'AdGroupId', 'Id',
                   'Impressions', 'Clicks', 'Cost',
                   'Conv1PerClick',
                   'CampaignName', 'AdGroupName']
    },
    # Enable to get rows with zero impressions.
    'includeZeroImpressions': 'false'
}
results in the following error
AdWordsReportError: HTTP code: 400, type: 'ReportDefinitionError.INVALID_FIELD_NAME_FOR_REPORT', trigger: 'Conv1PerClick', field path: ''
The Google documentation (https://developers.google.com/adwords/api/docs/appendix/reports) seems to indicate that such a report should have a conv1PerClick field (I also tried lowercasing the first letter; a similar error occurs).
Does anybody know a way to query the ad group statistics about conversions per click?
Not sure if I am understanding you correctly, but the field is called Conversions, not Conv1PerClick. If you download the report in XML format, the corresponding field attribute name used to be conv1PerClick, but this changed in v201402, in line with some changes to the way these metrics are computed:
http://adwords.blogspot.ch/2014/02/a-new-way-to-count-conversions-in.html
https://developers.google.com/adwords/api/docs/guides/migration/v201402
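So the corrected report definition would look like this, with Conversions swapped in for Conv1PerClick:

report = {
    'reportName': 'Last 30 days ADGROUP_PERFORMANCE_REPORT',
    'dateRangeType': 'LAST_30_DAYS',
    'reportType': 'ADGROUP_PERFORMANCE_REPORT',
    'downloadFormat': 'CSV',
    'selector': {
        'fields': ['CampaignId', 'AdGroupId', 'Id',
                   'Impressions', 'Clicks', 'Cost',
                   'Conversions',  # replaces the invalid Conv1PerClick field name
                   'CampaignName', 'AdGroupName']
    },
    'includeZeroImpressions': 'false'
}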