How to remove highlight in the google slides API? - python

I am using the Google API Python Client to replace text placeholders with generated data. In this example, I detect all instances of "bar" and replace them with "foo", in all slides. slides_service is instantiated with apiclient.discovery.build(...)
batch_requests_array = [
    {
        "replaceAllText": {
            "replaceText": "foo",
            "containsText": {
                "text": "bar",
                "matchCase": False
            }
        }
    }
]
batch_requests = {"requests": batch_requests_array}
request = slides_service.presentations().batchUpdate(
    presentationId=slides_id, body=batch_requests
)
res = request.execute()
Now if bar has a highlight color, how can I remove that when I replace it with foo? I think I need to add a separate request to my batch requests array, but I have been scrolling up and down here without finding any clue.
For clarity, this is the highlight option I am talking about as it's presented in the UI

To remove the highlight color of text, you have to update its backgroundColor with an updateTextStyle request. Leaving backgroundColor empty resets it, which renders the background fully transparent:
highlightedTextRequest = [
    {
        "updateTextStyle": {
            "objectId": "objectId",
            "style": {
                "backgroundColor": {}
            },
            # Restrict the field mask to backgroundColor so other styling
            # (bold, font, foreground color, ...) is left untouched;
            # "fields": "*" would reset every style field.
            "fields": "backgroundColor"
        }
    }
]
Note: the objectId in the request is the ID of the shape or table that contains the text to be styled.
References:
UpdateTextStyleRequest
Formatting text with the Google Slides API
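As a sketch of how the two requests might be combined in a single batchUpdate call (the shape ID, presentation ID and service object are placeholders for your own values; the actual API call is shown commented out):

```python
# Sketch: one batch that both replaces the text and clears its highlight.
def build_batch_body(shape_id):
    return {
        "requests": [
            {
                "replaceAllText": {
                    "replaceText": "foo",
                    "containsText": {"text": "bar", "matchCase": False},
                }
            },
            {
                "updateTextStyle": {
                    "objectId": shape_id,
                    "style": {"backgroundColor": {}},  # empty = transparent
                    # only reset backgroundColor, keep other styling
                    "fields": "backgroundColor",
                }
            },
        ]
    }

body = build_batch_body("objectId")
# slides_service.presentations().batchUpdate(
#     presentationId=slides_id, body=body).execute()
```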

SQLAlchemyObjectType.Meta two models graphene

I am trying to combine two SQLAlchemyConnectionField. The current error I face is graphql.error.base.GraphQLError: Expected value of type "PostsObject" but got: Tag
My current connections are defined like so: posts = SQLAlchemyConnectionField(PostsObject.connection) (with input arguments like posts(name: "")). Is there a way to add another type, or to allow returning something like
"data": {
"posts": {
"edges": [
{
"node": "Different table"
},
{
"node": "Other table"
}]}},
I am using graphene_sqlalchemy. Well, anyway, that's all. Thanks!
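A hedged sketch of one possible direction, a graphene.Union over the two object types (untested; PostsObject and TagObject are placeholders for your own SQLAlchemyObjectType classes, and the exact Meta spelling may vary by graphene version):

```python
# Untested outline: a Union lets one connection's node be either model.
# PostsObject / TagObject are placeholders for your SQLAlchemyObjectType
# classes; graphene is assumed to be installed.
def build_search_field(PostsObject, TagObject):
    import graphene

    class SearchResult(graphene.Union):
        class Meta:
            types = (PostsObject, TagObject)

    # A relay connection over the union, so edges may hold either type.
    class SearchConnection(graphene.relay.Connection):
        class Meta:
            node = SearchResult

    return graphene.relay.ConnectionField(SearchConnection)
```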

How to change Notion page property type?

I'm using the notion-sdk-py package to interact with the Notion API.
I want to change a property's type from multi_select to rich_text (text).
Prop is a multi_select type property.
My code:
from notion_client import Client

notion = Client(auth=settings["auth"])
any_db = notion.databases.query(settings["databases"]["any"])
page = any_db["results"][0]
notion.pages.update(
    page["id"],
    properties={"Prop": {"type": "rich_text", "rich_text": []}}
)
# Also tried this
notion.pages.update(
    page["id"],
    properties={
        "Prop": {
            "type": "rich_text",
            "rich_text": [{
                "type": "text",
                "text": {
                    "content": "yemreak.com"
                },
                "href": None
            }]
        }
    }
)
# Both of them raise the same error
# notion_client.errors.APIResponseError: Prop is expected to be multi_select.
Additional Resources
Updating Schema Object ~ Notion API
All Rich text ~ Notion API
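The error suggests the page update is being validated against the database schema, so the type change presumably has to happen at the database level first. A hedged sketch, assuming the Notion "update database" endpoint accepts a bare {"rich_text": {}} to retype the property (database ID, page ID and the property name "Prop" are placeholders; verify the payload keys against the current API version):

```python
# Sketch: retype the property on the *database* schema, then write the
# page value. The payload shapes follow the Notion update-database and
# update-page docs; treat the exact keys as assumptions to verify.
schema_patch = {"Prop": {"rich_text": {}}}       # multi_select -> rich_text
page_patch = {
    "Prop": {
        "rich_text": [
            {"type": "text", "text": {"content": "yemreak.com"}}
        ]
    }
}

def apply_patches(database_id, page_id):
    # Not executed here; needs notion-client and a real auth token.
    from notion_client import Client
    notion = Client(auth="<token>")
    notion.databases.update(database_id, properties=schema_patch)
    notion.pages.update(page_id, properties=page_patch)
```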

How to request all sizes in stock - Python

I'm trying to request all the sizes in stock from Zalando. I can't quite figure out how to do it, since the page in the video I'm watching, which shows how to request sizes, looks different from mine.
The video I watched was this: Video - 5:30
Does anyone know how to request the sizes in stock and print the ones that are in stock?
The site I'm trying to request sizes from: here
My code looks like this:
import requests
from bs4 import BeautifulSoup as bs

session = requests.session()

def get_sizes_in_stock():
    global session
    endpoint = "https://www.zalando.dk/nike-sportswear-air-max-90-sneakers-ni112o0bt-a11.html"
    response = session.get(endpoint)
    soup = bs(response.text, "html.parser")
    return soup  # I get this far, but can't find the sizes in the soup
I have tried going to View page source and looking for the sizes, but I could not see them anywhere in the page source.
I hope someone out there can help me figure out what to do.
The sizes are in the page
I found them in the html, in a script tag, in this format:
{
    "sku": "NI112O0BT-A110090000",
    "size": "42.5",
    "deliveryOptions": [
        {
            "deliveryTenderType": "FASTER"
        }
    ],
    "offer": {
        "price": {
            "promotional": null,
            "original": {
                "amount": 114500
            },
            "previous": null,
            "displayMode": null
        },
        "merchant": {
            "id": "810d1d00-4312-43e5-bd31-d8373fdd24c7"
        },
        "selectionContext": null,
        "isMeaningfulOffer": true,
        "displayFlags": [],
        "stock": {
            "quantity": "MANY"
        }
    },
    "allOffers": [
        {
            "price": {
                "promotional": null,
                "original": {
                    "amount": 114500
                },
                "previous": null,
                "displayMode": null
            },
            "merchant": {
                "id": "810d1d00-4312-43e5-bd31-d8373fdd24c7"
            },
            "selectionContext": null,
            "isMeaningfulOffer": true,
            "displayFlags": [],
            "stock": {
                "quantity": "MANY"
            },
            "deliveryOptions": [
                {
                    "deliveryWindow": "2022-05-23 - 2022-05-25"
                }
            ],
            "fulfillment": {
                "kind": "ZALANDO"
            }
        }
    ]
}
If you parse the html with bs4 you should be able to find the script tag and extract the JSON.
The sizes for the default colour of the shoe are shown in the html. Alongside this are the urls for the other colours. You can extract these into a dictionary and loop over them, making requests and pulling the different colours and their availability, which I think is what you are actually after, as follows (note: I have kept it quite generic to avoid hardcoding keys which change across requests):
import requests, re, json

def get_color_results(link):
    headers = {"User-Agent": "Mozilla/5.0"}
    r = requests.get(link, headers=headers).text
    data = json.loads(re.search(r'(\{"enrichedEntity".*size.*)<\/script', r).group(1))
    results = []
    color = ""
    for i in data["graphqlCache"]:
        if "ern:product" in i:
            if "product" in data["graphqlCache"][i]["data"]:
                if "name" in data["graphqlCache"][i]["data"]["product"]:
                    results.append(data["graphqlCache"][i]["data"]["product"])
                    if (
                        color == ""
                        and "color" in data["graphqlCache"][i]["data"]["product"]
                    ):
                        color = data["graphqlCache"][i]["data"]["product"]["color"]["name"]
    return (color, results)

link = "https://www.zalando.dk/nike-sportswear-air-max-90-sneakers-ni112o0bt-a11.html"
final = {}
color, results = get_color_results(link)
colors = {
    j["node"]["color"]["name"]: j["node"]["uri"]
    for j in [
        a
        for b in [
            i["family"]["products"]["edges"]
            for i in results
            if "family" in i
            if "products" in i["family"]
        ]
        for a in b
    ]
}
final[color] = {
    j["size"]: j["offer"]["stock"]["quantity"]
    for j in [i for i in results if "simples" in i][0]["simples"]
}
for k, v in colors.items():
    if k not in final:
        color, results = get_color_results(v)
        final[color] = {
            j["size"]: j["offer"]["stock"]["quantity"]
            for j in [i for i in results if "simples" in i][0]["simples"]
        }
print(final)
Explanatory notes from chat:
Use chrome browser to navigate to link
Press Ctrl + U to view page source
Press Ctrl + F to search for 38.5 in html
The first match is the long string you already know about. The string is long and difficult to navigate in page source and identify which tag it is part of. There are a number of ways I could identify the right script from these, but for now, an easy way would be:
import requests
from bs4 import BeautifulSoup as bs

link = 'https://www.zalando.dk/nike-sportswear-air-max-90-sneakers-ni112o0bt-a11.html'
headers = {'User-Agent': 'Mozilla/5.0'}
r = requests.get(link, headers=headers)
soup = bs(r.text, 'lxml')
for i in soup.select('script[type="application/json"]'):
    if '38.5' in i.text:
        print(i)
        break
A slower method would be:
soup.find("script", text=re.compile(r'.*38.5.*'))
Whilst I used bs4 to get the right script tag's contents, that was so I knew the start and end of the string denoting the JavaScript object I wanted. I then used re to extract that string and json to deserialize it into a JSON object. This could be re-written to use re alone, i.e. run re on the entire response text from the request with a regex pattern that pulls out the same string
I put the entire page source in a regex tool and wrote a regex to return that same string as identified above. See that regex here
Click on right hand side, match 1 group 1, to see highlighted the same string being returned from regex as you saw with BeautifulSoup. Two different ways of getting the same string containing the sizes
That is the string whose structure I needed to examine as JSON. See it in a json viewer here
You will notice the JSON is very nested with some keys to dictionaries that are likely dynamic, meaning I needed to write code which could traverse the JSON and use certain more stable keys to pull out the colours available, and for the default shoe colour the sizes and availability
There is an expand all button in that JSON viewer. You can then search with Ctrl + F for 38.5 again
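To make the search-then-deserialize step concrete, here is a tiny self-contained demo on a made-up snippet of page source (the real pattern and keys are the ones in the code above; this only illustrates the re.search plus json.loads mechanics):

```python
import json
import re

# Made-up stand-in for the page source: a script tag holding a JS object.
html = '<script>{"enrichedEntity": {"simples": [{"size": "38.5"}]}}</script>'

# Grab everything from the opening brace up to the closing </script> tag,
# then deserialize the captured group.
match = re.search(r'(\{"enrichedEntity".*?)</script>', html)
data = json.loads(match.group(1))
print(data["enrichedEntity"]["simples"][0]["size"])  # prints 38.5
```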
I noticed that the size and availability were for the default shoe colour.
I also noticed that, within the JSON, if I searched for one of the other colours from the dropdown, I could find URIs for each colour of shoe listed.
I used Wolf as my search term (as I suspected there would be fewer matches for that term within the JSON)
You can see one of the alternate colours and its URI listed above
I visited that URI and found the availability and shoe sizes for that colour in the same place as I did for the default white shoes
I realised I could make an initial request and get the default colour and sizes with availability. From that same request, extract the other colours and their URIs
I could then make requests to those other URIs and re-use my existing code to extract the sizes/availability for the new colours
This is why I created my get_color_results() function. This was the re-usable code to extract the sizes and availability from each page
results holds all the matches within the JSON to certain keys I am looking for to navigate to the right place to get the sizes and availabilities, as well as the current colour
This code traverses the JSON to get to the right place to extract data I want to use later
results = []
color = ""
for i in data["graphqlCache"]:
    if "ern:product" in i:
        if "product" in data["graphqlCache"][i]["data"]:
            if "name" in data["graphqlCache"][i]["data"]["product"]:
                results.append(data["graphqlCache"][i]["data"]["product"])
                if (
                    color == ""
                    and "color" in data["graphqlCache"][i]["data"]["product"]
                ):
                    color = data["graphqlCache"][i]["data"]["product"]["color"]["name"]
The following pulls out the sizes and availability from results:
{
    j["size"]: j["offer"]["stock"]["quantity"]
    for j in [i for i in results if "simples" in i][0]["simples"]
}
For the first request only, the following gets the other shoes colours and their URIs into a dictionary to later loop:
colors = {
    j["node"]["color"]["name"]: j["node"]["uri"]
    for j in [
        a
        for b in [
            i["family"]["products"]["edges"]
            for i in results
            if "family" in i
            if "products" in i["family"]
        ]
        for a in b
    ]
}
This bit gets all the other colours and their availability:
for k, v in colors.items():
    if k not in final:
        color, results = get_color_results(v)
        final[color] = {
            j["size"]: j["offer"]["stock"]["quantity"]
            for j in [i for i in results if "simples" in i][0]["simples"]
        }
Throughout, I update the dictionary final with the found colour and its associated sizes and availabilities.
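As a compact, runnable illustration of that final extraction step, here is the same size/availability comprehension run against a minimal made-up results list (real responses contain many more keys than the ones shown):

```python
# Minimal stand-in for the parsed "results" list from get_color_results();
# only the keys the comprehension actually touches are included.
results = [
    {"name": "Air Max 90"},  # an entry without "simples" is skipped
    {
        "simples": [
            {"size": "42.5", "offer": {"stock": {"quantity": "MANY"}}},
            {"size": "44",   "offer": {"stock": {"quantity": "OUT_OF_STOCK"}}},
        ]
    },
]

sizes = {
    j["size"]: j["offer"]["stock"]["quantity"]
    for j in [i for i in results if "simples" in i][0]["simples"]
}
print(sizes)  # {'42.5': 'MANY', '44': 'OUT_OF_STOCK'}
```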
Always check whether a hidden api is available, it will save you a looooot of time.
In this case I found this api:
https://www.zalando.dk/api/graphql
You can POST a payload and you get a JSON answer back:
import http.client

# I extracted the payload from the network tab of my browser debugging tools
payload = """[{"id":"0ec65c3a62f6bd0b29a59f22021a44f42e6282b7f8ff930718a1dd5783b336fc","variables":{"id":"ern:product::NI112O0S7-H11"}},{"id":"0ec65c3a62f6bd0b29a59f22021a44f42e6282b7f8ff930718a1dd5783b336fc","variables":{"id":"ern:product::NI112O0RY-A11"}}]"""

conn = http.client.HTTPSConnection("www.zalando.dk")
headers = {
    'content-type': "application/json"
}
conn.request("POST", "/api/graphql", payload, headers)
res = conn.getresponse()
res = res.read()  # json output
For each product, res contains a JSON leaf listing the available sizes:
"simples": [
{
"size": "38.5",
"sku": "NI112O0P5-A110060000"
},
{
"size": "44.5",
"sku": "NI112O0P5-A110105000"
},
{
...
It's now easy to extract the information.
There is also a field that indicates whether the product has a promotion or not, cool if you want to track a discount.
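Once you have the raw bytes back, pulling the sizes out takes only a couple of lines; a sketch with a canned response fragment standing in for res (the real answer nests these "simples" leaves under each product node):

```python
import json

# Canned stand-in for the bytes returned by res.read(); the real response
# is a list of per-product GraphQL results with the same "simples" leaves.
res = (b'{"simples": [{"size": "38.5", "sku": "NI112O0P5-A110060000"},'
       b' {"size": "44.5", "sku": "NI112O0P5-A110105000"}]}')

data = json.loads(res)
sizes = [s["size"] for s in data["simples"]]
print(sizes)  # ['38.5', '44.5']
```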

DataTables server-side processing - echoing back draw() parameter (Python/Flask)

I'm using DataTables to display data from MySQL. It worked fine, until I was forced to use server-side processing (100k rows). Now, when I load the table in the browser, it works fine until I use some feature of DataTables (search, column sorting..). When I click on e.g. column name, all I got is "Processing..." message.
I noticed that with every click on the table, draw is incremented by 1 in the XMLHttpRequest, but 'draw' is still set to 1 in my code.
My definition of draw, recordsTotal, recordsFiltered in python/flask code(shortened):
tick = table.query.all()
rowsCount = table1.query.count()
x = {'draw': 1, 'recordsTotal': rowsCount, 'recordsFiltered': 10}
y = dict(data=[i.serialize for i in tick])
z = y.copy()
z.update(x)

@app.route("/api/result")
def result_json():
    return jsonify(z)

@app.route('/data')
def get_data():
    return render_template('data.html')
My JSON:
{
    "data": [
        {
            "first": "Anton",
            "id": 1,
            "last": "Spelec"
        },
        {
            "first": "Rosamunde",
            "id": 2,
            "last": "Pilcher"
        },
        {
            "first": "Vlasta",
            "id": 3,
            "last": "Burian"
        },
        {
            "first": "Anton",
            "id": 4,
            "last": "Bernolak"
        },
        {
            "first": "Willy",
            "id": 5,
            "last": "Wonka"
        }
    ],
    "draw": 1,
    "recordsFiltered": 5,
    "recordsTotal": 5
}
My html page with DataTables initialisation:
<script>
$(document).ready(function() {
    $('#table_id').DataTable({
        "processing": true,
        "serverSide": true,
        "paging": true,
        "pageLength": 10,
        "lengthMenu": [[10, 25, 50, -1], [10, 25, 50, "All"]],
        "ajax": {
            url: 'api/result',
        },
        columns: [
            { "data": "id" },
            { "data": "first" },
            { "data": "last" }
        ]
    });
});
</script>
<table id="table_id">
    <thead>
        <tr>
            <th>id</th>
            <th>first</th>
            <th>last</th>
        </tr>
    </thead>
</table>
The XHR is here:
Request URL:
http://10.10.10.12/api/result?draw=7&columns%5B0%5D%5Bdata%5D=id&columns%5B0%5D%5Bname%5D=&columns%5B0%5D%5Bsearchable%5D=true&columns%5B0%5D%5Borderable%5D=true&columns%5B0%5D%5Bsearch%5D%5Bvalue%5D=&columns%5B0%5D%5Bsearch%5D%5Bregex%5D=false&columns%5B1%5D%5Bdata%5D=first&columns%5B1%5D%5Bname%5D=&columns%5B1%5D%5Bsearchable%5D=true&columns%5B1%5D%5Borderable%5D=true&columns%5B1%5D%5Bsearch%5D%5Bvalue%5D=&columns%5B1%5D%5Bsearch%5D%5Bregex%5D=false&columns%5B2%5D%5Bdata%5D=last&columns%5B2%5D%5Bname%5D=&columns%5B2%5D%5Bsearchable%5D=true&columns%5B2%5D%5Borderable%5D=true&columns%5B2%5D%5Bsearch%5D%5Bvalue%5D=&columns%5B2%5D%5Bsearch%5D%5Bregex%5D=false&order%5B0%5D%5Bcolumn%5D=0&order%5B0%5D%5Bdir%5D=asc&start=0&length=10&search%5Bvalue%5D=asdf&search%5Bregex%5D=false&_=1536075500781
DataTables documentation advises to cast this parameter to an integer and send it back.
I found a similar question about the draw parameter and it suggested the same, but unfortunately I'm not able to make it work. Casting the parameter to an integer would not be a problem, I think, but I'm lost as to what to do with it next, or how to push the incremented draw parameter into my JSON.
Thank you.
If DataTables is sending a new value for draw to your server-- just read that value and send it back:
@app.route("/api/result")
def result_json():
    return jsonify(z)
Could just become (adjust the code if DataTables sends the values in some other way):
@app.route("/api/result")
def result_json():
    z.update({'draw': request.form.get('draw')})
    return jsonify(z)
I'm not addressing that your code doesn't seem to do anything with filtering or searching, but at least it gives you a starting point to build from.
Update
From the XHR code you pasted-- it looks like DataTables is passing values in via querystrings-- so request.args.get('draw') would be the way to access that draw data-value.
The draw parameter is only used by DataTables to ensure that the Ajax responses from server-side processing requests are drawn in sequence. To use features like sorting, filtering and paging, you will have to set up your own system for querying your data based on the parameters that DataTables sends when server-side processing is used.
The default parameters are here. You can also include your own custom parameters to that object by manipulating the data object in the ajax call.
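Putting the two answers together, the minimum viable handler reads draw (and the paging parameters) from the query string, casts draw to int, and echoes it back. A sketch of just that plumbing, with the Flask bits reduced to a plain function so the echo logic is visible (`args` stands in for Flask's request.args, and real searching/sorting is left out):

```python
# Sketch: build the server-side-processing response for one request.
# `args` stands in for Flask's request.args; `rows` is the full dataset.
def build_response(args, rows):
    draw = int(args.get("draw", 1))      # cast to int, per the DataTables docs
    start = int(args.get("start", 0))
    length = int(args.get("length", 10))
    page = rows[start:start + length] if length != -1 else rows[start:]
    return {
        "draw": draw,                    # echoed back, never incremented here
        "recordsTotal": len(rows),
        "recordsFiltered": len(rows),    # adjust once you implement search
        "data": page,
    }

rows = [{"id": i, "first": "A", "last": "B"} for i in range(25)]
resp = build_response({"draw": "7", "start": "10", "length": "10"}, rows)
print(resp["draw"], len(resp["data"]))  # 7 10
```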

Google fit data via google python api libraries

I'm using this python library from google but I can't figure out what to use for the 'body' argument. Is there an example body that I can draw from to create the dict that this tool will need?
Here is the code that I'm using:
flow = client.flow_from_clientsecrets(
    workLaptop,
    scope='https://www.googleapis.com/auth/fitness.activity.read',
    redirect_uri='oauth:code:from:somehwere')
auth_uri = flow.step1_get_authorize_url()
webbrowser.open_new(auth_uri)
auth_code = "a;ldskjfa;lsdkfja;ldsfkja;lsdkfjaldgha;"
credentials = flow.step2_exchange(auth_code)
http_auth = credentials.authorize(httplib2.Http())
service = discovery.build('fitness', 'v1', http_auth)
fitData = service.users().dataset().aggregate(userId='me', body=body).execute()
It's all fine until the part where I need to define the body. Here is the body that I'm trying:
body = {
    "aggregateBy": [
        {
            "dataSourceId": "derived:com.google.step_count.delta:com.google.android.gms:estimated_steps",
            "dataTypeName": "com.google.step_count.delta"
        },
    ],
    "bucketByActivitySegment": {
        "minDurationMillis": "A String", # Only activity segments of duration longer than this is used
    },
    "endTimeMillis": "1435269600000000000",
    "bucketBySession": {
        "minDurationMillis": "10", # Only sessions of duration longer than this is used
    },
    "bucketByActivityType": {
        "minDurationMillis": "10", # Only activity segments of duration longer than this is used
    },
    "startTimeMillis": "1435183200000000000", # required time range
    "bucketByTime": { # apparently oneof is not supported by reduced_nano_proto
        "durationMillis": "10",
    },
}
What is wrong with my body dict? Here is the error code:
https://www.googleapis.com/fitness/v1/users/me/dataset:aggregate?alt=json returned "Internal Error">
Here is an example of the object in the API explorer:
Although I'm not 100% au fait with the Google API for Google Fit, there are definitely some issues with your JSON body request in the first instance.
For example:
body = {
    "aggregateBy": [
        {
            "dataSourceId": "derived:com.google.step_count.delta:com.google.android.gms:estimated_steps",
            "dataTypeName": "com.google.step_count.delta"
        },
    ],
    "bucketByActivitySegment": {
        "minDurationMillis": "A String", # Only activity segments of duration longer than this is used
    },
    "endTimeMillis": "1435269600000000000",
    "bucketBySession": {
        "minDurationMillis": "10", # Only sessions of duration longer than this is used
    },
    "bucketByActivityType": {
        "minDurationMillis": "10", # Only activity segments of duration longer than this is used
    },
    "startTimeMillis": "1435183200000000000", # required time range
    "bucketByTime": { # apparently oneof is not supported by reduced_nano_proto
        "durationMillis": "10",
    },
}
Should actually be this:
body = {
    "aggregateBy": [
        {
            "dataSourceId": "derived:com.google.step_count.delta:com.google.android.gms:estimated_steps",
            "dataTypeName": "com.google.step_count.delta"
        }
    ],
    "bucketByActivitySegment": {
        "minDurationMillis": "A String" # Only activity segments of duration longer than this is used
    },
    "endTimeMillis": "1435269600000000000",
    "bucketBySession": {
        "minDurationMillis": "10" # Only sessions of duration longer than this is used
    },
    "bucketByActivityType": {
        "minDurationMillis": "10" # Only activity segments of duration longer than this is used
    },
    "startTimeMillis": "1435183200000000000", # required time range
    "bucketByTime": { # apparently oneof is not supported by reduced_nano_proto
        "durationMillis": "10"
    }
}
JSON-based REST services are really unforgiving of extra commas where they should not be; they render the string un-jsonable, which will lead to a 500 failure. Also replace the "A String" placeholder in minDurationMillis with a real numeric value. Give that a try in the first instance ;)
Not an expert myself, but I have been playing with the API for a number of days. Here's a sample from my OAuth playground.
Sample response
From what I understand, your "endTimeMillis": "1435269600000000000" is not properly defined, as it's in nanoseconds. For it to be in millis, change it to "1435269600000".
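Combining both answers, a body along these lines should at least serialize cleanly: millisecond timestamps, a single bucketing strategy, and no documentation placeholders. This is only a sketch; the time range and bucket size are arbitrary examples, not values from a real account:

```python
import json

# Sketch of a plausible aggregate body: one bucketing key only, and all
# times in milliseconds. Values are illustrative assumptions.
body = {
    "aggregateBy": [
        {
            "dataSourceId": "derived:com.google.step_count.delta:com.google.android.gms:estimated_steps",
            "dataTypeName": "com.google.step_count.delta",
        }
    ],
    "bucketByTime": {"durationMillis": 86400000},   # one bucket per day
    "startTimeMillis": 1435183200000,               # milliseconds, not ns
    "endTimeMillis": 1435269600000,
}

# The call itself would then be:
# service.users().dataset().aggregate(userId='me', body=body).execute()
print(len(json.dumps(body)) > 0)
```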
