I am creating posts on my site with the WordPress REST API, using Python.
Everything works well.
However, when I try to update a post, it doesn't update.
I have no idea whether the problem is in my code or in a setting I should change.
In this example I am trying to change a simple setting on a post with just a title. Nothing fancy.
import requests
import json
import base64
credentials = "username:password"
token = base64.b64encode(credentials.encode())
post_url = "https://www.example.com/wp-json/wp/v2/posts"
header = {"Authorization": "Basic " + token.decode('utf-8'), "Content-Type":"application/json"}
postID = "1122"
data_to_send = {"comment_status": "closed"}
json_to_send = json.dumps(data_to_send)
response = requests.post(post_url + "/" + postID, headers=header, json=json_to_send)
Here is the response. The "modified" time is correct (and is also reflected in the site admin), but the value of comment_status has not changed.
Any help with this will be greatly appreciated.
{
"id": 1122,
"date": "2022-02-03T21:24:32",
"date_gmt": "2022-02-03T19:24:32",
"guid": {
"rendered": "https:\\/\\/www.example.com\\/?p=1122"
},
"modified": "2022-02-03T21:24:32",
"modified_gmt": "2022-02-03T19:24:32",
"slug": "",
"status": "draft",
"type": "post",
"link": "https:\\/\\/www.example.com\\/?p=1122",
"title": {
"rendered": "title"
},
"content": {
"rendered": "",
"protected": false
},
"excerpt": {
"rendered": "",
"protected": false
},
"author": 1,
"featured_media": 0,
"comment_status": "open",
"ping_status": "open",
"sticky": false,
"template": "",
"format": "standard",
"meta": [],
"categories": [1],
"tags": [],
"acf": {
"Description": "",
"first_page_image_id": "",
"Date": "",
"Authors": "",
"Publisher": "",
"Filename": "",
"Format": "",
"Donated_by": "",
"ocr": "",
"Series": ""
},
"_links": {
"self": [{
"href": "https:\\/\\/www.example.com\\/wp-json\\/wp\\/v2\\/posts\\/1122"
}
],
"collection": [{
"href": "https:\\/\\/www.example.com\\/wp-json\\/wp\\/v2\\/posts"
}
],
"about": [{
"href": "https:\\/\\/www.example.com\\/wp-json\\/wp\\/v2\\/types\\/post"
}
],
"author": [{
"embeddable": true,
"href": "https:\\/\\/www.example.com\\/wp-json\\/wp\\/v2\\/users\\/1"
}
],
"replies": [{
"embeddable": true,
"href": "https:\\/\\/www.example.com\\/wp-json\\/wp\\/v2\\/comments?post=1122"
}
],
"version-history": [{
"count": 1,
"href": "https:\\/\\/www.example.com\\/wp-json\\/wp\\/v2\\/posts\\/1122\\/revisions"
}
],
"predecessor-version": [{
"id": 1123,
"href": "https:\\/\\/www.example.com\\/wp-json\\/wp\\/v2\\/posts\\/1122\\/revisions\\/1123"
}
],
"wp:attachment": [{
"href": "https:\\/\\/www.example.com\\/wp-json\\/wp\\/v2\\/media?parent=1122"
}
],
"wp:term": [{
"taxonomy": "category",
"embeddable": true,
"href": "https:\\/\\/www.example.com\\/wp-json\\/wp\\/v2\\/categories?post=1122"
}, {
"taxonomy": "post_tag",
"embeddable": true,
"href": "https:\\/\\/www.example.com\\/wp-json\\/wp\\/v2\\/tags?post=1122"
}
],
"curies": [{
"name": "wp",
"href": "https:\\/\\/api.w.org\\/{rel}",
"templated": true
}
]
}
}
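One likely culprit, offered as a hedged guess from the snippet alone: json_to_send is already a JSON string, and passing it through the json= keyword makes requests serialize it a second time, so the request body becomes a quoted string rather than an object and WordPress ignores the fields (which could explain why "modified" changes while comment_status does not). A minimal sketch of that fix, reusing the same credentials and endpoint, is to pass the dict directly (or send the pre-dumped string via data= instead):
import requests
import base64

credentials = "username:password"
token = base64.b64encode(credentials.encode()).decode("utf-8")
header = {"Authorization": "Basic " + token, "Content-Type": "application/json"}

post_url = "https://www.example.com/wp-json/wp/v2/posts"
postID = "1122"

# Pass the dict itself; requests serializes it to JSON exactly once.
data_to_send = {"comment_status": "closed"}
response = requests.post(post_url + "/" + postID, headers=header, json=data_to_send)
print(response.status_code, response.json().get("comment_status"))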
Related
I'm not advanced with Python or JSON. I have this JSON result:
{
"href": "https://api.spotify.com/v1/users/wizzler/playlists",
"items": [
{
"collaborative": false,
"external_urls": {
"spotify": "http://open.spotify.com/user/wizzler/playlists/53Y8wT46QIMz5H4WQ8O22c"
},
"href": "https://api.spotify.com/v1/users/wizzler/playlists/53Y8wT46QIMz5H4WQ8O22c",
"id": "53Y8wT46QIMz5H4WQ8O22c",
"images": [],
"name": "Wizzlers Big Playlist",
"owner": {
"external_urls": {
"spotify": "http://open.spotify.com/user/wizzler"
},
"href": "https://api.spotify.com/v1/users/wizzler",
"id": "wizzler",
"type": "user",
"uri": "spotify:user:wizzler"
},
"public": true,
"snapshot_id": "bNLWdmhh+HDsbHzhckXeDC0uyKyg4FjPI/KEsKjAE526usnz2LxwgyBoMShVL+z+",
"tracks": {
"href": "https://api.spotify.com/v1/users/wizzler/playlists/53Y8wT46QIMz5H4WQ8O22c/tracks",
"total": 30
},
"type": "playlist",
"uri": "spotify:user:wizzler:playlist:53Y8wT46QIMz5H4WQ8O22c"
},
{
"collaborative": false,
"external_urls": {
"spotify": "http://open.spotify.com/user/wizzlersmate/playlists/1AVZz0mBuGbCEoNRQdYQju"
},
"href": "https://api.spotify.com/v1/users/wizzlersmate/playlists/1AVZz0mBuGbCEoNRQdYQju",
"id": "1AVZz0mBuGbCEoNRQdYQju",
"images": [],
"name": "Another Playlist",
"owner": {
"external_urls": {
"spotify": "http://open.spotify.com/user/wizzlersmate"
},
"href": "https://api.spotify.com/v1/users/wizzlersmate",
"id": "wizzlersmate",
"type": "user",
"uri": "spotify:user:wizzlersmate"
},
"public": true,
"snapshot_id": "Y0qg/IT5T02DKpw4uQKc/9RUrqQJ07hbTKyEeDRPOo9LU0g0icBrIXwVkHfQZ/aD",
"tracks": {
"href": "https://api.spotify.com/v1/users/wizzlersmate/playlists/1AVZz0mBuGbCEoNRQdYQju/tracks",
"total": 58
},
"type": "playlist",
"uri": "spotify:user:wizzlersmate:playlist:1AVZz0mBuGbCEoNRQdYQju"
}
],
"limit": 9,
"next": null,
"offset": 0,
"previous": null,
"total": 9
}
Now I need to extract only the playlist IDs. How do I do that?
Edit:
I get the JSON data from doing:
r = requests.get(BASE_URL + 'users/' + user_id + '/playlists', headers=headers)
r = r.json()
print(r) returns the JSON data. When I try data = json.load(r),
I get this error: AttributeError: 'dict' object has no attribute 'read'
First, load the JSON file using the built-in json library.
import json
with open('path/to/json/file.json') as f:
    data = json.load(f)
Then, use a list comprehension to get only the IDs.
playlist_ids = [item['id'] for item in data['items']]
Edit: Or, if you've already got your JSON parsed, just use the list comprehension. Don't do r = r.json(); that replaces the response object with the parsed data. Assign it to another variable instead; data is fine: data = r.json()
playlist_ids = [item['id'] for item in data['items']]
Edit 2: If you only want the playlists where the owner ID is "wizzler", then add an if clause to the list comprehension.
playlist_ids = [item['id'] for item in data['items'] if item['owner']['id'] == 'wizzler']
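For completeness, a minimal end-to-end sketch using the request from the question's edit (BASE_URL, user_id, and headers are the question's own placeholders):
import requests

r = requests.get(BASE_URL + 'users/' + user_id + '/playlists', headers=headers)
data = r.json()  # keep the response object intact; put the parsed data in a new name
playlist_ids = [item['id'] for item in data['items']]
print(playlist_ids)  # ['53Y8wT46QIMz5H4WQ8O22c', '1AVZz0mBuGbCEoNRQdYQju']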
As the title says, I'm trying to check whether a playlist has a cover image so that my code doesn't try to load one that doesn't exist.
Here is my method:
currentPlaylist = spotifyObject.user_playlist(username, playlistManageURI)
if ['images'][0] in currentPlaylist:
    playlistCover_url = currentPlaylist['images'][0]['url']
    image = QImage()
    image.loadFromData(requests.get(playlistCover_url).content)
    self.playlistCover.setScaledContents(True)
    self.playlistCover.setPixmap(QPixmap(image))
else:
    print('Playlist Cover doesnt exist!')
A playlist that does have a cover image loads fine, but if I try to load one that doesn't have a cover, it gives me
IndexError: list index out of range
Here is what currentPlaylist looks like for a playlist that does have a cover:
{
"collaborative": false,
"description": "",
"external_urls": {
"spotify": "https://open.spotify.com/playlist/xxxxxxxx"
},
"followers": {
"href": null,
"total": 4
},
"href": "https://api.spotify.com/v1/playlists/xxxxxxxx?additional_types=track",
"id": "xxxxxx",
"images": [
{
"height": null,
"url": "https://i.scdn.co/image/ab67706c0000bebbcab54ad44bbf6dd124838df1",
"width": null
}
],
"name": "xxxxx",
"owner": {
"display_name": "xxxxx",
"external_urls": {
"spotify": "https://open.spotify.com/user/xxxxxxxx"
},
"href": "https://api.spotify.com/v1/users/xxxx",
"id": "xxxxx",
"type": "user",
"uri": "spotify:user:xxxx"
And this is what it looks like without a cover (a totally blank playlist):
{
"collaborative": false,
"description": "xxxx",
"external_urls": {
"spotify": "https://open.spotify.com/playlist/xxxxxx"
},
"followers": {
"href": null,
"total": 0
},
"href": "https://api.spotify.com/v1/playlists/xxxxxx?additional_types=track",
"id": "xxxxxx",
"images": [],
"name": "xxxxxx",
"owner": {
"display_name": "xxxxx",
"external_urls": {
"spotify": "https://open.spotify.com/user/xxxxx"
},
"href": "https://api.spotify.com/v1/users/xxxx",
"id": "xxxxxxxx",
"type": "user",
"uri": "xxxxxxx"
If "images" always exists in currentPlaylist irrespective of whether currentPlaylist['images'] is empty or not.
if currentPlaylist['images']:
...
Otherwise
if "images" in currentPlaylist and currentPlaylist["images"]:
...
Solved with this usage:
if currentPlaylist.get('images') == []:
    print('no image found in playlist!')
else:
    print(currentPlaylist['images'][0])
    playlistCover_url = currentPlaylist['images'][0]['url']
    image = QImage()
    image.loadFromData(requests.get(playlistCover_url).content)
    self.playlistCover.setScaledContents(True)
    self.playlistCover.setPixmap(QPixmap(image))
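As a side note on the design choice, a slightly more idiomatic guard than comparing against an empty list is the plain truthiness check suggested in the answer above; a minimal sketch of the same logic:
images = currentPlaylist.get('images') or []
if images:
    playlistCover_url = images[0]['url']
else:
    print('No image found in playlist!')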
I have seen a lot of threads but have been unable to find a solution for my case. I want to convert a nested JSON file to CSV in Python 2.7. The sample JSON file is below:
sample.json # My JSON file that mainly contains a firewall rule
"rulebase": [
{
"from": 1,
"name": "test-policy",
"rulebase": [
{
"action": "6c488338-8eec-4103-ad21-cd461ac2c473",
"action-settings": {},
"comments": "FYI",
"content": [
"97aeb369-9aea-11d5-bd16-0090272ccb30"
],
"content-direction": "any",
"content-negate": false,
"custom-fields": {
"field-1": "",
"field-2": "",
"field-3": ""
},
"destination": [
"97aeb369-9aea-11d5-bd16-0090272ccb30"
],
"destination-negate": false,
"domain": {
"domain-type": "domain",
"name": "SMC User",
"uid": "41e821a0-3720-11e3-aa6e-0800200c9fde"
},
"enabled": true,
"hits": {
"first-date": {
"iso-8601": "2016-09-04T22:21-0500",
"posix": 1473045718000
},
"last-date": {
"iso-8601": "2018-03-19T03:37-0500",
"posix": 1521448660000
},
"level": "low",
"percentage": "0%",
"value": 36737474
},
"install-on": [
"6c488338-8eec-4103-ad21-cd461ac2c476"
],
"meta-info": {
"creation-time": {
"iso-8601": "2016-09-15T12:42-0500",
"posix": 1473961370382
},
"creator": "System",
"last-modifier": "admin",
"last-modify-time": {
"iso-8601": "2018-08-30T18:36-0500",
"posix": 1535672186192
},
"lock": "unlocked",
"validation-state": "ok"
},
"rule-number": 1,
"service": [
"ef245528-9a3d-11d6-9eaa-3e5a6fdd6a6a",
"dff4f7ba-9a3d-11d6-91c1-3e5a6fdd5151",
"24bee257-6b37-49bb-99aa-557d993a0e48",
"97aeb45c-9aea-11d5-bd16-0090272ccb30",
"97aeb471-9aea-11d5-bd16-0090272ccb30"
],
"service-negate": false,
"source": [
"697bb7e0-0dfe-4070-a21a-68858daae98c",
"349fb05c-99b2-4fb2-aea6-7b447d0e661c"
],
"source-negate": true,
"time": [
"97aeb369-9aea-11d5-bd16-0090272ccb30"
],
"track": {
"accounting": false,
"alert": "none",
"per-connection": true,
"per-session": false,
"type": "598ead32-aa42-4615-90ed-f51a5928d41d"
},
"type": "access-rule",
"uid": "2da21174-0af8-4b5b-b02e-2957a24d70e1",
"vpn": [
"97aeb369-9aea-11d5-bd16-0090272ccb30"
]
},
{
"action": "6c488338-8eec-4103-ad21-cd461ac2c472",
"action-settings": {
"enable-identity-captive-portal": false
},
"comments": "",
"content": [
"97aeb369-9aea-11d5-bd16-0090272ccb30"
],
"content-direction": "any",
"content-negate": false,
"custom-fields": {
"field-1": "",
"field-2": "",
"field-3": ""
},
"destination": [
"b17d4573-ad1a-4126-ae6d-c874ea919cda",
"5b78417c-64ed-4566-9c76-e4e1af25a9ae",
"acb8d280-2ec4-46b1-be9f-c676fa255fb5"
],
"destination-negate": false,
"domain": {
"domain-type": "domain",
"name": "SMC User",
"uid": "41e821a0-3720-11e3-aa6e-0800200c9fde"
},
"enabled": true,
"hits": {
"level": "zero",
"percentage": "0%",
"value": 0
},
"install-on": [
"6c488338-8eec-4103-ad21-cd461ac2c476"
],
"meta-info": {
"creation-time": {
"iso-8601": "2018-07-25T16:27-0500",
"posix": 1532554044090
},
"creator": "admin",
"last-modifier": "admin",
"last-modify-time": {
"iso-8601": "2018-08-31T16:00-0500",
"posix": 1535749228997
},
"lock": "unlocked",
"validation-state": "ok"
},
"name": "tom#gmail.com",
"rule-number": 2,
"service": [
"18ec9eaa-1657-4240-ab97-5f234623336b"
],
"service-negate": false,
"source": [
"293ef5ba-5235-464e-9247-bda26229a998",
"b503873f-0c5f-4798-b87a-dd6ed4561b40"
],
"source-negate": false,
"time": [
"97aeb369-9aea-11d5-bd16-0090272ccb30"
],
"track": {
"accounting": false,
"alert": "none",
"per-connection": true,
"per-session": false,
"type": "598ead32-aa42-4615-90ed-f51a5928d41d"
},
"type": "access-rule",
"uid": "fcc5a2c8-3a78-4cc5-9fd3-e7bd59eb36ba",
"vpn": [
"97aeb369-9aea-11d5-bd16-0090272ccb30"
]
},
{
"action": "6c488338-8eec-4103-ad21-cd461ac2c472",
"action-settings": {
"enable-identity-captive-portal": false
},
"comments": "FYI",
"content": [
"97aeb369-9aea-11d5-bd16-0090272ccb30"
],
"content-direction": "any",
"content-negate": false,
"custom-fields": {
"field-1": "",
"field-2": "",
"field-3": ""
},
"destination": [
"b17d4573-ad1a-4126-ae6d-c874ea919cda",
"5b78417c-64ed-4566-9c76-e4e1af25a9ae",
"acb8d280-2ec4-46b1-be9f-c676fa255fb5"
],
"destination-negate": false,
"domain": {
"domain-type": "domain",
"name": "SMC User",
"uid": "41e821a0-3720-11e3-aa6e-0800200c9fde"
},
"enabled": true,
"hits": {
"first-date": {
"iso-8601": "2018-03-14T14:55-0500",
"posix": 1521057347000
},
"last-date": {
"iso-8601": "2018-03-19T03:58-0500",
"posix": 1521449932000
},
"level": "low",
"percentage": "0%",
"value": 11801
},
"install-on": [
"6c488338-8eec-4103-ad21-cd461ac2c476"
],
"meta-info": {
"creation-time": {
"iso-8601": "2018-03-14T09:47-0500",
"posix": 1521038846894
},
"creator": "System",
"last-modifier": "admin",
"last-modify-time": {
"iso-8601": "2018-08-31T16:17-0500",
"posix": 1535750234317
},
"lock": "unlocked",
"validation-state": "ok"
},
"name": "tom1#gmail.com",
}
From the above JSON file, my requirement is basically to write the keys {uid, name, rule-number, comments, destination, source, hits.last-date}, etc. with their values to CSV.
With the code below I was able to generate the CSV, but it seems to produce only the header, nothing else.
import json
import csv
def get_leaves(item, key=None):
    if isinstance(item, dict):
        leaves = []
        for i in item.keys():
            leaves.extend(get_leaves(item[i], i))
        return leaves
    elif isinstance(item, list):
        leaves = []
        for i in item:
            leaves.extend(get_leaves(i, key))
        return leaves
    else:
        return [(key, item)]

with open('sample.json') as f_input, open('output.csv', 'wb') as f_output:
    csv_output = csv.writer(f_output)
    write_header = True
    for entry in json.load(f_input):
        leaf_entries = sorted(get_leaves(entry))
        if write_header:
            csv_output.writerow([k for k, v in leaf_entries])
            write_header = False
        csv_output.writerows([v for k, v in leaf_entries.items()])
Please guide me, as I am very new to Python scripting.
You're pretty much there. The last line is the problem: leaf_entries is a list of (key, value) tuples, not a dict, so leaf_entries.items() raises an AttributeError, and csv_output.writerows() expects a list of rows rather than a list of values. Write one row per entry with csv_output.writerow([v for k, v in leaf_entries]) instead.
Information on these calls is available here:
https://docs.python.org/3/library/csv.html#writer-objects
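For reference, a hedged sketch of the corrected write loop, reusing the get_leaves helper and file names from the question (note there is no .items() call, since leaf_entries is a list of tuples):
with open('sample.json') as f_input, open('output.csv', 'wb') as f_output:
    csv_output = csv.writer(f_output)
    write_header = True
    for entry in json.load(f_input):
        leaf_entries = sorted(get_leaves(entry))
        if write_header:
            csv_output.writerow([k for k, v in leaf_entries])
            write_header = False
        # one CSV row of leaf values per rulebase entry
        csv_output.writerow([v for k, v in leaf_entries])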
Just figured it out. The code below works properly and generates valid CSV data from my complex JSON file.
# Generate CSV from JSON
fw_access_layers_data = open('show-access-layers.json', 'r')
fw_access_layers_parsed = json.loads(fw_access_layers_data.read())
access_layers = fw_access_layers_parsed['access-layers']
fw_access_layers_csv = open('show-access-layers.csv', 'w')
csvwriter = csv.writer(fw_access_layers_csv)
count = 0
for access_layer in access_layers:
    if count == 0:
        header = access_layer.keys()
        csvwriter.writerow(header)
        count += 1
    csvwriter.writerow(access_layer.values())
fw_access_layers_csv.close()
Thanks for your help mates.
I am trying to parse JSON with Python. I am trying to get the value of "login" (which is "michael") for the entries whose "type" is "CreateEvent".
Here's my JSON:
[
{
"id": "7",
"type": "PushEvent",
"actor": {
"id": 5,
"login": "michael",
"display_login": "michael",
"gravatar_id": "",
"url": "https://ec2",
"avatar_url": "https://ec2"
},
"repo": {
"id": 2,
"name": "myorganization/puppet",
"url": "https://ec2"
},
"payload": {
"push_id": 5,
"size": 1,
"distinct_size": 1,
"ref": "refs/heads/dev",
"head": "5584d504f971",
"before": "e485f37ce935775846f33b",
"commits": [
{
"sha": "5584cd504f971",
"author": {
"email": "michael.conte#gmail.ca",
"name": "michael"
},
"message": "Create dev.pp",
"distinct": true,
"url": "https://ec2"
}
]
},
"public": true,
"created_at": "2018-02-20T16:15:57Z",
"org": {
"id": 6,
"login": "myorganization",
"gravatar_id": "",
"url": "https://ec2",
"avatar_url": "https://ec2"
}
},
{
"id": "6",
"type": "CreateEvent",
"actor": {
"id": 5,
"login": "michael",
"display_login": "michael",
"gravatar_id": "",
"url": "https://ec2",
"avatar_url": "https://ec2"
},
"repo": {
"id": 2,
"name": "myorganization/puppet",
"url": "https://ec2"
},
"payload": {
"ref": "dev",
"ref_type": "branch",
"master_branch": "master",
"description": null,
"pusher_type": "user"
},
"public": true,
"created_at": "2018-02-20T16:15:44Z",
"org": {
"id": 6,
"login": "myorganization",
"gravatar_id": "",
"url": "https://ec2",
"avatar_url": "https://ec2"
}
},
{
"id": "5",
"type": "PushEvent",
"actor": {
"id": 5,
"login": "michael",
"display_login": "michael",
"gravatar_id": "",
"url": "https://ec2",
"avatar_url": "https://ec2"
},
"repo": {
"id": 2,
"name": "myorganization/puppet",
"url": "https://ec2"
},
"payload": {
"push_id": 3,
"size": 1,
"distinct_size": 1,
"ref": "refs/heads/master",
"head": "e485f84b875846f33b",
"before": "f8bb87b952bfb4",
"commits": [
{
"sha": "e485f37ce6f33b",
"author": {
"email": "michael.conte#gmail.ca",
"name": "michael"
},
"message": "Create hello.pp",
"distinct": true,
"url": "https://ec2"
}
]
},
"public": true,
"created_at": "2018-02-20T15:48:42Z",
"org": {
"id": 6,
"login": "myorganization",
"gravatar_id": "",
"url": "https://ec2",
"avatar_url": "https://ec2"
}
},
{
"id": "4",
"type": "CreateEvent",
"actor": {
"id": 5,
"login": "michael",
"display_login": "michael",
"gravatar_id": "",
"url": "https://ec2",
"avatar_url": "https://ec2?"
},
"repo": {
"id": 2,
"name": "myorganization/puppet",
"url": "https://ec2"
},
"payload": {
"ref": "master",
"ref_type": "branch",
"master_branch": "master",
"description": null,
"pusher_type": "user"
},
"public": true,
"created_at": "2018-02-20T15:48:21Z",
"org": {
"id": 6,
"login": "myorganization",
"gravatar_id": "",
"url": "https://ec2",
"avatar_url": "https://ec2"
}
},
{
"id": "3",
"type": "CreateEvent",
"actor": {
"id": 5,
"login": "michael",
"display_login": "michael",
"gravatar_id": "",
"url": "https://ec2",
"avatar_url": "https://ec2"
},
"repo": {
"id": 2,
"name": "myorganization/puppet",
"url": "https://ec2"
},
"payload": {
"ref": null,
"ref_type": "repository",
"master_branch": "master",
"description": null,
"pusher_type": "user"
},
"public": true,
"created_at": "2018-02-20T15:48:05Z",
"org": {
"id": 6,
"login": "myorganization",
"gravatar_id": "",
"url": "https://ec2",
"avatar_url": "https://ec2"
}
}
]
Here's my code:
response = requests.get(url, headers=headers, verify=False)
name = response.json()
fname = (name['type']['actor']['login'])
print(fname)
When I run the above code, I get a type error:
TypeError: list indices must be integers or slices, not str
What am I doing wrong? I am using Python 3 for my code.
Try
fname = name[0]['payload']['commits'][0]['author']['name']
The name "michael" you are trying to get is inside the "author" dictionary, which is inside a single-item list under "commits", which is inside the "payload" dictionary, which itself sits in the first element of the top-level list.
Check out the docs for more info on collection types: http://python-textbok.readthedocs.io/en/1.0/Collections.html
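If the goal is specifically the "login" for entries whose "type" is "CreateEvent", as the question states, a minimal sketch over the parsed list might look like this (name is the list returned by response.json() in the question's code):
create_event_logins = [event['actor']['login']
                       for event in name
                       if event['type'] == 'CreateEvent']
print(create_event_logins)  # ['michael', 'michael', 'michael']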
I post some JSON to a view. I now want to parse the data and add it to my database.
I need to get the properties name and theme and iterate over the pages array. My JSON is as follows:
{
"name": "xaAX",
"logo": "",
"theme": "b",
"fullSiteLink": "http://www.hello.com",
"pages": [
{
"id": "1364484811734",
"name": "Page Name",
"type": "basic",
"components": {
"img": "",
"text": ""
}
},
{
"name": "Twitter",
"type": "twitter",
"components": {
"twitter": {
"twitter-username": "zzzz"
}
}
}
]
}
Here is what I have so far:
def smartpage_create_ajax(request):
    if request.POST:
        # get stuff and loop over each page?
        return HttpResponse('done')
Python provides the json module to encode/decode JSON:
import json
json_dict = json.loads(request.POST['your_json_data'])
json_dict['pages']
[
{
"id": "1364484811734",
"name": "Page Name",
"type": "basic",
"components": {
"img": "",
"text": ""
}
},
{
"name": "Twitter",
"type": "twitter",
"components": {
"twitter": {
"twitter-username": "zzzz"
}
}
}
]
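A minimal sketch of how the view could use this, assuming the JSON arrives as the raw request body rather than a named form field (adjust the json.loads() source to match how you actually post it); the page-handling line is only a placeholder:
import json
from django.http import HttpResponse

def smartpage_create_ajax(request):
    if request.method == 'POST':
        data = json.loads(request.body)
        name = data['name']
        theme = data['theme']
        for page in data['pages']:
            # create/save a record per page here; the keys match the posted JSON
            print(page['name'], page['type'], page['components'])
        return HttpResponse('done')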