I'm trying to write a Python script that reads in a JSON file, shows how many screens are available, and pulls the values of the different JSON fields.
JSON
{
"screen": [
{
"id": "1",
"user": "user1#example.com",
"password": "letmein",
"code": "123456"
},
{
"id": "2",
"user": "user2#example.com",
"password": "letmein",
"code": "123455"
},
{
"id": "3",
"user": "user3#example.com",
"password": "letmein",
"code": "223456"
}
]
}
Python
import json
from pprint import pprint

with open('screen.json') as data_file:
    data = json.load(data_file)
    # no explicit close() needed; the with block closes the file automatically

pprint(data)
data["screen"][0]["id"]
As you can see from the Python script, I can successfully print out the JSON file with pprint, but when I try to print out individual values I get stuck.
Am I doing something wrong here?
I want to be able to use all the values in the JSON file as variables later in the Python script, so they can be used with Selenium to open a web page.
I tested your example code and it works fine. It looks like you just forgot to actually print the value in the final line. That is:
data["screen"][0]["id"]
should be
pprint(data["screen"][0]["id"])
which prints u'1' when I try it.
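To also cover the "how many screens are available" part of the question, here is a minimal sketch building on the same file, assuming the screen.json shown above:
import json

with open('screen.json') as data_file:
    data = json.load(data_file)

screens = data["screen"]
print(len(screens))  # number of screens available; prints 3 for the file above

# pull the individual fields out of each screen entry
for screen in screens:
    print(screen["id"], screen["user"], screen["password"], screen["code"])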
I'm still learning Python, and I'm writing a classic password manager app. It uses an external JSON file to store the data:
{
"info": [
{
"website_url": "1",
"username": "1",
"password": "1"
},
{
"website_url": "2",
"username": "2",
"password": "2"
},
{
"website_url": "3",
"username": "3",
"password": "3"
},
{
"website_url": "4",
"username": "4",
"password": "4"
}
]
}
I want to add an option to delete login information from the file, but I don't really know what to do right now. Here's my code for the feature (I'm using the JSON library):
url = input('Paste Login Page URL Here : ')
with open(INFO_PATH) as file:
    data = json.load(file)

for info in data['info']:
    if info['website_url'] == url:
        del info
Any ideas?
You have to deserialize your JSON file to a Python object, delete the entry from it, and then serialize the object back to the same file, rewriting its content. (As written, del info only unbinds the loop variable; it never removes the entry from the list.) Also note that your JSON file is not a bare JSON array but a JSON object containing an array under the "info" key, so you have to extract this field from it. This can be done like below:
import json

url = input('Paste Login Page URL Here : ')
with open(INFO_PATH, 'r') as file:
    data = json.load(file)['info']  # data is the json "info" array

# keep only the entries whose URL does not match, then overwrite the file content
data = [info for info in data if info['website_url'] != url]
with open(INFO_PATH, 'w') as file:
    json.dump({'info': data}, file)
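Rewriting the whole file is the normal approach here: a JSON file cannot be edited in place, so you load it into a Python object, modify that object, and dump it back out.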
I have a cURL command that, when run via the CMD Prompt window, returns the requested data. However, I am trying to figure out how to automate this process via Python (either 2.7 or 3.6) so that it returns a CSV file instead. (I am using the latest version of Python allowed when working with either ArcGIS ArcMap and/or ArcGIS Pro mapping software.)
I have basic Python skills and need help getting the script right. Below is the cURL command I run via the CMD Prompt window. How do I do this with Python so that it returns a CSV file?
I have looked up various examples, which seem to differ somewhat in the parameters required. I have tried to modify a few, but couldn't get them to work. I am not set on one method versus another, so I am open to different solutions; I just need to find one that works. My cURL command doesn't require a username, just an API key and some basic search parameters, while most examples I found required both a username and password.
Current cURL code...
curl -H "Authorization: Basic myAPIkey"
“https://website.com/api/incident?search_from=2019-08-01&search_to=2019-08-31”
I want to be able to download a csv file.
The errors I have been getting vary depending upon which example and method I was trying to replicate.
Thanks for any assistance in advance.
UPDATE
I managed to figure out how to submit my initial request with the Python requests module:
import requests
import json

headers = {'Authorization': 'Basic APIKEY'}
response = requests.get("https://website.com/api/incident?search_from=2019-08-01&search_to=2019-08-31&country_id=5", headers=headers)
print(json.dumps(response.json(), indent=4))
It now returns what I believe is a JSON response; however, I am still having issues parsing it properly to save it as either a CSV file or a new feature class in ArcGIS.
The print statement returns the following:
{
"message": "",
"total": 1,
"data": {
"total": 353,
"results": [
{
"mgrs": "33S UR 71235 88779",
"weight": 0,
"type_id": "1",
"longitude": "13.63040400",
"date": "2019-08-31",
"latitude": "32.42868600",
"icon": "assets/images/map_icons/Announcement.png",
"id": "507188",
"quantity": "1"
},
{
"mgrs": "33S US 40233 29235",
"weight": 1,
"type_id": "3",
"longitude": "13.29387300",
"date": "2019-08-31",
"latitude": "32.78945800",
"icon": "assets/images/map_icons/ArmedClashes.png",
"id": "507187",
"quantity": "1"
}
]
}
}
I want to be able to read and write the "results" list into CSV format with a proper header column.
Any advice would be greatly appreciated.
Thanks in advance.
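A minimal sketch of one way to write those results out, assuming the response shape shown above (Python 3 shown here; incidents.csv is an arbitrary output name):
import csv
import requests

headers = {'Authorization': 'Basic APIKEY'}
response = requests.get("https://website.com/api/incident?search_from=2019-08-01&search_to=2019-08-31&country_id=5", headers=headers)
results = response.json()["data"]["results"]

with open('incidents.csv', 'w', newline='') as f:
    # csv.DictWriter takes the header columns from the dict keys
    writer = csv.DictWriter(f, fieldnames=list(results[0].keys()))
    writer.writeheader()
    writer.writerows(results)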
I am a new Elasticsearch user, and I am struggling to accomplish something that was easy for me in Splunk. There are a few specific fields that I want from each event in my search, but the search "hit" output is always returned in a big JSON structure that is 95% useless to me. I do my searches with the Python requests module, so I can parse out the results I want in Python when they return, but I have to access millions of events and performance is important, so I hope there is a faster way.
Here is an example of one single event returned from an Elasticsearch search:
<Response [200]>
{
"hits": {
"hits": [
{
"sort": [
1559438581000
],
"_type": "_doc",
"_source": {
"datapoint": {
"updated_at": "2019-06-02T00:01:02Z",
"value": 102
},
"metadata": {
"id": "AB33",
"property_name": "some_property",
"oem_model": "some_model"
}
},
"_score": null,
"_index": "datapoint-2019.06",
"_id": "datapoint+4+314372003"
},
What I would prefer is for my search to return results only in a table/CSV/dataframe format of the updated_at, value, id, property_name, oem_model values, like this:
2019-06-02T00:01:02Z,102,AB33,some_property,some_model
..... and similar for other events ...
Does anyone know if this is possible to do with Elasticsearch or with the requests library without parsing the json after the search output is returned? Thank you very much for any help.
Yes, sure, with source filtering (see the Elasticsearch documentation on source filtering).
You filter the fields to be returned by your query, so this way you choose only the useful fields and you don't need to parse the JSON afterwards. Have a look here:
from elasticsearch import Elasticsearch

es = Elasticsearch()
query = {
    "_source": [ "obj1.*", "obj2.*" ],  # the list of the fields to return for each doc
    "query": {
        "term": { "user": "kimchy" }
    }
}
res = es.search(index="your_index_name", body=query)
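For the fields in the question, a sketch of what that filter might look like, assuming the document shape shown in the hit above:
query = {
    "_source": [
        "datapoint.updated_at",
        "datapoint.value",
        "metadata.id",
        "metadata.property_name",
        "metadata.oem_model"
    ],
    "query": { "match_all": {} }
}
res = es.search(index="your_index_name", body=query)
Each hit's _source then contains only those five fields, which keeps the payload small when pulling millions of events.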
So I have built a REST client that returns a JSON response. However, I have an issue where the JSON output is not exactly what I need:
Current Response:
{
"output": {
"status": "Device 'Test' does not exist",
"result": "null",
"response": {
"output": "success",
"result": 204
}
}
}
This output has an outermost enclosing "output" key, but I don't want that to be present. So basically I want my response to look like this:
{
"status": "Device 'Test' does not exist",
"result": "null",
"response": {
"output": "success",
"result": 204
}
}
I did try converting the JSON to a dict and then removing the key, but no luck. Any suggestions on how to achieve this?
Thank you
If your response is already a dictionary (parsed JSON), then you can do the following:
value_required = response["output"]
If it is in text format (which I think it is), then you just need to do the following:
import json
value_required = json.loads(response)["output"]
You should be able to do:
response = json.loads(response)['output']
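A minimal end-to-end sketch, assuming the response arrives as JSON text like the one shown above and you want the trimmed JSON back as a string (raw_response is a hypothetical name for whatever your client hands you):
import json

# raw_response: the JSON text returned by the REST client (hypothetical name)
inner = json.loads(raw_response)["output"]  # drop the outermost "output" key
print(json.dumps(inner, indent=4))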
I need to read this JSON URL: http://www.jsoneditoronline.org/?url=https://api.twitch.tv/kraken/streams/hazretiyasuo
My code:
geourl = "https://api.twitch.tv/kraken/streams/hazretiyasuo"
response = urllib.request.urlopen(geourl)
content = response.read()
data = json.loads(content.decode("utf8"))
for row in data['stream']
print data['game']
I can read 'stream' but I can't read 'game'; 'game' is inside of 'stream'.
When I run the code in the question and then print out the JSON that gets returned, I get the following:
{
"_links": {
"channel": "https://api.twitch.tv/kraken/channels/hazretiyasuo",
"self": "https://api.twitch.tv/kraken/streams/hazretiyasuo"
},
"stream": null
}
The reason you can't get data['game'] is that the JSON you get back does not contain that information: "stream" is null here (the channel was not live at the time), so there is no nested object holding a 'game' field.
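For completeness, a sketch of how the lookup would work when the channel is live, assuming the kraken streams endpoint nests 'game' inside the 'stream' object:
import json
import urllib.request

geourl = "https://api.twitch.tv/kraken/streams/hazretiyasuo"
response = urllib.request.urlopen(geourl)
data = json.loads(response.read().decode("utf8"))

if data['stream'] is not None:
    # 'game' lives inside the 'stream' object, not at the top level
    print(data['stream']['game'])
else:
    print("Channel is offline; no stream data available")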