So I'm experimenting with JSON a bit, and this is the code I've got so far:
import json
from utorrent.client import UTorrentClient
uTorrent = UTorrentClient("xxxx", "xxxx", "xxxx")
data = uTorrent.list()
torrents = json.loads(data)["torrents"]
for torrent in torrents:
    print torrent[0]   # hash
    print torrent[2]   # name
    print torrent[21]  # status
    print torrent[26]  # folder
The typical JSON output can be viewed here, but I'm getting an "expected string or buffer" error. Anyone with any pointers?
The point of the above code is to print the hash, name, and so on for each torrent in the list provided by uTorrent.
Did you try using load instead of loads? I was having the same problem and realized there's a difference: loads expects a string, while load expects a file-like object.
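A minimal sketch of that difference, for reference:
import json

# json.loads parses a str (or bytes) that is already in memory:
parsed = json.loads('{"torrents": []}')

# json.load reads from a file-like object (anything with a .read() method);
# "torrents.json" is a hypothetical file used for illustration:
with open("torrents.json") as fh:
    parsed = json.load(fh)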
Related
I want to create a simple Python program that calls the colornames.org API for the name of any hex code entered by the user. However, all I want my program to output is the "name" info.
How can I make it output only that, and not all of the information?
Code below:
import requests
import json
hexcodeinput = input("Hex code you've found (format: FF0000, no #): ")
print(hexcodeinput + " is your selected hex code. Searching...")
response = requests.get("https://colornames.org/search/json/?hex=" + (hexcodeinput))
print(response.text)
You should get the response as JSON, not as plain text; then it is a dict which you can index:
import requests
hex_code = input("Hex code you've found (format: FF0000, no #): ")
print("%s is your selected hex code. Searching..." % hex_code)
response = requests.get("https://colornames.org/search/json/?hex="+hex_code)
details = response.json() # a dict with all the info
print(details['name']) # get the name from that dict
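Under the hood, response.json() is just shorthand for decoding the body yourself; a self-contained sketch of the equivalence:
import json
import requests

response = requests.get("https://colornames.org/search/json/?hex=FF0000")
details = json.loads(response.text)  # same result as response.json()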
Disclaimer: I am very new to Python and have no idea what I am doing; I am teaching myself from the web.
I have some code that looks like this
Code:
import sys  # Ed: added; needed for sys.exit below
from requests import get  # Ed: added for clarity
myurl = URLBASE + args.key
response = get(myurl)  # Ed: response is a requests.Response object
# check key is valid
json = response.text  # the body as a str, e.g. "null"
print(json)
if json is None:
    sys.exit("Problem getting API data, check your key")
print("how did i get here")
Output:
null
how did i get here
But I have no idea how that is possible: it literally prints null, but then doesn't match in the if. Any help would be appreciated.
Thanks.
I am sure I still don't fully understand, but this "fixes" my problem.
The requests.Response object has a json() method, so I should have been using that instead of text (thanks, wim). Changing the code to the below, as suggested, makes it work:
import sys
from requests import get
myurl = URLBASE + args.key
response = get(myurl)
# check key is valid
json = response.json()  # parses the body, so a literal "null" becomes None
if json is None:
    sys.exit("Problem getting API data, check your key")
print("how did i get here")
The question remains (purely out of curiosity): how would I write an if statement to determine whether a string is null?
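For the record, the body here was the four-character string "null", which is truthy as a str but decodes to None; a quick self-contained sketch of the checks:
import json

body = "null"                    # what response.text contained
print(body is None)              # False: a non-empty str is never None
print(json.loads(body) is None)  # True: JSON null decodes to Python None
print(body.strip() == "null")    # True: the plain string comparison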
Thanks to Ry and wim, for their help.
I have accessed a list in SharePoint Online with Python and want to save the list data to a file (CSV or JSON) to transform it and sort some metadata for a migration.
I have full access to the SharePoint site I am connecting to (client ID, secret, etc.).
from office365.runtime.auth.authentication_context import AuthenticationContext
from office365.runtime.client_request import ClientRequest
from office365.sharepoint.client_context import ClientContext
I have set my settings:
app_settings = {
    'url': 'https://company.sharepoint.com/sites/abc',
    'client_id': 'id',
    'client_secret': 'secret'
}
Connecting to the site:
context_auth = AuthenticationContext(url=app_settings['url'])
context_auth.acquire_token_for_app(client_id=app_settings['client_id'],
                                   client_secret=app_settings['client_secret'])
ctx = ClientContext(app_settings['url'], context_auth)
Getting the lists and checking the titles:
lists = ctx.web.lists
ctx.load(lists)
ctx.execute_query()
for lista in lists:
    print(lista.properties["Title"])  # prints the title of each list; this works
lists is a ListCollection object.
From the previous code, I see that I want to get the list titled: "Analysis A":
a1 = lists.get_by_title("Analysis A")
ctx.load(a1)
ctx.execute_query() # a1 is a List item - non-iterable
Then I get the data in that list:
a1w = a1.get_items()
ctx.load(a1w)
ctx.execute_query() # a1w is a ListItemCollection - iterable
Idea 1: DataFrame to JSON/CSV:
df1 = pd.DataFrame(a1w)  # doesn't work
Idea 2:
follow this link: How to save a Sharepoint list as a file?
I get an error while executing the json.loads command:
JSONDecodeError: Extra data: line 1 column 5 (char 4)
Alternatives:
I tried Shareplum, but I can't connect with it the way I did with Office365-REST-Python-Client. My guess is that it doesn't offer authorisation with a client ID and client secret (as far as I can see).
How would you do it? Or am I missing something?
A sample test demo for your reference:
import json

import pandas

context_auth = AuthenticationContext(url=app_settings['url'])
context_auth.acquire_token_for_app(client_id=app_settings['client_id'],
                                   client_secret=app_settings['client_secret'])
ctx = ClientContext(app_settings['url'], context_auth)

target_list = ctx.web.lists.get_by_title("ListA")  # renamed from "list" to avoid shadowing the builtin
items = target_list.get_items()
ctx.load(items)
ctx.execute_query()

data_list = []
for item in items:
    data_list.append({"Title": item.properties["Title"], "Created": item.properties["Created"]})
    print("Item title: {0}".format(item.properties["Title"]))

pandas.read_json(json.dumps(data_list)).to_csv("output.csv", index=None, header=True)
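To save the same rows as JSON instead of CSV (the question asks for either), a minimal sketch reusing data_list from above:
import json

# data_list is the list of dicts built in the loop above
with open("output.json", "w") as fh:
    json.dump(data_list, fh, indent=2)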
Idea 1
It's hard to tell what can go wrong without the error trace, but I suspect it has to do with malformed data being passed as the argument. See here in the documentation for exactly what pd.DataFrame expects.
Do also consider updating your question with the relevant stack traces.
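As a sketch of the likely fix: pd.DataFrame cannot consume the ListItemCollection object directly, but it can consume a list of plain dicts built from each item's properties (assuming a1w was loaded as in the question):
import pandas as pd

# a1w is the ListItemCollection loaded earlier; build plain dicts first
rows = [item.properties for item in a1w]

df1 = pd.DataFrame(rows)  # one column per SharePoint field
df1.to_csv("analysis_a.csv", index=False)  # hypothetical output file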
Idea 2
JSONDecodeError: Extra data: line 1 column 5 (char 4)
This error simply means that the JSON string is not in a valid format. You can validate JSON strings using this service; it often pinpoints the error, which you can then fix manually.
This error can also occur when the input holds several JSON documents, for example one per line. You can avoid that by parsing each line as you go:
import json

data_list = []
for line in open('file_name.json', 'r'):
    data_list.append(json.loads(line))
This avoids storing intermediate Python objects. Also see this related issue if nothing works.
I'm trying to manipulate a dynamic JSON from this site:
http://esaj.tjsc.jus.br/cposgtj/imagemCaptcha.do
It has three elements: imagem (a base64 image), labelValorCaptcha (just a message), and uuidCaptcha (a value to pass as a parameter to play a sound at the link below):
http://esaj.tjsc.jus.br/cposgtj/somCaptcha.do?timestamp=1455996420264&uuidCaptcha=sajcaptcha_e7b072e1fce5493cbdc46c9e4738ab8a
When I open the first site in a browser and paste the uuidCaptcha value after the equals sign ("...uuidCaptcha=") in the second link, the sound plays normally. I wrote some simple code to catch these elements:
import urllib, json
url = "http://esaj.tjsc.jus.br/cposgtj/imagemCaptcha.do"
response = urllib.urlopen(url)
data = json.loads(response.read())
urlSound = "http://esaj.tjsc.jus.br/cposgtj/somCaptcha.do?timestamp=1455996420264&uuidCaptcha="
print urlSound + data['uuidCaptcha']
But I don't know what's happening: the captured uuidCaptcha value doesn't work; it opens an error web page.
Does anyone know why?
Thanks!
It works for me.
$ cat a.py
#!/usr/bin/env python
# encoding: utf-8
import urllib, json
url = "http://esaj.tjsc.jus.br/cposgtj/imagemCaptcha.do"
response = urllib.urlopen(url)
data = json.loads(response.read())
urlSound = "http://esaj.tjsc.jus.br/cposgtj/somCaptcha.do?timestamp=1455996420264&uuidCaptcha="
print urlSound + data['uuidCaptcha']
$ python a.py
http://esaj.tjsc.jus.br/cposgtj/somCaptcha.do?timestamp=1455996420264&uuidCaptcha=sajcaptcha_efc8d4bc3bdb428eab8370c4e04ab42c
As @Charlie Harding said, the best way is to download the page and read the JSON values from it, because this JSON is dynamic and the uuid only exists for the session that requested it.
More info here.
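A sketch of that approach using requests with a single Session, on the assumption that the uuid is tied to the session that generated it:
import requests

# One session, so the uuidCaptcha stays tied to the cookies that created it
session = requests.Session()
data = session.get("http://esaj.tjsc.jus.br/cposgtj/imagemCaptcha.do").json()

sound_url = ("http://esaj.tjsc.jus.br/cposgtj/somCaptcha.do"
             "?timestamp=1455996420264&uuidCaptcha=" + data["uuidCaptcha"])
audio = session.get(sound_url)  # fetched in the same session as the uuid
print(audio.status_code, audio.headers.get("Content-Type"))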
I am a beginner in Python trying to pull some data from reddit.com.
More precisely, I am trying to send a request to http://www.reddit.com/r/nba/.json to get the JSON content of the page and then parse it for entries about a specific team or player.
To automate the data gathering, I am requesting the page like this:
import urllib2
FH = urllib2.urlopen("http://www.reddit.com/r/nba/.json")
rnba = FH.readlines()
rnba = str(rnba[0])
FH.close()
I am also pulling the content like this in a copy of the script, just to be sure:
import requests

FH = requests.get("http://www.reddit.com/r/nba/.json", timeout=10)
rnba_json = FH.json()
FH.close()
However, with either method I am not getting the full data that is presented when I manually go to
http://www.reddit.com/r/nba/.json; in particular, when I call
print len(rnba_json['data']['children']) # prints 20-something child stories
but when I load the copy-pasted JSON string instead, like this:
import json
import urllib2
fh = r"""{"kind": "Listing", "data": {"modhash": ..."""# long JSON string
r_nba = json.loads(fh) #loads the json string from the site into json object
print len(r_nba['data']['children']) #prints upwards of 100 stories
I get more story links. I know about the timeout parameter but providing it did not resolve anything.
What am I doing wrong or what can I do to get all the content presented when I pull the page in the browser?
To get the maximum allowed, use the API like this: http://www.reddit.com/r/nba/.json?limit=100
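A self-contained sketch with requests (the limit parameter comes from the URL above; the User-Agent header is an assumption, since Reddit throttles the default one):
import requests

response = requests.get(
    "http://www.reddit.com/r/nba/.json",
    params={"limit": 100},                    # maximum items per request
    headers={"User-Agent": "json-demo/0.1"},  # hypothetical UA string
    timeout=10,
)
stories = response.json()["data"]["children"]
print(len(stories))  # up to 100 stories instead of the default 25 or so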