Invalid Argument error while reading json values in python

Invalid Argument error while reading external json file's values in python
I tried:
import json

with open('https://www.w3schools.com/js/json_demo.txt') as json_file:
    data = json.load(json_file)
    #for p in data['people']:
    print('Name: ' + data['name'])
This gave me the error:
with open('https://www.w3schools.com/js/json_demo.txt') as json_file:
OSError: [Errno 22] Invalid argument: 'https://www.w3schools.com/js/json_demo.txt'

As open is for opening local files, not URLs (as jonrsharpe commented), go with urllib instead (as fl00r commented), though the link he provided was for Python 2.
Try this (Python 3):
import json
from urllib.request import urlopen

with urlopen('https://www.w3schools.com/js/json_demo.txt') as json_file:
    data = json.load(json_file)
    #for p in data['people']:
    print('Name: ' + data['name'])
Name: John

Use requests:
import requests

response = requests.get('https://www.w3schools.com/js/json_demo.txt')
response.encoding = "utf-8-sig"
data = response.json()
print(data['name'])
John
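The utf-8-sig encoding matters here: the file appears to be served with a UTF-8 byte order mark (BOM), and decoding it as plain utf-8 would leave a stray '\ufeff' at the front of the text. A minimal sketch to see this, reusing the same URL:
import requests

# Inspect the raw response bytes for a BOM before parsing.
raw = requests.get('https://www.w3schools.com/js/json_demo.txt').content
print(raw[:3])                       # b'\xef\xbb\xbf' if a UTF-8 BOM is present
print(raw.decode('utf-8-sig')[:20])  # utf-8-sig strips the BOM before parsing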

How to save file in .pdf extension [duplicate]

I am trying to download a PDF file from a website and save it to disk. My attempts either fail with encoding errors or result in blank PDFs.
In [1]: import requests
In [2]: url = 'http://www.hrecos.org//images/Data/forweb/HRTVBSH.Metadata.pdf'
In [3]: response = requests.get(url)
In [4]: with open('/tmp/metadata.pdf', 'wb') as f:
...: f.write(response.text)
---------------------------------------------------------------------------
UnicodeEncodeError Traceback (most recent call last)
<ipython-input-4-4be915a4f032> in <module>()
1 with open('/tmp/metadata.pdf', 'wb') as f:
----> 2 f.write(response.text)
3
UnicodeEncodeError: 'ascii' codec can't encode characters in position 11-14: ordinal not in range(128)
In [5]: import codecs
In [6]: with codecs.open('/tmp/metadata.pdf', 'wb', encoding='utf8') as f:
...: f.write(response.text)
...:
I know it is a codec problem of some kind but I can't seem to get it to work.
You should use response.content in this case:
with open('/tmp/metadata.pdf', 'wb') as f:
f.write(response.content)
From the documentation:
You can also access the response body as bytes, for non-text requests:
>>> r.content
b'[{"repository":{"open_issues":0,"url":"https://github.com/...
So that means: response.text returns the output as a string object; use it when you're downloading a text file, such as an HTML file.
And response.content returns the output as a bytes object; use it when you're downloading a binary file, such as a PDF, an audio file, or an image.
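A quick sketch to see the difference, using the same URL as the question:
import requests

r = requests.get('http://www.hrecos.org//images/Data/forweb/HRTVBSH.Metadata.pdf')
print(type(r.text))     # <class 'str'>   - decoded text, wrong for binary data
print(type(r.content))  # <class 'bytes'> - raw bytes, right for a PDF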
You can also use response.raw instead; however, use it when the file you're about to download is large. Below is a basic example, which you can also find in the documentation:
import requests

url = 'http://www.hrecos.org//images/Data/forweb/HRTVBSH.Metadata.pdf'
chunk_size = 2000  # bytes per chunk
r = requests.get(url, stream=True)
with open('/tmp/metadata.pdf', 'wb') as fd:
    for chunk in r.iter_content(chunk_size):
        fd.write(chunk)
chunk_size is the chunk size you want to use. If you set it to 2000, requests will download the file 2000 bytes at a time, writing each chunk to the file until the download finishes.
This can save RAM. But I'd prefer to use response.content in this case, since your file is small. As you can see, using response.raw is more involved.
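For completeness, here's a minimal sketch that reads response.raw itself rather than iterating with iter_content; shutil.copyfileobj streams the underlying file object to disk without loading it all into memory (note that response.raw does not decode gzip-compressed bodies):
import shutil
import requests

url = 'http://www.hrecos.org//images/Data/forweb/HRTVBSH.Metadata.pdf'
r = requests.get(url, stream=True)
with open('/tmp/metadata.pdf', 'wb') as fd:
    # r.raw is the raw, undecoded file-like response body.
    shutil.copyfileobj(r.raw, fd)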
Related:
How to download large file in python with requests.py?
How to download image using requests
In Python 3, I find pathlib is the easiest way to do this. Requests' response.content marries up nicely with pathlib's write_bytes.
from pathlib import Path
import requests
filename = Path('metadata.pdf')
url = 'http://www.hrecos.org//images/Data/forweb/HRTVBSH.Metadata.pdf'
response = requests.get(url)
filename.write_bytes(response.content)
You can use urllib:
import urllib.request
urllib.request.urlretrieve(url, "filename.pdf")
Please note I'm a beginner. If my solution is wrong, please feel free to correct it and/or let me know. I may learn something new too.
My solution:
Change downloadPath according to where you want your file to be saved. Feel free to use an absolute path too.
Save the below as downloadFile.py.
Usage: python downloadFile.py url-of-the-file-to-download new-file-name.extension
Remember to add an extension!
Example usage: python downloadFile.py http://www.google.co.uk google.html
import requests
import sys
import os

def downloadFile(url, fileName):
    with open(fileName, "wb") as file:
        response = requests.get(url)
        file.write(response.content)

scriptPath = sys.path[0]
downloadPath = os.path.join(scriptPath, '../Downloads/')
url = sys.argv[1]
fileName = sys.argv[2]

print('path of the script: ' + scriptPath)
print('downloading file to: ' + downloadPath)
downloadFile(url, downloadPath + fileName)
print('file downloaded...')
print('exiting program...')
Generally, this should work in Python 3:
import urllib.request
..
urllib.request.urlopen(url)
Remember that urllib2 no longer exists in Python 3; its functionality was folded into urllib.request.
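A minimal sketch of a complete download with urlopen, using the question's URL (the output filename is an assumption):
import urllib.request

url = 'http://www.hrecos.org//images/Data/forweb/HRTVBSH.Metadata.pdf'
with urllib.request.urlopen(url) as resp, open('metadata.pdf', 'wb') as f:
    f.write(resp.read())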
If in some mysterious case requests doesn't work (this happened to me), you can also try using the wget package:
import wget
wget.download(url)
Related:
Here's a decent explanation/solution to find and download all pdf files on a webpage:
https://medium.com/@dementorwriter/notesdownloader-use-web-scraping-to-download-all-pdfs-with-python-511ea9f55e48
Regarding Kevin's answer about writing to a tmp folder, it should be like this:
with open('./tmp/metadata.pdf', 'wb') as f:
    f.write(response.content)
He forgot the . before the path, and of course your tmp folder must already exist.

Print JSON data from csv list of multiple urls

Very new to Python and I haven't found a specific answer on SO, so apologies in advance if this appears very naive or is answered elsewhere already.
I am trying to print the 'IncorporationDate' JSON value from multiple URLs of a public data set. I have the URLs saved as a CSV file, snippet below. I have only got as far as printing ALL the JSON data from one URL, and I am uncertain how to run that over all of the CSV's URLs and write just the IncorporationDate values to a CSV.
Any basic guidance or edits are really welcome!
try:
    # For Python 3.0 and later
    from urllib.request import urlopen
except ImportError:
    # Fall back to Python 2's urllib2
    from urllib2 import urlopen
import json

def get_jsonparsed_data(url):
    response = urlopen(url)
    data = response.read().decode("utf-8")
    return json.loads(data)

url = ("http://data.companieshouse.gov.uk/doc/company/01046514.json")
print(get_jsonparsed_data(url))

import csv
with open('test.csv') as f:
    lis = [line.split() for line in f]
    for i, x in enumerate(lis):
        print()

import StringIO
s = StringIO.StringIO()
with open('example.csv', 'w') as f:
    for line in s:
        f.write(line)
Snippet of csv:
http://business.data.gov.uk/id/company/01046514.json
http://business.data.gov.uk/id/company/01751318.json
http://business.data.gov.uk/id/company/03164710.json
http://business.data.gov.uk/id/company/04403406.json
http://business.data.gov.uk/id/company/04405987.json
Welcome to the Python world.
For making HTTP requests, we commonly use requests because of its dead-simple API.
The code snippet below does what I believe you want:
It grabs the data from each of the urls you posted
It creates a new CSV file with each of the IncorporationDate keys.
```
import csv
import requests

COMPANY_URLS = [
    'http://business.data.gov.uk/id/company/01046514.json',
    'http://business.data.gov.uk/id/company/01751318.json',
    'http://business.data.gov.uk/id/company/03164710.json',
    'http://business.data.gov.uk/id/company/04403406.json',
    'http://business.data.gov.uk/id/company/04405987.json',
]

def get_company_data():
    for url in COMPANY_URLS:
        res = requests.get(url)
        if res.status_code == 200:
            yield res.json()

if __name__ == '__main__':
    for data in get_company_data():
        try:
            incorporation_date = data['primaryTopic']['IncorporationDate']
        except KeyError:
            continue
        else:
            with open('out.csv', 'a') as csvfile:
                writer = csv.writer(csvfile)
                writer.writerow([incorporation_date])
```
First step, you have to read all the URLs in your CSV:
import csv

# csv.reader expects a file object, not a filename, and each row it
# yields is a list, so the URL is the first column, row[0].
with open('test.csv') as f:
    csvReader = csv.reader(f)
    # next(csvReader)  # uncomment if you have a header in the .csv file
    all_urls = [row[0] for row in csvReader if row]
Second step, fetch the data from the URL:
import json
from urllib.request import urlopen

def get_jsonparsed_data(url):
    response = urlopen(url)
    data = response.read().decode("utf-8")
    return json.loads(data)

url_data = get_jsonparsed_data("give_your_url_here")
Third step:
Go through all URLs that you got from CSV file
Get JSON data
Fetch the field what you need, in your case "IncorporationDate"
Write into an output CSV file, I'm naming it as IncorporationDates.csv
Code below:
with open('IncorporationDates.csv', 'w') as abc:
    for each_url in all_urls:
        url_data = get_jsonparsed_data(each_url)
        # one date per line; opening the file once outside the loop avoids
        # truncating it on every iteration
        abc.write(url_data['primaryTopic']['IncorporationDate'] + '\n')

Putting Json into python variables using query (url request)

I'm attempting to use this Python 2 code snippet from the WeatherUnderground API page in Python 3.
import urllib2
import json
f = urllib2.urlopen('http://api.wunderground.com/api/apikey/geolookup/conditions/q/IA/Cedar_Rapids.json')
json_string = f.read()
parsed_json = json.loads(json_string)
location = parsed_json['location']['city']
temp_f = parsed_json['current_observation']['temp_f']
print "Current temperature in %s is: %s" % (location, temp_f)
f.close()
I've used 2to3 to convert it over, but I'm still having some issues. The main conversion here is switching from the old urllib2 to the new urllib. I've tried using the requests library, to no avail.
Using urllib from Python 3, this is the code I have come up with:
import urllib.request
import urllib.error
import urllib.parse
import codecs
import json
url = 'http://api.wunderground.com/api/apikey/forecast/conditions/q/C$
response = urllib.request.urlopen(url)
#Decoding on the two lines below this
reader = codecs.getreader("utf-8")
obj = json.load(reader(response))
json_string = obj.read()
parsed_json = json.loads(json_string)
currentTemp = parsed_json['current_observation']['temp_f']
todayTempLow = parsed_json['forecast']['simpleforecast']['forecastday']['low'][$
todayTempHigh = parsed_json['forecast']['simpleforecast']['forecastday']['high'$
todayPop = parsed_json['forecast']['simpleforecast']['forecastday']['pop']
Yet I'm getting an error about it being the wrong object type (bytes instead of str).
The closest thing I could find to the solution is this question here.
Let me know if any additional information is needed to help me find a solution!
Here's a link to the WU API website if that helps.
urllib returns a bytes object. You convert it to a string using
json_string.decode('utf-8')
Your Python 2 code would convert to:
from urllib import request
import json
f = request.urlopen('http://api.wunderground.com/api/apikey/geolookup/conditions/q/IA/Cedar_Rapids.json')
json_string = f.read()
parsed_json = json.loads(json_string.decode('utf-8'))
location = parsed_json['location']['city']
temp_f = parsed_json['current_observation']['temp_f']
print ("Current temperature in %s is: %s" % (location, temp_f))
f.close()
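Since the question mentions trying the requests library without luck, here's a minimal sketch of the same lookup with requests; response.json() decodes the bytes and parses the JSON in one step (the URL, with its apikey placeholder, is taken from the question):
import requests

url = 'http://api.wunderground.com/api/apikey/geolookup/conditions/q/IA/Cedar_Rapids.json'
parsed_json = requests.get(url).json()  # bytes -> str -> dict in one call
location = parsed_json['location']['city']
temp_f = parsed_json['current_observation']['temp_f']
print("Current temperature in %s is: %s" % (location, temp_f))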

Python ValueError: No JSON object could be decoded

I'm trying to read a JSON file and get its values.
I have a folder with the JSON archives, and I need to open all the archives and get the values from them.
This is the code:
# -*- encoding: utf-8 -*-
from pprint import pprint
import json
import os

def start():
    for dirname, dirnames, filenames in os.walk('test'):
        for filename in filenames:
            json_file = open(os.path.join(dirname, filename)).read()
            # json_file = unicode(json_file, 'utf-8')
            json_data = json.loads(json_file)
            pprint(json_data)
            for key, value in json_data.items():
                print "KEY : ", key
                print "VALUE: ", value

start()
This is one of the JSON's
{ "test" : "Search User 1",
"url" : "http://127.0.0.1:8000/api/v1/user/1/?format=json",
"status_code" : 200,
"method" : "get"
}
But when I run it, I get this:
ValueError: No JSON object could be decoded
What the hell is wrong? Yesterday it was working exactly as it is now, or am I crazy?
I tried this way:
from pprint import pprint
import json
import os

for dirname, dirnames, filenames in os.walk('test'):
    for filename in filenames:
        json_file_contents = open(os.path.join(dirname, filename)).read()
        try:
            json_data = json.loads(json_file_contents)
        except ValueError, e:
            print e
            print "ERROR"
I can't see any error '-'
for filename in filenames:
    with open(os.path.join(dirname, filename)) as fd:
        json_data = fd.read()
        print json_data
This way I can see what the JSON files contain, but I can't, for example, access values by key, like json_data['url'].
For me it was an encoding problem.
You can try using Notepad++ to edit your .json file and change the encoding to UTF-8 without BOM.
Another thing you could check is whether your JSON is valid.
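If you'd rather check for the BOM from Python than in an editor, here's a minimal sketch (the path is hypothetical):
# Read the first three raw bytes; '\xef\xbb\xbf' is the UTF-8 BOM.
with open('test/example.json', 'rb') as f:
    print(repr(f.read(3)))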
It's possible the .read() method is moving the cursor to the end of the file. Try:
for filename in filenames:
    with open(os.path.join(dirname, filename)) as fd:
        json_data = json.load(fd)
and see where that gets you.
This, of course, assumes you have valid JSON, as your example demonstrates. (Look out for trailing commas)
I resolved this error by converting the JSON file to UTF-8 with no BOM. Below is a Python 2 snippet for the conversion:
import json

myFile = open('cases2.json', 'r')
myObject = myFile.read()
myFile.close()
u = myObject.decode('utf-8-sig')  # decode, stripping the BOM if present
myObject = u.encode('utf-8')
myData = json.loads(myObject)
The reply suggesting that .read() was moving the cursor led to a resolution of my version of the problem.
I changed
print response.read()
...
json_data = json.loads(response.read())
to
responseStr = response.read()
print responseStr
...
json_data = json.loads(responseStr)
I had the same problem today. Trying to understand the cause, I found this issue related to the json module:
http://bugs.python.org/issue18958
Check whether the file is UTF-8 encoded; if it is, use the codecs module to open and read it, or just skip the BOM (byte order mark).
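A minimal sketch of the codecs approach (the filename is hypothetical; the 'utf-8-sig' codec skips the BOM if one is present):
import codecs
import json

with codecs.open('data.json', 'r', encoding='utf-8-sig') as f:
    data = json.load(f)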
Try using this in your ajax/$http call with JSON data:
contentType: "application/json; charset=utf-8"
