a python script to query MIT START website from local machine - python

I'm learning Python, and the project I've currently set myself is: from my laptop connected to the net, connect to the MIT START NLP database, submit a question, retrieve the response, and display it. I've read through the "HOWTO Fetch Internet Resources Using urllib2" at docs.python.org, but I seem to be missing some important bit of this idea. Here's my code:
import urllib
import urllib2
question = raw_input("What is your question? ")
url = 'http://start.csail.mit.edu/'
values = question
data = urllib.urlencode(values)
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
the_page = response.read()
print the_page
and here's the error I'm getting:
Traceback (most recent call last):
  File "mitstart.py", line 9, in <module>
    data = urllib.urlencode(values)
  File "/usr/lib/python2.7/urllib.py", line 1298, in urlencode
    raise TypeError
TypeError: not a valid non-string sequence or mapping object
So I'm thinking that the way I set question in values was wrong, so I tried
values = {question}
and values = (question)
and values = ('question')
with no joy.
(I know, and my response is "I'm learning, it's late, and suddenly my wife decided she needed to talk to me about something trivial while I was trying to figure this out)
Can I get some guidance or at least get pointed in the right direction?

Note that your error says: TypeError: not a valid non-string sequence or mapping object
So, while you've created values as a string, you need a non-string sequence or a mapping object.
urlencode() requires key-value pairs (a mapping object such as a dict, or a sequence of two-element tuples), so you generally pass it a dictionary.
Looking at the source for the form, you'll see:
<input type="text" name="query" size="60">
This means you should create a dict, something like:
values = { 'query': 'What is your question?' }
Then you should be able to pass that as the argument to urlencode().
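To make the point concrete, here's a quick sketch of what urlencode() produces when given a dict (shown with Python 3's urllib.parse; on Python 2 the same function lives in the urllib module):

```python
from urllib.parse import urlencode

# A dict mapping form-field names to values, matching the form's input name
values = {'query': 'What is your question?'}

# urlencode accepts a mapping (or a sequence of 2-tuples), never a bare string
data = urlencode(values)
print(data)  # query=What+is+your+question%3F
```

Passing a plain string instead of a mapping is exactly what triggers the TypeError in the question.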

urllib.urlencode() doesn't accept a string as an argument.
As @ernie said, you should specify the query parameter. Also, the url is missing the /startfarm.cgi part:
<form method="post" action="startfarm.cgi">
Updated example:
import cgi
from urllib import urlencode
from urllib2 import urlopen
data = urlencode(dict(query=raw_input("What is your question?"))).encode('ascii')
response = urlopen("http://start.csail.mit.edu/startfarm.cgi", data)
# extract encoding from Content-Type and print the response
_, params = cgi.parse_header(response.headers.get('Content-Type', ''))
print response.read().decode(params['charset'])
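For readers on Python 3 (where urllib2 was merged into urllib.request), a sketch of the same POST, shown here without actually sending it; the URL and field name come from the answers above, and the question text is just a placeholder:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Build the form payload; urlopen wants bytes for POST data on Python 3
data = urlencode({'query': 'What is the capital of France?'}).encode('ascii')
req = Request('http://start.csail.mit.edu/startfarm.cgi', data)

# Supplying a data argument makes urllib issue a POST instead of a GET
print(req.get_method())  # POST
# from urllib.request import urlopen
# response = urlopen(req)  # uncomment to actually send the request
```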


Upload file to Databricks DBFS with Python API

I'm following the Databricks example for uploading a file to DBFS (in my case .csv):
import json
import requests
import base64

DOMAIN = '<databricks-instance>'
TOKEN = '<your-token>'
BASE_URL = 'https://%s/api/2.0/dbfs/' % (DOMAIN)

def dbfs_rpc(action, body):
    """ A helper function to make the DBFS API request, request/response is encoded/decoded as JSON """
    response = requests.post(
        BASE_URL + action,
        headers={'Authorization': 'Bearer %s' % TOKEN},
        json=body
    )
    return response.json()

# Create a handle that will be used to add blocks
handle = dbfs_rpc("create", {"path": "/temp/upload_large_file", "overwrite": "true"})['handle']

with open('/a/local/file') as f:
    while True:
        # A block can be at most 1MB
        block = f.read(1 << 20)
        if not block:
            break
        data = base64.standard_b64encode(block)
        dbfs_rpc("add-block", {"handle": handle, "data": data})

# close the handle to finish uploading
dbfs_rpc("close", {"handle": handle})
When using the tutorial as is, I get an error:
Traceback (most recent call last):
  File "db_api.py", line 65, in <module>
    data = base64.standard_b64encode(block)
  File "C:\Miniconda3\envs\dash_p36\lib\base64.py", line 95, in standard_b64encode
    return b64encode(s)
  File "C:\Miniconda3\envs\dash_p36\lib\base64.py", line 58, in b64encode
    encoded = binascii.b2a_base64(s, newline=False)
TypeError: a bytes-like object is required, not 'str'
I tried doing with open('./sample.csv', 'rb') as f: before passing the blocks to base64.standard_b64encode but then getting another error:
TypeError: Object of type 'bytes' is not JSON serializable
This happens when the encoded block data is being sent into the API call.
I tried skipping encoding entirely and just passing the blocks into the post call. In this case the file gets created in the DBFS but has 0 bytes size.
At this point I'm trying to make sense of it all. It doesn't want a string but it doesn't want bytes either. What am I doing wrong? Appreciate any help.
In Python we have strings and bytes, which are two different entities. Note that there is no implicit conversion between them, so you need to know when to use which and how to convert when necessary. This answer provides a nice explanation.
With the code snippet I see two issues:
This you already figured out: open by default reads the file as text. So your block is a string, while standard_b64encode expects bytes and returns bytes. To read bytes from the file, it needs to be opened in binary mode:
with open('/a/local/file', 'rb') as f:
Only strings can be encoded as JSON. Your dbfs_rpc helper passes body through requests' json= parameter, which serializes it with json.dumps, and that accepts strings but not bytes. Since your data is bytes, you need to convert it to a string explicitly, and that's done using decode:
dbfs_rpc("add-block", {"handle": handle, "data": data.decode('utf8')})
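Putting both fixes together, the bytes-to-base64-to-str dance can be sketched standalone (no Databricks connection needed; the handle value here is a made-up placeholder):

```python
import base64
import json

block = b'some,csv,bytes\n'                  # what f.read() returns in 'rb' mode
encoded = base64.standard_b64encode(block)   # bytes in, bytes out
payload = {"handle": 123, "data": encoded.decode('utf8')}  # str is JSON-serializable

# This is what requests' json= parameter does internally; bytes here would raise
print(json.dumps(payload))
```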

Assign a json object to request.json in flask

Usually request.json is used to access JSON objects sent to a Flask view. I would like to assign something to request.json:
from flask import request
print("request.json = ", request.json)
print("request.form['json'] = ", request.form['json'])
request.json = jsonify(request.form['json'])
leads to
request.json = None
request.form['json'] = {
"test": "test"
}
Traceback (most recent call last):
  ...
  File "/ajax_handlers.py", line 952, in X
    request.json = jsonify(request.form['json'])
  File "/opt/anaconda3/lib/python3.7/site-packages/werkzeug/local.py", line 365, in <lambda>
    __setattr__ = lambda x, n, v: setattr(x._get_current_object(), n, v)
AttributeError: can't set attribute
Any idea how I can assign a json object to request.json?
Presumably for safety, you cannot replace the values on a request object; they come in from the client, and they remain as-is. When you think about it, that's good practice: if various pieces of code tried to modify your request object's data in various ways, it would lead to inconsistent and hard-to-test code.
As an alternative, you can assign meaningful attributes that aren't part of the default request object. For example, perhaps you're trying to remember, for elsewhere in your stack, who the user is authenticated as:
# Don't do this!
flask.request.json['authenticated_user'] = authenticated_user
That's messing with the original request data; don't do that. It'll make debugging (not to mention security) a nightmare. A better alternative:
# Do this instead
flask.request.authenticated_user = authenticated_user
You can add to the existing request with new attributes, but you can't go replacing existing properties on the request. Nor should you try!
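The AttributeError comes from json being exposed as a read-only property (a getter with no setter) on the request object. A minimal stdlib illustration of the same behaviour, using a made-up Dummy class rather than Flask's actual request:

```python
class Dummy:
    @property
    def json(self):            # getter only, no setter defined
        return {"test": "test"}

d = Dummy()
print(d.json)                  # reading the property works

try:
    d.json = {"other": 1}      # writing raises, just like flask.request.json
except AttributeError as e:
    print(e)

d.authenticated_user = "alice" # a *new* attribute is fine
print(d.authenticated_user)
```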

Python “TypeError: 'builtin_function_or_method' object is not subscriptable"

It looks like there is an error somewhere in it, but I failed to find it.
from urllib.request import Request, urlopen
from urllib.error import URLError,HTTPError
from bs4 import BeautifulSoup
import re
print('https://v.qq.com/x/page/h03425k44l2.html\nhttps://v.qq.com/x/cover/dn7fdvf2q62wfka/m0345brcwdk.html\nhttp://v.qq.com/cover/2/2iqrhqekbtgwp1s.html?vid=c01350046ds')
web = input('Enter the URL: ')
if re.search(r'vid=', web):
    patten = re.compile(r'vid=(.*)')
    vid = patten.findall(web)
    vid = vid[0]
else:
    newurl = (web.split("/")[-1])
    vid = newurl.replace('.html', ' ')
# extract the vid from the video page URL
getinfo = 'http://vv.video.qq.com/getinfo?vids{vid}&otype=xlm&defaultfmt=fhd'.format(vid=vid.strip())

def getpage(url):
    req = Request(url)
    user_agent = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit'
    req.add_header('User-Agent', user_agent)
    try:
        response = urlopen(url)
    except HTTPError as e:
        print('The server couldn\'t fulfill the request.')
        print('Error code:', e.code)
    except URLError as e:
        print('We failed to reach a server.')
        print('Reason:', e.reason)
    html = response.read().decode('utf-8')
    return(html)
# function that opens a web page

a = getpage(getinfo)
soup = BeautifulSoup(a, "html.parser")
for e1 in soup.find_all('url'):
    ippattent = re.compile(r"((?:(2[0-4]\d)|(25[0-5])|([01]\d\d?))\.){3}(?:(2[0-4]\d)|(255[0-5])|([01]?\d\d?))")
    if re.search(ippattent, e1.get_text()):
        ip = (e1.get_text())
for e2 in soup.find_all('id'):
    idpattent = re.compile(r"\d{5}")
    if re.search(idpattent, e2.get_text()):
        id = (e2.get_text())
filename = vid.strip() + '.p' + id[2:] + '.1.mp4'
# find the ID and build the FILENAME
getkey = 'http://vv.video.qq.com/getkey?format={id}&otype=xml&vt=150&vid{vid}&ran=0%2E9477521511726081&charge=0&filename={filename}&platform=11'.format(id=id, vid=vid.strip(), filename=filename)
# build the getkey URL using info from getinfo
b = getpage(getkey)
key = (re.findall(r'<key>(.*)</key>', b))
videourl = ip + filename + '?' + 'vkey=' + key[0]
print('Video playback URL: ' + videourl)
# done
I run it and get this:
Traceback (most recent call last):
  File "C:\Users\DYZ_TOGA\Desktop\qq.py", line 46, in <module>
    filename=vid.strip()+'.p'+id[2:]+'.1.mp4'
TypeError: 'builtin_function_or_method' object is not subscriptable
What should I do? I don't know how to change my code to correct it.
The root of your problem is here:
if re.search(idpattent, e2.get_text()):
    id = (e2.get_text())
If this is false, you never set id. And that means id is the built-in function of that name, which gets the unique ID of any object. Since it's a function, not the string you expect, you can't do this:
id[2:]
Hence the error you are getting.
My suggestions are:
Use a different variable name; you would have gotten an error about it not being defined in that case, which would have made solving the problem easier.
When you don't find the ID, don't continue the script; it won't work anyway. If you expected to find it, and are not sure why that's not happening, that's a different question you should ask separately.
id is a built-in function in Python, and it seems you are using the same name to store a variable. It is a bad habit to use a built-in name as a variable name; use a different name instead.

if re.search(idpattent, e2.get_text()):
    id = (e2.get_text())
filename = vid.strip() + '.p' + id[2:] + '.1.mp4'

If the above "if" is not true, id will not be set to a string value. By default, id is a function in Python, so you cannot do id[2:]; Python expects id(...).
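A minimal reproduction of the trap, at module level: until the assignment runs, the name id still refers to the built-in function, which cannot be sliced:

```python
# Before any assignment, `id` resolves to the built-in function
try:
    id[2:]
except TypeError as e:
    print(e)  # 'builtin_function_or_method' object is not subscriptable

# After assigning a string, slicing works, but the built-in is now shadowed
id = "xx10086"
print(id[2:])  # 10086
```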

read xml files online

I'm new to programming and I'm trying to access the web service provided at http://indicadoreseconomicos.bccr.fi.cr/indicadoreseconomicos/WebServices/wsindicadoreseconomicos.asmx?op=ObtenerIndicadoresEconomicosXML. I've added the parameters I need to access it, but when I try to read the response in Python I get
TypeError: 'HTTPResponse' object cannot be interpreted as an integer
this is my code
import urllib.parse
import http.client
import time
HEADERS={"Content-type":"application/x-www-form-urlencoded","Accept":"text/plain"}
HOST = "indicadoreseconomicos.bccr.fi.cr"
POST = "/indicadoreseconomicos/WebServices/wsIndicadoresEconomicos.asmx/ObtenerIndicadoresEconomicos"
data = urllib.parse.urlencode({'tcIndicador': 317,
                               'tcFechaInicio': str(time.strftime("%d/%m/%Y")),
                               'tcFechaFinal': str(time.strftime("%d/%m/%Y")),
                               'tcNombre': 'TI1400',
                               'tnSubNiveles': 'N'})
conn=http.client.HTTPConnection(HOST)
conn.request("POST",POST,data,headers=HEADERS)
response= conn.getresponse()
responseSTR= response.read(response)
print (response)
Any suggestions are appreciated.
response.read() takes an optional argument: the number of bytes to read from the response, which must be an integer. You passed the response object itself instead.
As you want to read the entire response, you should omit the argument altogether, thus:
response_str = response.read()
print(response_str)
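The size argument's behaviour is easy to check against an in-memory file-like object, with io.BytesIO standing in for the HTTP response:

```python
import io

resp = io.BytesIO(b"<xml>hello world</xml>")

print(resp.read(5))  # b'<xml>'  (only the first 5 bytes)
print(resp.read())   # b'hello world</xml>'  (no argument reads the rest)
```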

JSONDecodeError: Expecting value: line 1 column 1 (char 0)

I am getting error Expecting value: line 1 column 1 (char 0) when trying to decode JSON.
The URL I use for the API call works fine in the browser, but gives this error when done through a curl request. The following is the code I use for the curl request.
The error happens at return simplejson.loads(response_json)
response_json = self.web_fetch(url)
response_json = response_json.decode('utf-8')
return json.loads(response_json)

def web_fetch(self, url):
    buffer = StringIO()
    curl = pycurl.Curl()
    curl.setopt(curl.URL, url)
    curl.setopt(curl.TIMEOUT, self.timeout)
    curl.setopt(curl.WRITEFUNCTION, buffer.write)
    curl.perform()
    curl.close()
    response = buffer.getvalue().strip()
    return response
Traceback:
File "/Users/nab/Desktop/myenv2/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
111. response = callback(request, *callback_args, **callback_kwargs)
File "/Users/nab/Desktop/pricestore/pricemodels/views.py" in view_category
620. apicall=api.API().search_parts(category_id= str(categoryofpart.api_id), manufacturer = manufacturer, filter = filters, start=(catpage-1)*20, limit=20, sort_by='[["mpn","asc"]]')
File "/Users/nab/Desktop/pricestore/pricemodels/api.py" in search_parts
176. return simplejson.loads(response_json)
File "/Users/nab/Desktop/myenv2/lib/python2.7/site-packages/simplejson/__init__.py" in loads
455. return _default_decoder.decode(s)
File "/Users/nab/Desktop/myenv2/lib/python2.7/site-packages/simplejson/decoder.py" in decode
374. obj, end = self.raw_decode(s)
File "/Users/nab/Desktop/myenv2/lib/python2.7/site-packages/simplejson/decoder.py" in raw_decode
393. return self.scan_once(s, idx=_w(s, idx).end())
Exception Type: JSONDecodeError at /pricemodels/2/dir/
Exception Value: Expecting value: line 1 column 1 (char 0)
Your code produced an empty response body; you'd want to check for that or catch the exception raised. It is possible the server responded with a 204 No Content response, or a non-200-range status code was returned (404 Not Found, etc.). Check for this.
Note:
There is no need to use simplejson library, the same library is included with Python as the json module.
There is no need to decode a response from UTF8 to unicode, the simplejson / json .loads() method can handle UTF8 encoded data natively.
pycurl has a very archaic API. Unless you have a specific requirement for using it, there are better choices.
Either the requests or the httpx library offers a much friendlier API, including JSON support. If you can, replace your call with:
import requests

response = requests.get(url)
response.raise_for_status()  # raises exception when not a 2xx response
if response.status_code != 204:
    return response.json()
Of course, this won't protect you from a URL that doesn't comply with HTTP standards; when using arbitrary URLs where this is a possibility, check if the server intended to give you JSON by checking the Content-Type header, and for good measure catch the exception:
if (
    response.status_code != 204 and
    response.headers["content-type"].strip().startswith("application/json")
):
    try:
        return response.json()
    except ValueError:
        # decide how to handle a server that's misbehaving to this extent
        ...
Be sure to remember to invoke json.loads() on the contents of the file, as opposed to the file path of that JSON:
json_file_path = "/path/to/example.json"

with open(json_file_path, 'r') as j:
    contents = json.loads(j.read())
I think a lot of people are guilty of doing this every once in a while (myself included):
contents = json.load(json_file_path)
Check the response data-body: whether actual data is present and whether the data-dump appears to be well-formatted.

In most cases, a json.loads JSONDecodeError: Expecting value: line 1 column 1 (char 0) error is due to:
non-JSON conforming quoting
XML/HTML output (that is, a string starting with <), or
incompatible character encoding
Ultimately the error tells you that at the very first position the string already doesn't conform to JSON.
As such, if parsing fails despite having a data-body that looks JSON-like at first glance, try replacing the quotes of the data-body:

import sys, json

struct = {}
try:
    # try parsing to dict
    dataform = str(response_json).strip("'<>() ").replace('\'', '\"')
    struct = json.loads(dataform)
except:
    print(repr(response_json))
    print(sys.exc_info())

Note: Quotes within the data must be properly escaped
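Each of the causes above can be reproduced with nothing but the stdlib json module; all three inputs raise a JSONDecodeError:

```python
import json

for bad in [
    "{'single': 'quotes'}",           # non-JSON conforming quoting
    "<html><body>502</body></html>",  # XML/HTML output instead of JSON
    "",                               # empty response body
]:
    try:
        json.loads(bad)
    except json.JSONDecodeError as e:
        print(repr(bad), '->', e)
```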
With the requests lib, a JSONDecodeError can happen when you have an HTTP error code like 404 and try to parse the response as JSON!
You must first check for 200 (OK) or let it raise on error to avoid this case.
I wish it failed with a less cryptic error message.
NOTE: as Martijn Pieters stated in the comments servers can respond with JSON in case of errors (it depends on the implementation), so checking the Content-Type header is more reliable.
Check encoding format of your file and use corresponding encoding format while reading file. It will solve your problem.
with open("AB.json", encoding='utf-8', errors='ignore') as json_data:
    data = json.load(json_data, strict=False)
I had the same issue trying to read json files with
json.loads("file.json")
I solved the problem with
with open("file.json", "r") as read_file:
    data = json.load(read_file)
maybe this can help in your case
A lot of times, this will be because the string you're trying to parse is blank:
>>> import json
>>> x = json.loads("")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/__init__.py", line 348, in loads
    return _default_decoder.decode(s)
  File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
You can remedy by checking whether json_string is empty beforehand:
import json

if json_string:
    x = json.loads(json_string)
else:
    # Your code/logic here
    x = {}
I encountered the same problem: when printing out the JSON string read from a JSON file, I found the string starts with a byte-order mark (BOM). After doing some research, this is because the file is by default decoded as UTF-8; by changing the encoding to utf-8-sig, the mark is stripped out and the JSON loads with no problem:
open('test.json', encoding='utf-8-sig')
This is the minimalist solution I found for loading a JSON file in Python:

import json

data = json.load(open('file_name.json'))

If this gives an error saying a character doesn't match at positions X and Y, then just add encoding='utf-8' inside the open round bracket:

data = json.load(open('file_name.json', encoding='utf-8'))

Explanation

open opens the file and reads the contents, which are later parsed inside json.load.

Do note that using with open() as f is more reliable than the above syntax, since it makes sure the file gets closed after execution; the complete syntax would be

with open('file_name.json') as f:
    data = json.load(f)
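The two functions differ only in what they accept: json.load wants an open file object, while json.loads wants a string. A self-contained check, with io.StringIO standing in for a real file:

```python
import io
import json

text = '{"test": "test"}'

# json.loads takes a string...
from_string = json.loads(text)

# ...while json.load takes an open file object
from_file = json.load(io.StringIO(text))

print(from_string == from_file)  # True
```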
There may be embedded 0's, even after calling decode(). Use replace():
import json

struct = {}
try:
    response_json = response_json.decode('utf-8').replace('\0', '')
    struct = json.loads(response_json)
except:
    print('bad json: ', response_json)
return struct
I had the same issue, in my case I solved like this:
import json
with open("migrate.json", "rb") as read_file:
    data = json.load(read_file)
I was having the same problem with requests (the python library). It happened to be the accept-encoding header.
It was set this way: 'accept-encoding': 'gzip, deflate, br'
I simply removed it from the request and stopped getting the error.
Just check if the request has a status code 200. So for example:
if status != 200:
    print("An error has occurred. [Status code", status, "]")
else:
    data = response.json()  # Only convert to JSON when status is OK.
    if not data["elements"]:
        print("Empty JSON")
    else:
        "You can extract data here"
In my case, I was calling file.read() twice, once in the if block and once in the else block, which caused this error. Make sure not to make this mistake: read the contents into a variable once and use that variable as many times as needed.
I had exactly this issue using requests.
Thanks to Christophe Roussy for his explanation.
To debug, I used:
response = requests.get(url)
logger.info(type(response))
I was getting a 404 response back from the API.
In my case it occurred because I read the data of the file using file.read() and then tried to parse it using json.load(file). I fixed the problem by replacing json.load(file) with json.loads(data).
Not working code:

with open("text.json") as file:
    data = file.read()
    json_dict = json.load(file)

Working code:

with open("text.json") as file:
    data = file.read()
    json_dict = json.loads(data)
For me, it was not using authentication in the request.
For me it was server responding with something other than 200 and the response was not json formatted. I ended up doing this before the json parse:
# this is the https request for data in json format
response_json = requests.get(url)
# only proceed if I have a 200 response which is saved in status_code
if response_json.status_code == 200:
    response = response_json.json()  # converting from json to dictionary using json library
I received such an error in a Python-based web API's response .text, but it led me here, so this may help others with a similar issue (it's very difficult to filter response and request issues in a search when using requests..)
Using json.dumps() on the request data arg to create a correctly-escaped string of JSON before POSTing fixed the issue for me
requests.post(url, data=json.dumps(data))
In my case it is because the server gives an HTTP error occasionally. So basically, once in a while my script gets a response like this rather than the expected response:
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<h1>502 Bad Gateway</h1>
<p>The proxy server received an invalid response from an upstream server.<hr/>Powered by Tengine</body>
</html>
Clearly this is not in json format and trying to call .json() will yield JSONDecodeError: Expecting value: line 1 column 1 (char 0)
You can print the exact response that causes this error to better debug.
For example if you are using requests and then simply print the .text field (before you call .json()) would do.
I did:

1. Open test.txt file, write data
2. Open test.txt file, read data

So I didn't close the file after step 1. I added

outfile.close()

and now it works.
If you are a Windows user, Tweepy API can generate an empty line between data objects. Because of this situation, you can get "JSONDecodeError: Expecting value: line 1 column 1 (char 0)" error. To avoid this error, you can delete empty lines.
For example:
def on_data(self, data):
    try:
        with open('sentiment.json', 'a', newline='\n') as f:
            f.write(data)
            return True
    except BaseException as e:
        print("Error on_data: %s" % str(e))
    return True
Reference:
Twitter stream API gives JSONDecodeError("Expecting value", s, err.value) from None
If you use headers that include "Accept-Encoding": "gzip, deflate, br", install the brotli library with pip. You don't need to import brotli into your .py file.
In my case it was a simple solution of replacing single quotes with double.
