I have the following problem with Python 2.7 and the Plotly API, and I am not sure what's going on or where the problem is. Before writing to the authors I am going to ask here. I have a script that scans specific websites and their links and analyzes the content (words, counts, etc.). The result is plotted by Plotly as a bar graph. Everything works fine, and the script runs every 30 minutes. But a few times every day, the method that handles the data upload through the API, like response = py.plot([data]), fails with "ValueError: No JSON object could be decoded" (data is not empty, and the counting works fine). What I don't understand is that:
1) It was working with the same script code few minutes ago
2) It doesn't matter what data I put inside the variable data (like simple numbers for x and y)
3) After the above-mentioned error, the data are sent and published, but the layout descriptors (axis setup, title, graph size) are not, because they are set separately in the next step and the script terminates at the point where the response is created. (I could merge the two calls, but the error still appears and I'd like to know why.)
4) When I create an empty .py file with a basic example like:
import plotly
py = plotly.plotly(username='someUname', key='someApiKey')
x0 = ['a', 'b', 'c']
y0 = [20, 14, 23]
data = {'x': x0, 'y': y0,'type': 'bar'}
response = py.plot([data])
url = response['url']
filename = response['filename']
Then the result is the same: no JSON object could be decoded, to be exact.
Traceback (most recent call last):
File "<module1>", line 10, in <module>
File "C:\Python27\lib\site-packages\plotly-0.4-py2.7.egg\plotly\plotly.py", line 69, in plot
r = self.__makecall(args, un, key, origin, kwargs)
File "C:\Python27\lib\site-packages\plotly-0.4-py2.7.egg\plotly\plotly.py", line 142, in __makecall
r = json.loads(r.text)
File "C:\Python27\lib\json\__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "C:\Python27\lib\json\decoder.py", line 365, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Python27\lib\json\decoder.py", line 383, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
The data are published, but I am not able to set the layouts. At times when the word-counting script works fine, this small example works as well.
Does anyone have the same experience? I am not a coding pro, but it seems the problem could be somewhere outside of my code. Or maybe I missed something; either way, I am not able to debug or understand the reason.
Thank you for any tips.
Chris here, from Plotly. Thanks for reporting the issue. You definitely aren't doing anything wrong on your end! This error arises because of some transmission issue from plotly to your desktop. The API expects a string in JSON format from the plotly server but received something different. I'll look into it further. Definitely email me if it happens again! --chris[at]plot.ly
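Since the failure is transient (the same call succeeds minutes later), one pragmatic workaround until the server-side issue is fixed is to retry the upload a few times before giving up. This is only a sketch, not part of the Plotly API; the plot function and the data list are assumed to be the ones from the question:

```python
import time

def plot_with_retry(plot_fn, data, attempts=3, delay=5):
    """Call plot_fn(data), retrying on ValueError.

    plot_fn stands in for py.plot from the question; a transient
    'No JSON object could be decoded' error triggers a retry after
    a short pause, and the last error is re-raised if all attempts fail."""
    for attempt in range(1, attempts + 1):
        try:
            return plot_fn(data)
        except ValueError:
            if attempt == attempts:
                raise  # give up after the last attempt
            time.sleep(delay)
```

Usage would look like `response = plot_with_retry(py.plot, [data])`, leaving the rest of the script (reading `response['url']` and so on) unchanged.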
Presently I'm trying to make MONKALOT run on a PythonAnywhere account (customized Web Developer). I have basic knowledge of Linux, unfortunately no experience developing Python scripts, but advanced knowledge of developing Java (hope that helps).
My success log so far:
After upgrading my account to the Web Developer level I finally made pip download the [requirements](https://github.com/NMisko/monkalot/blob/master/requirements.txt) and half the internet (2 of 5GB used). All modules and dependencies seem to be successfully installed.
I configured my own monkalot channel, including OAuth, which serves as a staging instance for now. The next challenge was getting monkalot to start. Using python3.7 instead of python or any other python3 environment did the trick.
But now I'm stuck. After "completing the training stage", the monkalot script prematurely ends with the following message:
[22:14] ...chat bot finished training.
Traceback (most recent call last):
File "monkalot.py", line 72, in <module>
bots.append(TwitchBot(path))
File "/home/Chessalot/monkalot/bot/bot.py", line 56, in __init__
self.users = self.twitch.get_chatters()
File "/home/Chessalot/monkalot/bot/data_sources/twitch.py", line 25, in get_chatters
data = requests.get(USERLIST_API.format(self.channel)).json()
File "/usr/local/lib/python3.7/site-packages/requests/models.py", line 900, in json
return complexjson.loads(self.text, **kwargs)
File "/usr/local/lib/python3.7/site-packages/simplejson/__init__.py", line 525, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python3.7/site-packages/simplejson/decoder.py", line 370, in decode
obj, end = self.raw_decode(s)
File "/usr/local/lib/python3.7/site-packages/simplejson/decoder.py", line 400, in raw_decode
return self.scan_once(s, idx=_w(s, idx).end())
simplejson.errors.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
By now I have figured out that monkalot tries to load the chatters list and expects at least an empty JSON array as a result, but actually seems to receive an empty string.
So my question is: what can I do to make the monkalot script work? Is monkalot's current version incompatible with the current Twitch API? Are there any outdated Python libraries that may cause the incompatibility? Or is there an unrecognized configuration issue preventing the script from running successfully?
Thank you all in advance. Any ideas provided by you are highly appreciated.
The most likely cause is that you are using a free PythonAnywhere account and have not configured monkalot to use the proxy. Check monkalot's documentation to determine how you can configure it to use a proxy. See https://help.pythonanywhere.com/pages/403ForbiddenError/ for the proxy details.
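For illustration, here is a minimal stdlib sketch of routing HTTP traffic through a proxy. How monkalot itself picks up a proxy depends on its own configuration (it uses the requests library, which honors the `http_proxy`/`https_proxy` environment variables), so treat this purely as a demonstration of the mechanism; the proxy address in the usage example is the one PythonAnywhere documents for free accounts:

```python
import os
import urllib.request

def opener_with_proxy(proxy_url=None):
    """Build a urllib opener that sends requests through an HTTP proxy.

    If proxy_url is not given, fall back to the http_proxy environment
    variable; with no proxy at all, return a plain opener."""
    proxy_url = proxy_url or os.environ.get("http_proxy")
    if not proxy_url:
        return urllib.request.build_opener()
    handler = urllib.request.ProxyHandler({"http": proxy_url,
                                           "https": proxy_url})
    return urllib.request.build_opener(handler)
```

On a free PythonAnywhere account the call would look like `opener_with_proxy("http://proxy.server:3128")`.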
Just a quick thought, and it might not be the problem you are encountering, but it may be due to the project name. E.g.:
From github:
... I believed that the issue was something other than the project name, since I get a different error if I use a project name that doesn't exist. However, I just tried using ben-heil/saged instead of just saged for the project name and that seems to have fixed it.
EDIT: your HTTP 404 error was caused by this:
File "monkalot.py", line 72, in <module>
bots.append(TwitchBot(path))
This points out that the function called with path is raising the error. Since you see a lot of decode calls in the traceback, you can deduce it has something to do with the characters you inputted.
Other errors in your traceback that point this out:
simplejson.errors.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
A JSONDecodeError: Expecting value: line 1 column 1 (char 0) occurs when we try to parse something that is not valid JSON as if it were. To solve the error, make sure the response or the file is not empty, or conditionally check the content type before parsing.
In most cases, a json.loads JSONDecodeError: Expecting value: line 1 column 1 (char 0) error is due to:
non-JSON conforming quoting
XML/HTML output (that is, a string starting with <), or
incompatible character encoding
In this case, it was the letter case that caused the error (content type!).
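The "check before parsing" advice above can be sketched as a small helper; the function name and behavior here are illustrative, not part of any library:

```python
import json

def parse_json_response(text, content_type="application/json"):
    """Parse an HTTP response body defensively.

    Returns the decoded object, or None when the body is empty or the
    content type says it is not JSON, the two situations that would
    otherwise raise 'Expecting value: line 1 column 1 (char 0)'."""
    if not text or not text.strip():
        return None          # empty body: nothing to decode
    if "json" not in content_type:
        return None          # likely an HTML/XML error page, not JSON
    try:
        return json.loads(text)
    except ValueError:       # JSONDecodeError subclasses ValueError
        return None
```

Returning None instead of raising lets the caller decide how to handle a missing chatters list (for example, treating it as an empty room).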
Related sources:
python json decoder
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
After I inspected the response, I found out that I had received an HTTP 400 Bad Request error WITHOUT any data in the HTTP response body. Since monkalot expects a JSON answer, the errors were raised. This was due to the fact that in the channel configuration I had used an uppercase letter, whereas Twitch expects all letters lowercase.
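Given that the root cause was an uppercase letter in the configured channel name, a defensive fix is to normalize the name before it ever reaches the API. This is a hypothetical helper, not monkalot code:

```python
def normalize_channel(name):
    """Twitch channel names are case-insensitive, but the API endpoints
    expect them lowercase; trim whitespace and lowercase defensively."""
    return name.strip().lower()
```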
I'm trying to send my Behave test results to an API Endpoint. I set the output file to be a new JSON file, run my test, and then in the Behave after_all() send the JSON result via the requests package.
I'm running my Behave test like so:
args = ['--outfile=/home/user/nathan/results/behave4.json',
        '--format=json.pretty']
from behave.__main__ import main as behave_main
behave_main(args)
In my environment.py's after_all(), I have:
def after_all(context):
data = json.load(open('/home/user/myself/results/behave.json', 'r')) # This line causes the error
sendoff = {}
sendoff['results'] = data
r = requests.post(MyAPIEndpoint, json=sendoff)
I'm getting the following error when running my Behave test:
HOOK-ERROR in after_all: ValueError: Expecting object: line 124 column 1
(char 3796)
ABORTED: By user.
The reported error is here in my JSON file:
[
{
...
} <-- line 124, column 1
]
However, behave.json is output after the run, and according to JSONLint it is valid JSON. I don't know the exact details of after_all(), but I think the issue is that the JSON file isn't done being written by the time I try to open it in after_all(). If I run json.load() a second time on the behave.json file after it has finished writing, it runs without error and I am able to view my JSON file at the endpoint.
Any better explanation as to why this is happening? Any solution or change in logic to get past this?
Yes, it seems as though the file is still in the process of being written when I try to access it in after_all(). I put a small delay before opening the file in my code, then manually viewed the behave.json file and saw that there was no closing ] after the last }.
That explains that. I will create a new question to find out how to get by this, or if a change in a logic is required.
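One way around it, sketched here under the assumption that the writer does eventually finish, is to poll the file until it parses as valid JSON instead of sleeping for a fixed delay:

```python
import json
import time

def load_json_when_ready(path, timeout=10.0, interval=0.25):
    """Repeatedly try to parse the file as JSON until it succeeds.

    Re-raises the last error if the file still does not parse (or does
    not exist) once the timeout expires."""
    deadline = time.time() + timeout
    while True:
        try:
            with open(path) as f:
                return json.load(f)
        except (ValueError, OSError):
            if time.time() >= deadline:
                raise
            time.sleep(interval)
```

In after_all() this would replace the bare json.load(open(...)) call, so a half-written behave.json simply causes another attempt a quarter second later.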
So I can send strings just fine. What I want to do, though, is send a string representation of a list. I know quite a few ways to convert a list into something that can be sent as a string and then converted back.
import ast
import pickle

# on sending
l = [1, 2, 3, 4]
l_str = str(l)
# on receiving
l = ast.literal_eval(received_data)

# or with pickle
l_str = pickle.dumps([1, 2, 3, 4])
# then
l = pickle.loads(received_data)
My issue, however, seems to be that something odd is happening between sending and receiving.
Right now I have this
msg = pickle.dumps([sys.stdin.readline(), person])
s.send(msg)
where sys.stdin.readline() is the line typed into the console and person is a variable containing someone's name.
I then receive it like so.
d1 = sock.recv(4096)
pickles = False
try:
    d1 = pickle.loads(d1)
    pickles = True
except Exception:
    pass  # not pickled data; fall through and treat d1 as plain text
It doesn't matter whether I make the list a string by my first method and then use ast.literal_eval, or use pickle; it never actually converts back to the list I want.
I currently have a try statement in place because I know there will be times where I will actually not be getting back something that was dumped using pickle or what not, so the idea is that it should fail on those and in the except just continue as if the data received was formatted correctly.
The error that is produced when I try to unpickle them for instance is
Traceback (most recent call last):
File "telnet.py", line 75, in <module>
d1 = pickle.loads(d1)
File "/usr/local/lib/python2.7/pickle.py", line 1382, in loads
return Unpickler(file).load()
File "/usr/local/lib/python2.7/pickle.py", line 858, in load
dispatch[key](self)
KeyError: '\r'
The pickle.loads never succeeds, because pickles is never True. Any ideas?
EDIT: I have the overall solution. The issue was actually not in the file you see in the error (telnet.py) but in another file. I didn't realize that the intermediate server was receiving the input and changing it. After some suggestions, I realized that was exactly what was happening.
My issue actually came from another file. At the time I did not realize that this being a chat server/client was important. However, the chat server was actually sending back to the client data that it had reformatted. Honestly, I don't know how that didn't hit me, but that's what happened.
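Since the stream was being altered in transit, one robust pattern (a sketch, not the asker's code) is to length-prefix each pickled message, so the receiver knows exactly how many bytes belong to the frame and gets a clear error when the data has been truncated or modified:

```python
import pickle
import struct

def pack_message(obj):
    """Serialize obj and prefix it with a 4-byte big-endian length."""
    payload = pickle.dumps(obj)
    return struct.pack("!I", len(payload)) + payload

def unpack_message(data):
    """Inverse of pack_message: read the length header, then unpickle
    exactly that many bytes. Raises if the frame is incomplete."""
    (length,) = struct.unpack("!I", data[:4])
    payload = data[4:4 + length]
    if len(payload) != length:
        raise ValueError("incomplete frame")
    return pickle.loads(payload)
```

On the sending side this replaces `s.send(pickle.dumps(...))` with `s.send(pack_message(...))`; any intermediate server then has to forward the frame untouched for unpacking to succeed.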
I have this piece of code to process a big file in Python:
import urllib2, json, csv
import requests

def readJson(url):
    """
    Read a json file.
    :param url: url to be read.
    :return: a json file.
    """
    try:
        response = urllib2.urlopen(url)
        return json.loads(response.read(), strict=False)
    except urllib2.HTTPError as e:
        return None

def getRoadsTopology():
    nodes = []
    edges = []
    url = "https://data.cityofnewyork.us/api/geospatial/svwp-sbcd?method=export&format=GeoJSON"
    data = readJson(url)
    print "Done reading road bed"
    print "Processing road bed..."
    v_index = 0
    roads = 0
    for road in data['features']:
        n_index = len(nodes)
        # (long, lat)
        coordinates = road['geometry']['coordinates'][0]
        for i in range(0, len(coordinates)):
            lat_long = coordinates[i]
            nodes.append((lat_long[1], lat_long[0]))
        for i in range(n_index, len(nodes)-1-n_index):
            print i, i+1
            edges.append((i, i+1))
    return nodes, edges
Sometimes it works, but a lot of times I get the same error at different lines:
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 380, in raw_decode
obj, end = self.scan_once(s, idx)
ValueError: Expecting : delimiter: line 7 column 4 (char 74317829)
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 380, in raw_decode
obj, end = self.scan_once(s, idx)
ValueError: Expecting , delimiter: line 5 column 1 (char 72149996)
I'm wondering what causes these errors, at a different line each time, and how I could solve this.
The site that provides this file also presents it successfully:
https://data.cityofnewyork.us/City-Government/road/svwp-sbcd
It looks like your JSON input is malformed. The error is being thrown from raw_decode, which is part of the JSON library, so it's failing before it even gets to your processing code. The inconsistency of the results leads me to think the JSON is somehow getting corrupted or not completely delivered.
My next step would be to pull the JSON from the source, store it in a local file, lint it to make sure it's valid, then test your program from that file directly.
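That "store locally, then validate" step can be sketched like this; the helper name is made up for illustration:

```python
import json

def load_local_json(path):
    """Load a locally saved copy of the feed, failing loudly if it is
    truncated or corrupt, and reporting how many bytes were read."""
    with open(path, "rb") as f:
        raw = f.read()
    try:
        return json.loads(raw.decode("utf-8"))
    except ValueError as e:
        raise ValueError("%s looks truncated/corrupt after %d bytes: %s"
                         % (path, len(raw), e))
```

The byte count in the error message makes a partial download obvious at a glance (for example, 75M of an expected 121M), which is much easier to diagnose than a bare "Expecting , delimiter".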
Update:
Curious, I downloaded the file several times. A couple of the attempts came out far too small; the real size is around 121MB. Once I had a couple of consistent copies, I ran your program against one, replacing your URL loader with a file loader. It works perfectly, unless I have too little RAM, in which case it segfaults.
I had the most success downloading the file on a virtual server on DigitalOcean; it successfully got it every time. When downloading on my local machine, the file was truncated, which leads me to believe that the server sending you the JSON cuts off the stream after some timeout period. The DigitalOcean server has massive throughput, averaging 12 MB/s and pulling the entire file in 10 seconds. My local machine could only pull less than 1 MB/s and couldn't finish: it stopped at 2 minutes, having pulled only 75MB. The sending server probably has a 2-minute time limit on requests.
This would explain why their page works, but your script struggles to get it all. The map data is being processed by another server that can pull the data from the source in the time allowed, then streamed piece by piece as needed to the web viewer.
I'm in the process of writing a Python module to POST files to a server. I can upload files of up to 500MB, but when I tried to upload a 1GB file the upload failed; if I use something like cURL it doesn't fail. I got the code after googling how to upload multipart form data using Python; the code can be found here. I just ran that code as-is, and the error I'm getting is this:
Traceback (most recent call last):
File "<pyshell#7>", line 1, in <module>
opener.open("http://127.0.0.1/test_server/upload",params)
File "C:\Python27\lib\urllib2.py", line 392, in open
req = meth(req)
File "C:\Python27\MultipartPostHandler.py", line 35, in http_request
boundary, data = self.multipart_encode(v_vars, v_files)
File "C:\Python27\MultipartPostHandler.py", line 63, in multipart_encode
buffer += '\r\n' + fd.read() + '\r\n'
MemoryError
I'm new to Python and having a hard time grasping it. I also came across another program here; I'll be honest, I don't know how to run it. I tried running it by guessing based on the function names, but that didn't work.
The script in question isn't very smart and builds the POST body in memory.
Thus, to POST a 1GB file, you'll need at least 1GB of memory just to hold that data, plus room for the HTTP headers, boundaries, and Python and the code itself.
You'd have to rework the script to use mmap instead: first construct the whole body in a temp file, then hand that file, wrapped in an mmap.mmap value, to request.add_data.
See Python: HTTP Post a large file with streaming for hints on how to achieve that.
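A rough sketch of the temp-file half of that approach (written in Python 3 syntax for brevity, even though the question uses Python 2; the function name and boundary string are made up). The key point is that the uploaded file is streamed into the body file in chunks, so memory use stays flat regardless of the upload size:

```python
import os
import tempfile

def build_multipart_body_file(field_name, file_path,
                              boundary="XxFormBoundaryXx"):
    """Write a multipart/form-data body to a temp file, streaming the
    uploaded file in 1MB chunks instead of holding it all in memory.
    Returns the path of the body file."""
    body = tempfile.NamedTemporaryFile(delete=False)
    body.write(("--%s\r\n" % boundary).encode("ascii"))
    body.write(('Content-Disposition: form-data; name="%s"; filename="%s"\r\n'
                % (field_name, os.path.basename(file_path))).encode("ascii"))
    body.write(b"Content-Type: application/octet-stream\r\n\r\n")
    with open(file_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            body.write(chunk)
    body.write(("\r\n--%s--\r\n" % boundary).encode("ascii"))
    body.close()
    return body.name
```

The resulting file can then be mmap'd and attached to the request as the linked answer describes; with the modern requests library you could instead pass an open file object directly to stream the upload.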