Presently I'm trying to make MONKALOT run on a PythonAnywhere account (customized Web Developer plan). I have basic knowledge of Linux but unfortunately no experience developing Python scripts, though I do have advanced experience developing Java (hope that helps).
My success log so far:
After upgrading my account to the Web Developer level I finally made pip download the [requirements](https://github.com/NMisko/monkalot/blob/master/requirements.txt) and half the internet (2 of 5 GB used). All modules and dependencies seem to be installed successfully.
I configured my own monkalot channel, including OAuth, which serves as a staging instance for now. The next challenge was figuring out how to start monkalot. Using python3.7 instead of python or any other python3 environment did the trick.
But now I'm stuck. After "completing the training stage", the monkalot script ends prematurely with the following message:
[22:14] ...chat bot finished training.
Traceback (most recent call last):
  File "monkalot.py", line 72, in <module>
    bots.append(TwitchBot(path))
  File "/home/Chessalot/monkalot/bot/bot.py", line 56, in __init__
    self.users = self.twitch.get_chatters()
  File "/home/Chessalot/monkalot/bot/data_sources/twitch.py", line 25, in get_chatters
    data = requests.get(USERLIST_API.format(self.channel)).json()
  File "/usr/local/lib/python3.7/site-packages/requests/models.py", line 900, in json
    return complexjson.loads(self.text, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/simplejson/__init__.py", line 525, in loads
    return _default_decoder.decode(s)
  File "/usr/local/lib/python3.7/site-packages/simplejson/decoder.py", line 370, in decode
    obj, end = self.raw_decode(s)
  File "/usr/local/lib/python3.7/site-packages/simplejson/decoder.py", line 400, in raw_decode
    return self.scan_once(s, idx=_w(s, idx).end())
simplejson.errors.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
By now I've figured out that monkalot tries to load the chatters list and expects at least an empty JSON array as the result, but actually seems to receive an empty string.
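For what it's worth, the failing call can be reproduced outside monkalot with a few lines of requests code. The real endpoint template lives in bot/data_sources/twitch.py as USERLIST_API; the URL below is only a stand-in:

import requests

# Stand-in for the USERLIST_API template defined in
# bot/data_sources/twitch.py -- substitute the real one.
USERLIST_API = "https://tmi.twitch.tv/group/user/{}/chatters"

response = requests.get(USERLIST_API.format("mychannel"))
print(response.status_code)  # anything other than 200 is suspect
print(repr(response.text))   # repr() makes an empty body visible
# response.json() raises JSONDecodeError when the body is empty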
So my question is: what can I do to make the monkalot script work? Is monkalot's current version incompatible with the current Twitch API? Are there any outdated Python libraries that might cause the incompatibility? Or is there an unrecognized configuration issue preventing the script from running successfully?
Thank you all in advance. Any ideas provided by you are highly appreciated.
The most likely cause of that is that you are using a free PythonAnywhere account and have not configured monkalot to use the proxy. Check the documentation of monkalot to determine how you can configure it to use a proxy. See https://help.pythonanywhere.com/pages/403ForbiddenError/ for the proxy details.
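If monkalot turns out not to have a proxy setting of its own, one low-effort thing to try: the traceback shows it uses the requests library, and requests honors the standard proxy environment variables. A minimal sketch, assuming the proxy host documented on that help page (note this only covers HTTP calls, not any raw IRC sockets the bot may open):

import os

# Proxy host/port as documented at
# https://help.pythonanywhere.com/pages/403ForbiddenError/
os.environ["HTTP_PROXY"] = "http://proxy.server:3128"
os.environ["HTTPS_PROXY"] = "http://proxy.server:3128"

# ...then start the bot from the same environment, or simply export
# the two variables in the shell before running: python3.7 monkalot.py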
Only a quick thought, and it might not be the problem you are encountering, but it may be due to the project name. E.g.:
From github:
... I believed that the issue was something other than the project name, since I get a different error if I use a project name that doesn't exist. However, I just tried using ben-heil/saged instead of just saged for the project name and that seems to have fixed it.
EDIT: your HTTP 400 error surfaced here:
File "monkalot.py", line 72, in <module>
bots.append(TwitchBot(path))
This points out that the function called with path is raising the error. And since you see a lot of decode calls in the traceback, you can deduce it has something to do with the characters you inputted.
Other errors in your traceback that point this out:
simplejson.errors.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
JSONDecodeError: Expecting value: line 1 column 1 (char 0) occurs when we try to parse something that is not valid JSON as if it were. To solve the error, make sure the response or the file is not empty or conditionally check for the content type before parsing.
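In code, that conditional check might look like the following generic sketch (not monkalot-specific; the URL is a placeholder):

import requests

url = "https://example.com/api"  # placeholder
response = requests.get(url)

content_type = response.headers.get("Content-Type", "")
if response.ok and "application/json" in content_type and response.text.strip():
    data = response.json()
else:
    # Empty body, HTML error page, etc. -- don't attempt to parse it
    print("Unexpected response: %s %r" % (response.status_code, response.text[:200]))
    data = None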
In most cases your json.loads JSONDecodeError: Expecting value: line 1 column 1 (char 0) error is due to:
non-JSON conforming quoting
XML/HTML output (that is, a string starting with <), or
incompatible character encoding
In this case, the letter casing caused the error (take a look at the content type!).
Related sources:
python json decoder
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
After I inspected the response, I found out that I received an HTTP 400 Bad Request error WITHOUT any data in the HTTP response body. Since monkalot expects a JSON answer, the errors were raised. This was due to the fact that in the channel configuration I used an uppercase letter, whereas Twitch expects all letters to be lowercase.
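In other words, normalizing the channel name before building the URL avoids the problem. A tiny illustration (the API template is a stand-in, as above):

USERLIST_API = "https://tmi.twitch.tv/group/user/{}/chatters"  # stand-in

channel = "Chessalot"               # as typed in the channel config
channel = channel.strip().lower()   # Twitch expects all-lowercase names
url = USERLIST_API.format(channel)  # .../user/chessalot/chatters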
Related
I'm trying to send my Behave test results to an API Endpoint. I set the output file to be a new JSON file, run my test, and then in the Behave after_all() send the JSON result via the requests package.
I'm running my Behave test like so:
args = ['--outfile=/home/user/nathan/results/behave4.json',
        '--format=json.pretty']

from behave.__main__ import main as behave_main
behave_main(args)
In my environment.py's after_all(), I have:
def after_all(context):
    data = json.load(open('/home/user/myself/results/behave.json', 'r'))  # This line causes the error
    sendoff = {}
    sendoff['results'] = data
    r = requests.post(MyAPIEndpoint, json=sendoff)
I'm getting the following error when running my Behave test:
HOOK-ERROR in after_all: ValueError: Expecting object: line 124 column 1
(char 3796)
ABORTED: By user.
The reported error is here in my JSON file:
[
{
...
} <-- line 124, column 1
]
However, behave.json is written after the run, and according to JSONLint it is valid JSON. I don't know the exact details of after_all(), but I think the issue is that the JSON file isn't done being written by the time I try to open it in after_all(). If I try json.load() a second time on the behave.json file after the file is written, it runs without error and I am able to view my JSON file at the endpoint.
Any better explanation as to why this is happening? Any solution or change in logic to get past this?
Yes, it seems as though the file is still in the process of being written when I try to access it in after_all(). I put a small delay before opening the file in my code, then manually viewed the behave.json file and saw that there was no closing ] after the last }.
That explains it. I will create a new question to find out how to get by this, or whether a change in logic is required.
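One way to bridge the gap until the writer finishes is a short poll-and-retry around the load; a sketch, using the path from the question:

import json
import time

def load_json_when_ready(path, retries=10, delay=0.5):
    """Retry json.load until the writer has finished the file."""
    for attempt in range(retries):
        try:
            with open(path) as f:
                return json.load(f)
        except ValueError:  # file truncated / still being written
            time.sleep(delay)
    raise RuntimeError("%s never became valid JSON" % path)

data = load_json_when_ready('/home/user/myself/results/behave.json')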
Recently, without changes to code/libs, I started getting the Python error_proto: line too long error when reading email (poplib.retr) from a Hotmail inbox. I am using Python version 2.7.8. I understand that a long line may be causing this error. But is there a way to work around this, or a certain version I need to put in place? Thank you for any advice/direction anyone can give.
Here is the traceback:
File "/opt/rh/python27/root/usr/lib64/python2.7/poplib.py", line 232, in retr
    return self._longcmd('RETR %s' % which)
File "/opt/rh/python27/root/usr/lib64/python2.7/poplib.py", line 167, in _longcmd
    return self._getlongresp()
File "/opt/rh/python27/root/usr/lib64/python2.7/poplib.py", line 152, in _getlongresp
    line, o = self._getline()
File "/opt/rh/python27/root/usr/lib64/python2.7/poplib.py", line 377, in _getline
    raise error_proto('line too long')
error_proto: line too long
A python bug report exists for this issue here: https://bugs.python.org/issue16041
The workaround I put in place was as follows:
import poplib
poplib._MAXLINE=20480
I thought this was a better idea than editing the poplib.py library file directly.
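For completeness, here is the override in context; the server, user, and password below are placeholders:

import poplib

# Raise the per-line cap before connecting; the default (2048) is
# what trips the error on messages with very long lines.
poplib._MAXLINE = 20480

conn = poplib.POP3_SSL('pop-mail.outlook.com')  # placeholder host
conn.user('user@example.com')
conn.pass_('app-password-here')
num_messages = len(conn.list()[1])
for i in range(num_messages):
    response, lines, octets = conn.retr(i + 1)  # no longer raises error_proto
conn.quit()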
Woody
Are you sure you've not updated poplib? Have a look at the most recent diff, committed last night:
# Added:
...
# maximal line length when calling readline(). This is to prevent
# reading arbitrary length lines. RFC 1939 limits POP3 line length to
# 512 characters, including CRLF. We have selected 2048 just to be on
# the safe side.
_MAXLINE = 2048
...
# in _getline()...
if len(self.buffer) > _MAXLINE:
    raise error_proto('line too long')
...it looks suspiciously similar to your problem.
So if you roll back to the previous version, it will probably be OK.
I have the following problem with Python 2.7 and the Plot.ly API, and I am not sure what's going on or where the problem is. Before I write to the authors, I am going to try asking here. I have a script that scans specific websites and their links and analyzes the content (words, counts, etc.). The result is plotted by Plotly as a bar graph. Everything works fine and the script runs every 30 minutes. But a few times every day, the method that handles the data upload through the API, like response = py.plot([data]), says "ValueError: No JSON object could be decoded" (data is not empty, and the counting works fine). What I don't understand is that:
1) It was working with the same script code a few minutes earlier.
2) It doesn't matter what data I put inside the variable data (even simple numbers for x and y).
3) After the above-mentioned error, the data are sent and published, but the descriptors - layouts (axis setup, title, size of graph) - are not, because they are set separately in the next step, and the script terminates at the point where response is created. (Well, I could merge those steps together, but the error still appears and I'd like to know why.)
4) When I create an empty .py file with a basic example like:
import plotly

py = plotly.plotly(username='someUname', key='someApiKey')

x0 = ['a', 'b', 'c']
y0 = [20, 14, 23]
data = {'x': x0, 'y': y0, 'type': 'bar'}

response = py.plot([data])
url = response['url']
filename = response['filename']
Then the result is the same: "No JSON object could be decoded", to be exact.
Traceback (most recent call last):
  File "<module1>", line 10, in <module>
  File "C:\Python27\lib\site-packages\plotly-0.4-py2.7.egg\plotly\plotly.py", line 69, in plot
    r = self.__makecall(args, un, key, origin, kwargs)
  File "C:\Python27\lib\site-packages\plotly-0.4-py2.7.egg\plotly\plotly.py", line 142, in __makecall
    r = json.loads(r.text)
  File "C:\Python27\lib\json\__init__.py", line 338, in loads
    return _default_decoder.decode(s)
  File "C:\Python27\lib\json\decoder.py", line 365, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Python27\lib\json\decoder.py", line 383, in raw_decode
    raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
Data are published, but I am not able to set the layouts. At times when the word-counting script works fine, this small piece of example code works as well.
Does anyone have the same experience? I am not a coding pro, but it seems the problem could be somewhere outside my code. Or maybe I missed something; in any case, I am not able to debug or understand the reason.
Thank you for any tips.
Chris here, from Plotly. Thanks for reporting the issue. You definitely aren't doing anything wrong on your end! This error arises because of some transmission issue from plotly to your desktop. The API expects a string in JSON format from the plotly server but received something different. I'll look into it further. Definitely email me if it happens again! --chris[at]plot.ly
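Until the server-side issue is resolved, one pragmatic client-side mitigation is to retry the call when the malformed response comes back, since the failure is transient. A sketch against the plotly 0.4 interface used in the question:

import time
import plotly

py = plotly.plotly(username='someUname', key='someApiKey')

def plot_with_retry(payload, attempts=3, delay=5):
    """Retry py.plot() when the server returns a non-JSON body."""
    for attempt in range(attempts):
        try:
            return py.plot(payload)
        except ValueError:  # "No JSON object could be decoded"
            if attempt == attempts - 1:
                raise
            time.sleep(delay)

data = {'x': ['a', 'b', 'c'], 'y': [20, 14, 23], 'type': 'bar'}
response = plot_with_retry([data])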
I'm trying to track down a Python UnicodeDecodeError in the following log line:
10.210.141.123 - - [09/Nov/2011:14:41:04 -0800] "gfR\x15¢\x09ì|Äbk\x0F[×ÐÖà\x11CEÐÌy\x5C¿DÌj\x08Ï ®At\x07å!;f>\x08éPW¤\x1C\x02ö*6+\x5C\x15{,ªIkCRA\x22 xþP9â\x13h\x01¢è´\x1DzõWiË\x5C\x10sòʨR)¶²\x1F8äl¾¢{ÆNw\x08÷#ï" 400 166 0.000 "-" "-"
I opened the entire log file in Vim, and then yanked the line into a new file so I could test just the one line. However, my parsing script works OK with the new file - it doesn't throw a UnicodeDecodeError. I don't understand why the one file would generate an error and the other one would not, when they are (on the surface) identical.
Here's what I tried: running enca to determine the file encoding, which complained that it Cannot determine (or understand) your language preferences. file -i says that both files are Regular files. I also deleted every other line in the original log file and still got the error in one file and no error in the other. I tried deleting
set encoding=utf-8
from my .vimrc, writing the file again, and I still got the error in one file and not in the other.
The logs are nginx logs. Nginx has this note in their release notes:
*) Change: now the 0x00-0x1F, '"' and '\' characters are escaped as \xXX
in an access_log.
Thanks to Maxim Dounin.
My Python script has with open('log_file') as f and the error comes up when I try to call json.dumps on a dict.
How can I track this down?
Your question: How can I track this down?
Answer:
(1) Show us the full text of the error message that you got -- without knowing what encoding that you were trying to use, we can't tell you anything. A traceback and a snippet of code that reads the file and reproduces the error would also be handy.
(2) Write a tiny Python script to find the line in the file (see the sketch after this list) and then do:
print repr(the_line) # Python 2.X
print ascii(the_line) # Python 3.x
and copy/paste the result into an edit of your question, so that we can see unambiguously what is in the line.
(3) It does look like random gibberish apart from the readable fields (the IP address, the timestamp, and the 400 166 status/size), but do tell us whether you expect that line to be text (if so, in what human language?).
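To expand on (2), a tiny scanner along these lines reports exactly which lines refuse to decode; the codec is an assumption, adjust it to whatever your parsing script uses:

# find_bad_lines.py -- report log lines that fail to decode
codec = 'utf-8'  # assumption: match the encoding your script expects

with open('log_file', 'rb') as f:
    for lineno, raw in enumerate(f, 1):
        try:
            raw.decode(codec)
        except UnicodeDecodeError as exc:
            print('line %d: %s' % (lineno, exc))
            print(repr(raw))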
I am trying to download some data from the datastore using the following command:
appcfg.py download_data --config_file=bulkloader.yaml --application=myappname
--kind=mykindname --filename=myappname_mykindname.csv
--url=http://myappname.appspot.com/_ah/remote_api
When I didn't have much data in this particular kind/table, I could download the data in one shot - occasionally running into the following error:
.................................[ERROR   ] [Thread-11] ExportProgressThread:
Traceback (most recent call last):
  File "C:\Program Files\Google\google_appengine\google\appengine\tools\bulkloader.py", line 1448, in run
    self.PerformWork()
  File "C:\Program Files\Google\google_appengine\google\appengine\tools\bulkloader.py", line 2216, in PerformWork
    item.key_end)
  File "C:\Program Files\Google\google_appengine\google\appengine\tools\bulkloader.py", line 2011, in StoreKeys
    (STATE_READ, unicode(kind), unicode(key_start), unicode(key_end)))
OperationalError: unable to open database file
This is what I see in the server log:
Traceback (most recent call last):
  File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/remote_api/handler.py", line 277, in post
    response_data = self.ExecuteRequest(request)
  File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/remote_api/handler.py", line 308, in ExecuteRequest
    response_data)
  File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 86, in MakeSyncCall
    return stubmap.MakeSyncCall(service, call, request, response)
  File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 286, in MakeSyncCall
    rpc.CheckSuccess()
  File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_rpc.py", line 126, in CheckSuccess
    raise self.exception
ApplicationError: ApplicationError: 4 no matching index found.
When that error appeared I would simply re-run the download and things would work out well.
Of late, I am noticing that as the size of my kind increases, the download tool fails much more often. For instance, with a kind with ~3500 entities I had to run the command 5 times - only the last of which succeeded. Is there a way around this error? Previously, my only worry was that I wouldn't be able to automate downloads in a script because of the occasional failures - now I am scared I won't be able to get my data out at all.
This issue was discussed previously here, but the post is old and I am not sure what the suggested flag does - hence posting my similar query again.
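To illustrate the kind of automation I'm after, a retry wrapper along these lines (a sketch only; same command as above, retry count arbitrary):

import subprocess

cmd = [
    'appcfg.py', 'download_data',
    '--config_file=bulkloader.yaml',
    '--application=myappname',
    '--kind=mykindname',
    '--filename=myappname_mykindname.csv',
    '--url=http://myappname.appspot.com/_ah/remote_api',
]

for attempt in range(5):  # it has taken up to 5 runs in practice
    if subprocess.call(cmd) == 0:
        break
else:
    raise SystemExit('download_data failed on every attempt')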
Some additional details.
As mentioned here, I tried the suggestion to proceed with interrupted downloads (in the section Downloading Data from App Engine). When I resume after the interruption, I get no errors, but the number of rows that are downloaded is less than the entity count the datastore admin shows me. This is the message I get:
[INFO ] Have 3220 entities, 3220 previously transferred
[INFO ] 3220 entities (1003 bytes) transferred in 2.9 seconds
The datastore admin tells me this particular kind has ~4300 entities. Why aren't the remaining entities getting downloaded?
Thanks!
I am going to make a completely uneducated guess at this, just based on the fact that I saw the word "unicode" in the first error. I had an issue that was related to my data being user-generated from the web: a user put in a few unicode characters and a whole load of stuff started breaking - probably my fault, as I had implemented pretty-looking repr functions and a load of other stuff. If you can, take a quick scan of your data via the console utility in your live app; maybe (if it's only ~4k records) try converting all of the data to ASCII strings to find any that don't conform (see the sketch below).
And after that, I started "sanitising" user inputs (sorry, but my "public handle" field needs to be ASCII only, players!).
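A sketch of the kind of scan I mean, run from the remote console shell; the model and property names below are entirely made up:

# Hypothetical model and properties -- substitute your own.
from myapp.models import UserProfile

for entity in UserProfile.all():  # old db API; use .query() under ndb
    for prop in ('handle', 'bio'):  # hypothetical string properties
        value = getattr(entity, prop) or u''
        try:
            value.encode('ascii')
        except (UnicodeEncodeError, UnicodeDecodeError):
            print 'non-ASCII %s on %s: %r' % (prop, entity.key(), value)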