MemoryError with Discord selfbot during 'bot.run'

Before you tell me: yes, I am aware that selfbots can get you banned. My selfbot is for work purposes, in a server with just me and three others. I'm doing nothing shady or weird over here.
I'm using the following selfbot code: https://github.com/Supersebi3/Selfbot
Upon logging in, since I'm in about 50 servers, the client starts chunking the members of each one. This carries on for several minutes, until I eventually get a MemoryError:
File "main.py", line 96, in <module>
bot.run(token, bot=False)
File "D:\Python\Python36-32\lib\site-packages\discord\client.py", line 519, in run
self.loop.run_until_complete(self.start(*args, **kwargs))
File "D:\Python\Python36-32\lib\asyncio\base_events.py", line 468, in run_until_complete
return future.result()
File "D:\Python\Python36-32\lib\site-packages\discord\client.py", line 491, in start
yield from self.connect()
File "D:\Python\Python36-32\lib\site-packages\discord\client.py", line 448, in connect
yield from self.ws.poll_event()
File "D:\Python\Python36-32\lib\site-packages\discord\gateway.py", line 431, in poll_event
yield from self.received_message(msg)
File "D:\Python\Python36-32\lib\site-packages\discord\gateway.py", line 327, in received_message
log.debug('WebSocket Event: {}'.format(msg))
MemoryError
Can anyone explain why this is happening and how I can fix it? Is there any way to skip the chunk processing for the members of every server my selfbot account is in?
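Not from the original thread, but a minimal sketch of one possible fix, assuming the installed discord.py version supports the fetch_offline_members client option (later versions renamed it chunk_guilds_at_startup); exact option names vary between releases:
from discord.ext import commands

token = "YOUR_TOKEN"  # placeholder; never commit a real token

# fetch_offline_members=False asks the client not to request full member
# chunks for every guild on connect, which is what buffers the huge
# WebSocket payloads seen in the traceback above.
bot = commands.Bot(command_prefix="!", self_bot=True, fetch_offline_members=False)
bot.run(token, bot=False)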

Related

How to debug a stuck asyncio coroutine in Python?

There are lots of coroutines in my production code which get stuck at an unknown position while processing a request. I attached gdb with the Python support extension to the process, but it doesn't show the exact line in the coroutine where the process is stuck, only the outermost stack trace. Here is a minimal example:
import asyncio

async def hello():
    await asyncio.sleep(30)
    print('hello world')

asyncio.run(hello())
(gdb) py-bt
Traceback (most recent call first):
File "/usr/lib/python3.8/selectors.py", line 468, in select
fd_event_list = self._selector.poll(timeout, max_ev)
File "/usr/lib/python3.8/asyncio/base_events.py", line 2335, in _run_once
File "/usr/lib/python3.8/asyncio/base_events.py", line 826, in run_forever
File "/usr/lib/python3.8/asyncio/base_events.py", line 603, in run_until_complete
self.run_forever()
File "/usr/lib/python3.8/asyncio/runners.py", line 299, in run
File "main.py", line 7, in <module>
GDB shows a trace that ends at line 7, but the code is obviously stuck at line 4 (the await). How can I make it show a more complete trace, with the nested coroutines?
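As an aside (not part of the answer below): if you can run code inside the process, asyncio itself can report where a task's coroutine is suspended, nested awaits included. A minimal sketch:
import asyncio

async def hello():
    await asyncio.sleep(30)
    print('hello world')

async def main():
    task = asyncio.create_task(hello())
    await asyncio.sleep(1)  # let the task reach its await
    task.print_stack()      # prints the frame suspended on asyncio.sleep
    await task

asyncio.run(main())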
You can use aiodebug.log_slow_callbacks.enable(0.05) to log every callback that blocks the event loop for longer than the given threshold (here, 50 ms).
For more, see https://pypi.org/project/aiodebug/
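A short usage sketch of that suggestion, assuming aiodebug is installed (pip install aiodebug):
import asyncio
from aiodebug import log_slow_callbacks

log_slow_callbacks.enable(0.05)  # log any callback that holds the loop for > 50 ms

async def hello():
    await asyncio.sleep(30)
    print('hello world')

asyncio.run(hello())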

Timestamp out of range for platform on a 32-bit system

I'm trying to run a script I wrote on my Raspberry Pi Zero, but I keep getting the error OverflowError: timestamp out of range for platform time_t.
I'm relatively certain it's something to do with the 32-bit ARM architecture of the Pi, but I can't seem to figure out a workaround.
Here's the traceback:
File "twitter.py", line 37, in <module>
t.run.Search(c)
File "/home/pi/.local/lib/python3.7/site-packages/twint/run.py", line 288, in Search
run(config, callback)
File "/home/pi/.local/lib/python3.7/site-packages/twint/run.py", line 209, in run
get_event_loop().run_until_complete(Twint(config).main(callback))
File "/usr/lib/python3.7/asyncio/base_events.py", line 584, in run_until_complete
return future.result()
File "/home/pi/.local/lib/python3.7/site-packages/twint/run.py", line 150, in main
await task
File "/home/pi/.local/lib/python3.7/site-packages/twint/run.py", line 194, in run
await self.tweets()
File "/home/pi/.local/lib/python3.7/site-packages/twint/run.py", line 141, in tweets
await output.Tweets(tweet, self.config, self.conn)
File "/home/pi/.local/lib/python3.7/site-packages/twint/output.py", line 142, in Tweets
await checkData(tweets, config, conn)
File "/home/pi/.local/lib/python3.7/site-packages/twint/output.py", line 116, in checkData
panda.update(tweet, config)
File "/home/pi/.local/lib/python3.7/site-packages/twint/storage/panda.py", line 67, in update
day = weekdays[strftime("%A", localtime(Tweet.datetime))]
OverflowError: timestamp out of range for platform time_t
I've done some searching and found similar(ish) issues, but most of them involve directly converting timestamps, whereas mine appears to happen while formatting a time. I've tried rebooting the Pi and immediately running the script, in case the issue was the Pi being on too long, but that produced the same error.
Anyone have any tips?
Thanks, Ben
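For what it's worth, a minimal sketch of the underlying limit and one possible workaround (my assumption, not from the thread): on a platform with a 32-bit time_t, time.localtime overflows for timestamps past January 2038, while plain timedelta arithmetic never touches time_t:
import time
from datetime import datetime, timedelta

ts = 2**31  # one second past the signed 32-bit time_t range (19 Jan 2038)

try:
    print(time.strftime("%A", time.localtime(ts)))  # fails where time_t is 32 bits
except (OverflowError, OSError) as exc:
    print("localtime failed:", exc)

# Pure datetime arithmetic does the conversion without the platform's
# time_t, so this works for the same timestamp (UTC, ignoring local time):
print((datetime(1970, 1, 1) + timedelta(seconds=ts)).strftime("%A"))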

Occasional Flask server error: 'RuntimeError: maximum recursion depth exceeded while calling a Python object'

I'm trying to figure out what causes this error when I run my app using the basic Flask server during development. I start it with this:
from myapp import app
app.run(debug=True, port=5001)
All is well and I'll continue to code and refresh, etc., but after a while I get the recursion error and have to Ctrl-C the server and restart it. Not a big deal, just a little annoying to deal with every now and then.
Here's the full traceback, which I tried to use to determine the cause but can't see anything that stands out (possibly something to do with how werkzeug uses Cookie.py?):
Traceback (most recent call last):
File "/Users/jeff/.virtualenvs/fmll/lib/python2.7/site-packages/flask/app.py", line 1701, in __call__
return self.wsgi_app(environ, start_response)
File "/Users/jeff/.virtualenvs/fmll/lib/python2.7/site-packages/werkzeug/wsgi.py", line 411, in __call__
return self.app(environ, start_response)
(last bit repeated a bunch - trimmed to fit in posting size requirements)
File "/Users/jeff/.virtualenvs/fmll/lib/python2.7/site-packages/flask/app.py", line 1685, in wsgi_app
with self.request_context(environ):
File "/Users/jeff/.virtualenvs/fmll/lib/python2.7/site-packages/flask/ctx.py", line 274, in __enter__
self.push()
File "/Users/jeff/.virtualenvs/fmll/lib/python2.7/site-packages/flask/ctx.py", line 238, in push
self.session = self.app.open_session(self.request)
File "/Users/jeff/.virtualenvs/fmll/lib/python2.7/site-packages/flask/app.py", line 792, in open_session
return self.session_interface.open_session(self, request)
File "/Users/jeff/.virtualenvs/fmll/lib/python2.7/site-packages/flask/sessions.py", line 191, in open_session
secret_key=key)
File "/Users/jeff/.virtualenvs/fmll/lib/python2.7/site-packages/werkzeug/contrib/securecookie.py", line 309, in load_cookie
data = request.cookies.get(key)
File "/Users/jeff/.virtualenvs/fmll/lib/python2.7/site-packages/werkzeug/utils.py", line 77, in __get__
value = self.func(obj)
File "/Users/jeff/.virtualenvs/fmll/lib/python2.7/site-packages/werkzeug/wrappers.py", line 418, in cookies
cls=self.dict_storage_class)
File "/Users/jeff/.virtualenvs/fmll/lib/python2.7/site-packages/werkzeug/http.py", line 741, in parse_cookie
cookie.load(header)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/Cookie.py", line 632, in load
self.__ParseString(rawdata)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/Cookie.py", line 665, in __ParseString
self.__set(K, rval, cval)
File "/Users/jeff/.virtualenvs/fmll/lib/python2.7/site-packages/werkzeug/_internal.py", line 290, in _BaseCookie__set
morsel = self.get(key, _ExtendedMorsel())
File "/Users/jeff/.virtualenvs/fmll/lib/python2.7/site-packages/werkzeug/_internal.py", line 271, in __init__
Morsel.__init__(self)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/Cookie.py", line 438, in __init__
dict.__setitem__(self, K, "")
RuntimeError: maximum recursion depth exceeded while calling a Python object
Since this occurs during development, you could increase the recursion limit before starting your server:
import sys
sys.setrecursionlimit(2000)  # choose the right figure for you; the default on my system is 1000, but it is platform-dependent
However, you should use this very carefully, and probably not in production, unless you have a good understanding of its impact.
Ref: http://docs.python.org/2/library/sys.html#sys.setrecursionlimit
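A minimal sketch of where that call would go, reusing the startup snippet from the question (myapp and the port are the question's own names):
import sys
from myapp import app

sys.setrecursionlimit(2000)  # raise the limit before the server starts handling requests
app.run(debug=True, port=5001)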

UnknownError finalizing mapreduce job

I'm having a weird error at the completion of a mapreduce job that writes to Google Storage. Has anybody seen this before?
Final result for job '158354152558......' is 'success'
....
File "/base/data/home/apps/s~app/bqmapper.360899047207944804/libs/mapreduc/handlers.py", line 539, in _finalize_job
mapreduce_spec.mapper.output_writer_class().finalize_job(mapreduce_state)
File "/base/data/home/apps/s~app/bqmapper.360899047207944804/libs/mapreduce/output_writers.py", line 571, in finalize_job
files.finalize(create_filename)
File "/base/data/home/apps/s~app/bqmapper.360899047207944804/libs/mapreduce/lib/files/file.py", line 568, in finalize
f.close(finalize=True)
File "/base/data/home/apps/s~app/bqmapper.360899047207944804/libs/mapreduce/lib/files/file.py", line 291, in close
self._make_rpc_call_with_retry('Close', request, response)
File "/base/data/home/apps/s~app/bqmapper.360899047207944804/libs/mapreduce/lib/files/file.py", line 427, in _make_rpc_call_with_retry
_make_call(method, request, response)
File "/base/data/home/apps/s~app/bqmapper.360899047207944804/libs/mapreduce/lib/files/file.py", line 252, in _make_call
_raise_app_error(e)
File "/base/data/home/apps/s~app/bqmapper.360899047207944804/libs/mapreduce/lib/files/file.py", line 186, in _raise_app_error
raise UnknownError()
UnknownError
After playing with it, I found that an open file on Cloud Storage has to be finalized in less than one hour, or it will fail with this lovely UnknownError.
I mitigated the problem by increasing the number of shards, to make the mapping faster, and by changing the output_sharding strategy to "input", which creates one file per shard.
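A hedged sketch of those two changes, assuming the classic App Engine mapreduce control.start_map API; the job name, handler, and reader below are hypothetical, and parameter nesting varies between library versions:
from mapreduce import control

control.start_map(
    name="bq-export",                 # hypothetical job name
    handler_spec="bqmapper.map_row",  # hypothetical mapper function
    reader_spec="mapreduce.input_readers.DatastoreInputReader",
    mapper_parameters={
        "entity_kind": "models.Row",  # hypothetical input kind
        "output_sharding": "input",   # one output file per shard
    },
    output_writer_spec="mapreduce.output_writers.FileOutputWriter",
    shard_count=64,  # more shards -> each file is finalized well inside the 1-hour window
)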

Pymongo AssertionError: ids don't match

I use:
MongoDB 1.6.5
Pymongo 1.9
Python 2.6.6
I have three types of daemons: the first loads data from the web, the second analyzes it and saves the result, and the third groups the results. All of them work with MongoDB.
At some point the third daemon throws many exceptions like this (mostly when there is a large amount of data in the DB):
Traceback (most recent call last):
File "/usr/local/lib/python2.6/dist-packages/gevent-0.13.1-py2.6-linux-x86_64.egg/gevent/greenlet.py", line 405, in run
result = self._run(*self.args, **self.kwargs)
File "/data/www/spider/daemon/scripts/mainconverter.py", line 72, in work
for item in res:
File "/usr/local/lib/python2.6/dist-packages/pymongo-1.9_-py2.6-linux-x86_64.egg/pymongo/cursor.py", line 601, in next
if len(self.__data) or self._refresh():
File "/usr/local/lib/python2.6/dist-packages/pymongo-1.9_-py2.6-linux-x86_64.egg/pymongo/cursor.py", line 564, in _refresh
self.__query_spec(), self.__fields))
File "/usr/local/lib/python2.6/dist-packages/pymongo-1.9_-py2.6-linux-x86_64.egg/pymongo/cursor.py", line 521, in __send_message
**kwargs)
File "/usr/local/lib/python2.6/dist-packages/pymongo-1.9_-py2.6-linux-x86_64.egg/pymongo/connection.py", line 743, in _send_message_with_response
return self.__send_and_receive(message, sock)
File "/usr/local/lib/python2.6/dist-packages/pymongo-1.9_-py2.6-linux-x86_64.egg/pymongo/connection.py", line 724, in __send_and_receive
return self.__receive_message_on_socket(1, request_id, sock)
File "/usr/local/lib/python2.6/dist-packages/pymongo-1.9_-py2.6-linux-x86_64.egg/pymongo/connection.py", line 714, in __receive_message_on_socket
struct.unpack("<i", header[8:12])[0])
AssertionError: ids don't match -561338340 0
<Greenlet at 0x2baa628: <bound method Worker.work of <scripts.mainconverter.Worker object at 0x2ba8450>>> failed with AssertionError
Can anyone tell me what causes this exception and how to fix it?
Thanks.
This is likely a concurrency problem related to how you are sharing state between gevent greenlets: it looks like several greenlets are using the same pymongo connection, so one greenlet ends up reading the response to a request another greenlet sent (hence the mismatched request ids in the assertion).
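A minimal sketch of one way to avoid that, assuming the daemons can afford a connection per greenlet (the host, database, and query below are hypothetical); pymongo 1.x's Connection is used to match the versions in the question:
import gevent
from pymongo import Connection  # pymongo 1.x API, as in the question

def work(shard):
    # A private connection per greenlet means request/response pairs can
    # never interleave on a shared socket.
    conn = Connection("localhost", 27017)
    db = conn.spider  # hypothetical database name
    for item in db.results.find({"shard": shard}):  # hypothetical collection/query
        pass  # process item

jobs = [gevent.spawn(work, n) for n in range(4)]
gevent.joinall(jobs)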
