Timestamp out of range for platform on 32-bit system - Python

I'm trying to run a script I wrote on my Raspberry Pi Zero, but I keep getting the error OverflowError: timestamp out of range for platform time_t.
I'm relatively certain it's something with the 32-bit ARM architecture of the pi, but I can't seem to figure out a workaround.
Here's the traceback:
File "twitter.py", line 37, in <module>
t.run.Search(c)
File "/home/pi/.local/lib/python3.7/site-packages/twint/run.py", line 288, in Search
run(config, callback)
File "/home/pi/.local/lib/python3.7/site-packages/twint/run.py", line 209, in run
get_event_loop().run_until_complete(Twint(config).main(callback))
File "/usr/lib/python3.7/asyncio/base_events.py", line 584, in run_until_complete
return future.result()
File "/home/pi/.local/lib/python3.7/site-packages/twint/run.py", line 150, in main
await task
File "/home/pi/.local/lib/python3.7/site-packages/twint/run.py", line 194, in run
await self.tweets()
File "/home/pi/.local/lib/python3.7/site-packages/twint/run.py", line 141, in tweets
await output.Tweets(tweet, self.config, self.conn)
File "/home/pi/.local/lib/python3.7/site-packages/twint/output.py", line 142, in Tweets
await checkData(tweets, config, conn)
File "/home/pi/.local/lib/python3.7/site-packages/twint/output.py", line 116, in checkData
panda.update(tweet, config)
File "/home/pi/.local/lib/python3.7/site-packages/twint/storage/panda.py", line 67, in update
day = weekdays[strftime("%A", localtime(Tweet.datetime))]
OverflowError: timestamp out of range for platform time_t
I've done some searching and found similar(ish) issues, but most of them are about directly converting timestamps, whereas mine appears to happen while formatting a time. I've tried rebooting the Pi and running the script immediately afterwards, in case the problem was the Pi being on too long, but that gave the same error.
Anyone have any tips?
Thanks, Ben
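A note beyond the original post: time.localtime() goes through the platform's 32-bit time_t on the Pi Zero, so it overflows for dates past January 2038 and for timestamps that are accidentally expressed in milliseconds rather than seconds. Below is a minimal sketch of a workaround that avoids time_t entirely by doing the date arithmetic with datetime; the millisecond heuristic is an assumption, not something confirmed by the traceback:
from datetime import datetime, timedelta

def weekday_name(ts):
    # Heuristic (assumption): values this large are almost certainly milliseconds.
    if ts > 1e11:
        ts /= 1000.0
    # Build the datetime from the epoch with timedelta instead of calling
    # time.localtime(), so the platform time_t is never involved.
    # Note: this yields UTC, whereas localtime() uses the local timezone.
    dt = datetime(1970, 1, 1) + timedelta(seconds=ts)
    return dt.strftime("%A")

print(weekday_name(1602288000))      # seconds since the epoch
print(weekday_name(1602288000000))   # same instant in milliseconds
Since the overflow happens inside twint's own storage code, the practical fix may be patching or pinning the library; the sketch only shows how to format such a timestamp safely on a 32-bit system.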

Related

ModuleNotFoundError("'kafka' is not a valid name. Did you mean one of aiokafka, kafka?")

I am using Celery and Kafka to run some jobs that push data to Kafka, and I also use Faust to connect the workers. Unfortunately, I get an error after running faust -A project.streams.app worker -l info to start the pipeline. I wonder if anyone can help me.
/home/admin/.local/lib/python3.6/site-packages/faust/fixups/django.py:71: UserWarning: Using settings.DEBUG leads to a memory leak, never
use this setting in production environments!
warnings.warn(WARN_DEBUG_ENABLED)
Command raised exception: ModuleNotFoundError("'kafka' is not a valid name. Did you mean one of aiokafka, kafka?",)
File "/home/admin/.local/lib/python3.6/site-packages/mode/worker.py", line 67, in exiting
yield
File "/home/admin/.local/lib/python3.6/site-packages/faust/cli/base.py", line 528, in _inner
cmd()
File "/home/admin/.local/lib/python3.6/site-packages/faust/cli/base.py", line 611, in __call__
self.run_using_worker(*args, **kwargs)
File "/home/admin/.local/lib/python3.6/site-packages/faust/cli/base.py", line 620, in run_using_worker
self.on_worker_created(worker)
File "/home/admin/.local/lib/python3.6/site-packages/faust/cli/worker.py", line 57, in on_worker_created
self.say(self.banner(worker))
File "/home/admin/.local/lib/python3.6/site-packages/faust/cli/worker.py", line 97, in banner
self._banner_data(worker))
File "/home/admin/.local/lib/python3.6/site-packages/faust/cli/worker.py", line 127, in _banner_data
(' transport', app.transport.driver_version),
File "/home/admin/.local/lib/python3.6/site-packages/faust/app/base.py", line 1831, in transport
self._transport = self._new_transport()
File "/home/admin/.local/lib/python3.6/site-packages/faust/app/base.py", line 1686, in _new_transport
return transport.by_url(self.conf.broker_consumer[0])(
File "/home/admin/.local/lib/python3.6/site-packages/mode/utils/imports.py", line 101, in by_url
return self.by_name(URL(url).scheme)
File "/home/admin/.local/lib/python3.6/site-packages/mode/utils/imports.py", line 115, in by_name
f'{name!r} is not a valid name. {alt}') from exc
I don't know what was wrong with Faust, but I ran pip install faust again and that happened to solve the problem.
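For context (not from the original answer): the lookup that fails is Faust's transport registry, which is keyed by the broker URL scheme. kafka:// is normally mapped to the aiokafka driver, and a half-broken install can leave that mapping unregistered, which is why reinstalling helped. A minimal app definition for reference, with a placeholder app name and broker address:
import faust

# 'project' and the broker address are placeholders; the kafka:// scheme
# is what Faust resolves to its aiokafka transport at startup.
app = faust.App(
    'project',
    broker='kafka://localhost:9092',
)

if __name__ == '__main__':
    app.main()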

_gdbm.error: Database needs recovery -- after running out of storage while fetching API data

I'm unfamiliar with this kind of error and couldn't find anything useful by searching, so I'm hoping someone here can point me in the right direction. I have already tried reinstalling all the libraries and setting up a new venv, but I don't trust myself to go much further than that on my own.
The code triggering the error:
from wetterdienst import DWDObservationData

observations_daily = DWDObservationData(
    station_ids=station_ids_d,
    parameter=params_daily,
    time_resolution=TimeResolution.DAILY,
    start_date="2015-01-01",
    end_date="2020-10-10",
    tidy_data=True,
    humanize_column_names=True,
)

for df in observations_hourly.collect_data():
    name = str(df.STATION_ID.iloc[0]).strip(".0")
    df.to_csv('./data/hourly/{}.csv'.format(name))
    print('{} done'.format(name))
API is found here: https://github.com/earthobservations/wetterdienst
Error:
Traceback (most recent call last):
File "/Users/sashakaun/PycharmProjects/wetter2.0/main.py", line 83, in <module>
for df in observations_hourly.collect_data():
File "/Users/sashakaun/PycharmProjects/wetter2.0/venv/lib/python3.8/site-packages/wetterdienst/dwd/observations/api.py", line 178, in collect_data
df_parameter = self._collect_parameter_from_station(
File "/Users/sashakaun/PycharmProjects/wetter2.0/venv/lib/python3.8/site-packages/wetterdienst/dwd/observations/api.py", line 243, in _collect_parameter_from_station
df_period = collect_climate_observations_data(
File "/Users/sashakaun/PycharmProjects/wetter2.0/venv/lib/python3.8/site-packages/wetterdienst/dwd/observations/access.py", line 82, in collect_climate_observations_data
filenames_and_files = download_climate_observations_data_parallel(remote_files)
File "/Users/sashakaun/PycharmProjects/wetter2.0/venv/lib/python3.8/site-packages/wetterdienst/dwd/observations/access.py", line 106, in download_climate_observations_data_parallel
return list(zip(remote_files, files_in_bytes))
File "/usr/local/Cellar/python#3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/concurrent/futures/_base.py", line 611, in result_iterator
yield fs.pop().result()
File "/usr/local/Cellar/python#3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/concurrent/futures/_base.py", line 432, in result
return self.__get_result()
File "/usr/local/Cellar/python#3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/concurrent/futures/_base.py", line 388, in __get_result
raise self._exception
File "/usr/local/Cellar/python#3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/Users/sashakaun/PycharmProjects/wetter2.0/venv/lib/python3.8/site-packages/wetterdienst/dwd/observations/access.py", line 124, in _download_climate_observations_data
return BytesIO(__download_climate_observations_data(remote_file=remote_file))
File "<decorator-gen-2>", line 2, in __download_climate_observations_data
File "/Users/sashakaun/PycharmProjects/wetter2.0/venv/lib/python3.8/site-packages/dogpile/cache/region.py", line 1356, in get_or_create_for_user_func
return self.get_or_create(
File "/Users/sashakaun/PycharmProjects/wetter2.0/venv/lib/python3.8/site-packages/dogpile/cache/region.py", line 954, in get_or_create
with Lock(
File "/Users/sashakaun/PycharmProjects/wetter2.0/venv/lib/python3.8/site-packages/dogpile/lock.py", line 185, in __enter__
return self._enter()
File "/Users/sashakaun/PycharmProjects/wetter2.0/venv/lib/python3.8/site-packages/dogpile/lock.py", line 94, in _enter
generated = self._enter_create(value, createdtime)
File "/Users/sashakaun/PycharmProjects/wetter2.0/venv/lib/python3.8/site-packages/dogpile/lock.py", line 178, in _enter_create
return self.creator()
File "/Users/sashakaun/PycharmProjects/wetter2.0/venv/lib/python3.8/site-packages/dogpile/cache/region.py", line 920, in gen_value
self.backend.set(key, value)
File "/Users/sashakaun/PycharmProjects/wetter2.0/venv/lib/python3.8/site-packages/dogpile/cache/backends/file.py", line 239, in set
dbm[key] = pickle.dumps(value, pickle.HIGHEST_PROTOCOL)
_gdbm.error: Database needs recovery
Thanks a lot!!
A GDBM file has been corrupted. You need to use gdbmtool to recover the database. Install gdbmtool, then run:
gdbmtool FILENAME
Where FILENAME is the name of the GDBM database. A prompt will appear, then you can enter
gdbmtool> recover summary
If the database can be recovered, it will display a summary of the recovery results, e.g.:
Recovery succeeded.
Keys recovered: 6870650, failed: 5, duplicate: 0
Buckets recovered: 64830, failed: 2
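Not part of the original answer, but once gdbmtool reports a successful recovery you can sanity-check the file from Python with the standard dbm.gnu module. The path below is a placeholder for wherever the dogpile/wetterdienst cache file actually lives on your machine:
import dbm.gnu

# Placeholder path: point this at the recovered GDBM cache file.
path = "dogpile_cache.dbm"

with dbm.gnu.open(path, "r") as db:
    count = 0
    key = db.firstkey()
    while key is not None:
        count += 1
        key = db.nextkey(key)
    print(count, "keys readable after recovery")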

MemoryError with Discord selfbot during 'bot.run'

Before you tell me, yes I am aware that selfbots can get you banned. My selfbot is for work purposes in a server with me and three others. I'm doing nothing shady or weird over here.
I'm using the following selfbot code: https://github.com/Supersebi3/Selfbot
Since I'm in about 50 servers, upon logging in I experience the following:
This carries on for several minutes, until I eventually get a MemoryError:
File "main.py", line 96, in <module>
bot.run(token, bot=False)
File "D:\Python\Python36-32\lib\site-packages\discord\client.py", line 519, in run
self.loop.run_until_complete(self.start(*args, **kwargs))
File "D:\Python\Python36-32\lib\asyncio\base_events.py", line 468, in run_until_complete
return future.result()
File "D:\Python\Python36-32\lib\site-packages\discord\client.py", line 491, in start
yield from self.connect()
File "D:\Python\Python36-32\lib\site-packages\discord\client.py", line 448, in connect
yield from self.ws.poll_event()
File "D:\Python\Python36-32\lib\site-packages\discord\gateway.py", line 431, in poll_event
yield from self.received_message(msg)
File "D:\Python\Python36-32\lib\site-packages\discord\gateway.py", line 327, in received_message
log.debug('WebSocket Event: {}'.format(msg))
MemoryError
Can anyone explain why this is happening and how I can fix it? Is there any way I can skip the chunk processing for the members of every server my selfbot account is in?
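One note beyond the original post: the paths in the traceback show a 32-bit Python (Python36-32), which caps the process at roughly 2 GB of address space, so caching the members and messages of ~50 servers runs out of room quickly; a 64-bit Python raises that ceiling. As a sketch (the exact options for skipping member chunking vary between discord.py versions, so treat this as an assumption to check against your version's docs), shrinking the internal message cache also reduces the footprint:
import discord

# Sketch: a small message cache keeps memory usage down; the default cache
# in older discord.py releases holds several thousand messages.
client = discord.Client(max_messages=100)

@client.event
async def on_ready():
    print('Logged in as', client.user)

client.run('TOKEN', bot=False)  # bot=False mirrors the call in the question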

Error during CentOS 7 installation on physical server (anaconda 21.48.22.93-1 exception report)

I am installing the CentOS 7 minimal version on a physical server from a DVD containing the ISO image. After choosing the language option, it gives me the following error:
anaconda 21.48.22.93-1 exception report
Traceback (most recent call first):
File "/usr/lib/python2.7/site-packages/block/device.py", line 719, in get_map if compare_tables(map.table, self.rs.dmTable):
File "/usr/lib64/python2.7/site-packages/block/device.py", line 838, in active self.map.dev.mknod(self.prefix+self.name)
File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 1768, in handleUdevDMRaidMemberFormat rs.activate(mknod=True)
file "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 1979, in handleUdevDeviceFormat seld.handleUdevDMRaidMemberFormat(info, device)
File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 1285, in addUdevDevice serlf.handleUdevDeviceFormat(info, device)
File "/usr/lib7python2.7/site-packages/blivet/devicetree.py", line 2295, in _populate self.addUdevDevice(dev)
file "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 2228, in populate self._populate()
File "7usr/lib/python2.7/site-packages/blivet/__init__py", line 489, in reset self.devicetree.populate(cleanupOnly=cleanupOnly)
File "/usr/lib/python2.7/site-packages/blivet/__init__py",line 184, in storagelnitialize storage.reset()
File "/usr/lib64/python2.7/threading.py", line 764, in run self.__target(*self.__args, **self.__kwargs)
File"/usr/lib64/python2.7/site-packages/anaconda/threads.py", line 227, in run threading.Thread.run(self, *args, **kwargs)
File"/usr/lib64/python2.7/site-packages/anaconda/threads.py", line 112, in wait self.raise_if_error(name)
File"/usr/lib64/python2.7/site-packages/anaconda/timezone.py", line 75, in time_initialize threadMgr.wait(THREAD_STORAGE)
File"/usr/lib64/python2.7/threading.py", line 764, in run self.__target(*self.__args,**self.__kwargs)
File"/usr/lib64/python2.7/site-packages/pythonanaconda/threads.py" line 227, in run threading.Thread.run(self,*args,**kwargs)
ValueError: invalid map 'nglish (the divide/multiply keys toggle the layout)'
The problem can arise in different scenarios:
CentOS on VirtualBox: you need to set the root password and create the user within the first 30 seconds, before the installer starts installing packages. This might be a bug (CentOS 8). Set up the user creation first and then the root password, and the installer will carry on without problems.
Pre-existing data on the hard drive: https://bugzilla.redhat.com/show_bug.cgi?id=1441891
In this case, first boot into rescue mode, then run the command dmraid -r -E /dev/sd<x>

UnknownError finalizing mapreduce job

I'm having a weird error on completion of a mapreduce job that writes to Google Cloud Storage. Has anybody seen this before?
Final result for job '158354152558......' is 'success'
....
File "/base/data/home/apps/s~app/bqmapper.360899047207944804/libs/mapreduc/handlers.py", line 539, in _finalize_job
mapreduce_spec.mapper.output_writer_class().finalize_job(mapreduce_state)
File "/base/data/home/apps/s~app/bqmapper.360899047207944804/libs/mapreduce/output_writers.py", line 571, in finalize_job
files.finalize(create_filename)
File "/base/data/home/apps/s~app/bqmapper.360899047207944804/libs/mapreduce/lib/files/file.py", line 568, in finalize
f.close(finalize=True)
File "/base/data/home/apps/s~app/bqmapper.360899047207944804/libs/mapreduce/lib/files/file.py", line 291, in close
self._make_rpc_call_with_retry('Close', request, response)
File "/base/data/home/apps/s~app/bqmapper.360899047207944804/libs/mapreduce/lib/files/file.py", line 427, in _make_rpc_call_with_retry
_make_call(method, request, response)
File "/base/data/home/apps/s~app/bqmapper.360899047207944804/libs/mapreduce/lib/files/file.py", line 252, in _make_call
_raise_app_error(e)
File "/base/data/home/apps/s~app/bqmapper.360899047207944804/libs/mapreduce/lib/files/file.py", line 186, in _raise_app_error
raise UnknownError()
UnknownError
After playing with it, I found that an open file on Cloud Storage has to be finalized in less than one hour, or it will fail with this lovely UnknownError.
I mitigated the problem by increasing the number of shards to make the mapping faster, and by changing the output_sharding strategy to "input", which creates one file per shard.
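For reference, a sketch only (not the actual code from this job): in the legacy appengine-mapreduce library, the shard count and output sharding are chosen when the job is started. Every name below (handler, entity kind, bucket) is a placeholder, and the exact parameter layout may differ between versions of the library:
from mapreduce import control

# Assumed layout for the legacy library's FileOutputWriter configuration.
control.start_map(
    name="export_to_gs",
    handler_spec="bqmapper.map_entity",        # placeholder mapper function
    reader_spec="mapreduce.input_readers.DatastoreInputReader",
    output_writer_spec="mapreduce.output_writers.FileOutputWriter",
    mapper_parameters={
        "entity_kind": "models.MyEntity",      # placeholder datastore kind
        "output_writer": {
            "filesystem": "gs",
            "gs_bucket_name": "my-bucket",     # placeholder bucket
            "output_sharding": "input",        # one output file per shard
        },
    },
    shard_count=64,                            # more shards, so each finalizes sooner
)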
