How can I update data in a JSON file while using UptimeRobot?

I am creating an Economy Discord Bot using Python and I am hosting it on Replit and keeping it online using UptimeRobot. Sometimes, when people use my bot's economy commands, the data is not updated in the JSON file. I have observed that this only happens when my UptimeRobot monitor brings my bot online and not when I manually run the code. Does anyone know how to work around this?
Here is the code I am using to update the JSON file:
with open("data.json", "w") as file:
file.write(json.dumps(data))

The issue here might be with Replit. Replit reboots your repl every once in a while, even if you have the Hacker plan or are using UptimeRobot, and sometimes the JSON file might not be saved before the reboot. In that case the file reverts to its last saved state. As far as I'm aware, there is no way to work around this on Replit itself. The only real fix is to use an external database like MongoDB.
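For example, here is a rough sketch of what the MongoDB route could look like with the pymongo driver. The connection string, database name, and field names are placeholders I made up, not anything from the question:
import os
from pymongo import MongoClient

# Assumes the pymongo package and a MongoDB connection string
# (e.g. from a free MongoDB Atlas cluster) in an environment variable.
client = MongoClient(os.environ["MONGO_URI"])
db = client["economy"]  # hypothetical database name

def set_balance(user_id, balance):
    # upsert=True creates the document if the user has no record yet
    db.users.update_one(
        {"_id": user_id},
        {"$set": {"balance": balance}},
        upsert=True,
    )

def get_balance(user_id):
    doc = db.users.find_one({"_id": user_id})
    return doc["balance"] if doc else 0
Unlike a file on the repl's disk, these writes land on the database server immediately, so a repl reboot can't roll them back.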

I would dump your JSON a little differently. P.S. I have never seen this kind of thing happen, so it might just be the way your JSON dump code is written.
with open('data.json', 'w') as f:
    json.dump(data, f, indent=4)
So we just open the JSON file (data.json, or whatever yours is called), bind it to f, and dump your data (or whatever you called it) into f. The indent=4 just makes the output cleaner; you can drop it if you want.
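For reference, a tiny before/after of what indent=4 changes, using made-up data:
import json

data = {"user": "abc", "coins": 5}
print(json.dumps(data))            # {"user": "abc", "coins": 5} on one line
print(json.dumps(data, indent=4))  # each key on its own indented line
Either form loads back identically with json.load; the indent only affects how readable the file is.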

Related

Python Discord bot has problems writing to text files

I'm currently working on a little Discord bot. To host it for free, I'm using an application on heroku.com which is connected to my GitHub. Every time I restart the bot, it gets some previously stored information from a text file (works perfectly):
f = open("example_textfile.txt", "r")
example_list = dict(json.loads(f.read()))
f.close()
Every time a list gets updated, it should overwrite the text file with the updated list like this (does NOT work):
f = open("example_textfile.txt", "w")
f.write(json.dumps(example_list))
f.close()
If I host the bot locally on my PC, everything works perfectly (then I need the path, not just the name of the file). But when I host it with Heroku, it can only read the files, not overwrite them. Does anyone know why this doesn't work, or is there any alternative? Would be great if you could help me :D (And sorry for my bad English xD, I'm not a native speaker.)
This should work:
with open("example_file.txt", "w") as f:
    json.dump(example_list, f)
To be clear about the difference between the two methods: json.dumps() does not write to a file at all; it returns the JSON as a string, which is why your code has to pass its result to f.write() yourself. json.dump() takes the dictionary and a file object and writes to the file directly.
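To make the distinction concrete, here is a minimal side-by-side; both snippets end up with the same file contents:
import json

example_list = {"coins": 100}

# json.dumps returns a string, so you write it out yourself...
with open("example_textfile.txt", "w") as f:
    f.write(json.dumps(example_list))

# ...while json.dump writes straight to the file object.
with open("example_textfile.txt", "w") as f:
    json.dump(example_list, f)
(Note that on Heroku either version will only persist until the dyno restarts, since Heroku's filesystem is ephemeral.)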

Linux server hosting .py files that are reading .txt files but can't store in variable

I have a Linux server. It is reading files in a directory and doing things with the full text of each file.
I've got some code; it retrieves the file path.
And then I'm doing this:
for file in files:
    with open(file, 'r') as f:
        raw_data = f.read()
It's reading the file just fine, and I've used this exact code outside of the server, where it worked as expected.
In this case, when run on the server, the above code is spitting out all the text to the terminal, but then raw_data == None.
Not the behavior I'm used to. I imagine it's something very simple, as I am new to Linux in general.
But I want the text in the file to be stored in the raw_data variable as a string.
Is there a special way I have to do this on Linux? Googling so far has not helped much, and I feel this is likely a VERY simple problem.
User error.
I thought, due to my noob status in Linux, that perhaps the environment was causing weird behavior. But buried deep in the functions that use the data from the files was a print statement I had used a while back for testing. That was causing the output to the screen.
As for the None being returned: it came from another subfunction that had a failing try/except block in it. The variable being referenced had the same name (raw_data), so I thought it came from the file read, but it was actually from elsewhere.
Thanks to all who stopped by. User error on this one.

How to write large JSON data?

I have been trying to write a large amount (>800 MB) of data to a JSON file; I did a fair amount of trial and error to get this code:
import json

def write_to_cube(data):
    # load the whole existing file, merge in the new data, then rewrite it all
    with open('test.json') as file1:
        temp_data = json.load(file1)
    temp_data.update(data)
    with open('test.json', 'w') as f:
        json.dump(temp_data, f)
To run it, just call the function: write_to_cube({"some_data": data})
Now the problem with this code is that it's fast for a small amount of data, but the trouble starts when test.json holds more than 800 MB. When I try to update or add data to it, it takes ages.
I know there are external libraries such as simplejson or jsonpickle; I am not sure how to use them.
Is there any other way to solve this problem?
Update:
I am not sure how this can be a duplicate; the other questions say nothing about writing or updating a large JSON file, only about parsing it:
Is there a memory efficient and fast way to load big json files in python?
Reading rather large json files in Python
Neither of the above makes this question a duplicate. They say nothing about writing or updating.
So the problem is that you have a long-running operation. Here are a few approaches I usually take:
Optimize the operation: this rarely works. I wouldn't count on some superb library that parses the JSON a few seconds faster.
Change your logic: if the purpose is to save and load data, you probably want to try something else, like storing your object in a binary or key/value format instead (see the sketch after this list).
Threads and callbacks, or deferred objects in some web frameworks: when an operation takes longer than a request can wait, we can do it in the background (some cases are: zipping files and then sending the zip to the user's email, or sending SMS by calling a third-party API...).
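As a concrete illustration of the second approach, here is a minimal sketch using the standard library's shelve module; the filename is made up, and it assumes your top-level keys are strings:
import shelve

def write_to_cube(data):
    # shelve keeps a persistent key/value store on disk, so an update
    # only touches the keys you pass instead of rewriting the whole file
    with shelve.open('test_db') as db:  # hypothetical filename
        db.update(data)

write_to_cube({"some_data": [1, 2, 3]})
This avoids the load-everything/rewrite-everything cycle that makes the 800 MB JSON file so slow to update.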

Downloading CSV file from website/server with Python 3.X

Programming beginner here. For my very first project I was able to make a quick Python script that downloaded files from this website:
http://www.wesm.ph/inner.php/downloads/market_prices_&_schedules
I noticed that the link address of the to-be-downloaded file followed a pattern:
(http://wesm.ph/admin/downloads/download.php?download=../csv/mpas/XXXXX/XXXX.csv)
With some string concatenation and using the datetime module, I was able to build the URL string of the csv file. After that, I would just use:
urllib.request.urlopen(HTMLlink).read()
and save it with something like:
import csv

with open('output.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerows(fullList)
It used to work, but now it doesn't. I noticed, however, that whenever I clicked the 'Generate Report' button and THEN ran the script, the script would generate the output file. I'm not sure why this works. Is there a way to send a request to their server to generate the actual file? Which module or commands should I use?
Most likely those files are only temporarily stored on that webserver after you click 'Generate Report'.
In order to generate new ones, there might even be a check (in JavaScript, or using cookies or a session ID) to see whether the request for the new link/file comes from a human or a bot.
You might also want to check the HTTP return code (or even the full returned headers) to see what exactly the server is answering.
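For instance, a minimal sketch of that check using the same urllib approach as the question; the URL is just the placeholder pattern from above, not a working link:
import urllib.request

url = "http://wesm.ph/admin/downloads/download.php?download=../csv/mpas/XXXXX/XXXX.csv"

with urllib.request.urlopen(url) as response:
    print(response.status)   # 200 means OK; 4xx/5xx raise urllib.error.HTTPError instead
    print(response.headers)  # the full headers the server sent back
    # If the server returned an HTML error page instead of a CSV,
    # the Content-Type header will usually give it away.
    if response.headers.get_content_type() == "text/csv":
        data = response.read()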

Which one is better, INI or JSON, to store hard-coded data on the server?

I have some data which I need to hard-code in my project (quite a lot of data); it is essentially the settings of my form. Its structure is like this:
{X : [{a,b}, {c,d}], Y:[{e,f},{g,h}], Z: [{i,j},{k,l}], ...}
What is a good way to store it hard-coded: in JSON, in INI, or something else?
Keeping all this in settings.py is not good, I guess!
If it requires nesting, then you can't use INI files.
Other options are JSON, pickling, or a key/value store (like memcache/redis). If it will require modifications, then don't use the disk; doing so will also make your code incompatible with many PaaS providers that do not have a "filesystem" you can use.
My recommendations:
Use a k/v store (like memcache/redis). You don't need to serialize (convert) your data, the APIs are very straightforward, and if you go with redis you can store complicated data structures easily. Plus, it will be very, very fast (a sketch follows below).
json and pickling have the same problem, in that you need to use the filesystem. Hits to the filesystem will slow your execution time down, and you will have problems if you want to deploy to Heroku or similar, as they don't provide filesystem access. The other problem is that you may need to write your own conversion code (serializers) if you plan on storing custom objects that can't easily be converted. If you want to use JSON, my recommendation is to store it in a database.
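A minimal sketch of the k/v route mentioned above, assuming the redis-py package and a Redis server on localhost; the key name is a placeholder:
import json
import redis

r = redis.Redis(host="localhost", port=6379)

settings = {"X": [{"a": "b"}, {"c": "d"}]}

# Store the nested structure as a JSON string under one key...
r.set("form_settings", json.dumps(settings))

# ...and decode it back when the application needs it.
loaded = json.loads(r.get("form_settings"))
For nested data like yours, serializing the whole structure to one JSON value is simpler than mapping it onto Redis hashes or lists.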
It depends on your definition of "quite big data" and how frequently it will be changed.
If your settings don't change very often you could use a simple file in the format you like the most. If you go this route I'd recommend taking a look at this project (it supports multiple formats: dict, json, yaml, .ini)
If you'll be constantly making changes to those settings and your data is actually very big (several thousand lines or something like that), you'll probably want to use a proper database or some other storage which provides a better interface for programmatically editing those settings (see the sketch below). If you're already using some kind of database for your application's non-settings data, why not use it for this as well?
It's true you could read huge settings from a file, but it'll probably be easier to interact with those settings if they're stored in a database.
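If you do go the database route, here is one hedged sketch using the standard library's sqlite3 module, storing the nested structure as JSON text; the file and table names are hypothetical:
import json
import sqlite3

conn = sqlite3.connect("settings.db")
conn.execute("CREATE TABLE IF NOT EXISTS settings (key TEXT PRIMARY KEY, value TEXT)")

def save_setting(key, value):
    # keep the nested structure as JSON text in a single column
    conn.execute("INSERT OR REPLACE INTO settings VALUES (?, ?)",
                 (key, json.dumps(value)))
    conn.commit()

def load_setting(key):
    row = conn.execute("SELECT value FROM settings WHERE key = ?", (key,)).fetchone()
    return json.loads(row[0]) if row else None

save_setting("X", [{"a": "b"}, {"c": "d"}])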
Hope this helps.
In Python, the simplest ways to do this are using straight Python code, pkl, or JSON.
JSON is very easy to load:
import json
with open('data.json', 'r') as f:
    data = json.load(f)
And so is pkl:
import pickle
with open('data.pkl', 'rb') as f:  # pickle is a binary format, so open in 'rb'
    data = pickle.load(f)
To generate the pkl file:
with open('data.pkl', 'wb') as f:  # and write in binary mode as well
    pickle.dump(your_data, f)
