I encountered a weird situation and I need your help.
I am developing a RESTful API using Python 3.7 with Flask and SQLAlchemy. The application is hosted on AWS EC2 and the database on AWS RDS (MySQL).
I also have an application running on a Raspberry Pi that calls the API and communicates with the EC2 server.
Sometimes I see a very long transaction time between the Raspberry Pi and my API server. Usually I kill the process on the Raspberry Pi, restart it, and debug to see where things went wrong. However, when I restart the process I get an error message related to my database, and when I check the database I find that all my tables are gone; nothing is left. I am sure there are no DROP TABLE statements in my code and I have no idea why this happens.
Has anyone encountered the same situation? If so, please tell me the root cause and a solution for this issue.
By the way, there is no error message recorded in the MySQL log or in my REST API.
Thank you and good day.
To my eye this looks like magic, and there is too much guessing involved to point a finger properly.
But there is an easy safeguard so it does not happen again in the future. A good practice is to separate the admin user (which can do anything, including schema migrations) from the connect user (which can INSERT, UPDATE, DELETE, and SELECT, but may not run any DDL). Only the connect user should be used by the applications. With that in place, no table drop could be performed even if the application ran berserk; a sketch of such a setup follows.
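A minimal sketch of that separation, assuming a MySQL database named myapp on RDS; every user name, password, and host here is a placeholder:

    # Sketch only: create a restricted "connect" user for the Flask application.
    from sqlalchemy import create_engine, text

    admin_engine = create_engine("mysql+pymysql://admin:admin_pw@rds-host/myapp")

    with admin_engine.connect() as conn:
        # The application user gets DML privileges only -- no DROP/ALTER/CREATE.
        conn.execute(text("CREATE USER 'app_user'@'%' IDENTIFIED BY 'app_pw'"))
        conn.execute(text(
            "GRANT SELECT, INSERT, UPDATE, DELETE ON myapp.* TO 'app_user'@'%'"
        ))

    # The Flask app then connects as the restricted user, e.g.:
    # SQLALCHEMY_DATABASE_URI = "mysql+pymysql://app_user:app_pw@rds-host/myapp"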
Enabling query logging might help too: see "How to log PostgreSQL queries?" (for MySQL on RDS, as used here, the equivalent is the general query log, which can be switched on through the DB parameter group).
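On the application side you can also make SQLAlchemy echo every statement it issues; a small Flask-SQLAlchemy sketch, with a placeholder connection string:

    from flask import Flask
    from flask_sqlalchemy import SQLAlchemy

    app = Flask(__name__)
    app.config["SQLALCHEMY_DATABASE_URI"] = "mysql+pymysql://app_user:app_pw@rds-host/myapp"
    app.config["SQLALCHEMY_ECHO"] = True  # log every SQL statement SQLAlchemy emits

    db = SQLAlchemy(app)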
I realize similar questions have been asked, but they have all been about a specific problem, whereas I don't even know how I would go about doing what I need to.
That is: from my Django web app I need to scrape a website periodically while the app runs on a server. The first options I found were django-background-tasks (which doesn't seem to work the way I want it to) and celery-beat, which, if I understood correctly, recommends getting another server.
I figured just running a separate thread would work, but I can't seem to make that work without it interrupting the server (and vice versa), and it isn't the "correct" way of doing it.
Is there a way to run a task periodically without needing a separate server or a request being made to the Django app?
celery-beat, which, if I understood correctly, recommends getting another server.
You can host Celery (and any other needed components) on the same server as your Django app. They would be entirely separate processes.
It's not an uncommon setup to have a Django app, Celery worker(s), and a message queue all bundled into the same server deployment. Deploying on separate servers may be ideal, just as it would be ideal to distribute your Django app across many servers, but it is by no means necessary.
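As a rough sketch of what that can look like (the project name, task path, and Redis broker are assumptions), a celery.py next to your Django settings with a beat schedule:

    import os
    from celery import Celery

    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

    app = Celery("myproject", broker="redis://localhost:6379/0")
    app.config_from_object("django.conf:settings", namespace="CELERY")
    app.autodiscover_tasks()

    app.conf.beat_schedule = {
        "scrape-every-hour": {
            "task": "scraper.tasks.scrape_site",  # assumed task path
            "schedule": 60 * 60,                  # seconds
        },
    }

You would then run "celery -A myproject worker" and "celery -A myproject beat" alongside the Django process on the same machine.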
I'm not sure if this is the "correct" way, but it was a cheap and easy way for me to do it. I just created custom Django management commands and have them run via a scheduler such as cron; in my case I used the Heroku Scheduler for my app.
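For illustration, a minimal management command of that kind (the app name, file path, and target URL are made up here), placed at scraper/management/commands/scrape_site.py:

    import requests
    from django.core.management.base import BaseCommand


    class Command(BaseCommand):
        help = "Scrape the target site and store the results"

        def handle(self, *args, **options):
            response = requests.get("https://example.com/data")  # placeholder URL
            response.raise_for_status()
            # ... parse the page and save to the database here ...
            self.stdout.write(self.style.SUCCESS("Scrape completed"))

The scheduler then just calls "python manage.py scrape_site" at whatever interval you need.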
My Django app connects to a PostgreSQL 9.x database. A legacy Java app is connected to the same database, so two different applications use it.
Is it possible for PostgreSQL to notify my Django (2.2.x) app when a table changes? I know that I could create a cron job to check the tables periodically, but the server administrator does not allow me to run cron jobs (why?), and the Java developer is too busy with other things to rewire his code (so I can't ask him to send me URL requests after table changes).
Any other idea would be much appreciated. Thanks.
EDIT: After some Google searching I found this tool: pgsql-http, but the database admin won't install it.
You can use triggers to NOTIFY when INSERTs, UPDATEs, or DELETEs occur, but something then needs to LISTEN for the notifications on a persistent connection. I don't know how easy it is to arrange for that under Django.
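To make the idea concrete, a sketch under assumed names (channel table_changed, table my_table, throwaway connection details); the trigger side is plain PostgreSQL and the listener uses psycopg2's standard LISTEN pattern:

    # On the PostgreSQL side (run once, e.g. via psql):
    #   CREATE OR REPLACE FUNCTION notify_change() RETURNS trigger AS $$
    #   BEGIN
    #     PERFORM pg_notify('table_changed', TG_TABLE_NAME);
    #     RETURN NEW;
    #   END;
    #   $$ LANGUAGE plpgsql;
    #
    #   CREATE TRIGGER my_table_notify AFTER INSERT OR UPDATE OR DELETE
    #     ON my_table FOR EACH ROW EXECUTE PROCEDURE notify_change();

    import select
    import psycopg2

    conn = psycopg2.connect("dbname=shared_db user=django_user")  # placeholder DSN
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

    cur = conn.cursor()
    cur.execute("LISTEN table_changed;")

    while True:
        # Wait for the socket to become readable, then drain pending notifications.
        if select.select([conn], [], [], 60) != ([], [], []):
            conn.poll()
            while conn.notifies:
                notify = conn.notifies.pop(0)
                print("Change on table:", notify.payload)
                # ... kick off whatever Django-side work is needed here ...

The listener has to run as its own long-lived process (a management command started under a process supervisor is one option), since a normal request/response cycle won't keep the connection open.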
I know that I could create a cron job to check the tables periodically, but the server administrator does not allow me to run cron jobs
You can run cron on any other box on the network you have access to, and have it use curl or wget to hit an endpoint on the production Django app, which then does whatever it is you want done.
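A small sketch of such an endpoint (the URL, token, and the work done are all placeholders); the external cron job then only needs curl or wget:

    from django.http import HttpResponse, HttpResponseForbidden

    CRON_TOKEN = "change-me"  # shared secret so only your own cron job can trigger this


    def check_tables(request):
        if request.GET.get("token") != CRON_TOKEN:
            return HttpResponseForbidden("invalid token")
        # ... check the tables / run the periodic work here ...
        return HttpResponse("ok")

    # urls.py (assumed): path("tasks/check-tables/", check_tables)
    # Crontab on the other box, e.g. every 5 minutes:
    # */5 * * * * curl -s "https://your-app.example.com/tasks/check-tables/?token=change-me"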
I am currently creating an application that will use Python Flask for the back end and API, and PostgreSQL as the database to store my data in JSON format. My plan is to have a JavaScript front end that interacts with the API, which will pull the relevant information from my database.
How do I package the database with the program so that, if a fresh copy is pulled from GitHub, a user has everything needed to host and use the service? I am still a new developer and am having difficulty taking my hobbyist code and presenting it in a clean, organized way.
Thank you for all help in advance.
Though your question leaves quite a few options open, here are two things you could do:
If you assume your users can install a PostgreSQL database themselves: dump a database that contains the minimum required to run your application (using pg_dump). When your application starts on a user's server, it should detect that the database it connects to is empty, which should trigger an import of your data (see the sketch after this list). The only thing your users then need to do is fill out their database connection details.
If your users don't know anything about configuring servers: you could create a Docker image containing your Python code and PostgreSQL. This package will contain all of your application's dependencies and runs anywhere. Admittedly, this is a bit more 'advanced' and could lead to other difficulties, both on your side and on your users' side.
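For the first option, a rough sketch of the empty-database check (the environment variable and dump file name are assumptions; the dump is a plain-SQL pg_dump output shipped in the repository):

    import os
    import subprocess

    from sqlalchemy import create_engine, inspect

    DATABASE_URL = os.environ["DATABASE_URL"]  # supplied by the user
    SEED_DUMP = "seed.sql"                     # created earlier with pg_dump

    engine = create_engine(DATABASE_URL)

    # No tables yet means a fresh database: load the bundled dump with psql.
    if not inspect(engine).get_table_names():
        subprocess.run(["psql", DATABASE_URL, "-f", SEED_DUMP], check=True)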
I'm writing an installed desktop app that I would like users to try out. I'd like to launch a pre-alpha release and collect some feedback, especially about any uncaught exceptions that might be thrown; as the developer, I would like to know about those the instant they happen.
i.e. I would like the installed desktop app to automatically submit relevant log entries to a remote server so that I can inspect them and fix the error.
I've considered cloud-based services (they provide a nice dashboard interface, which is ideal), but they're not really what I need:
Airbrake.io — quite pricey, geared towards webapps and servers
Loggly — has a forever free plan, but for servers only, based on syslog monitoring. I cannot expect users to install a syslog client as well as my application
I have never done centralized logging over internet connections, only within a local network. I used the standard SocketHandler (http://docs.python.org/2/library/logging.handlers.html#sockethandler) and it worked for me.
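A small sketch of the client side of that approach (the host name is a placeholder; the receiving end has to be a small server that unpickles the records, as in the logging cookbook's socket server example):

    import logging
    import logging.handlers

    logger = logging.getLogger("myapp")
    logger.setLevel(logging.ERROR)

    # DEFAULT_TCP_LOGGING_PORT is 9020; any port you control works.
    logger.addHandler(logging.handlers.SocketHandler(
        "logs.example.com", logging.handlers.DEFAULT_TCP_LOGGING_PORT
    ))

    try:
        1 / 0  # stand-in for the app's real work
    except Exception:
        logger.exception("Uncaught error in the desktop app")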
Other alternatives may be:
http://code.google.com/p/python-loggingserver/
https://papertrailapp.com/
http://pyfunc.blogspot.de/2013/08/centralized-logging-for-distributed.html
Saving to a regular local log file on crash may also be a solution: on the next startup of the app, check whether the log contains errors and, if so, send the log to your email.
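A rough sketch of that last idea (the SMTP host, addresses, and file name are placeholders):

    import logging
    import os
    import smtplib
    from email.message import EmailMessage

    LOG_FILE = "myapp-errors.log"


    def report_previous_crash():
        """On startup, mail the previous session's error log (if any) to the developer."""
        if not os.path.exists(LOG_FILE) or os.path.getsize(LOG_FILE) == 0:
            return
        msg = EmailMessage()
        msg["Subject"] = "MyApp crash report"
        msg["From"] = "app@example.com"
        msg["To"] = "dev@example.com"
        with open(LOG_FILE) as f:
            msg.set_content(f.read())
        with smtplib.SMTP("smtp.example.com") as server:
            server.send_message(msg)
        os.remove(LOG_FILE)


    # During the session, only errors go to the local file.
    logging.basicConfig(filename=LOG_FILE, level=logging.ERROR)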
I've added new models and pushed them to our staging server, then ran syncdb to create their tables, and it locks up. It gets as far as 'Create table photos_photousertag', and the Postgres output shows the notice for the creation of 'photos_photousertag_id_seq', but otherwise I get nothing on either side. I can't Ctrl+C the syncdb process and I have no indication of what route to take from here. Has anyone else run into this?
We use Postgres, and while we haven't run into this particular issue, there are some steps you may find helpful in debugging:
a. What version of Postgres and psycopg2 are you using? For that matter, what version of Django?
b. Try running the syncdb command with the "--verbosity=2" option to show all output.
c. Find the SQL that Django is generating by running the "manage.py sql <app_label>" command. Run the CREATE TABLE statements for your new models in the Postgres shell and see what develops.
d. Turn error logging, statement logging, and server status logging in Postgres way up to see if you can catch any particular messages.
In the past, we've usually found that either option b or option c points out the problem.
I just experienced this as well, and it turned out to just be a plain old lock on that particular table, unrelated to Django. Once that cleared the sync went through just fine.
Try querying the table that the sync is getting stuck on and make sure that's working correctly first.
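If you want to check for such a lock directly, a quick sketch against the system catalogs (connection details are placeholders; on PostgreSQL 9.2+ the activity column is "query", on older versions it is "current_query"):

    import psycopg2

    conn = psycopg2.connect("dbname=mydb user=postgres")  # placeholder DSN
    cur = conn.cursor()

    # Sessions that are waiting on a lock, and the statements holding them up.
    cur.execute("""
        SELECT l.pid, l.mode, l.granted, a.query
        FROM pg_locks l
        JOIN pg_stat_activity a ON a.pid = l.pid
        WHERE NOT l.granted;
    """)
    for pid, mode, granted, query in cur.fetchall():
        print(pid, mode, granted, query)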
Strange here too, but simply restarting the PostgreSQL service (or server) solved it. I had tried manually pasting the table-creation code into psql as well, but that wasn't solving it either (well, there was no way it could if it was a lock issue), so I just restarted:
systemctl restart postgresql.service
that's on my SUSE box.
I'm not sure whether merely reloading the service/server would lift existing table locks too?