I have created a Django app that I uploaded to PythonAnywhere.
This app uses quite a big database that I have to update every day, and I want this database to live on my computer only.
Inside one of my Django views I created a TCP client; after the user types some information into the app, this client sends a query to the database at the address where my server is running (my computer's IP).
Everything worked well on the local network. I could connect to the database from one computer to another and receive an answer from it. Unfortunately, I am unable to do it online through PythonAnywhere. I have read that it's currently impossible to do this on that site, and I am looking for some alternative way to solve the problem.
Long story short: I am looking for a way to deploy my Django website and connect it to a server running on my computer.
If there is some other, easier way to do it, please advise, but I would rather keep my database on my computer.
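For reference, the TCP client described above can be sketched in plain Python. The host, port, and the newline-free UTF-8 request/reply protocol here are assumptions for illustration; the real listener on your computer would define its own protocol:

```python
import socket

def query_database(host, port, message, timeout=5.0):
    """Send one query string to the remote database bridge and return its reply.

    host/port point at the machine running the database listener;
    the plain-text protocol used here is only an illustration.
    """
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(message.encode("utf-8"))
        sock.shutdown(socket.SHUT_WR)  # tell the server the request is complete
        chunks = []
        while True:
            chunk = sock.recv(4096)
            if not chunk:              # server closed the connection: reply done
                break
            chunks.append(chunk)
    return b"".join(chunks).decode("utf-8")
```

For the deployment problem itself, the usual workaround when the host cannot reach your home machine directly is a tunnel (for example an SSH reverse tunnel or a tunneling service) that exposes your local database port to the deployed app.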
Thank you.
Related
I have Python registration and login code using Tkinter.
But how and where can I save the recorded data so that a person who created an account on one computer can log in on another?
On Google, I only find people saving the data in a local database (a .db file on the computer) or in a .txt file.
Thanks in advance!
For a .db file you can use the built-in sqlite3 module, and working with .txt files is built in as well.
For a case like this, though, I would recommend hosting your data in the cloud rather than in a local file; check out MongoDB.
You can use a sqlite3 database, but that would not solve your problem.
The database or any other form of data source must be stored on a server that you can access. This can be on the internet or on the local network. (You didn't specify your environment.)
You could access this data source over HTTPS (don't use plain HTTP, as it is clear text and not a good idea for password requests), connect to an online database, or write some kind of socket server that you can connect to and get the needed information.
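A minimal sketch of the socket-server option, assuming a line-based protocol and an in-memory user store standing in for the real database (a production version would sit behind TLS, per the warning above):

```python
import json
import socketserver

# Stand-in data; a real server would look these up in a database.
# "x" is a fake placeholder, not a real password hash.
USERS = {"alice": {"password_hash": "x"}}

class LookupHandler(socketserver.StreamRequestHandler):
    """Answer one username lookup per connection as newline-delimited JSON."""
    def handle(self):
        username = self.rfile.readline().strip().decode("utf-8")
        record = USERS.get(username)  # None (JSON null) if unknown
        self.wfile.write(json.dumps(record).encode("utf-8") + b"\n")

def make_server(host="127.0.0.1", port=0):
    """port=0 lets the OS pick a free port (convenient for testing)."""
    return socketserver.ThreadingTCPServer((host, port), LookupHandler)
```

A client would connect, send a username followed by a newline, and read back one JSON line.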
I have just begun working on a complicated project with Django, and as a newbie I need help choosing a communication technology.
The problem is the following :
I have a Django server that receives data from different devices via PUT/POST requests.
The server stores the data.
I need to send the data to another server whenever the data are updated on my Django server.
For now, the other server sends a GET request every second, but that is not a good approach: in the future there will be many other servers making these GET requests, and if I don't find a solution, my Django server will be saturated.
So I need a technology that sends a signal (or the data itself) to the other server only when the data are updated on my Django server.
I guess I could use WebSockets to do this, but I'm not sure it is the only way.
Do you have any clue how I can solve my problem, and whether WebSockets are the right technology for it?
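The push model being asked about can be sketched in plain Python. The callback here stands in for whatever transport is chosen in the end (an HTTP POST to the other server, a WebSocket send, or a Django post_save signal handler); the key point is that subscribers are notified only when the data actually change, instead of polling every second:

```python
class UpdateNotifier:
    """Push-style notifier: invoke subscribers only when data actually change."""

    def __init__(self):
        self._subscribers = []
        self._last = None

    def subscribe(self, callback):
        """Register a callable to receive each changed value."""
        self._subscribers.append(callback)

    def update(self, data):
        """Store new data; notify subscribers only if it differs from the last value.

        Returns True if a notification was sent, False if nothing changed.
        """
        if data == self._last:  # no change -> no traffic at all
            return False
        self._last = data
        for callback in self._subscribers:
            callback(data)
        return True
```

With this shape, a hundred downstream servers cost nothing while the data are idle, which is exactly the saturation problem the per-second GET polling creates.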
Thanks
I am attempting to get my app to connect to my database on RDS; I am also using NGINX. When connecting to the EC2 instance remotely using a terminal, I can connect to the database there fine. It is on the public-facing side where the error exists. I followed a guide given to me by a coding school step by step. The initial login/registration page will load, but if I try to create an account or log in, I get a 500 Internal Server Error.
I set up my security group correctly as far as inbound rules go. Are there any outbound rules I need on the EC2 instance for it to be able to contact the RDS database from the public IP?
Thank you
Note: I am not getting graded on this at this point, I already finished the Python stack, this is just something I still want to figure out. I don't like leaving something unfinished. I've taken the time and read many articles, watched videos, gone through AWS documentation and still cannot figure the issue out.
In this particular instance, the issue turned out to be that the database was not located on RDS; it was on the EC2 instance along with the Flask app. I just needed to go into the mysqlconnection.py file, change the host to 'localhost' and the username to 'root', and set the password.
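The fix amounts to a settings change like the following. All names and values here are illustrative placeholders, not the actual mysqlconnection.py contents:

```python
# Hypothetical contents of mysqlconnection.py; names and values are illustrative.
DB_CONFIG = {
    "host": "localhost",           # the database lives on the EC2 box, not on RDS
    "user": "root",
    "password": "your-password",   # placeholder: keep real credentials out of code
    "db": "your_schema",
    "port": 3306,
}

def connect(config=DB_CONFIG):
    """Open a MySQL connection; pymysql is imported lazily so this
    module still parses on machines without the driver installed."""
    import pymysql
    return pymysql.connect(**config)
```

If the database later does move to RDS, only "host", "user", and "password" need to change to the RDS endpoint and its credentials.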
Thanks
I am creating an inventory application for my company. The app has its inventory information stored in RDS. I am using MySQL and pymysql to connect. Throughout development I have had no issue connecting from the laptop that I created the database from. I want to know how to allow other computers with the application to connect. Is there a way to avoid adding each individual IP address to a security group? I would like those with the application downloaded to have access without requiring additional login credentials.
When I use the application on my home computer I receive an error when trying to connect to the database.
pymysql.connect(db=dbname, host=host, port=port, user=user, password=password)
Side-note on security:
It is typically a very bad idea to grant remote applications direct access to your database, especially without giving each user/app their own password.
You are effectively opening your database to anyone that has the credentials, and you are including those credentials in the app itself. Somebody could obtain those credentials and, presumably, do quite a bit of damage to the contents of the database.
Also, you are locking in your database schema, with no ability to change it in the future. Say you have a table with particular columns and your application accesses that table directly. If you later modify the table, you would need to simultaneously update every application that is using the database. That's not really feasible if other people are running the application on their own systems.
The better approach is to expose an API and have the applications use the API to access the data:
The API should manage authentication, so you can determine who to allow in and, more importantly, track who is doing what. This will also avoid the problem of having to add each individual IP address of users, since you will be managing an authentication layer.
The API will apply a layer of business logic rather than allowing the remote application to do whatever it wishes. This stops remote users being able to do things like deleting the entire database. It also means that, instead of remote apps simply passing an SQL statement, they will need to pass information through the defined API (typically in the form of Action + Parameters).
It provides a stable interface into the application allowing you to make changes in the backend (eg changing the contents of a table) while still presenting the same interface to client applications.
Rule of thumb: Only one application server should be directly accessing a given database. You might have multiple servers running the same software to provide high availability and to support a high load of traffic, but don't let the client apps directly access the database. That's what makes a "3-tier" application so much better than a "2-tier" application. (Client - Server - Database)
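The API layer described above can be sketched with nothing but the standard library. The token store, inventory data, endpoint path, and header name are all assumptions for illustration; a real deployment would use a proper framework (Django/Flask), TLS, and a per-user credential store:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical token store and inventory data; a real app would keep
# these in a database, with one credential per user or installation.
API_TOKENS = {"secret-token-1": "warehouse-app"}
INVENTORY = {"widget": 42}

class InventoryAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        token = self.headers.get("X-Api-Token", "")
        if token not in API_TOKENS:            # authentication layer
            self.send_error(401, "invalid token")
            return
        if self.path.startswith("/stock/"):    # defined action, not raw SQL
            item = self.path.split("/stock/", 1)[1]
            body = json.dumps({"item": item, "count": INVENTORY.get(item, 0)})
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body.encode("utf-8"))
        else:
            self.send_error(404)

    def log_message(self, *args):              # silence per-request logging
        pass

def run_api(port=0):
    """Start the API in a background thread; port=0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), InventoryAPI)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Note how the client never sees the database: it can only perform the actions the API defines (here, a stock lookup), and every request is tied to a token you can revoke, which also solves the per-IP security-group problem.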
I'm trying to build a Django app that can monitor and interact with a remote database (only in a basic way: just performing a look-up and occasionally making a small change to the remote data); it also sometimes has to store the remote data in its own database.
The website which sits on the remote database is a community website, and anyone without an account is allowed to post on the website's forums. I want the app to be able to check the database every now and then to see for any updates in the discussions. The site gets at least 100 posts an hour and since anyone is allowed to post on the forums without an account, it occasionally gets spammed, but unfortunately the CMS that is being used does not have a good anti-spam system set up.
The only way I can think of at the moment is to write a Python script that imports MySQLdb, connects to the remote database (MySQL) server, and selects all the posts made in the last X minutes. Using a function that calculates the probability of a post being spam, it can then talk to the remote database again and flag the candidates so they are not displayed on the website. I can have this script run "every now and then" using cron.
The problem here is a lack of control. I want a user interface that shows all the spam candidates on a single webpage, with an "unflag" button so accidentally flagged posts can be shown on the website again. This means I'll probably be better off writing a Django web app than a single Python script that just flags spam candidates.
How would I have a django app or perhaps a function within that app (which can perform all actions that the stand-alone python script as described above can perform) to run automatically every now then (say every minute)?
Maybe you should try django-celery?
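What django-celery (today, Celery with its Django integration plus celery beat) provides is a periodic-task scheduler living inside your app. The pattern can be sketched in plain Python; here threading.Timer stands in for celery beat, and the trivial keyword check stands in for the real spam-probability function:

```python
import threading

def flag_spam_candidates(posts, is_spam):
    """Return the ids of posts classified as spam.

    `posts` is a list of dicts and `is_spam` a predicate; in the real app
    this would query the forum database and update a "flagged" column.
    """
    return [post["id"] for post in posts if is_spam(post)]

class PeriodicTask:
    """Minimal stand-in for celery beat / cron: run `fn` every `interval` seconds."""

    def __init__(self, interval, fn):
        self.interval = interval
        self.fn = fn
        self._timer = None

    def _run(self):
        self.fn()
        self.start()  # reschedule the next run

    def start(self):
        self._timer = threading.Timer(self.interval, self._run)
        self._timer.daemon = True
        self._timer.start()

    def stop(self):
        if self._timer:
            self._timer.cancel()
```

With Celery you would instead decorate the flagging function as a task and register it on a one-minute schedule; the benefit over bare cron is that the task shares the Django app's code and models, so the unflag UI and the flagging job operate on the same data layer.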