Remote PostgreSQL is very slow - Python

I run .py scripts that use the Django ORM and connect to a PostgreSQL server on another server.
Both servers run Ubuntu 20.04.
Running the same file takes:
2-3 seconds on the server that hosts PostgreSQL
8-12 seconds on the other server
When I run the .py file with more processes it can take 20 seconds, yet running the same script at the same time on the PostgreSQL server still takes 2-3 seconds.
I tried:
Turning off the firewall on both servers (sudo ufw disable)
Changing the Postgres config and restarting the Postgres server
Using pgBouncer
I also checked the internet speed on the servers and it is normal. (See the query-counting sketch below.)
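Since the slowdown grows with the number of processes, one plausible explanation (an assumption on my part, not something stated above) is that the script issues many small queries and each one pays the network round trip when the database is remote; the classic N+1 pattern is nearly free locally but multiplies latency over the wire. A minimal sketch for counting queries and time, assuming DEBUG=True so Django records executed SQL, and a hypothetical app/model name:

import time
from django.db import connection, reset_queries
from myapp.models import Item  # hypothetical model, for illustration only

reset_queries()
start = time.perf_counter()
items = list(Item.objects.all())  # substitute the real queryset
elapsed = time.perf_counter() - start
# With DEBUG=True, Django keeps a log of every SQL statement it ran
print(f"{len(connection.queries)} queries in {elapsed:.2f}s")

If the query count is large, select_related(), prefetch_related(), or bulk operations will usually help far more than server-side tuning.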
This is my postgresql.conf:
# Generated by PGConfig 2.0 beta
## http://pgconfig.org
# Memory Configuration
shared_buffers = 2GB
effective_cache_size = 6GB
work_mem = 41MB
maintenance_work_mem = 512MB
# Checkpoint Related Configuration
min_wal_size = 512MB
max_wal_size = 2GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
# Network Related Configuration
listen_addresses = '*'
max_connections = 1000
# Storage Configuration
random_page_cost = 1.1
effective_io_concurrency = 200
# Worker Processes
max_worker_processes = 8
max_parallel_workers_per_gather = 4
max_parallel_workers = 8
# Logging configuration for pgbadger
logging_collector = on
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 0
lc_messages = 'C'
# Adjust the minimum time to collect data
log_min_duration_statement = '10s'
log_autovacuum_min_duration = 0
# 'csvlog' format configuration
log_destination = 'csvlog'
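To separate raw network latency from query cost, it can also help to time a trivial statement in a loop. A rough sketch, assuming psycopg2 and placeholder connection details:

import time
import psycopg2

conn = psycopg2.connect(host="db.example.com", dbname="mydb",
                        user="myuser", password="secret")  # placeholders
cur = conn.cursor()
start = time.perf_counter()
for _ in range(100):
    cur.execute("SELECT 1")
    cur.fetchone()
elapsed = time.perf_counter() - start
print(f"avg round trip: {elapsed / 100 * 1000:.2f} ms")
conn.close()

If the average round trip is a few milliseconds, a script issuing thousands of queries will be seconds slower remotely no matter how the server is tuned.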
In pg_hba.conf I just added one line (the last one below):
#
# Database administrative login by Unix domain socket
local all postgres peer
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all peer
# IPv4 local connections:
host all all 127.0.0.1/32 md5
# IPv6 local connections:
host all all ::1/128 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
local replication all peer
host replication all 127.0.0.1/32 md5
host replication all ::1/128 md5
host all all all md5
Is this speed normal, or is there something I can configure to improve it?

Related

(psycopg2.OperationalError) FATAL: password authentication failed for user "myuser"

I am trying to build a simple web app using Flask and PostgreSQL. I was using SQLAlchemy and psycopg2 to complete the task, but I have been stuck on this error for a long time. I have my configuration set as follows:
app.config['SQLALCHEMY_DATABASE_URI'] = "postgresql://myuser:c2yQdn3e@localhost:5432/hello"
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
db = SQLAlchemy(app)
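One thing worth ruling out (an assumption on my part, not something stated in the post): if the real password contains characters that are special in a URL, such as @, :, or /, they must be percent-encoded or the URI will parse incorrectly and the wrong password gets sent. A minimal sketch using the standard library:

from urllib.parse import quote_plus

password = quote_plus("c2yQdn3e")  # substitute the actual password
app.config['SQLALCHEMY_DATABASE_URI'] = (
    f"postgresql://myuser:{password}@localhost:5432/hello"
)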
My database name is "hello" and my user name is "myuser". I have entered the password correctly, but it still shows this error. I have looked for a lot of solutions on the net and tried everything, but I am still getting it. My pg_hba.conf file looks like this:
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all md5
# IPv4 local connections:
host all all 127.0.0.1/32 md5
# IPv6 local connections:
host all all ::1/128 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
local replication all md5
host replication all 127.0.0.1/32 md5
host replication all ::1/128 md5
Thanks a lot in advance.

MySQL - ERROR 2003 (HY000): Can't connect to MySQL server on '192.168.0.2' (113)

Currently I'm working on a school project where I can use a database of my choice in a Python 3 assignment. I have to build a Python script that connects to a database and puts information into it. So said and done, I created this Python script:
#!/bin/python3
import psutil
import socket
import mysql.connector
from mysql.connector import Error  # was: "import mysql.connector import Error"

machine = socket.gethostname()
memory = psutil.virtual_memory()[2]
disk = psutil.disk_usage('/').percent
cpu = psutil.cpu_percent()
print(machine, memory, disk, cpu)

def insert_data(machine, memory, disk, cpu):
    conn = None
    try:
        conn = mysql.connector.connect(
            user="db_user",
            password="welkom01",
            host="192.168.0.2",
            port=3306,
            database="gegevens")
        # mysql.connector uses %s placeholders, not ?
        insert_query = """INSERT INTO info (machine, memory, disk, cpu)
                          VALUES (%s, %s, %s, %s);"""
        verkregen_data = (machine, memory, disk, cpu)
        cursor = conn.cursor()
        cursor.execute(insert_query, verkregen_data)
        conn.commit()  # commit belongs to the connection, not the cursor
        print("Total", cursor.rowcount, "rows successfully written to database gegevens")
        cursor.close()
    except Error as error:
        print(f"Error connecting to MariaDB Platform: {error}")
    finally:
        if conn:
            conn.close()
            print("MariaDB connection is closed")

insert_data(machine, memory, disk, cpu)
But now I'm trying to figure out how to let this "agent", as I call it, put its information into the database on the master. I'm working with two virtual machines that both run CentOS 8 Linux, fully updated to the newest version.
For Python I have installed the required packages with pip3 (per the imports above: psutil and mysql-connector-python). I have used this DigitalOcean guide to install MySQL server on both machines:
How to install mysql on centos 8
The my.cnf file on the master looks like this:
[mysql]
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld]
# Required Settings
basedir = /usr
bind_address = 0.0.0.0
datadir = /var/lib/mysql
max_allowed_packet = 256M
max_connect_errors = 1000000
pid_file = /var/run/mysqld/mysqld.pid
port = 3306
skip_external_locking
skip_name_resolve
socket = /var/run/mysqld/mysqld.sock
# Enable for b/c with databases created in older MySQL/MariaDB versions (e.g. when using null dates)
#sql_mode = ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION,ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES
tmpdir = /tmp
user = mysql
# InnoDB Settings
default_storage_engine = InnoDB
innodb_buffer_pool_instances = 2 # Use 1 instance per 1GB of InnoDB pool size
innodb_buffer_pool_size = 2G # Use up to 70-80% of RAM
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit = 0
innodb_flush_method = O_DIRECT
innodb_log_buffer_size = 16M
innodb_log_file_size = 512M
innodb_stats_on_metadata = 0
# Buffer Settings
join_buffer_size = 4M # UPD
read_buffer_size = 3M # UPD
read_rnd_buffer_size = 4M # UPD
sort_buffer_size = 4M # UPD
# Logging
log_error = /var/lib/mysql/mysql_error.log
log_queries_not_using_indexes = 1
long_query_time = 5
slow_query_log = 0 # Disabled for production
slow_query_log_file = /var/lib/mysql/mysql_slow.log
SELinux has been turned off, and I have accepted port 3306 in my iptables with the command:
iptables -A INPUT -i eth0 -p tcp --destination-port 3306 -j ACCEPT
Now, when I try to connect from the agent (IP address 192.168.0.3), I'm getting the error:
ERROR 2003 (HY000): Can't connect to MySQL server on '192.168.0.2' (113)
The command I'm using to connect to the database is:
mysql --host=192.168.0.2 --protocol=TCP --user=db_user --port=3306 -p
When I try to run the script, I get the error message:
_mysql_connector.MySQLInterfaceError: Can't connect to MySQL server on '192.168.0.2' (113)
I can confirm that the MySQL service is running on both machines, and that the user (db_user) exists in the user table.
Do you have any idea how I can get the two machines to talk? They can already reach each other with ping and ssh. I have searched a lot, but I can't figure out what I'm doing wrong or what I'm missing.
The question has been answered: SELinux was turned off, but iptables wasn't the firewall that was active; it is firewalld. As soon as I turned firewalld off with the commands:
systemctl stop firewalld
systemctl disable firewalld
I was able to connect to the DB from the agent.
The only thing left for me to do is figure out a way to make the Python script able to connect to the database.
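For completeness, a less drastic option than disabling firewalld entirely is to open only the MySQL port. This wasn't part of the original fix, but it uses standard firewall-cmd flags:

firewall-cmd --permanent --add-port=3306/tcp
firewall-cmd --reload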

Receiving 'fe_sendauth: no password supplied' error using AWS Aurora database with Postgres engine, but only when performing unit tests

I've created a Flask API connecting to an RDS Aurora database using the Postgres engine. The endpoints work fine, and I can connect to them using Postman and cURL. When I try to connect via a unittest script, however, I receive the following error:
fe_sendauth: no password supplied
I'm not sure why it would only occur when tests are being run. Does anyone have any insight?
Try:
export POSTGRES_USER=" your pc username"
export POSTGRES_PW="your pc password"
It worked in my case.
I had a similar issue; to solve it I had to edit
sudo vi /var/lib/pgsql/data/pg_hba.conf
and change the METHOD column from md5 to trust:
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all trust
# IPv4 local connections:
host all all 127.0.0.1/32 trust
# IPv6 local connections:
host all all ::1/128 trust
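Note that trust disables password checking entirely for the connections it matches, so it is best kept to local development. A safer alternative (standard libpq behaviour, not part of the original answer) is to keep md5 and put the credentials in ~/.pgpass, which psycopg2 picks up automatically; all values below are placeholders:

# ~/.pgpass -- must be chmod 600; format is
# hostname:port:database:username:password
localhost:5432:mydb:myuser:mypassword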

Python-PostgreSQL: How to connect database via network IP

For my Django project I am using 'postgresql_psycopg2', and my DB resides on a shared server on the network. How can I connect to that DB using its IP address as 'host', like we do in MySQL? I have tried, but it always shows the following error:
OperationalError at /
could not connect to server: Connection refused
Is the server running on host "" and accepting TCP/IP connections on port 5432?
Your problem is not related to Django: you just need to put the database server's IP in DATABASES['default']['HOST'] (Django's setting keys are upper-case), as you did. A sketch follows below.
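A minimal settings.py sketch, with placeholder credentials and an assumed server IP:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'db_name',
        'USER': 'user_name',
        'PASSWORD': 'secret',      # placeholder
        'HOST': '192.168.1.10',    # the database server's network IP (assumed)
        'PORT': '5432',
    }
}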
The problem is that PostgreSQL denies remote access by default. First make sure postgresql.conf on the database server accepts remote connections (e.g. listen_addresses = '*'), then edit the pg_hba.conf file and add a line like this:
host db_name user_name 192.168.1.1/32 md5
where you put your target database and user (the same ones you enter in the Django settings) and specify the IP range of the hosts allowed to connect to that database as that user. Then restart PostgreSQL and you can connect to your database remotely. Also check that no firewall is blocking access to that port on the database server.
For more detailed instructions, see [1] or [2].

Paramiko: Port Forwarding Around A NAT Router

Configuration
LOCAL: A local machine that will create an ssh connection and issue commands on a REMOTE box.
PROXY: An EC2 instance with ssh access to both LOCAL and REMOTE.
REMOTE: A remote machine sitting behind a NAT Router (inaccessible by LOCAL, but will open a connection to PROXY and allow LOCAL to tunnel to it).
Port Forwarding Steps (via command line)
Create an ssh connection from REMOTE to PROXY to forward ssh traffic on port 22 on the REMOTE machine to port 8000 on the PROXY server.
# Run from the REMOTE machine
ssh -N -R 0.0.0.0:8000:localhost:22 PROXY_USER@PROXY_HOSTNAME
Create an ssh tunnel from LOCAL to PROXY and forward ssh traffic from LOCAL:1234 to PROXY:8000 (which then forwards to REMOTE:22).
# Run from LOCAL machine
ssh -L 1234:localhost:8000 PROXY_USER@PROXY_HOSTNAME
Create the forwarded ssh connection from LOCAL to REMOTE (via PROXY).
# Run from LOCAL machine in a new terminal window
ssh -p 1234 REMOTE_USER@localhost
# I have now ssh'd to the REMOTE box and can run commands
Paramiko Research
I have looked at a handful of questions related to port forwarding using Paramiko, but they don't seem to address this specific situation.
My Question
How can I use Paramiko to run steps 2 and 3 above? I essentially would like to run:
import paramiko
# Create the tunnel connection
tunnel_cli = paramiko.SSHClient()
tunnel_cli.connect(PROXY_HOSTNAME, PROXY_PORT, PROXY_USER)
# Create the forwarded connection and issue commands from LOCAL on the REMOTE box
fwd_cli = paramiko.SSHClient()
fwd_cli.connect('localhost', LOCAL_PORT, REMOTE_USER)
fwd_cli.exec_command('pwd')
A detailed explanation of what Paramiko is doing "under the hood" can be found at @bitprophet's blog here.
Assuming the configuration above, the code I have working looks something like this:
from paramiko import SSHClient
# Set up the proxy (forwarding server) credentials
proxy_hostname = 'your.proxy.hostname'
proxy_username = 'proxy-username'
proxy_port = 22
# Instantiate a client and connect to the proxy server
proxy_client = SSHClient()
proxy_client.load_host_keys('~/.ssh/known_hosts/')
proxy_client.connect(
proxy_hostname,
port=proxy_port,
username=proxy_username,
key_filename='/path/to/your/private/key/'
)
# Get the client's transport and open a `direct-tcpip` channel passing
# the destination hostname:port and the local hostname:port
transport = proxy_client.get_transport()
dest_addr = ('0.0.0.0', 8000)
local_addr = ('127.0.0.1', 1234)
channel = transport.open_channel("direct-tcpip", dest_addr, local_addr)
# Create a NEW client and pass this channel to it as the `sock` (along with
# whatever credentials you need to auth into your REMOTE box
remote_client = SSHClient()
remote_client.load_host_keys(hosts_file)
remote_client.connect('localhost', port=1234, username='remote_username', sock=channel)
# `remote_client` should now be able to issue commands to the REMOTE box
remote_client.exec_command('pwd')
Is the point solely to bounce SSH commands off PROXY, or do you need to forward other, non-SSH ports too?
If you just need to SSH into the REMOTE box, Paramiko supports both SSH-level gatewaying (tells the PROXY sshd to open a connection to REMOTE and forward SSH traffic on LOCAL's behalf) and ProxyCommand support (forwards all SSH traffic through a local command, which could be anything capable of talking to the remote box).
Sounds like you want the former to me, since PROXY clearly already has an sshd running. If you check out a copy of Fabric and search for 'gateway', you will find pointers to how Fabric uses Paramiko's gateway support (I don't have time to dig up the specific spots myself right now).
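A minimal sketch of the ProxyCommand route mentioned above, assuming the reverse tunnel from step 1 is already up (so REMOTE:22 is reachable as port 8000 on PROXY) and that an OpenSSH client is available on LOCAL; all host and user names are placeholders:

from paramiko import AutoAddPolicy, ProxyCommand, SSHClient

# Ask PROXY's sshd for a raw stream to its own port 8000, which the
# reverse tunnel forwards on to REMOTE:22
proxy = ProxyCommand("ssh -W localhost:8000 PROXY_USER@PROXY_HOSTNAME")

remote_client = SSHClient()
remote_client.set_missing_host_key_policy(AutoAddPolicy())  # or load known_hosts
remote_client.connect("localhost", username="REMOTE_USER", sock=proxy)
stdin, stdout, stderr = remote_client.exec_command("pwd")
print(stdout.read().decode())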
