How to get the existing postgres databases in a list - python

I am trying to write a Python script to automate creating a database from the most recent production dump. I am using psycopg2 for this.
After creating the new database, I want to delete the previously used one. My idea is that if I could get the database names in a list and sort them, I could easily delete the unwanted database.
So my question is: how can I get the names of the DBs in a list?
Thanks

You can list all of your DBs with
SELECT d.datname as "Name"
FROM pg_catalog.pg_database d
ORDER BY 1;
You can filter or order the result however you like.
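From Python, a minimal psycopg2 sketch of fetching those names into a sorted list (the connection parameters below are placeholders; adjust them for your server):

```python
import psycopg2

# Connect to the default maintenance database; adjust credentials as needed.
conn = psycopg2.connect(dbname='postgres', user='postgres',
                        password='secret', host='localhost')
cur = conn.cursor()
# Skip template databases, which you normally don't want to touch.
cur.execute("SELECT datname FROM pg_database WHERE datistemplate = false")
db_names = sorted(row[0] for row in cur.fetchall())
cur.close()
conn.close()
print(db_names)
```

With the names in a sorted list, dropping the oldest is just a matter of indexing into it.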

Alternatively, run psql with the -E flag to see the query that the \l meta-command executes:
psql -E -U postgres -c "\l"
The output of the above command looks like this:
********* QUERY **********
SELECT d.datname as "Name",
pg_catalog.pg_get_userbyid(d.datdba) as "Owner",
pg_catalog.pg_encoding_to_char(d.encoding) as "Encoding",
d.datcollate as "Collate",
d.datctype as "Ctype",
pg_catalog.array_to_string(d.datacl, E'\n') AS "Access privileges"
FROM pg_catalog.pg_database d
ORDER BY 1;
**************************
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
--------------+----------+----------+---------+-------+-----------------------
mickey | postgres | UTF8 | C | C |
mickeylite | postgres | UTF8 | C | C |
postgres | postgres | UTF8 | C | C |
template0 | postgres | UTF8 | C | C | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | C | C | =c/postgres +
| | | | | postgres=CTc/postgres
(5 rows)

Related

Display all database names from a given IP, port, username and password using python

I need to show the names of all databases found.
I have found solutions that obtain information from a known database, but in my case I do not know the 'database_name.db'.
Can someone help me, please?
Well, you don't have to know a database name:
psql -h localhost -U pguser -c '\l'
This way you connect to the default database (called postgres); PostgreSQL connections always need a database to connect to.
output:
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
mytestdb | postgres | UTF8 | en_CA.UTF-8 | en_CA.UTF-8 |
imdb | postgres | UTF8 | en_CA.UTF-8 | en_CA.UTF-8 |
postgres | postgres | UTF8 | en_CA.UTF-8 | en_CA.UTF-8 |
template0 | postgres | UTF8 | en_CA.UTF-8 | en_CA.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_CA.UTF-8 | en_CA.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres

MySQL dump not capturing tables located after a view

I have a database that has the following structure:
mysql> show tables;
+--------------------+
| Tables_in_my_PROD |
+--------------------+
| table_A |
| Table_B |
| table_C |
| view_A |
| table_D |
| table_E |
| ... |
+--------------------+
I use a script to make a gzip dump file of my entire database and then I upload that file to Amazon S3. The python code to create the dump file is below:
dump_cmd = ['mysqldump ' +
            '--user={mysql_user} '.format(mysql_user=cfg.DB_USER) +
            '--password={db_pw} '.format(db_pw=cfg.DB_PW) +
            '--host={db_host} '.format(db_host=cfg.DB_HOST) +
            '{db_name} '.format(db_name=cfg.DB_NAME) +
            '| gzip > {filepath}'.format(filepath=self.filename)]
dc = subprocess.Popen(dump_cmd, shell=True)
dc.wait()
This creates the zip file. Next, I upload it to Amazon S3 using python's boto library.
When I go to restore a database from that zip file, I only get tables A, B and C restored. Tables D and E are nowhere to be found.
Tables D and E come after the view in the table listing.
Is there something about that view that is causing problems? I don't know whether those tables are being dumped to the file at all, because I do not know how to inspect it (table_B has 8 million rows and any attempt to open the file crashes everything).
I am using MariaDB; version details:
+-------------------------+------------------+
| Variable_name | Value |
+-------------------------+------------------+
| innodb_version | 5.6.23-72.1 |
| protocol_version | 10 |
| slave_type_conversions | |
| version | 10.0.19-MariaDB |
| version_comment | MariaDB Server |
| version_compile_machine | x86_64 |
| version_compile_os | Linux |
| version_malloc_library | bundled jemalloc |
+-------------------------+------------------+

SqlAlchemy - PostgreSQL - Session connecting with wrong information

I wrote the following code to connect to the database and fill the schema:
db_url = 'postgresql+psycopg2:///bidder:<pass>@localhost:5432/basketball'
Engine = create_engine(db_url, echo=False)
SessionMaker = ORM.sessionmaker(bind=Engine, autoflush=False)
Session = ORM.scoped_session(SessionMaker)
Base.metadata.create_all(Engine)
That last statement, however, raises:
(psycopg2.OperationalError) FATAL: role "fran" does not exist
("fran" is my unix username)
SQLAlchemy is not connecting to the database with the username and password I'm specifying in db_url.
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
------------+----------+----------+-------------+-------------+-----------------------
basketball | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres +
| | | | | postgres=CTc/postgres+
| | | | | bidder=CTc/postgres
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
Users table:
rolname
----------
postgres
bidder
(2 rows)
Try removing one of the slashes from the db_url; the scheme should be followed by exactly two:
db_url = 'postgresql+psycopg2://bidder:<pass>@localhost:5432/basketball'
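Note that if the real password contains URL-special characters such as @, /, or #, it must be percent-escaped or the URL will be mis-parsed. A minimal stdlib sketch (the helper name is ours):

```python
from urllib.parse import quote_plus

def make_pg_url(user, password, host, port, dbname):
    """Build a SQLAlchemy-style PostgreSQL URL, percent-escaping the
    password so characters like '@' or '/' can't break URL parsing."""
    return 'postgresql+psycopg2://{}:{}@{}:{}/{}'.format(
        user, quote_plus(password), host, port, dbname)

# Example: a password containing '@' and '/'
url = make_pg_url('bidder', 'p@ss/word', 'localhost', 5432, 'basketball')
```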

Illegal mixing of collation error utf8mb4 with peewee

I have a collation error with MySQL, specifically it is the following:
OperationalError: (1267, "Illegal mix of collations (utf8mb4_unicode_ci,IMPLICIT) and (utf8_general_ci,COERCIBLE) for operation '='")
After checking, I noticed that it is related to certain emojis, mostly smileys.
I've checked my MySQL database and it is by default using utf8mb4 charset as shown:
+------------+---------------------------------------------------------------------------------------------------+
| Database | Create Database |
+------------+---------------------------------------------------------------------------------------------------+
| Dictionary | CREATE DATABASE `Dictionary` /*!40100 DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci */ |
+------------+---------------------------------------------------------------------------------------------------+
My MySQL settings also indicate that everything needed is set to utf8mb4:
+--------------------------+--------------------+
| Variable_name | Value |
+--------------------------+--------------------+
| character_set_client | utf8mb4 |
| character_set_connection | utf8mb4 |
| character_set_database | utf8mb4 |
| character_set_filesystem | binary |
| character_set_results | utf8mb4 |
| character_set_server | utf8mb4 |
| character_set_system | utf8 |
| collation_connection | utf8mb4_unicode_ci |
| collation_database | utf8mb4_unicode_ci |
| collation_server | utf8mb4_unicode_ci |
+--------------------------+--------------------+
I am using Peewee (the Python module) to make my queries and the following line is the one that is causing the collation error
SingleWords.get(SingleWords.word == lowercase)
It is not much of a problem for me if I can't insert certain emojis, but I would still like to if possible. I have no idea why this is happening, any thoughts?
Try forcing a collation on the comparison itself, for example:
SELECT * FROM SingleWords WHERE word = 'x' COLLATE utf8mb4_unicode_ci;
In practice, this error usually means the column (or table) is still utf8 even though the database default is utf8mb4. Check SHOW CREATE TABLE SingleWords; if the column is utf8, convert it:
ALTER TABLE SingleWords CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
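If converting the table to utf8mb4 is not an option, a common workaround is to strip supplementary-plane characters (which covers most emoji) before inserting or comparing, since those are exactly the characters MySQL's legacy utf8 charset cannot store. A stdlib sketch (the helper name is ours):

```python
import re

# Characters outside the Basic Multilingual Plane (U+10000 and up) need
# 4 bytes in UTF-8, which MySQL's legacy 'utf8' charset cannot store.
NON_BMP = re.compile('[\U00010000-\U0010FFFF]')

def strip_non_bmp(text, replacement=''):
    """Remove emoji and other supplementary-plane characters."""
    return NON_BMP.sub(replacement, text)
```

This loses data, of course, so it only fits the "not much of a problem if I can't insert certain emojis" case.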

Load data local infile does not work in Ubuntu 12.04 and MySQL

Using MySQL I cannot import a file with LOAD DATA LOCAL INFILE. My server is on AWS RDS. This works on Ubuntu 10.04. I installed the client using apt-get install mysql-client, and I get the same error whether I use MySQLdb or mysql.connector in Python.
File "/usr/lib/pymodules/python2.7/mysql/connector/protocol.py", line 479, in cmd_query
return self.handle_cmd_result(self.conn.recv())
File "/usr/lib/pymodules/python2.7/mysql/connector/connection.py", line 179, in recv_plain
errors.raise_error(buf)
File "/usr/lib/pymodules/python2.7/mysql/connector/errors.py", line 82, in raise_error
raise get_mysql_exception(errno,errmsg)
mysql.connector.errors.NotSupportedError: 1148: The used command is not allowed with this MySQL version
I have a lot of data to upload... I can't believe this isn't supported, and I have to use 12.04.
Not really a Python question... but the long and short of it is that MySQL, as compiled and distributed with Ubuntu 12.04, does not support LOAD DATA LOCAL INFILE directly from the mysql client as is.
If you look at the comments on the MySQL Reference Documentation page for error 1148, you'll find:
Posted by Aaron Peterson on November 9 2005 4:35pm
With a default installation from FreeBSD ports, I had to use the command line
mysql -u user -p --local-infile menagerie
to start the mysql monitor, else the LOAD DATA LOCAL command failed with an error like
the following:
ERROR 1148 (42000): The used command is not allowed with this MySQL version
... which does work.
monte@oobun2:~$ mysql -h localhost -u monte -p monte --local-infile
Enter password:
...
mysql> LOAD DATA LOCAL INFILE 'pet.txt' INTO TABLE pet;
Query OK, 8 rows affected (0.04 sec)
Records: 8 Deleted: 0 Skipped: 0 Warnings: 0
mysql> SELECT * FROM pet;
+----------+--------+---------+------+------------+------------+
| name | owner | species | sex | birth | death |
+----------+--------+---------+------+------------+------------+
| Fluffy | Harold | cat | f | 1993-02-04 | NULL |
| Claws | Gwen | cat | m | 1994-03-17 | NULL |
| Buffy | Harold | dog | f | 1989-05-13 | NULL |
| Fang | Benny | dog | m | 1990-08-27 | NULL |
| Bowser | Diane | dog | m | 1979-08-31 | 1995-07-29 |
| Chirpy | Gwen | bird | f | 1998-09-11 | NULL |
| Whistler | Gwen | bird | NULL | 1997-12-09 | NULL |
| Slim | Benny | snake | m | 1996-04-29 | NULL |
| Puffball | Diane | hamster | f | 1999-03-30 | NULL |
+----------+--------+---------+------+------------+------------+
9 rows in set (0.00 sec)
mysql>
I generally don't need to load data via code, so that suffices for my needs. If you do, and you have permission to edit your MySQL config file, then adding local-infile=1 in the appropriate section(s) may be simpler.
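For reference, the config-file route looks roughly like this (section names assume a standard my.cnf layout; on RDS the server side is controlled by a parameter group instead):

```ini
[mysql]
local-infile=1

[mysqld]
local-infile=1
```

From Python, MySQLdb accepts local_infile=1 and mysql.connector accepts allow_local_infile=True as connect() keyword arguments, which enable the same thing on the client side.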
