MySQL dump not capturing tables located after a view - python

I have a database that has the following structure:
mysql> show tables;
+--------------------+
| Tables_in_my_PROD  |
+--------------------+
| table_A            |
| Table_B            |
| table_C            |
| view_A             |
| table_D            |
| table_E            |
| ...                |
+--------------------+
I use a script to make a gzipped dump file of my entire database and then upload that file to Amazon S3. The Python code that creates the dump file is below:
# Build the shell pipeline: dump the whole database, gzip it, write it to disk
dump_cmd = ('mysqldump ' +
            '--user={mysql_user} '.format(mysql_user=cfg.DB_USER) +
            '--password={db_pw} '.format(db_pw=cfg.DB_PW) +
            '--host={db_host} '.format(db_host=cfg.DB_HOST) +
            '{db_name} '.format(db_name=cfg.DB_NAME) +
            '| gzip ' +
            '> {filepath}'.format(filepath=self.filename))
dc = subprocess.Popen(dump_cmd, shell=True)
dc.wait()
This creates the gzipped file. Next, I upload it to Amazon S3 using Python's boto library.
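One thing the snippet above never checks is whether mysqldump itself succeeded: the shell reports the exit status of the last command in the pipeline (gzip), so mysqldump can die partway through (for example on a privilege or definer problem with the view) and still leave behind a partial but valid-looking .gz file. A minimal sketch of a stricter version, assuming bash is available for pipefail and reusing cfg and self.filename from the snippet above:

import subprocess

# Same pipeline, but run under bash with pipefail so a mysqldump failure is
# reflected in the return code, and with mysqldump's stderr captured.
dump_cmd = (
    'set -o pipefail; '
    'mysqldump --user={u} --password={p} --host={h} {db} '
    '| gzip > {path}'
).format(u=cfg.DB_USER, p=cfg.DB_PW, h=cfg.DB_HOST,
         db=cfg.DB_NAME, path=self.filename)

dc = subprocess.Popen(dump_cmd, shell=True, executable='/bin/bash',
                      stderr=subprocess.PIPE)
_, err = dc.communicate()
if dc.returncode != 0:
    # Any mysqldump error message (e.g. about the view) ends up in err
    raise RuntimeError('mysqldump failed: {0}'.format(err))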
When I go to restore a database from that file, only tables A, B, and C are restored. Tables D and E are nowhere to be found.
Tables D and E come after the view.
Is there something about that view that is causing problems? I do not know whether the tables are even getting dumped to the file, because I do not know how to look inside it (table_B has 8 million rows and any attempt to inspect the file crashes everything).
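On the "how to look into that file" point: a gzipped dump can be scanned as a stream, so even with an 8-million-row table it is cheap to check which CREATE TABLE statements actually made it into the dump. A small sketch (the file name here is a placeholder for self.filename):

import gzip

# Stream the compressed dump line by line; only the short schema lines are
# printed, so the huge INSERT statements never have to fit in memory at once.
with gzip.open('my_PROD_dump.sql.gz', 'rb') as dump:   # placeholder path
    for line in dump:
        if line.startswith(b'CREATE TABLE'):
            print(line.rstrip())

The same check works from the shell with zgrep 'CREATE TABLE' my_PROD_dump.sql.gz, which likewise never decompresses the whole file to disk.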
I am using this MariaDB version:
+-------------------------+------------------+
| Variable_name           | Value            |
+-------------------------+------------------+
| innodb_version          | 5.6.23-72.1      |
| protocol_version        | 10               |
| slave_type_conversions  |                  |
| version                 | 10.0.19-MariaDB  |
| version_comment         | MariaDB Server   |
| version_compile_machine | x86_64           |
| version_compile_os      | Linux            |
| version_malloc_library  | bundled jemalloc |
+-------------------------+------------------+

Related

Storing Imagehash in mysql database

I am trying to save a hash value in a MySQL database using Python. I obtain the hash with hash = imagehash.dhash(Image.open('temp_face.jpg')), but executing the insert query cursor.execute("INSERT INTO image(hash,name,photo) VALUES(%d,%s,%s )", (hash, name, binary_image)) gives me the error "Python 'imagehash' cannot be converted to a MySQL type".
+---------+-------------+------+-----+-------------------+-------------------+
| Field   | Type        | Null | Key | Default           | Extra             |
+---------+-------------+------+-----+-------------------+-------------------+
| hash    | binary(32)  | NO   | PRI | NULL              |                   |
| name    | varchar(25) | NO   |     | NULL              |                   |
| photo   | blob        | NO   |     | NULL              |                   |
| arrival | datetime    | NO   |     | CURRENT_TIMESTAMP | DEFAULT_GENERATED |
+---------+-------------+------+-----+-------------------+-------------------+
So what can be done to store the value or is there any other way to do the same task?
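No answer is recorded here, but the usual way around this error is to convert the ImageHash object into something the driver understands before binding it; str() gives its hex representation, which fits the hash column above. A rough sketch, reusing the cursor and values from the question and assuming the %d placeholder is switched to %s (mysql-connector only accepts %s placeholders):

import imagehash
from PIL import Image

hash_value = imagehash.dhash(Image.open('temp_face.jpg'))
hash_str = str(hash_value)   # hex string form the driver can bind, e.g. 'a6b2c3d4e5f60718'

cursor.execute(
    "INSERT INTO image (hash, name, photo) VALUES (%s, %s, %s)",
    (hash_str, name, binary_image),   # name and binary_image as in the question
)
connection.commit()   # assumes the connection object the cursor came from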

Python - SQLAlchemy getting 'Table' object is not callable error

I have defined an existing DB table in my Python script, and whenever I try to insert a row into that table I receive an error stating that the 'Table' object is not callable.
Below you can find the code and the error message I receive. Any support will be appreciated:
engine = create_engine('postgresql://user:pwd@localhost:5432/dbname',
                       client_encoding='utf8')
metadata = MetaData()

# Reflect the existing table from the database
MyTable = Table('target_table', metadata, autoload=True, autoload_with=engine)

Session = sessionmaker()
Session.configure(bind=engine)
session = Session()
:
:
:
def recod_to_db(db_hash):
    db_instance = MyTable(**db_hash)   # this line raises the TypeError
    session.add(db_instance)
    session.commit()
    return
Error Message:
File "myprog.py", line 319, in recod_to_db
db_instance = MyTable(**db_hash)
TypeError: 'Table' object is not callable
This is what the table looks like:
Table "public.target_table"
Column | Type | Modifiers | Storage | Stats target | Description
-------------------+-----------------------------+--------------------------------------------------------+----------+--------------+-------------
id | integer | not null default nextval('target_table_id_seq'::regclass) | plain | |
carid | integer | | plain | |
triplecode | character varying | | extended | |
lookup | integer | | plain | |
type | character varying | | extended | |
make | character varying | | extended | |
series | character varying | | extended | |
model | character varying | | extended | |
year | integer | | plain | |
fuel | character varying | | extended | |
transmission | character varying | | extended | |
mileage | integer | | plain | |
hp | integer | | plain | |
color | character varying | | extended | |
door | integer | | plain | |
location | character varying | | extended | |
url | character varying | | extended | |
register_date | date | | plain | |
auction_end_time | timestamp without time zone | | plain | |
body_damage | integer | | plain | |
mechanical_damage | integer | | plain | |
target_buy | integer | | plain | |
price | integer | | plain | |
currency | character varying | | extended | |
auctionid | integer | | plain | |
seller | character varying | | extended | |
auction_type | character varying | | extended | |
created_at | timestamp without time zone | not null | plain | |
updated_at | timestamp without time zone | not null | plain | |
estimated_value | integer | | plain | |
Indexes:
"target_table_pkey" PRIMARY KEY, btree (id)
Another way of inserting, without automap, is to use the table's own insert() method (see TableClause.insert() in the SQLAlchemy documentation):
insert(dml, values=None, inline=False, **kwargs)
Generate an insert() construct against this TableClause.
E.g.:
table.insert().values(name='foo')
In code it would look like this:
def record_to_db(MyTable):
    insert_stmnt = MyTable.insert().values(column_name=value_you_want_to_insert)
    session.execute(insert_stmnt)
    session.commit()
    return
Ideally, you'd have your table defined in a separate file rather than in your app.py. You can also have a utility function that yields the session and then commits, or catches an exception and rolls it back. Something like this:
from contextlib import contextmanager

@contextmanager
def get_db_session_scope(sql_db_session):
    session = sql_db_session()
    try:
        yield session
        session.commit()
    except:
        session.rollback()
        raise
    finally:
        session.close()
Then your function would look like this:
def record_to_db(MyTable):
    with get_db_session_scope(db) as db_session:
        insert_stmnt = MyTable.insert().values(column_name=value_you_want_to_insert)
        db_session.execute(insert_stmnt)
    return
You can get db from your app.py through
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy(app)

What might be causing MySQL to hang my Python script?

I have a pretty straightforward Python script. It kicks off a pool of 10 processes that each:
Make an external API request for 1,000 records
Parse the XML response
Insert each record into a MySQL database
There's nothing particularly tricky here, but about the time I reach 90,000 records the script hangs.
mysql> show processlist;
+----+------+-----------------+---------------+---------+------+-------+------------------+
| Id | User | Host            | db            | Command | Time | State | Info             |
+----+------+-----------------+---------------+---------+------+-------+------------------+
| 44 | root | localhost:48130 | my_database   | Sleep   |   57 |       | NULL             |
| 45 | root | localhost:48131 | NULL          | Sleep   |    6 |       | NULL             |
| 59 | root | localhost       | my_database   | Sleep   |  506 |       | NULL             |
| 60 | root | localhost       | NULL          | Query   |    0 | NULL  | show processlist |
+----+------+-----------------+---------------+---------+------+-------+------------------+
I have roughly a million records to import this way, so I have a long, long way to go.
What can I do to prevent this hang and keep my script moving?
Python 2.7.6
MySQL-python 1.2.5
Not exactly what I wanted to do, but I have found that opening and closing the connection as required seems to move things along.
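The answer above does not show code; a rough sketch of that reconnect-per-batch pattern with the MySQLdb driver named in the question (the table and column names here are made up for illustration):

import MySQLdb

def insert_batch(records):
    # Open a fresh connection per batch and close it when done, instead of
    # holding one connection (and whatever locks it accumulates) open for
    # the whole million-record run.
    conn = MySQLdb.connect(host='localhost', user='root',
                           passwd='secret', db='my_database')
    try:
        cur = conn.cursor()
        cur.executemany(
            "INSERT INTO records (external_id, payload) VALUES (%s, %s)",
            records)
        conn.commit()   # commit promptly so locks are released
    finally:
        conn.close()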

How to get the existing postgres databases in a list

I am trying to write a Python script to automate the task of creating a database from the most recent production dump. I am using psycopg2 for the purpose.
But after creating the new database, I want to delete the previously used one. My idea is that if I can get the names of the databases in a list and sort them, I can easily delete the unwanted database.
So, my question is: how can I get the names of the DBs in a list?
Thanks
You can list all of your DBs with
SELECT d.datname as "Name"
FROM pg_catalog.pg_database d
ORDER BY 1;
You can filter or order it however you like.
That query is what psql itself runs for \l; you can see it by passing -E:
psql -E -U postgres -c "\l"
The output of the above command looks like this:
********* QUERY **********
SELECT d.datname as "Name",
pg_catalog.pg_get_userbyid(d.datdba) as "Owner",
pg_catalog.pg_encoding_to_char(d.encoding) as "Encoding",
d.datcollate as "Collate",
d.datctype as "Ctype",
pg_catalog.array_to_string(d.datacl, E'\n') AS "Access privileges"
FROM pg_catalog.pg_database d
ORDER BY 1;
**************************
List of databases
     Name     |  Owner   | Encoding | Collate | Ctype |   Access privileges
--------------+----------+----------+---------+-------+-----------------------
 mickey       | postgres | UTF8     | C       | C     |
 mickeylite   | postgres | UTF8     | C       | C     |
 postgres     | postgres | UTF8     | C       | C     |
 template0    | postgres | UTF8     | C       | C     | =c/postgres          +
              |          |          |         |       | postgres=CTc/postgres
 template1    | postgres | UTF8     | C       | C     | =c/postgres          +
              |          |          |         |       | postgres=CTc/postgres
(5 rows)
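To get those names into a Python list, the same catalog query can be run through psycopg2; a short sketch (connection parameters are placeholders):

import psycopg2

# Connect to any existing database (e.g. postgres) just to read the catalog.
conn = psycopg2.connect(host='localhost', user='postgres',
                        password='secret', dbname='postgres')
try:
    cur = conn.cursor()
    cur.execute("SELECT datname FROM pg_catalog.pg_database ORDER BY 1;")
    db_names = [row[0] for row in cur.fetchall()]
finally:
    conn.close()

print(db_names)   # e.g. ['mickey', 'mickeylite', 'postgres', 'template0', 'template1']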

Load data local infile does not work in Ubuntu 12.04 and MySQL

Using MySQL I cannot import a file with LOAD DATA LOCAL INFILE. My server is on AWS RDS. This works on Ubuntu 10.04; I installed the client using apt-get install mysql-client. I get the same error whether I use MySQLdb or mysql.connector in Python.
File "/usr/lib/pymodules/python2.7/mysql/connector/protocol.py", line 479, in cmd_query
return self.handle_cmd_result(self.conn.recv())
File "/usr/lib/pymodules/python2.7/mysql/connector/connection.py", line 179, in recv_plain
errors.raise_error(buf)
File "/usr/lib/pymodules/python2.7/mysql/connector/errors.py", line 82, in raise_error
raise get_mysql_exception(errno,errmsg)
mysql.connector.errors.NotSupportedError: 1148: The used command is not allowed with this MySQL version
I have a lot of data to upload... I can't believe this isn't supported on 12.04, and I have to use 12.04.
Not really a Python question... but the long and short of the matter is that mysql, as compiled and distributed by Ubuntu 12.04 and later, does not support using LOAD DATA LOCAL INFILE directly from the mysql client as-is.
If you look up Error 1148 in the MySQL Reference Documentation, further down that page, in the comments:
Posted by Aaron Peterson on November 9 2005 4:35pm
With a default installation from FreeBSD ports, I had to use the command line
mysql -u user -p --local-infile menagerie
to start the mysql monitor, else the LOAD DATA LOCAL command failed with an error like the following:
ERROR 1148 (42000): The used command is not allowed with this MySQL version
... which does work.
monte@oobun2:~$ mysql -h localhost -u monte -p monte --local-infile
Enter password:
...
mysql> LOAD DATA LOCAL INFILE 'pet.txt' INTO TABLE pet;
Query OK, 8 rows affected (0.04 sec)
Records: 8 Deleted: 0 Skipped: 0 Warnings: 0
mysql> SELECT * FROM pet;
+----------+--------+---------+------+------------+------------+
| name     | owner  | species | sex  | birth      | death      |
+----------+--------+---------+------+------------+------------+
| Fluffy   | Harold | cat     | f    | 1993-02-04 | NULL       |
| Claws    | Gwen   | cat     | m    | 1994-03-17 | NULL       |
| Buffy    | Harold | dog     | f    | 1989-05-13 | NULL       |
| Fang     | Benny  | dog     | m    | 1990-08-27 | NULL       |
| Bowser   | Diane  | dog     | m    | 1979-08-31 | 1995-07-29 |
| Chirpy   | Gwen   | bird    | f    | 1998-09-11 | NULL       |
| Whistler | Gwen   | bird    | NULL | 1997-12-09 | NULL       |
| Slim     | Benny  | snake   | m    | 1996-04-29 | NULL       |
| Puffball | Diane  | hamster | f    | 1999-03-30 | NULL       |
+----------+--------+---------+------+------------+------------+
9 rows in set (0.00 sec)
mysql>
I generally don't need to load data via code, so that suffices for my needs. If you do, and you have the ability/permissions to edit your MySQL config file, then adding a local-infile=1 line in the appropriate section(s) may be simpler.
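Since the question also hits error 1148 from MySQLdb and mysql.connector, it is worth noting that the LOCAL capability can be switched on at the driver level too. A sketch with MySQLdb, whose connect() accepts a local_infile flag (host, credentials, and file path below are placeholders); mysql.connector exposes a comparable connection option:

import MySQLdb

# local_infile=1 is the driver-level equivalent of passing --local-infile
# to the mysql command-line client.
conn = MySQLdb.connect(host='mydb.xxxxxxxx.rds.amazonaws.com',
                       user='monte', passwd='secret', db='menagerie',
                       local_infile=1)
cur = conn.cursor()
cur.execute("LOAD DATA LOCAL INFILE 'pet.txt' INTO TABLE pet")
conn.commit()
conn.close()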
