I have an online database and connect to it by using MySQLdb.
db = MySQLdb.connect(......)
cur = db.cursor()
cur.execute("SELECT * FROM YOUR_TABLE_NAME")
data = cur.fetchall()
Now, I want to write the whole database to my localhost (overwrite). Is there any way to do this?
Thanks
If I'm reading you correctly, you have two database servers, A and B (where A is a remote server and B is running on your local machine) and you want to copy a database from server A to server B?
In all honesty, if this is a one-off, consider using the mysqldump command-line tool, either directly or by calling it from Python.
If not, the last answer on http://bytes.com/topic/python/answers/24635-dump-table-data-mysqldb details the SQL needed to define a procedure to output tables and data, though this may well miss subtleties that mysqldump handles.
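For the one-off case, a minimal sketch of driving mysqldump from Python, assuming mysqldump and mysql are on the PATH; the host names, credentials, and database names below are placeholders:

import subprocess

# Placeholder connection details -- substitute your own.
dump_cmd = ["mysqldump", "--host=remote.example.com",
            "--user=myuser", "--password=mypassword", "mydatabase"]
load_cmd = ["mysql", "--host=localhost",
            "--user=myuser", "--password=mypassword", "mydatabase"]

# Dump the remote database to a local file.
with open("mydatabase.sql", "w") as f:
    subprocess.check_call(dump_cmd, stdout=f)

# Load the dump into the local server (the local database must already exist;
# mysqldump's default options drop and recreate each table, so this overwrites).
with open("mydatabase.sql") as f:
    subprocess.check_call(load_cmd, stdin=f)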
import psycopg2

def backup_action():
    # read connection parameters (clicked.get() returns the selected connection string)
    conn = psycopg2.connect(clicked.get())
    # connect to the PostgreSQL server
    print('Connecting to the PostgreSQL database...')
    cursor = conn.cursor()
    # dump the 'mtr' table to a CSV file next to the script
    f = open(cur_path + "/" + "kopia" + ".csv", 'w')
    cursor.copy_to(f, 'mtr', sep=",")
    cursor.close()
I have a problem with copy_to executing only partially. I have two databases: one for testing, with small, almost empty tables, and one big database with the actual data. When I execute this for the smaller one it works just fine, but when I try to do it for the bigger one, it modifies the CSV file but leaves it empty.
I've had a similar problem once, doing an actual backup in pyodbc, and I resolved it by delaying closing the connection. I have no idea if that's actually the problem here, and I don't know whether psycopg2 offers a similar solution.
Please help.
I don't know the exact cause of the problem, but using copy_expert worked for me.
I'd post the code, but for some reason the page won't let me.
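For reference, a minimal sketch of the copy_expert approach; the connection string and output path are placeholders, and the table name mtr is carried over from the question:

import psycopg2

# Placeholder connection string -- substitute your own parameters.
conn = psycopg2.connect("dbname=mydb user=myuser")
cursor = conn.cursor()

# copy_expert takes a full COPY statement, which gives more control than copy_to.
with open("kopia.csv", "w") as f:
    cursor.copy_expert("COPY mtr TO STDOUT WITH (FORMAT csv)", f)

cursor.close()
conn.close()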
I can currently connect to my SQL Server and query any database I want to directly.
The problem is when I want to query a linked server. I cannot reference the linked server's name directly in the connect() method; I have to connect to a local database first and then run an OPENQUERY() against the linked server.
This seems like an odd workaround. Is there a way to query the linked server directly (from my research, you cannot connect directly to a linked server), or at least to connect to the server without specifying a database, so that I can run OPENQUERY() against anything without first connecting to a database?
Example of what I have to do currently:
import pyodbc

ex_value = "SELECT * FROM OPENQUERY(LinkedServerName,'SELECT * FROM LinkedServerName.SomeTable')"

# I have to connect to some local database on the server and cannot connect to the linked server initially.
odbc_driver, server, db = '{ODBC Driver 17 for SQL Server}', 'MyServerName', 'LocalDatabase'

with pyodbc.connect(driver=odbc_driver, host=server, database=db, trusted_connection='yes') as conn:
    conn.autocommit = False
    cursor = conn.cursor()
    cursor.execute(ex_value)
    tables = cursor.fetchall()
    for row in tables:
        print('Row: {}'.format(row))
    cursor.close()
As Sean mentioned, a linked server is just a reference to another server that's stored within the local server.
You do not need to manage 100+ user credentials though. If you have the users using Windows auth, and you have Kerberos working between the servers, the linked server can just impersonate you when it connects to the other server via the linked server definition.
Then you can use either four-part names to refer to objects on the other server, or use OPENQUERY when you want more control over what gets executed where.
Finally, if they're both SQL Servers and both use the same collation, make sure you set the linked server option to say they are collation compatible. That can make a major difference to your linked server performance. I regularly see systems where that isn't set and it should be.
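As a sketch of the four-part-name route: with no database argument, the session starts in the login's default database. The server, linked server, database, and table names below are placeholders.

import pyodbc

# No 'database' argument: the session lands in the login's default database.
conn = pyodbc.connect(driver='{ODBC Driver 17 for SQL Server}',
                      host='MyServerName',
                      trusted_connection='yes')
cursor = conn.cursor()

# Four-part name: linked_server.database.schema.table
cursor.execute("SELECT * FROM LinkedServerName.RemoteDb.dbo.SomeTable")
for row in cursor.fetchall():
    print('Row: {}'.format(row))

cursor.close()
conn.close()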
I'm relatively new to databases, so this question may have a simple answer (I've been searching for hours).
I want to write a Python script that pulls the SQL code stored in SQL Server (the code I normally view through Management Studio). I can successfully connect to the database using pyodbc and run queries against the database tables, but I would like to be able to pull, for example, the SQL code stored in a procedure, function, or view without running it.
This seems like something that would be relatively simple. I know it can be done in PowerShell, but I would prefer to use Python. Is there some sort of module or pyodbc hack that will do this?
You can use the sp_helptext system procedure in SQL Server, which returns the object's source code line by line:
# assumes `cursor` comes from an open pyodbc connection, e.g. cursor = conn.cursor()
stored_proc_text = ""
res = cursor.execute('sp_helptext my_stored_procedure')
for row in res:
    # each row holds one line of the object's source in its first column
    stored_proc_text += row[0]
print(stored_proc_text)
Good luck!
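For completeness, a self-contained sketch of the same approach; the driver, server, database, and procedure names are placeholders:

import pyodbc

# Placeholder connection details -- substitute your own.
conn = pyodbc.connect(driver='{ODBC Driver 17 for SQL Server}',
                      host='MyServerName',
                      database='MyDatabase',
                      trusted_connection='yes')
cursor = conn.cursor()

# sp_helptext returns one row per line of source code.
rows = cursor.execute('sp_helptext my_stored_procedure').fetchall()
print(''.join(row[0] for row in rows))

cursor.close()
conn.close()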
I'm writing a Python application that interacts with a MySQL database. The basic way to do this is to connect to the database and execute queries through the connection's cursor object. Should I hardcode each query in the Python code, or should I put each query in a stored procedure? Which is the better design choice, keeping security and elegance in mind? Note that many of the queries are one-liners (simple SELECTs).
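To make the two options concrete, here is a hypothetical sketch of the same one-line lookup done both ways; the table, procedure, and column names are made up:

import MySQLdb

# Placeholder connection details -- substitute your own.
conn = MySQLdb.connect(host="localhost", user="myuser",
                       passwd="mypassword", db="mydb")
cur = conn.cursor()

# Option 1: the query hardcoded in Python, parameterized to avoid SQL injection.
cur.execute("SELECT name FROM users WHERE id = %s", (42,))
print(cur.fetchone())

# Option 2: the same query wrapped in a stored procedure on the server.
cur.callproc("get_user_name", (42,))
print(cur.fetchone())

cur.close()
conn.close()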
I have a Plone 4 site which uses an additional Postgres database via a Z Psycopg 2 Database Connection object. Since the ZODB is sometimes replicated for testing and development purposes, there are a few fellow database connection objects, following a project_suffix naming scheme; this way, I can select one of the existing database adapters via a buildout configuration script.
However, I noticed that all existing database connection objects are apparently opened when Plone starts up. I don't know whether this is a real problem (e.g. when applying changes to the schema of the database of another instance), but I'd rather have Plone open only the single database which is actually used. How can I achieve this?
(Plone 4.2.4, Postgres 9.1.9, psycopg2 2.5.1, Debian Linux)
Update:
I added some code to the __init__.py of my product, which looks roughly like this:
from Shared.DC.ZRDB.Connection import Connection
...
dbname = env['DATABASE']
db = None
for id, obj in portalfolder.objectItems():
    if isinstance(obj, Connection):
        if id == dbname:
            db = obj
        else:
            print 'before:', obj._v_connected
            obj._v_database_connection.close()
            print 'after: ', obj._v_connected
However, this seems not to work; there are no exceptions I'm aware of, but for both before and after, I get a timestamp, and when looking in the ZMI afterwards, the connections seem to be open.
Any ideas, please?