I imported a table from sql server database as a dataframe, I am trying to export it as PostgreSQL table
this is what I am doing
from sqlalchemy import create_engine
import psycopg2
engine = create_engine('postgresql://postgres:000000@localhost:5432/sinistrePY')
df.to_sql('table_name3', engine)
and this is the result
the data integration is working fine but
I get the table with read-only privileges
data types are not as they should be
no primary key
I don't need the index column
how can I fix this and control how the table is created, either from my notebook or directly on the PostgreSQL server if needed? Thanks.
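A rough sketch of how this could be handled from the notebook, with the caveat that the column names in the dtype mapping and the primary-key column (id) are placeholders for your own: index=False drops the pandas index column, the dtype argument maps columns to SQLAlchemy types, and a follow-up ALTER TABLE adds the primary key, since to_sql does not create one. The table is owned by the user in the connection string (postgres here), so read-only behaviour for other users would be a matter of GRANTs on the PostgreSQL side.

from sqlalchemy import create_engine, text, types

engine = create_engine('postgresql://postgres:000000@localhost:5432/sinistrePY')

# column names below are placeholders; adjust to your dataframe
df.to_sql(
    'table_name3',
    engine,
    if_exists='replace',
    index=False,                      # do not write the pandas index as a column
    dtype={
        'id': types.Integer(),
        'montant': types.Numeric(12, 2),
        'date_sinistre': types.Date(),
    },
)

# to_sql does not create a primary key, so add it afterwards
with engine.begin() as conn:
    conn.execute(text('ALTER TABLE table_name3 ADD PRIMARY KEY (id);'))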
I have a Pandas dataframe df which I want to push to a relational database as a table. I set up a connection object (<Connection>) using SQLAlchemy (pyodbc is the connection engine), and called the command
df.to_sql(<Table_Name>, <Connection>)
which I was able to confirm was written as a table to the desired relational database by visual examination of it in SQL Server Management Studio (SSMS). But in the left-hand-side list of databases and their tables in SSMS I see that it has named it
<Sender_Server>\<Username>.<Table_Name>
where <Sender_Server> is (I think) related to the name of the server I ran the Python command from, <Username> is my username on that server, and <Table_Name> is the desired table name.
When I right-click on the table and select to query the top one thousand rows I get a query of the form
SELECT * FROM [<Data_Base_Name>].[<Sender_Server>\<Username>].[<Table_Name>]
which also has the <Sender_Server>\<Username> info in it. The inclusion of <Sender_Server>\<Username> is undesired behaviour in our use case.
How can I instead have the data pushed such that
SELECT * FROM [<Data_Base_Name>].[<Table_Name>]
is the appropriate query?
By default, .to_sql() assumes the default schema for the current user unless schema="schema_name" is provided. Say, for example, the database contains a table named dbo.thing and the database user named joan has a default schema named engineering. If Joan does
df.to_sql("thing", engine, if_exists="append", index=False)
it will not append to dbo.thing but will instead try to create an engineering.thing table and append to that. If Joan wants to append to dbo.thing she needs to do
df.to_sql("thing", engine, schema="dbo", if_exists="append", index=False)
How to convert an existing PostgreSQL db into a Microsoft Access db with Python?
I want to convert my postgresql db into a Microsoft Access db.
There are many possible solutions, like transferring table by table and, within each table, row by row.
But which of these solutions might be the best in terms of performance?
Install the ODBC driver and link the tables from PostgreSQL
Mark the linked tables and choose Convert to Local Table
(Optional) Go to Database Tools, Access Database, and select to split the database to have the tables in an external Access database
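If it has to happen from Python instead, here is a rough table-by-table sketch, assuming you are on Windows with the Access ODBC driver (Access Database Engine) installed, the target .accdb already contains an empty table with matching columns, and the table/column names below are placeholders. executemany keeps this reasonably fast, but the linked-table route above generally performs better than pushing rows over ODBC.

import psycopg2
import pyodbc

# source: PostgreSQL (placeholder credentials)
pg = psycopg2.connect(host='localhost', dbname='mydb', user='postgres', password='secret')

# target: an existing .accdb file, via the Access ODBC driver (Windows only)
acc = pyodbc.connect(
    r'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};'
    r'DBQ=C:\path\to\target.accdb;'
)

pg_cur = pg.cursor()
acc_cur = acc.cursor()

# placeholder table with two columns; the Access table must already exist
pg_cur.execute('SELECT id, name FROM mytable')
acc_cur.executemany('INSERT INTO mytable (id, name) VALUES (?, ?)', pg_cur.fetchall())

acc.commit()
acc.close()
pg.close()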
I have a SQL file titled "DreamMarket2017_product.sql". I believe it's MySQL.
How do I read this file into a Jupyter Notebook using PyMySQL? Or, should I use Psycopg2?
I'm much more familiar w/ Psycopg2 than PyMySQL.
Both PyMySQL and Psycopg request a database name. There is no database; I only have the files.
Do I need to create a database using a GUI like Pgadmin2 and load the SQL tables into the newly created database?
Also, I'm still waiting to hear from the university that created the dataset.
Yes, you need to create a database and either load the data into a table or import the table backup you have.
connection = psycopg2.connect(user="dummy", password="1234", host="any", port="1234", database="demo")
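A sketch of what that could look like for a MySQL dump, assuming the mysql command-line client is available and the database and table names below (dreammarket, product) are placeholders: load the dump into a fresh database from the shell, then read it into pandas through PyMySQL.

import pandas as pd
from sqlalchemy import create_engine

# assumed shell steps (placeholder names):
#   mysql -u root -p -e "CREATE DATABASE dreammarket"
#   mysql -u root -p dreammarket < DreamMarket2017_product.sql

engine = create_engine('mysql+pymysql://root:secret@localhost/dreammarket')

# placeholder table name; check the CREATE TABLE statements inside the .sql file
df = pd.read_sql('SELECT * FROM product', engine)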
Using the code below to connect to Postgres connects me to the default public schema. Where can I mention
the schema name in the connection string? I am trying to insert data, so when I use dataframe.to_sql('myschema.mytable', engine, if_exists='append', index=False)
it creates a table named myschema.mytable in the public schema instead of inserting the data into mytable, which already exists under myschema.
I am using sqlalchemy library in python. Below is my connection string.
engine = create_engine('postgresql://user:password@host:5432/dbname')
I tried the JDBC way by appending ?currentSchema=schemaname and ?schema=schemaname, but neither works.
@Belayer, thanks for the response. After some more research, it seems you can specify the schema name while loading the data into the database:
dataframe.to_sql('mytable', con=connection, schema='myschema',if_exists='append', index=False)
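If you would rather keep the schema out of every to_sql call, another option (assuming the psycopg2 driver) is to set the search_path when the connection is created, so unqualified table names resolve to myschema:

from sqlalchemy import create_engine

# set the search_path at connection time via the psycopg2 'options' parameter
engine = create_engine(
    'postgresql://user:password@host:5432/dbname',
    connect_args={'options': '-csearch_path=myschema'},
)

# with the search_path set, the plain table name resolves to myschema.mytable
dataframe.to_sql('mytable', engine, if_exists='append', index=False)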
Please suggest a way to execute a SQL statement and a pandas dataframe .to_sql() in one transaction.
I have the dataframe and want to delete some rows on the database side before insertion.
So basically I need to delete and then insert in one transaction using the dataframe's .to_sql().
I use an SQLAlchemy engine with pandas df.to_sql().
After further investigation I realized that this is possible only with sqlite3, because to_sql supports both a SQLAlchemy engine and a plain connection object as the con parameter, but a plain DBAPI connection is only supported for an sqlite3 database.
In other words, you have no influence on the connection that will be created by the dataframe's to_sql function.
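That said, recent pandas versions document con as accepting either a SQLAlchemy engine or an open connection, so one possible sketch (the table name and filter below are placeholders) is to open the transaction yourself with engine.begin() and hand that same connection to .to_sql(); the DELETE and the insert then commit or roll back together:

from sqlalchemy import create_engine, text

engine = create_engine('postgresql://user:password@host:5432/dbname')

# engine.begin() commits on success and rolls back on any exception
with engine.begin() as conn:
    conn.execute(text('DELETE FROM mytable WHERE batch_id = :b'), {'b': 42})
    df.to_sql('mytable', conn, if_exists='append', index=False)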