Where to mention the schema when connecting to Postgres using SQLAlchemy in Python

Using the code below to connect to Postgres connects me to the default public schema. Where can I mention
the schema name in the connection string? I am trying to insert data, so when I use dataframe.to_sql('myschema.mytable', engine, if_exists='append', index=False)
it creates a table named myschema.mytable in the public schema instead of inserting the data into mytable, which already exists under myschema.
I am using the SQLAlchemy library in Python. Below is my connection string.
engine = create_engine('postgres://user:password@host:5432/dbname')
I tried the JDBC way by appending ?currentSchema=schemaname and ?schema=schemaname, but neither works.

@Belayer, thanks for the response. After some more research, it turns out you can specify the schema name when loading the data into the database:
dataframe.to_sql('mytable', con=connection, schema='myschema', if_exists='append', index=False)
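
For reference, here is a minimal sketch putting it together (the connection details and the schema name myschema are placeholders). Passing schema= to to_sql is the documented route; as an alternative, you can make the schema the default for the whole connection by setting the Postgres search_path via the psycopg2 options argument:
from sqlalchemy import create_engine
import pandas as pd

# Pass the schema explicitly to to_sql instead of embedding it in the table name.
engine = create_engine('postgresql://user:password@host:5432/dbname')
dataframe = pd.DataFrame({'col1': [1, 2]})  # stand-in for the real dataframe
dataframe.to_sql('mytable', con=engine, schema='myschema',
                 if_exists='append', index=False)

# Alternative: make myschema the default schema for every query on this
# engine by setting the Postgres search_path (psycopg2 driver):
engine = create_engine(
    'postgresql://user:password@host:5432/dbname',
    connect_args={'options': '-csearch_path=myschema'},
)
dataframe.to_sql('mytable', con=engine, if_exists='append', index=False)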

Related

pd.DataFrame.to_sql() is prepending the server name and username to the table name

I have a Pandas dataframe df which I want to push to a relational database as a table. I set up a connection object (<Connection>) using SQLAlchemy (pyodbc is the connection engine) and called the command
df.to_sql(<Table_Name>, <Connection>)
which I was able to confirm was written as a table to the desired relational database by visual examination of it in SQL Server Management Studio (SSMS). But in the left-hand-side list of databases and their tables in SSMS I see that it has named it
<Sender_Server>\<Username>.<Table_Name>
where <Sender_Server> is (I think) related to the name of the server I ran the Python command from, <Username> is my username on that server, and <Table_Name> is the desired table name.
When I right-click on the table and select to query the top one thousand rows I get a query of the form
SELECT * FROM [<Data_Base_Name>].[<Sender_Server>\<Username>].[<Table_Name>]
which also has the <Sender_Server>\<Username> info in it. The inclusion of <Sender_Server>\<Username> is undesired behaviour in our use case.
How can I instead have the data pushed such that
SELECT * FROM [<Data_Base_Name>].[<Table_Name>]
is the appropriate query?
By default, .to_sql() assumes the default schema for the current user unless schema="schema_name" is provided. Say, for example, the database contains a table named dbo.thing and the database user named joan has a default schema named engineering. If Joan does
df.to_sql("thing", engine, if_exists="append", index=False)
it will not append to dbo.thing but will instead try to create an engineering.thing table and append to that. If Joan wants to append to dbo.thing she needs to do
df.to_sql("thing", engine, schema="dbo", if_exists="append", index=False)

Can I use the same code (SQLAlchemy) to create a database in another database system

I have a PostgreSQL database whose schema is defined with SQLAlchemy. I now have to create the same database schema in MS SQL Server. Can I use the same code I've been using to create it, just by changing the connection engine?
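Largely yes, as long as the schema is defined through SQLAlchemy metadata (Core Table objects or ORM models): create_all() emits dialect-appropriate DDL for whichever engine you pass it. A minimal sketch (the table definition and both URLs are placeholders); note that create_all() creates tables, not the database itself, and any dialect-specific types or raw SQL in your code will not translate automatically:
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

metadata = MetaData()

# Example table definition; your real tables/models stay unchanged.
Table('users', metadata,
      Column('id', Integer, primary_key=True),
      Column('name', String(100)))

pg_engine = create_engine('postgresql://user:password@host:5432/dbname')
ms_engine = create_engine('mssql+pyodbc://user:password@my_dsn')

# The same metadata generates the schema on either backend.
metadata.create_all(pg_engine)  # emits PostgreSQL DDL
metadata.create_all(ms_engine)  # emits SQL Server DDL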

Python dataframe to PostgreSQL

I imported a table from a SQL Server database as a dataframe, and I am trying to export it as a PostgreSQL table. This is what I am doing:
from sqlalchemy import create_engine
import psycopg2
engine = create_engine('postgresql://postgres:000000@localhost:5432/sinistrePY')
df.to_sql('table_name3', engine)
and this is the result: the data integration is working fine, but
I get the table with read-only privileges,
the data types are not as they should be,
there is no primary key,
and I don't need the index column.
How can I fix that and control how I want my table to be, from my notebook or directly from the PostgreSQL server if needed? Thanks.
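
A sketch addressing the last three points (the column names and types below are placeholders for your real ones): pass dtype= to control the column types, index=False to drop the index column, and add the primary key with a follow-up ALTER TABLE, since to_sql cannot declare one. The read-only issue is a matter of PostgreSQL grants on the role you connect as, not of to_sql itself:
from sqlalchemy import create_engine, text
from sqlalchemy.types import Integer, String, Date

engine = create_engine('postgresql://postgres:000000@localhost:5432/sinistrePY')

df.to_sql(
    'table_name3',
    engine,
    if_exists='replace',
    index=False,              # drop the unwanted index column
    dtype={                   # placeholder column names and types
        'id': Integer(),
        'label': String(50),
        'created_on': Date(),
    },
)

# to_sql cannot create a primary key, so add it afterwards:
with engine.begin() as conn:
    conn.execute(text('ALTER TABLE table_name3 ADD PRIMARY KEY (id)'))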

Reading SQL files into a Pandas table

I have a SQL file titled "DreamMarket2017_product.sql". I believe it's MySQL.
How do I read this file into a Jupyter Notebook using PyMySQL? Or, should I use Psycopg2?
I'm much more familiar w/ Psycopg2 than PyMySQL.
Both PyMySQL and Psycopg request a database name. There is no database; I only have the files.
Do I need to create a database using a GUI like pgAdmin and load the SQL tables into the newly created database?
Also, I'm still waiting to hear from the university that created the dataset.
Yes, you need to create a database and load the data into a table, or import the table backup you have:
connection = psycopg2.connect(user="dummy", password="1234", host="any", port="1234", database="demo")
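
Since the dump appears to be MySQL, PyMySQL is the better fit than Psycopg2. A sketch, assuming you first load the file with the mysql command-line client (e.g. mysql demo < DreamMarket2017_product.sql); the credentials, database name, and table name are placeholders:
import pandas as pd
from sqlalchemy import create_engine

# PyMySQL as the driver behind a SQLAlchemy engine (placeholder credentials).
engine = create_engine('mysql+pymysql://dummy:1234@localhost/demo')

# Read one of the restored tables into a dataframe.
df = pd.read_sql('SELECT * FROM product', engine)  # placeholder table name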

SAS to Python - Remote Access to Oracle database

I understand how to connect remotely to an Oracle database in Python:
import cx_Oracle
connstr = 'Oracle_Username/Oracle_Password@IP_Address:Port/Instance'
conn = cx_Oracle.connect(connstr)
However, I have SAS scripts and want to mimic the same procedure in Python, but I am struggling to understand the role of path and schema in the following SAS script and whether they need to be incorporated into the Python script:
libname ora oracle user=oracle-user
password=oracle-password
path=oracle-path
schema=schema-name;
I have read through the documentation, but not being familiar with SAS, I still find it very vague.
The PATH= option specifies the TNS entry for the Oracle database. Get your DBA to translate that for you into the syntax you need to replace the @IP_Address:Port/Instance in your connection string.
The value after USER= is what you called Oracle_Username and the value after PASSWORD= is what you called Oracle_Password.
The value of the SCHEMA= option specifies which schema in Oracle the SAS libref will use. So if the SAS code later references a dataset by the name ORA.MYTABLE then it means the table MYTABLE in the schema schema-name. In direct Oracle code you could reference that table directly as schema-name.MYTABLE.
PATH= is the TNS entry configured in Oracle (server-related details are configured there).
SCHEMA= is the user schema.
If you are able to connect to Oracle, you can access any table like below:
schema_name.table_name
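
Putting that together in Python, a sketch using cx_Oracle.makedsn (the host, port, service name, and schema are placeholders; your DBA can supply the real values from the TNS entry that PATH= points to):
import cx_Oracle

# The SAS PATH= option corresponds to the TNS/DSN details built here.
dsn = cx_Oracle.makedsn('IP_Address', 1521, service_name='Instance')  # placeholder values
conn = cx_Oracle.connect(user='Oracle_Username',
                         password='Oracle_Password', dsn=dsn)

cursor = conn.cursor()
# The SAS SCHEMA= option becomes an explicit schema qualifier in the SQL.
cursor.execute('SELECT * FROM schema_name.mytable')  # placeholder schema/table
rows = cursor.fetchall()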
