I'm trying to get a list of column names from a table in a SQL database. For example, my database is called "book_shop" and the table whose columns I want to return is called "books".
It's just the string formatting I'm after. I've tried the following...
SELECT *
from information_schema.columns
WHERE table_schema = 'book_shop'
ORDER BY table_name,ordinal_position
I've got the fetchall and execute calls in place, but it says there's something wrong with my SQL syntax.
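For reference, a minimal sketch of running that query from Python through a DB-API cursor, filtering on the table name as well (this assumes a MySQL server reached via mysql.connector; the connection parameters are placeholders):
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="user", password="secret", database="book_shop"
)
cur = conn.cursor()
# Parameterize the schema and table name rather than interpolating
# them into the SQL string
cur.execute(
    """
    SELECT column_name
    FROM information_schema.columns
    WHERE table_schema = %s AND table_name = %s
    ORDER BY ordinal_position
    """,
    ("book_shop", "books"),
)
column_names = [row[0] for row in cur.fetchall()]
print(column_names)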
In SQLAlchemy, a query like
user = sess.query(User).first()
will emit SQL with each column in the select clause qualified with the schema name
select
myschema.user.id, -- (vs user.id)
....
from
myschema.user
For some dialects, like Presto views, that is a syntax error.
Is there any way to make SQLAlchemy skip the schema name in the columns of the SELECT statement, e.g. user.id instead of myschema.user.id, without using aliased() on every table? Or is there a setting such that SQLAlchemy automatically uses an alias?
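For context, the aliased() workaround mentioned above looks like this, a sketch reusing the User model and sess session from the question:
from sqlalchemy.orm import aliased

# aliased() renders the FROM clause as "myschema.user AS user_1", so
# column references become user_1.id instead of myschema.user.id
user_alias = aliased(User)
user = sess.query(user_alias).first()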
I want to import the data of the file "save.csv" into my Actian PSQL database table "new_table", but I get this error:
ProgrammingError: ('42000', "[42000] [PSQL][ODBC Client Interface][LNA][PSQL][SQL Engine]Syntax Error: INSERT INTO 'new_table'<< ??? >> ('name','address','city') VALUES (%s,%s,%s) (0) (SQLPrepare)")
Below is my code:
import pyodbc
import pandas as pd

connection = 'Driver={Pervasive ODBC Interface};server=localhost;DBQ=DEMODATA'
db = pyodbc.connect(connection)
c = db.cursor()
# create table i.e. new_table
csv = pd.read_csv(r"C:\Users\user\Desktop\save.csv")
for row in csv.iterrows():
    insert_command = """INSERT INTO new_table(name,address,city) VALUES (row['name'],row['address'],row['city'])"""
    c.execute(insert_command)
c.commit()
Pandas has a built-in method, DataFrame.to_sql(), that empties a pandas dataframe into a SQL database. This might be what you are looking for. Using it, you don't have to manually insert one row at a time; you can insert the entire dataframe at once.
If you want to keep using your method, the issue might be that the table "new_table" hasn't been created in the database yet, in which case you first need something like this:
CREATE TABLE new_table
(
    Name NVARCHAR(100) NULL,
    Address NVARCHAR(100) NULL,
    City NVARCHAR(100) NULL
)
EDIT:
You can use to_sql() like this on tables that already exist in the database:
df.to_sql(
    "new_table",
    schema="name_of_the_schema",
    con=engine,  # to_sql() needs a SQLAlchemy connectable, not a pyodbc cursor
    if_exists="append",  # <--- appends to a table that already exists
    chunksize=10000,
    index=False,
)
I have tried the same; in my case the table is already created. I just want to insert each row from the pandas dataframe into the database using Actian PSQL.
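In that case, a minimal sketch of the row-by-row insert with bound parameters, assuming the same DSN as in the question and that name, address and city are the CSV's column headers; keeping the table name unquoted and letting pyodbc bind the values avoids the syntax error above:
import pyodbc
import pandas as pd

db = pyodbc.connect('Driver={Pervasive ODBC Interface};server=localhost;DBQ=DEMODATA')
c = db.cursor()
df = pd.read_csv(r"C:\Users\user\Desktop\save.csv")

# ? placeholders let pyodbc bind each value instead of splicing it
# into the SQL string
sql = "INSERT INTO new_table (name, address, city) VALUES (?, ?, ?)"
for _, row in df.iterrows():
    c.execute(sql, (row['name'], row['address'], row['city']))
db.commit()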
I'm connecting to an Oracle database from SQLAlchemy and I want to know when the tables in the database were created. I can access this information through the SQL Developer application, so I know it is stored somewhere, but I don't know if it's possible to get it from SQLAlchemy.
Also, if it's not possible, how should I be getting it?
SQLAlchemy doesn't provide anything to help you get that information; you have to query the database yourself, with something like:
from sqlalchemy import text

with engine.begin() as c:
    result = c.execute(
        text("""
            SELECT created
            FROM dba_objects
            WHERE object_name = :table_name
              AND object_type = 'TABLE'
        """),
        {"table_name": "<<your table name>>"},
    )
Is there any way I can get the datatypes of the fields returned by a query in MySQL? Say I have a query:
SELECT a.*,b.* FROM tbl_name a LEFT JOIN other_tbl b ON a.id=b.first_id
Is there a command I can use in MySQL that will return the names of the fields this query returns, along with their datatypes? I know I could create a view from this query and then DESCRIBE that view, but is there any other way to do it on the fly?
I'm using SQLAlchemy to perform this raw query, and my tables are dynamically generated. Is there a SQLAlchemy way, if not a MySQL way?
You can get the datatypes of a table's columns with this in MySQL (note that information_schema stores the real table name, not the alias used in your query):
SELECT COLUMN_NAME, COLUMN_TYPE
FROM information_schema.COLUMNS
WHERE TABLE_NAME = 'tbl_name'
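For the SQLAlchemy side of the question, one option is to run the query and inspect the DB-API cursor's description, which carries a (name, type_code, ...) tuple per result column. A sketch, assuming a pymysql driver and a placeholder connection URL (the type codes are driver-specific integers):
from sqlalchemy import create_engine, text

engine = create_engine("mysql+pymysql://user:secret@localhost/mydb")  # placeholder URL

with engine.connect() as conn:
    result = conn.execute(text(
        "SELECT a.*, b.* FROM tbl_name a LEFT JOIN other_tbl b ON a.id = b.first_id"
    ))
    # cursor.description holds one 7-item tuple per result column;
    # the first two items are the column name and the driver's type code
    for name, type_code, *_ in result.cursor.description:
        print(name, type_code)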