I have a SQL statement that I execute over ODBC from Python, but it always returns an error. Run directly on SQL Server, the same statement works fine. Can anyone help me here?
execute("IF NOT EXISTS (SELECT * FROM sysobjects WHERE name='tablename' AND xtype='U')
CREATE TABLE tablename (id INTEGER PRIMARY KEY, fieldA NVARCHAR(Max) NOT NULL, fieldB NVARCHAR(Max) NOT NULL)")
EDIT: Note that you have to use triple quotes for a multi-line string in Python. If the execute statement is pasted into your script exactly as written above, it most likely fails because of the newline.
execute("""IF NOT EXISTS (SELECT * FROM sysobjects WHERE name='tablename' AND xtype='U')
CREATE TABLE tablename (id INTEGER PRIMARY KEY, fieldA NVARCHAR(Max) NOT NULL, fieldB NVARCHAR(Max) NOT NULL)""")
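To see why the one-line form trips Python up, here is a minimal, database-free sketch: a triple-quoted string may contain a literal newline, while an ordinary quoted string may not.

```python
# Triple-quoted: the embedded newline is legal, so the SQL reaches the driver intact.
sql = """IF NOT EXISTS (SELECT * FROM sysobjects WHERE name='tablename' AND xtype='U')
CREATE TABLE tablename (id INTEGER PRIMARY KEY, fieldA NVARCHAR(MAX) NOT NULL, fieldB NVARCHAR(MAX) NOT NULL)"""
assert "\n" in sql

# Single-quoted with a raw newline inside: Python rejects the source outright.
try:
    compile('x = "line one\nline two"', "<demo>", "exec")
    is_syntax_error = False
except SyntaxError:
    is_syntax_error = True
```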
I recommend just using the pandas to_sql function. It creates the table with all necessary columns if it does not exist, and the if_exists parameter controls what happens when the table already exists ('fail', 'replace', or 'append').
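For instance, a minimal sketch using an in-memory SQLite connection (the table and column names are just placeholders matching the question):

```python
import sqlite3

import pandas as pd

# to_sql creates the table if it is missing; if_exists controls the alternative.
df = pd.DataFrame({"fieldA": ["hello"], "fieldB": ["world"]})
conn = sqlite3.connect(":memory:")
df.to_sql("tablename", conn, if_exists="append", index=False)

rows = conn.execute("SELECT fieldA, fieldB FROM tablename").fetchall()
```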
I have a simple INSERT statement in sqlite3. The exact same statement works fine in the console, but the Python version doesn't work, and Python doesn't raise any insert error. Any suggestions on how to fix this insert in Python?
Insert statement:
INSERT INTO facilities (facility_id,facility_name) VALUES ('239SIE39', 'Lobby A');
Table creation statement:
CREATE TABLE IF NOT EXISTS facilities (id INTEGER PRIMARY KEY AUTOINCREMENT UNIQUE, facility_id, facility_name)
Function to store data:
def store_data(table_name: str, rec: dict):
    keys = ','.join(rec.keys())
    values = tuple(rec.values())
    stmt = f"INSERT INTO {table_name} ({keys}) VALUES {values}"
    cur.execute(stmt)
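Since f"... VALUES {values}" expands the tuple's repr, which happens to be valid SQL for these particular values, the statement itself is not the problem; the likely culprit is a missing conn.commit(), without which the insert is silently rolled back when the connection closes. A minimal sketch of a safer version, using placeholders so values containing apostrophes cannot break the query (the in-memory database is just for demonstration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS facilities (id INTEGER PRIMARY KEY AUTOINCREMENT UNIQUE, facility_id, facility_name)")

def store_data(table_name: str, rec: dict):
    keys = ",".join(rec.keys())
    placeholders = ",".join("?" for _ in rec)
    stmt = f"INSERT INTO {table_name} ({keys}) VALUES ({placeholders})"
    cur.execute(stmt, tuple(rec.values()))
    conn.commit()  # without this, the insert vanishes when the connection closes

store_data("facilities", {"facility_id": "239SIE39", "facility_name": "Lobby A"})
```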
I am testing SQL commands from Python. Generally everything is OK, but in this case it doesn't work.
import sqlite3

conn = sqlite3.connect('Chinook_Sqlite.sqlite')
cursor = conn.cursor()
result = None
try:
    cursor.executescript("""CREATE TABLE <New>;""")
    result = cursor.fetchall()
except sqlite3.DatabaseError as err:
    print("Error: ", err)
else:
    conn.commit()
print(result)
conn.close()
The table name is written without the angle brackets, and the statement must include a column list in parentheses: each column's name and type, optionally followed by a default value.
https://www.sqlite.org/lang_createtable.html (thanks @deceze)
The "CREATE TABLE" command is used to create a new table in an SQLite
database. A CREATE TABLE command specifies the following attributes of
the new table:
The name of the new table.
The database in which the new table is created. Tables may be created in the main database, the temp database, or in any attached
database.
The name of each column in the table.
The declared type of each column in the table.
A default value or expression for each column in the table.
A default collation sequence to use with each column.
Optionally, a PRIMARY KEY for the table. Both single column and composite (multiple column) primary keys are supported.
A set of SQL constraints for each table. SQLite supports UNIQUE, NOT NULL, CHECK and FOREIGN KEY constraints.
Optionally, a generated column constraint.
Whether the table is a WITHOUT ROWID table.
Note that IDENTITY (1, 1) is SQL Server syntax; SQLite parses it as part of the column's type name and silently ignores it. The SQLite equivalent is INTEGER PRIMARY KEY:
cursor.executescript("""CREATE TABLE New (
    AuthorId INTEGER PRIMARY KEY AUTOINCREMENT,
    AuthorFirstName NVARCHAR (20) NOT NULL,
    AuthorLastName NVARCHAR (20) NOT NULL,
    AuthorAge INT NOT NULL);""")
So I am trying to add a new entry to my MySQL database.
The problem is that it increments the id but does not actually add the entry.
After a bit of googling I found that a failed INSERT query still consumes the AUTO_INCREMENT value (id in my case).
The MySQL table is created with
CREATE TABLE IF NOT EXISTS TS3_STAMM_1 (id INT(6) UNSIGNED AUTO_INCREMENT PRIMARY KEY, name VARCHAR(64) NOT NULL, ts3_uid VARCHAR(64) NOT NULL, points INT(8) UNSIGNED NOT NULL);
which is executed via qServer.execute(querystring) using Python's MySQLdb module.
Then I use qString = "INSERT INTO TS3_STAMM_1 (name, ts3_uid, points) VALUES ('{}', '{}', {})".format(name, uid, pnts) (the datatypes are correct, I quadruple-checked) with qServer.execute(qString) to insert a new entry into the database.
It increments the ID but does not add the row, so my guess is a failed query. But why does this happen, and how do I fix it?
Simple SELECT queries work fine the same way, and adding data manually also works. Only the Python INSERT fails.
Note: qServer is the cursor on the server connection, defined with:
try:
    qConn = MySQLdb.connect(host="...", user="...", passwd="...", db="...")
    qServer = qConn.cursor()
except OperationalError:
    print("Cannot connect to mySQL Database! Aborting...")
    exit(1)
Use commit Luke.
>>> cursor.execute("INSERT INTO employees (first_name) VALUES (%s)", ('Jane', ))
>>> qConn.commit()
Using str.format to build SQL queries is a bad idea; it invites SQL injection and quoting bugs. Pass parameters to execute instead.
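The uncommitted insert is rolled back when the connection closes, but the AUTO_INCREMENT value it consumed does not roll back, which produces exactly the "id increments but no row appears" symptom. A sketch of the visibility effect, using sqlite3 as a runnable stand-in since the commit rule is the same:

```python
import os
import sqlite3
import tempfile

# Two connections to the same database file stand in for two MySQL sessions.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)
writer.execute("CREATE TABLE ts3_stamm (id INTEGER PRIMARY KEY, name TEXT)")
writer.commit()

# Insert but do not commit: other sessions cannot see the row yet.
writer.execute("INSERT INTO ts3_stamm (name) VALUES (?)", ("Luke",))
reader = sqlite3.connect(path)
before = reader.execute("SELECT COUNT(*) FROM ts3_stamm").fetchone()[0]

writer.commit()  # now the row becomes visible to other connections
after = reader.execute("SELECT COUNT(*) FROM ts3_stamm").fetchone()[0]
```

With MySQLdb the fix has the same shape: qServer.execute("INSERT INTO TS3_STAMM_1 (name, ts3_uid, points) VALUES (%s, %s, %s)", (name, uid, pnts)) followed by qConn.commit().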
I am migrating from sqlite to postgres.
Here is dumpfile.sql
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE apscheduler_jobs (
id VARCHAR(191) NOT NULL,
next_run_time FLOAT,
job_state BLOB NOT NULL,
PRIMARY KEY (id)
);
I am trying to create the same schema that APScheduler creates in SQLite, following the migration from this repo: https://github.com/jarekwg/django-apscheduler/blob/master/django_apscheduler/migrations/0001_initial.py
Then the sql command is:
CREATE TABLE "apscheduler_jobs" ("id" varchar(255) NOT NULL PRIMARY KEY, "next_run_time" NUMERIC(11,2) NOT NULL, "job_state" bytea NOT NULL);
CREATE INDEX "apscheduler_jobs_83d3412e" ON "apscheduler_jobs" ("next_run_time");
CREATE INDEX "apscheduler_jobs_id_9f0be75e_like" ON "apscheduler_jobs" ("id" varchar_pattern_ops);
I prepared the database as follows:
CREATE DATABASE apscheduler;
GRANT ALL PRIVILEGES ON database apscheduler to uih;
My question:
I have tried casting the input to bytea, referring to https://www.postgresql.org/docs/9.4/static/functions-string.html:
INSERT INTO "apscheduler_jobs"
VALUES('c891c2288a0f4585b169a335dd57b971',
1.51008480006619596482e+09,
decode(X'800495EA010000000000007D94288C126D6973666972655F67726163655F74696D65944B018C086578656375746F72948C0764656661756C74948C0D6D61785F696E7374616E636573944B018C0466756E63948C1E7363686564756C65722E747269676765723A747269676765725F66756E63948C08636F616C6573636594888C066B7761726773947D948C0774726967676572948C1961707363686564756C65722E74726967676572732E64617465948C0B44617465547269676765729493942981947D94288C0872756E5F64617465948C086461746574696D65948C086461746574696D65949394430A07E10B08030000010294948C0E646174657574696C2E747A2E747A948C08747A6F66667365749493942981947D94288C075F6F6666736574948C086461746574696D65948C0974696D6564656C74619493944B004D70624B00879452948C055F6E616D65944E7562869452948C0776657273696F6E944B01756268234B018C046E616D65948C206338393163323238386130663435383562313639613333356464353762393731948C0269649468258C0D6E6578745F72756E5F74696D659468228C04617267739468258C382F6170692F7363686564756C65732F747269676765722F63383931633232383861306634353835623136396133333564643537623937312F948694752E', 'base64')
)
[2016-11-17 17:16:47] [42883] ERROR: function decode(bit, unknown) does not exist
Hint: No function matches the given name and argument types. You might need to add explicit type casts.
Position: 117
What is the correct syntax to restore my sqlite dump file to postgres?
Update:
I have a follow-up concern: the second line of the VALUES list (the float) comes from a manual INSERT, and I was somewhat worried about it. I tested it, and the decimal value is not a problem.
I'm not sure I understood you correctly. I truncated your long string, changed the literal from bit (X'...') to bytea, and inserted the value. Is this what you are trying to do?
t=# create table b5 (bta bytea);
CREATE TABLE
t=# insert into b5 select X'800495EA0100';
ERROR: column "bta" is of type bytea but expression is of type bit
LINE 1: insert into b5 select X'800495EA0100';
^
HINT: You will need to rewrite or cast the expression.
t=# insert into b5 select E'\\x800495EA0100';
INSERT 0 1
t=# select * from b5;
bta
----------------
\x800495ea0100
(1 row)
For your insert to work, it should look like:
INSERT INTO "apscheduler_jobs"
VALUES('c891c2288a0f4585b169a335dd57b971',
1.51008480006619596482e+09,
E'\\x800495EA010000000000007D94288C126D6973666972655F67726163655F74696D65944B018C086578656375746F72948C0764656661756C74948C0D6D61785F696E7374616E636573944B018C0466756E63948C1E7363686564756C65722E747269676765723A747269676765725F66756E63948C08636F616C6573636594888C066B7761726773947D948C0774726967676572948C1961707363686564756C65722E74726967676572732E64617465948C0B44617465547269676765729493942981947D94288C0872756E5F64617465948C086461746574696D65948C086461746574696D65949394430A07E10B08030000010294948C0E646174657574696C2E747A2E747A948C08747A6F66667365749493942981947D94288C075F6F6666736574948C086461746574696D65948C0974696D6564656C74619493944B004D70624B00879452948C055F6E616D65944E7562869452948C0776657273696F6E944B01756268234B018C046E616D65948C206338393163323238386130663435383562313639613333356464353762393731948C0269649468258C0D6E6578745F72756E5F74696D659468228C04617267739468258C382F6170692F7363686564756C65732F747269676765722F63383931633232383861306634353835623136396133333564643537623937312F948694752E'
)
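If the dump contains many such rows, rewriting every X'...' literal by hand is tedious. Here is a small sketch of a helper (the function name is hypothetical, and it assumes the server has standard_conforming_strings on, so that '\x...' is read as a bytea hex literal, equivalent to the E'\\x...' form above):

```python
import re

def sqlite_blob_to_pg(sql: str) -> str:
    """Rewrite SQLite X'AB01...' blob literals as PostgreSQL bytea hex literals."""
    return re.sub(r"X'([0-9A-Fa-f]*)'", lambda m: "'\\x" + m.group(1) + "'", sql)

line = "INSERT INTO apscheduler_jobs VALUES('c891', 1.51e+09, X'800495EA0100');"
converted = sqlite_blob_to_pg(line)
```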
The sqlite3 FAQ mentions that an INTEGER PRIMARY KEY column fed a NULL value will autoincrement, but this is not happening for me.
To replicate: create a table in sqlite3 with CREATE TABLE dummy(serial_num INTEGER PRIMARY KEY, name TEXT); and fill it using Python:
import sqlite3 as lite
con = lite.connect('some.db')
cur=con.cursor()
data = "someone's name"
cur.execute("INSERT INTO dummy VALUES(NULL, ?)", data)
con.commit()
The first attribute serial_num is being shown blank while the name attribute is fine. When I do SELECT serial_num FROM dummy I just get a bunch of blank spaces. What am I doing wrong?
This is one of SQLite's quirks. From the fine manual:
According to the SQL standard, PRIMARY KEY should always imply NOT NULL. Unfortunately, due to a long-standing coding oversight, this is not the case in SQLite. Unless the column is an INTEGER PRIMARY KEY SQLite allows NULL values in a PRIMARY KEY column. We could change SQLite to conform to the standard (and we might do so in the future), but by the time the oversight was discovered, SQLite was in such wide use that we feared breaking legacy code if we fixed the problem.
The documentation on INTEGER PRIMARY KEY is a little unclear about exactly what is required for a column to be the special auto-incrementing INTEGER PRIMARY KEY. In practice, the column needs to be declared NOT NULL if you want an inserted NULL to mean "give me the next auto-incrementing value":
create table dummy (
serial_num integer primary key not null,
name text
);
If you leave out the not null, you need to do your inserts like this:
insert into dummy (name) values (?)
to get the auto-increment value for serial_num. Otherwise, SQLite has no way of telling the difference between a NULL meaning "give me the next auto-increment value" and a NULL meaning "put a NULL value in serial_num because the column allows NULLs".
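A runnable sketch of the column-omitting insert. (Note, separately, that the question's cur.execute(..., data) passes a bare string as the parameter sequence; parameters should be a one-element tuple such as (data,).)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dummy (serial_num INTEGER PRIMARY KEY NOT NULL, name TEXT)")

# Omit serial_num entirely and SQLite assigns the next rowid itself.
conn.execute("INSERT INTO dummy (name) VALUES (?)", ("someone's name",))
row = conn.execute("SELECT serial_num, name FROM dummy").fetchone()
```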
The insert syntax provided above does not seem to work in the absence of not null.
Here's an example - note that the ID field is not autoincremented even though I use the insert format that you specified above.
sqlite> .schema logTable
CREATE TABLE logTable (ID INTEGER PRIMARY_KEY, ts REAL, level TEXT, message TEXT);
sqlite> INSERT into LOGTABLE (ts, level, message) VALUES (111, "autoinc test", "autoinc test");
sqlite> select * from logtable where ts = 111;
|111.0|autoinc test|autoinc test
sqlite>
It does work with the NOT NULL workaround.
sqlite> create TABLE logTable (ID INTEGER PRIMARY KEY NOT NULL, ts REAL, level TEXT, message TEXT);
sqlite> INSERT into LOGTABLE (ts, level, message) VALUES (222, "autoinc test", "autoinc test");
sqlite> select * from logtable where ts = 222;
1|222.0|autoinc test|autoinc test
I apologize for posting this as a new answer instead of commenting on the previous one (my reputation score is too low to add comments), but I thought it was important to note that the alternate INSERT statement is not an adequate workaround.