I am migrating from SQLite to PostgreSQL.
Here is dumpfile.sql:
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE apscheduler_jobs (
id VARCHAR(191) NOT NULL,
next_run_time FLOAT,
job_state BLOB NOT NULL,
PRIMARY KEY (id)
);
I am trying to create the same schema that APScheduler creates in SQLite, following the migration from this repo: https://github.com/jarekwg/django-apscheduler/blob/master/django_apscheduler/migrations/0001_initial.py
The resulting SQL commands are:
CREATE TABLE "apscheduler_jobs" ("id" varchar(255) NOT NULL PRIMARY KEY, "next_run_time" NUMERIC(11,2) NOT NULL, "job_state" bytea NOT NULL);
CREATE INDEX "apscheduler_jobs_83d3412e" ON "apscheduler_jobs" ("next_run_time");
CREATE INDEX "apscheduler_jobs_id_9f0be75e_like" ON "apscheduler_jobs" ("id" varchar_pattern_ops);
I prepared the database with the following:
CREATE DATABASE apscheduler;
GRANT ALL PRIVILEGES ON database apscheduler to uih;
My question:
Referring to this page, I have tried casting the input to bytea:
https://www.postgresql.org/docs/9.4/static/functions-string.html
INSERT INTO "apscheduler_jobs"
VALUES('c891c2288a0f4585b169a335dd57b971',
1.51008480006619596482e+09,
decode(X'800495EA010000000000007D94288C126D6973666972655F67726163655F74696D65944B018C086578656375746F72948C0764656661756C74948C0D6D61785F696E7374616E636573944B018C0466756E63948C1E7363686564756C65722E747269676765723A747269676765725F66756E63948C08636F616C6573636594888C066B7761726773947D948C0774726967676572948C1961707363686564756C65722E74726967676572732E64617465948C0B44617465547269676765729493942981947D94288C0872756E5F64617465948C086461746574696D65948C086461746574696D65949394430A07E10B08030000010294948C0E646174657574696C2E747A2E747A948C08747A6F66667365749493942981947D94288C075F6F6666736574948C086461746574696D65948C0974696D6564656C74619493944B004D70624B00879452948C055F6E616D65944E7562869452948C0776657273696F6E944B01756268234B018C046E616D65948C206338393163323238386130663435383562313639613333356464353762393731948C0269649468258C0D6E6578745F72756E5F74696D659468228C04617267739468258C382F6170692F7363686564756C65732F747269676765722F63383931633232383861306634353835623136396133333564643537623937312F948694752E', 'base64')
)
[2016-11-17 17:16:47] [42883] ERROR: function decode(bit, unknown) does not exist
Hint: No function matches the given name and argument types. You might need to add explicit type casts.
Position: 117
What is the correct syntax to restore my sqlite dump file to postgres?
Update:
I have a new question about this. The second value comes from a manual INSERT, which I am somewhat worried about. I will test it and report back here.
Update:
The decimal value is not a problem.
I'm not sure I understand you correctly. I truncated your long string, changed it from a bit literal to bytea, and inserted the value. Is this what you are trying to do?
t=# create table b5 (bta bytea);
CREATE TABLE
t=# insert into b5 select X'800495EA0100';
ERROR: column "bta" is of type bytea but expression is of type bit
LINE 1: insert into b5 select X'800495EA0100';
^
HINT: You will need to rewrite or cast the expression.
t=# insert into b5 select E'\\x800495EA0100';
INSERT 0 1
t=# select * from b5;
bta
----------------
\x800495ea0100
(1 row)
For your insert to work, it should look like:
INSERT INTO "apscheduler_jobs"
VALUES('c891c2288a0f4585b169a335dd57b971',
1.51008480006619596482e+09,
E'\\x800495EA010000000000007D94288C126D6973666972655F67726163655F74696D65944B018C086578656375746F72948C0764656661756C74948C0D6D61785F696E7374616E636573944B018C0466756E63948C1E7363686564756C65722E747269676765723A747269676765725F66756E63948C08636F616C6573636594888C066B7761726773947D948C0774726967676572948C1961707363686564756C65722E74726967676572732E64617465948C0B44617465547269676765729493942981947D94288C0872756E5F64617465948C086461746574696D65948C086461746574696D65949394430A07E10B08030000010294948C0E646174657574696C2E747A2E747A948C08747A6F66667365749493942981947D94288C075F6F6666736574948C086461746574696D65948C0974696D6564656C74619493944B004D70624B00879452948C055F6E616D65944E7562869452948C0776657273696F6E944B01756268234B018C046E616D65948C206338393163323238386130663435383562313639613333356464353762393731948C0269649468258C0D6E6578745F72756E5F74696D659468228C04617267739468258C382F6170692F7363686564756C65732F747269676765722F63383931633232383861306634353835623136396133333564643537623937312F948694752E'
)
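If there are many rows to migrate, rewriting each X'…' literal by hand is tedious. A small helper can rewrite SQLite hex blob literals into the PostgreSQL escape form shown above (the function name `sqlite_blob_to_pg_bytea` is mine, for illustration):

```python
import re

def sqlite_blob_to_pg_bytea(literal):
    """Convert a SQLite hex blob literal like X'DEADBEEF' to a
    PostgreSQL bytea escape literal like E'\\\\xDEADBEEF'."""
    m = re.fullmatch(r"[xX]'([0-9A-Fa-f]*)'", literal.strip())
    if m is None:
        raise ValueError("not a SQLite hex blob literal: %r" % literal)
    # bytea escape format: E'\\x' followed by the same hex digits
    return "E'\\\\x%s'" % m.group(1)

print(sqlite_blob_to_pg_bytea("X'800495EA0100'"))  # E'\\x800495EA0100'
```

You could run the dump file through this (e.g. with `re.sub` over each line) before feeding it to psql.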
Related
I test SQL commands with Python. Generally everything is okay, but in this case it doesn't work.
import sqlite3

conn = sqlite3.connect('Chinook_Sqlite.sqlite')
cursor = conn.cursor()
result = None
try:
    cursor.executescript("""CREATE TABLE <New>;""")
    result = cursor.fetchall()
except sqlite3.DatabaseError as err:
    print("Error: ", err)
else:
    conn.commit()
print(result)
conn.close()
The table name is written without <>, and must be followed in parentheses by the column definitions: name, type, and optionally a default value.
https://www.sqlite.org/lang_createtable.html - thanks @deceze
The "CREATE TABLE" command is used to create a new table in an SQLite
database. A CREATE TABLE command specifies the following attributes of
the new table:
The name of the new table.
The database in which the new table is created. Tables may be created in the main database, the temp database, or in any attached
database.
The name of each column in the table.
The declared type of each column in the table.
A default value or expression for each column in the table.
A default collation sequence to use with each column.
Optionally, a PRIMARY KEY for the table. Both single column and composite (multiple column) primary keys are supported.
A set of SQL constraints for each table. SQLite supports UNIQUE, NOT NULL, CHECK and FOREIGN KEY constraints.
Optionally, a generated column constraint.
Whether the table is a WITHOUT ROWID table.
cursor.executescript("""CREATE TABLE New ( AuthorId INT IDENTITY (1, 1) NOT NULL, AuthorFirstName NVARCHAR (20) NOT NULL, AuthorLastName NVARCHAR (20) NOT NULL, AuthorAge INT NOT NULL);""")
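To confirm that the corrected statement is accepted by SQLite, here is a quick check against an in-memory database (the sample row is invented):

```python
import sqlite3

# In-memory database, so nothing is written to disk
conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
cursor.executescript("""CREATE TABLE New (
    AuthorId INT IDENTITY (1, 1) NOT NULL,
    AuthorFirstName NVARCHAR (20) NOT NULL,
    AuthorLastName NVARCHAR (20) NOT NULL,
    AuthorAge INT NOT NULL);""")
cursor.execute("INSERT INTO New VALUES (1, 'Jane', 'Doe', 42)")
conn.commit()
rows = cursor.execute("SELECT AuthorFirstName, AuthorAge FROM New").fetchall()
print(rows)  # [('Jane', 42)]
conn.close()
```

Note that SQLite does not enforce IDENTITY or NVARCHAR semantics; it parses them as type names and applies its affinity rules, which is why the statement runs without error.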
I have a few tables in SQLite and I am trying to figure out how to reset the auto-increment field.
I read that DELETE FROM tablename should delete everything and reset the auto-increment field back to 0, but when I do this it just deletes the data. When a new record is inserted, the autoincrement picks up where it left off before the delete.
My ident field properties are as follows:
Field Type: integer
Field Flags: PRIMARY KEY, AUTOINCREMENT, UNIQUE
Does it matter I built the table in SQLite Maestro and I am executing the DELETE statement in SQLite Maestro as well?
Any help would be great.
Try this:
delete from your_table;
delete from sqlite_sequence where name='your_table';
SQLite Autoincrement
SQLite keeps track of the largest
ROWID that a table has ever held using
the special SQLITE_SEQUENCE table. The
SQLITE_SEQUENCE table is created and
initialized automatically whenever a
normal table that contains an
AUTOINCREMENT column is created. The
content of the SQLITE_SEQUENCE table
can be modified using ordinary UPDATE,
INSERT, and DELETE statements. But
making modifications to this table
will likely perturb the AUTOINCREMENT
key generation algorithm. Make sure
you know what you are doing before you
undertake such changes.
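A self-contained demonstration of the two DELETE statements, using Python's built-in sqlite3 module (table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE your_table "
            "(id INTEGER PRIMARY KEY AUTOINCREMENT, val TEXT)")
cur.executemany("INSERT INTO your_table (val) VALUES (?)",
                [("a",), ("b",), ("c",)])

cur.execute("DELETE FROM your_table")                               # rows gone, counter stays
cur.execute("DELETE FROM sqlite_sequence WHERE name='your_table'")  # counter reset

cur.execute("INSERT INTO your_table (val) VALUES ('d')")
new_id = cur.execute("SELECT id FROM your_table").fetchone()[0]
print(new_id)  # 1 -- numbering restarted from the beginning
conn.close()
```

Without the second DELETE, the new row would get id 4.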
You can also reset the sequence with an UPDATE after deleting the rows in your table:
UPDATE SQLITE_SEQUENCE SET SEQ=0 WHERE NAME='table_name';
As an alternate option, if you have the Sqlite Database Browser and are more inclined to a GUI solution, you can edit the sqlite_sequence table where field name is the name of your table. Double click the cell for the field seq and change the value to 0 in the dialogue box that pops up.
If you want to reset every RowId via a content provider, try this:
rowCounter = 1;
do {
    rowId = cursor.getInt(0);
    ContentValues values = new ContentValues();
    values.put(Table_Health.COLUMN_ID, rowCounter);
    updateData2DB(context, values, rowId);
    rowCounter++;
} while (cursor.moveToNext());

public static void updateData2DB(Context context, ContentValues values, int rowId) {
    Uri uri = Uri.parse(ContentProvider.CONTENT_URI_HEALTH + "/" + rowId);
    context.getContentResolver().update(uri, values, null, null);
}
If you are working with Python and you want to delete all records from a table and reset the AUTOINCREMENT counter:
You have this table
tables_connection_db.execute("CREATE TABLE MY_TABLE_DB (id_record INTEGER PRIMARY KEY AUTOINCREMENT, value_record real)")
So if you have added some records:
connection_db = sqlite3.connect("name_file.db")
tables_connection_db = connection_db.cursor()
tables_connection_db.execute("DELETE FROM MY_TABLE_DB")  # delete all records
connection_db.commit()
name_table = "MY_TABLE_DB"
# seq=0 so that the next inserted row gets id 1
tables_connection_db.execute("UPDATE sqlite_sequence SET seq=0 WHERE name=?", (name_table,))
connection_db.commit()
connection_db.close()
Probably a stupid question, but I'm just trying to make a new table in my database called 'projects'. I can't find any syntax errors, but every time I run it I get a syntax error on the final line: sqlite3.OperationalError: near "MAX": syntax error. The table is not created, though it works without the pjct_pic varbinary(MAX) not null column.
connie = sqlite3.connect('eeg.db')
c = connie.cursor()
c.execute("""
CREATE TABLE IF NOT EXISTS projects(
id INTEGER PRIMARY KEY AUTOINCREMENT,
pjct_name TEXT,
pjct_nick TEXT,
pjct_time TEXT,
pjct_pic varbinary(MAX) not null
)
""")
The VARBINARY(MAX) type is available in SQL Server, but not in SQLite. In SQLite, the closest actual type is BLOB. (Under SQLite's type-affinity rules, varbinary would map to BLOB affinity anyway, but the (MAX) length specifier is a syntax error, since SQLite only allows numbers inside a type name's parentheses.) So I recommend just using this create statement:
CREATE TABLE IF NOT EXISTS projects (
id INTEGER PRIMARY KEY AUTOINCREMENT,
pjct_name TEXT,
pjct_nick TEXT,
pjct_time TEXT,
pjct_pic BLOB NOT NULL
);
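A quick sanity check with Python's sqlite3 module, showing the original column type failing and the BLOB version accepting raw bytes (the sample bytes are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()

# varbinary(MAX) is SQL Server syntax; SQLite rejects the bare word MAX
try:
    c.execute("CREATE TABLE bad (pic varbinary(MAX) NOT NULL)")
    failed = False
except sqlite3.OperationalError as err:
    failed = True
    print(err)  # near "MAX": syntax error

# BLOB stores raw bytes unchanged
c.execute("CREATE TABLE projects (pjct_pic BLOB NOT NULL)")
c.execute("INSERT INTO projects VALUES (?)", (b"\x89PNG",))
stored = c.execute("SELECT pjct_pic FROM projects").fetchone()[0]
print(stored)  # b'\x89PNG'
conn.close()
```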
I have a SQL ODBC statement in Python, but it always returns an error. On SQL Server the statement works fine. Can anyone help me here?
execute("IF NOT EXISTS (SELECT * FROM sysobjects WHERE name='tablename' AND xtype='U')
CREATE TABLE tablename (id INTEGER PRIMARY KEY, fieldA NVARCHAR(Max) NOT NULL, fieldB NVARCHAR(Max) NOT NULL)")
EDIT: Please note that you have to use a triple-quoted string for a multi-line string in Python. If the execute statement is pasted into your script exactly as written above, it most likely fails because of the newline.
execute("""IF NOT EXISTS (SELECT * FROM sysobjects WHERE name='tablename' AND xtype='U')
CREATE TABLE tablename (id INTEGER PRIMARY KEY, fieldA NVARCHAR(Max) NOT NULL, fieldB NVARCHAR(Max) NOT NULL)""")
I recommend just using the pandas to_sql function. It will create the table with all necessary columns if it does not exist.
You can use the if_exists parameter to handle an already existing table ('append', 'replace', or 'fail').
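A minimal sketch of that approach. It uses an in-memory SQLite database for the demonstration, and the DataFrame columns are invented to match the question's table:

```python
import sqlite3
import pandas as pd

df = pd.DataFrame({"id": [1, 2],
                   "fieldA": ["a", "b"],
                   "fieldB": ["x", "y"]})

conn = sqlite3.connect(":memory:")
# to_sql creates the table if it is missing; 'append' keeps existing rows
df.to_sql("tablename", conn, if_exists="append", index=False)

count = conn.execute("SELECT COUNT(*) FROM tablename").fetchone()[0]
print(count)  # 2
conn.close()
```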
So I am trying to add a new entry to my MySQL database.
The problem is that it increases the id, but does not add the entry.
After a bit of googling I found that a failed INSERT query also increases the AUTO_INCREMENT value (id in my case).
The MySQL table is created using
CREATE TABLE IF NOT EXISTS TS3_STAMM_1 (id INT(6) UNSIGNED AUTO_INCREMENT PRIMARY KEY, name VARCHAR(64) NOT NULL, ts3_uid VARCHAR(64) NOT NULL, points INT(8) UNSIGNED NOT NULL); which is called by the function qServer.execute(querystring) via Python's MySQLdb module.
Then I use qString = "INSERT INTO TS3_STAMM_1 (name, ts3_uid, points) VALUES ('{}', '{}', {})".format(name, uid, pnts) (the data types are correct, I quadruple-checked at least) with the function qServer.execute(qString) to insert a new entry into the database.
It increments the id, but it does not add an entry. So my guess is that it is a failed query, but why? How does it happen? How do I fix it?
Simple SELECT queries work fine the same way, and adding data manually works fine too. Only the Python query fails.
Note: qServer is the connection to the server, and its defined with:
try:
qConn = MySQLdb.connect(host="...", user="...", passwd="...", db="...")
qServer = qConn.cursor()
except OperationalError:
print("Cannot connect to mySQL Database! Aborting...")
exit(1)
Use commit, Luke.
>>> cursor.execute("INSERT INTO employees (first_name) VALUES (%s)", ('Jane', ))
>>> qConn.commit()
Using str.format to build a SQL query is a bad idea: the values are not escaped, so any quote character in the data breaks the statement (and it opens the door to SQL injection). Use a parameterized query as above.
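Here is the same insert done with a parameterized query. The demonstration uses the stdlib sqlite3 module (so the MySQL-specific column types are adapted), but the pattern carries over to MySQLdb with %s placeholders:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# AUTO_INCREMENT / UNSIGNED are MySQL syntax; adapted for SQLite here
cur.execute("""CREATE TABLE TS3_STAMM_1 (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name VARCHAR(64) NOT NULL,
    ts3_uid VARCHAR(64) NOT NULL,
    points INT NOT NULL)""")

# Parameterized insert: the driver quotes and escapes the values itself.
# sqlite3 uses ? placeholders; MySQLdb uses %s, but the pattern is the same.
cur.execute("INSERT INTO TS3_STAMM_1 (name, ts3_uid, points) VALUES (?, ?, ?)",
            ("Jane", "abc123=", 10))
conn.commit()  # without commit() the insert is rolled back when the connection closes

row = cur.execute("SELECT name, points FROM TS3_STAMM_1").fetchone()
print(row)  # ('Jane', 10)
conn.close()
```

The ts3_uid value contains a "=", which is typical of TeamSpeak UIDs; with str.format a value containing a quote would have broken the query, while the placeholder version handles it safely.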