The JSON below is stored in the field "JsonData" of the table "Profiles". I need to replace the value of "Name" with "Other name" using SQLite in Python.
{"Id":"jwefwawlct6hlb6vs2ekotettc1dxvfv00d238jmbupfr1fnrz","Name":"CarlRisinger20409#outlook.com","SaveType":1,"IdOnClould":"j0ZyVflWPD"}
I have executed SELECT JSON_REPLACE(JsonData, "$.Name", "Other name") FROM "Profiles" WHERE name = "CarlRisinger20409#outlook.com" in SQLite.
It showed {"Id":"jwefwawlct6hlb6vs2ekotettc1dxvfv00d238jmbupfr1fnrz","Name":"Other name","SaveType":1,"IdOnClould":"j0ZyVflWPD"}, but the change is not saved to the database.
Please let me know of a method to replace JSON values in the database with Python. Thank you.
Since it seems you are doing this in Python and you did not show your code, I suggest checking whether you commit the changes:
con.commit()
Check out the sample code.
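To make the commit point concrete, here is a minimal, self-contained sketch (in-memory database and a trimmed sample row, so the names are illustrative): a SELECT with json_replace only computes the new value, so the replacement has to run inside an UPDATE, followed by a commit. It assumes a SQLite build with the JSON1 functions, which recent Python builds include.

```python
import sqlite3

# Self-contained sketch: point connect() at your own database file instead.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Profiles (JsonData TEXT)")
con.execute(
    "INSERT INTO Profiles VALUES (?)",
    ('{"Id":"jwef...","Name":"CarlRisinger20409#outlook.com","SaveType":1}',),
)

# The replacement must run inside an UPDATE; a bare SELECT never writes.
con.execute(
    """
    UPDATE Profiles
    SET JsonData = json_replace(JsonData, '$.Name', ?)
    WHERE json_extract(JsonData, '$.Name') = ?
    """,
    ("Other name", "CarlRisinger20409#outlook.com"),
)
con.commit()  # without this, the change is lost when the connection closes

print(con.execute(
    "SELECT json_extract(JsonData, '$.Name') FROM Profiles").fetchone()[0])
```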
I am trying to use pyodbc to update an existing MS Access database table with a very long multiline string. The string is actually a CSV that has been turned into a string.
The query I am trying to use to update the table is as follows:
query = """
UPDATE Stuff
SET Results = '{}'
WHERE AnalyteName =
'{}'
""".format(df, analytename)
The full printed statement looks as follows:
UPDATE Stuff
SET Results =
'col a,col b,col c,...,col z,
Row 1,a1,b1,c1,
...,...,...,...,
Row 3000,a3000,b3000,c3000'
WHERE AnalyteName = 'Serotonin'
However this does not seem to be working, and I keep getting the following error:
pyodbc.ProgrammingError: ('42000', '[42000] [Microsoft][ODBC Microsoft Access Driver] Syntax error in UPDATE statement. (-3503) (SQLExecDirectW)')
I assume this is due to the format of the CSV string I am trying to use to update the table.
I have tried using INSERT and inserting a new row with the CSV string and other relevant information, and that seems to work. However, I need to use UPDATE, as I will eventually be adding other CSV strings to these columns. This leads me to believe that either A) something is wrong with the syntax of my UPDATE query (I am new to SQL syntax), or B) I am missing something in the documentation regarding UPDATE queries.
Is executing an UPDATE query like this possible? If so, where am I going wrong?
This depends on the table's field type.
For large amounts of text you'd need a blob field in your database table.
A blob field stores binary data, so a blob will not 'see' illegal characters.
Answering my own question in case anyone else wants to use this.
It turns out what I was missing was brackets around the column names in my UPDATE statement. My final code looked something like this:
csv = df.to_csv(index=False)
name = 'some_name'
query = """
UPDATE Stuff
SET
[Results] = ?
WHERE
[AnalyteName] = ?
"""
self.cursor.execute(query, (csv, name))
I've seen several other posts here where brackets were not around the column names. However, since this is MS Access, I believe they were required for this query, or rather this specific query, since it included a very long string in the SET clause.
I welcome anyone else here to provide a more efficient method of performing this task or someone else who can provide more insight into why this is what worked for me.
I've now spent countless hours troubleshooting this error before making a post here, to no avail.
Here's what I'm trying to do:
Import data from a CSV file into an SQL database using Python and psycopg2. Ideally without making any changes to the CSV file.
Here's my issue:
TLDR: the DB is set up for varchar/string data, yet somehow I'm getting an "invalid input syntax for type integer" error.
Here's the code that successfully creates the "Customers_1" table:
cur.execute('DROP TABLE IF EXISTS customers_1;')
cur.execute('CREATE TABLE customers_1 (customer_id SERIAL NOT NULL PRIMARY KEY,'
'Kundenummer varchar(100),'
'Start_year integer,'
'Navn varchar(100),'
'Land varchar(100),'
'Segment varchar(20));'
)
Everything should be in order here, and when I run a basic INSERT INTO command I can enter the values just fine, as seen in this screenshot from pgAdmin:
[Screenshot: customers_1 table with 1 row successfully added]
I attempt to import the CSV data using the "copy_from" function as seen here:
f = open(r'Eksamen2\customer_segment.csv', 'r')
cur.copy_from(f, 'customers_1', sep=',')
f.close()
My CSV file includes a header, however, removing it did not fix the issue. Here's a look at the first two lines of the CSV file:
Kundenummer,Start_year,Navn,Land,Segment
K816058744,2019,GearBicycle ,Norge,Medium
The program returns this error message:
cur.copy_from(f, 'customers_1', sep=',')
psycopg2.errors.InvalidTextRepresentation: invalid input syntax for type integer: "Kundenummer"
CONTEXT: COPY customers_1, line 1, column customer_id: "Kundenummer"
I don't understand why it is expecting an integer here, as the column "Kundenummer" is a varchar.
I'm considering simply turning the SQL text into a query and creating a for loop that goes through the CSV file, however that seems more complicated than simply using the copy_from function.
Any help would be greatly appreciated!
Bonus question: I'd like to ditch the "customer_id" PK in favour of simply using the "Kundenummer" column as the PK. However, I'm unable to create a SERIAL for that PK that would automatically generate a customer number starting with K, followed by 8 or 9 digits, like the customer IDs in my CSV file.
Using python 3, I want to download API data, which is returned as JSON, and then I want to insert only specific (columns or fields or whatever?) into a sqlite database. So, here's what I've got and the issues I have:
Using python's request module:
##### import modules
import sqlite3
import requests
import json
headers = {
'Authorization' : 'ujbOsdlknfsodiflksdonosB4aA=',
'Accept' : 'application/json'
}
r = requests.get(
'https://api.lendingclub.com/api/investor/v1/accounts/94837758/detailednotes',
headers=headers
)
Okay, first issue is how I get the requested JSON data into something (a dictionary?) that python can use. Is that...
jason.loads(r.text)
Then I create the table into which I want to insert the specific data:
curs.execute('''CREATE TABLE data(
loanId INTEGER NOT NULL,
noteAmount REAL NOT NULL
)''')
No problem there...but now, even though the JSON data looks something like this (although there are hundreds of records)...
{
"myNotes": [
{
"loanId":11111,
"noteId":22222,
"orderId":33333,
"purpose":"Debt consolidation",
"canBeTraded":true,
"creditTrend":"DOWN",
"loanAmount":10800,
"noteAmount":25,
"paymentsReceived":5.88,
"accruedInterest":12.1,
"principalPending":20.94
},
{
"loanId":11111,
"noteId":22222,
"orderId":33333,
"purpose":"Credit card refinancing",
"canBeTraded":true,
"creditTrend":"UP",
"loanAmount":3000,
"noteAmount":25,
"paymentsReceived":7.65,
"accruedInterest":11.92,
"principalPending":19.76
}]
}
I only want to insert 2 data points into the sqlite database, the "loanId" and the "noteAmount". I believe inserting the data into the database will look something like this (but know this is incorrect):
curs.execute('INSERT INTO data (loanId, noteAmount) VALUES (?,?)', (loanID, noteAmount))
But I am now at a total loss as to how to do that. So I guess I have 2 main issues: getting the downloaded data into something Python can use, and then inserting the specific fields from that object into the database. I'm guessing looping is part of the answer... but looping over what? Thanks in advance!
As the documentation says:
The sqlite3 module supports two kinds of placeholders: question marks
(qmark style) and named placeholders (named style).
Note that you can even insert all rows at once using executemany.
So in your case:
curs.executemany('INSERT INTO data (loanId, noteAmount) '
'VALUES (:loanId,:noteAmount)', json.loads(...)['myNotes'])
First off, it's js = json.loads(r.text), so you're very close.
Next, if you want to insert just the loanID and noteAmount fields of each record, then you'll need to loop and do something like
for record in js['myNotes']:
    curs.execute('INSERT INTO data (loanId, noteAmount) VALUES (?,?)',
                 (record['loanId'], record['noteAmount']))
If you play with it a bit, you could coerce the JSON into one big INSERT call.
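Putting the two answers together, here is a minimal end-to-end sketch. Since the API endpoint requires credentials, it parses a trimmed version of the sample response from the question instead of a live r.text:

```python
import json
import sqlite3

# Trimmed sample of the API response; with a live call this would be r.text.
sample = '''
{"myNotes": [
    {"loanId": 11111, "noteAmount": 25, "purpose": "Debt consolidation"},
    {"loanId": 11111, "noteAmount": 25, "purpose": "Credit card refinancing"}
]}
'''

js = json.loads(sample)

con = sqlite3.connect(":memory:")
curs = con.cursor()
curs.execute("CREATE TABLE data (loanId INTEGER NOT NULL, noteAmount REAL NOT NULL)")

# executemany with named placeholders pulls just the two wanted keys
# out of each record dict; the extra keys (purpose, ...) are ignored.
curs.executemany(
    "INSERT INTO data (loanId, noteAmount) VALUES (:loanId, :noteAmount)",
    js["myNotes"],
)
con.commit()
print(curs.execute("SELECT loanId, noteAmount FROM data").fetchall())
```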
I got a table named test in MySQL database.
There are some fields in the test table, say, name.
However, there is at most one record (0 or 1) in the table.
When a new record comes in, say name = 'fox', I'd like to update the targeted field of the test table.
I use python to handle MySQL and my question is how to write the sql.
PS. I tried to avoid using a WHERE clause, but failed.
Suppose I've got the connection to the db, like the following:
conn = MySQLdb.connect(host=myhost, ...)
What you need here is a query that does a merge kind of operation on your data. Algorithmically:
When record exists
do Update
Else
do Insert
You can go through this article to get a fair idea on doing things in this situation:
http://www.xaprb.com/blog/2006/06/17/3-ways-to-write-upsert-and-merge-queries-in-mysql/
What I personally recommend is INSERT ... ON DUPLICATE KEY UPDATE.
In your scenario, something like
INSERT INTO test (name)
VALUES ('fox')
ON DUPLICATE KEY UPDATE
name = 'fox';
Using this kind of query you can handle the situation in one single shot.
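Run from Python, that upsert could look like the sketch below (the helper name is mine, and the connection details are placeholders; the same parameterized query works with MySQLdb, mysql.connector, or PyMySQL):

```python
# Parameterized form of the upsert; %s is the MySQL DB-API placeholder.
UPSERT_SQL = (
    "INSERT INTO test (name) VALUES (%s) "
    "ON DUPLICATE KEY UPDATE name = %s"
)

def upsert_name(conn, name):
    # One round trip: inserts when the table is empty, updates otherwise.
    # Note that ON DUPLICATE KEY UPDATE only fires when the insert would
    # violate a PRIMARY KEY or UNIQUE constraint, so the table needs one.
    cur = conn.cursor()
    cur.execute(UPSERT_SQL, (name, name))
    conn.commit()
```

Usage would be along the lines of `upsert_name(MySQLdb.connect(host=myhost, ...), 'fox')`.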
In sqlite3 in Python, I'm trying to make a program where the ID of the next row to be inserted into the table needs to be printed out before the row is written. But I just read in the documentation here that an INSERT must be used in an execute() statement. The problem is that my program asks the user for his/her information, and the primary key ID that will be assigned to the member must be displayed as his/her ID number. In other words, the execute("INSERT") statement must not run first, or the ID keys assigned to the members would be wrong.
I first thought that lastrowid could be read without using execute("INSERT"), but I noticed that it always gave me the value None. Then I read the sqlite3 documentation for Python and googled alternatives to solve this problem.
I've read somewhere on Google that SELECT last_insert_rowid() can be used, but would it be alright to ask what the syntax for it is in Python? I've tried coding it like this:
NextID = con.execute("select last_insert_rowid()")
But it just gave me a cursor object as output.
I've also been thinking of making another table that always holds just one value: the lastrowid of the main table. Whenever there is a new input of data in the main table, its lastrowid would be written into (and overwrite) that second table, so that whenever the next row ID is needed, it can simply be read from there.
Or is there an alternative and easier way of doing this?
Any help is very much appreciated. *bows deeply*
You could guess the next ID by querying your table before asking the user for his/her information with
SELECT MAX(ID) + 1 AS NewID FROM DesiredTable.
Before inserting the new data (including the new ID), start a transaction;
only roll back if the insert fails (because another process was faster with the same operation) and ask your user again. If everything is OK, just do a commit.
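A minimal sketch of that guess-then-insert approach (table and column names borrowed from the question, the rest assumed; COALESCE handles the empty-table case, where MAX returns NULL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Members (ID INTEGER PRIMARY KEY, Nick TEXT)")
con.execute("INSERT INTO Members (Nick) VALUES ('existing')")

# Guess the next ID before asking the user anything.
guess = con.execute(
    "SELECT COALESCE(MAX(ID), 0) + 1 FROM Members").fetchone()[0]
print("Your member ID will be:", guess)

# ... collect the user's details, then insert with the guessed ID;
# if another writer grabbed the ID first, the insert fails and we retry.
try:
    con.execute("INSERT INTO Members (ID, Nick) VALUES (?, ?)",
                (guess, "new_user"))
    con.commit()
except sqlite3.IntegrityError:
    con.rollback()  # ask the user again, as suggested above
```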
Thanks for the answers and suggestions posted everyone but I ended up doing something like this:
#only to get the value of NextID to display
TempNick = "ThisIsADummyNickToBeDeleted"
cur.execute("insert into Members (Nick) values (?)", (TempNick, ))
NextID = cur.lastrowid
cur.execute("delete from Members where ID = ?", (NextID, ))
So basically, in order to get the lastrowid, I ended up inserting a dummy row and then, after reading the value of lastrowid, deleting it again.
lastrowid
This read-only attribute provides the rowid of the last modified row. It is only set if you issued an INSERT statement using the execute() method. For operations other than INSERT or when executemany() is called, lastrowid is set to None.
from https://docs.python.org/2/library/sqlite3.html
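The documented behaviour can be demonstrated with an in-memory database: the ID only becomes available after an INSERT has run on the cursor, and the last_insert_rowid() result has to be fetched from the cursor it returns.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE Members (ID INTEGER PRIMARY KEY, Nick TEXT)")

print(cur.lastrowid)  # None: no INSERT has been issued on this cursor yet

cur.execute("INSERT INTO Members (Nick) VALUES (?)", ("fox",))
print(cur.lastrowid)  # set only after the INSERT

# last_insert_rowid() returns a result row that still has to be fetched,
# which is why a bare con.execute(...) printed a cursor object above.
print(con.execute("SELECT last_insert_rowid()").fetchone()[0])
```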