SQLAlchemy - Update table using list of dictionaries - python

I have a table containing user data and I would like to update information for many of the users using a list of dictionaries. At the moment I am using a for loop to send an update statement one dictionary at a time, but it is slow and I am hoping that there is a bulk method to do this.
user_data = [{'user_id': '12345', 'user_name': 'John'}, {'user_id': '11223', 'user_name': 'Andy'}]
connection = engine.connect()
metadata = MetaData()
for row in user_data:
    stmt = update(users_table).where(users_table.columns.user_id == row['user_id'])
    results = connection.execute(stmt, row)
Thanks in advance!

SQLAlchemy supports executemany-style execution for this: build one UPDATE statement with bindparam() placeholders and pass the whole list of dictionaries to a single execute() call. The WHERE placeholder is deliberately named _id so it does not clash with the user_id bind name that values() generates:
from sqlalchemy.sql.expression import bindparam

connection = engine.connect()
stmt = users_table.update().\
    where(users_table.c.user_id == bindparam('_id')).\
    values({
        'user_id': bindparam('user_id'),
        'user_name': bindparam('user_name'),
    })
connection.execute(stmt, [
    {'user_id': '12345', 'user_name': 'John', '_id': '12345'},
    {'user_id': '11223', 'user_name': 'Andy', '_id': '11223'},
])
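A minimal variant that also wraps the batch in a transaction, so the whole update either commits or rolls back together (a sketch; engine.begin() commits automatically when the block exits):
# Sketch: the whole batch is sent in one executemany call and
# committed on successful exit from the block.
with engine.begin() as connection:
    connection.execute(stmt, [
        {'user_id': '12345', 'user_name': 'John', '_id': '12345'},
        {'user_id': '11223', 'user_name': 'Andy', '_id': '11223'},
    ])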

Related

sqlalchemy return constant dict in select

I am using SQLAlchemy with MySQL. I need to return a constant dict in a SELECT; adding that dict to the fetched data afterwards is not an option. Is there any way in SQLAlchemy or MySQL by which I can add JSON here?
case(
    [
        (
            and_(
                SomeTable.somefield.notin_(constants.SLUGS),
                OtherTable.otherfield == 1
            ),
            [{
                'label': 'Success',
                'value': 'True'
            }]
        )
    ],
    else_=False
).label('status')
Casting to JSON ought to work for MySQL (but not for MariaDB):
>>> import sqlalchemy as sa
>>>
>>> engine = sa.create_engine('mysql:///test', echo=True, future=True)
>>> with engine.connect() as conn:
...     res = conn.execute(
...         sa.select(
...             sa.case(
...                 [(True, sa.cast([{'label': 'success', 'value': True}], sa.JSON))],
...                 else_=False,
...             ).label('status')
...         )
...     )
...     print(res.scalar())
...
[{"label": "success", "value": true}]

Confused about python data types to insert into database

I am trying to insert these values into a SQL Server table, and I'm not sure whether the data is supposed to be a list or a dictionary.
For some context, I am pulling the data from a SharePoint list using SharePlum, with code like this:
import json
import pandas
import pyodbc
from shareplum import Site
from shareplum import Office365

authcookie = Office365('https://company.sharepoint.com', username='username', password='password').GetCookies()
site = Site('https://company.sharepoint.com/sites/sharepoint/', authcookie=authcookie)
sp_list = site.List('Test')
data = sp_list.GetListItems('All Items')

cnxn = pyodbc.connect("Driver={SQL Server Native Client 11.0};"
                      "Server=Server;"
                      "Database=db;"
                      "Trusted_Connection=yes;")
cursor = cnxn.cursor()
insert_query = "INSERT INTO SharepointTest(No,Name) VALUES (%(No)s,%(Name)s)"
cursor.executemany(insert_query, data)
cnxn.commit
Here's the result when I used print(data)
[
    {'No': '1', 'Name': 'Qwe'},
    {'No': '2', 'Name': 'Asd'},
    {'No': '3', 'Name': 'Zxc'},
    {'No': '10', 'Name': 'jkl'}
]
If I try to execute that code, it shows me this message:
TypeError: ('Params must be in a list, tuple, or Row', 'HY000')
What should I fix in the code?
Convert your list of dictionaries to a list of tuples of the dictionary values; pyodbc expects each parameter set to be a sequence, not a dict. Below, a list comprehension iterates through the list and the values() method extracts the values from each dictionary (this relies on the dict keys being in column order, which holds for dicts built the same way in Python 3.7+):
insert_query = "INSERT INTO SharepointTest(No,Name) VALUES (?, ?)"  # change your SQL statement to use ? parameter markers
cursor.executemany(insert_query, [tuple(d.values()) for d in data])
cnxn.commit()  # finally, commit your changes
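As a side note, for large batches pyodbc's fast_executemany flag (available in recent pyodbc releases and supported by the Microsoft SQL Server ODBC drivers) can speed the insert up considerably. A sketch:
# Sketch: enable bulk parameter binding before calling executemany.
cursor = cnxn.cursor()
cursor.fast_executemany = True
cursor.executemany(insert_query, [tuple(d.values()) for d in data])
cnxn.commit()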

append in array pymongo

I want to update the bookmarks array inside the element matched by _id_collection.
result = collections.find_one(
    {"_id": ObjectId(id)},
    {'array_of_collections': {'$elemMatch': {'_id_collection': ObjectId('5e9871582be940b6af4a9b41')}}})
print(result) # {'_id': ObjectId('5e986b4a07b94384ae8016b7'), 'array_of_collections': [{'_id_collection': ObjectId('5e9871582be940b6af4a9b41'), 'name_of_collection': 'test2', 'bookmarks': []}]}
This is the result of searching for that object. Now I want to append some values to the bookmarks array, but I don't know how to do it.
If you do the $elemMatch as part of the query and then use the positional operator $, you can successfully push values to your desired bookmarks array.
Try this:
import pprint
from bson import ObjectId
from pymongo import MongoClient

client = MongoClient()
db = client.tst1
coll = db.coll6

coll.update_one(
    {"_id": ObjectId("5e9898c69c26fe7ba93476f6"),
     'array_of_collections': {'$elemMatch': {'_id_collection': ObjectId("5e9898c69c26fe7ba93476f4")}}},
    {'$push': {'array_of_collections.$.bookmarks': 'Pushed Value 1'}})

mlist1 = list(coll.find())
for mdoc in mlist1:
    pprint.pprint(mdoc)
Result Document:
{'_id': ObjectId('5e9898c69c26fe7ba93476f6'),
 'array_of_collections': [{'_id_collection': ObjectId('5e9898c69c26fe7ba93476f2'),
                           'bookmarks': [],
                           'name_of_collection': 'test'},
                          {'_id_collection': ObjectId('5e9898c69c26fe7ba93476f3'),
                           'bookmarks': [],
                           'name_of_collection': 'test2'},
                          {'_id_collection': ObjectId('5e9898c69c26fe7ba93476f4'),
                           'bookmarks': ['Pushed Value 1'],
                           'name_of_collection': 'test3'},
                          {'_id_collection': ObjectId('5e9898c69c26fe7ba93476f5'),
                           'bookmarks': [],
                           'name_of_collection': 'test4'}]}
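On MongoDB 3.6+ the same update can also be written with the filtered positional operator and array_filters, which avoids repeating the element match in the query document (a sketch of that variant):
# Sketch (MongoDB 3.6+): $[elem] targets the array element matched by
# the corresponding array_filters entry.
coll.update_one(
    {"_id": ObjectId("5e9898c69c26fe7ba93476f6")},
    {'$push': {'array_of_collections.$[elem].bookmarks': 'Pushed Value 2'}},
    array_filters=[{'elem._id_collection': ObjectId("5e9898c69c26fe7ba93476f4")}])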

How to protect against SQL Injection with pandas read_gbq

How do I use pandas_gbq.read_gbq safely to protect against SQL injection? I cannot find a way to parametrize it in the docs.
I've looked for a way to parametrize in the docs, as well as on Google's website and other sources.
df_valid = read_gbq(QUERY_INFO.format(variable), project_id='project-1622', location='EU')
where the query looks like
SELECT name, date FROM table WHERE id = '{0}'
I can input p' or '1'='1 and it works.
Per the Google BigQuery docs, you have to pass a query configuration with a parameterized SQL statement (named parameters use the @name syntax):
import pandas as pd

sql = "SELECT name, date FROM table WHERE id = @id"
query_config = {
    'query': {
        'parameterMode': 'NAMED',
        'queryParameters': [
            {
                'name': 'id',
                'parameterType': {'type': 'STRING'},
                'parameterValue': {'value': '1'}
            }
        ]
    }
}
df = pd.read_gbq(sql, project_id='project-1622', location='EU', configuration=query_config)
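Equivalently, the google-cloud-bigquery client library offers typed query parameters (a sketch, assuming that package is installed; to_dataframe() additionally needs pandas and pyarrow):
from google.cloud import bigquery

# Typed query parameters replace the raw configuration dict above.
client = bigquery.Client(project='project-1622')
job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter('id', 'STRING', '1')]
)
df = client.query(sql, job_config=job_config, location='EU').to_dataframe()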

How to insert document with collection.update_many() into Collection (MongoDB) using Pymongo (No Duplicated)

I insert documents into a collection with collection.update() because each item has its own postID. What I want: when I run the script again, any post that already exists in MongoDB should be updated instead of inserted again (no new post whose postID duplicates an existing one). This is the structure of my data:
comment1 = [
    {
        'commentParentId': parent_content.text,
        'parentId': parent_ID,
        'posted': child_time.text,
        'postID': child_ID,
        'author': {
            'name': child_name.text
        },
        'content': child_content.text
    },
    ...............
]
This is the code I used to insert the data:
client = MongoClient()
db = client['comment_data2']
db.collection_data = db['comments']

for i in data_comment:
    db.collection_data.update_many(
        {db.collection_data.find({"postID": {"$in": i["postID"]}})},
        {"$set": i},
        {'upsert': True}
    )
But I get an error, TypeError: filter must be an instance of dict, bson.son.SON, or other type that inherits from collections.Mapping, on the line {'upsert': True}. And is {db.collection_data.find({"postID": {"$in": i["postID"]}})} right?
You can use this code inside your loop. The filter must be a plain dict (not the result of a find() call), the key should be postID to match your data, and upsert is passed as a keyword argument:
db.collection_data.update_many(
    {"postID": i["postID"]},
    {"$set": i},
    upsert=True
)
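When there are many comments, one round trip per document is slow; pymongo's bulk_write can batch the upserts into a single call (a sketch of that approach):
from pymongo import UpdateOne

# Build one UpdateOne upsert per comment and send them as one batch.
ops = [
    UpdateOne({"postID": i["postID"]}, {"$set": i}, upsert=True)
    for i in data_comment
]
db.collection_data.bulk_write(ops)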
