Dynamically creating table from csv file using psycopg2 - python

I would like to get some clarity on a question I was pretty sure I already understood. Is there any way, using psycopg2 or any other Python Postgres database adapter, to create a table whose name corresponds to a .csv file and, probably most importantly, whose columns are the ones specified in that .csv file?

I'll leave you to look at the psycopg2 library properly - this is off the top of my head (I haven't had to use it for a while, but IIRC the documentation is ample).
The steps are:
Read column names from CSV file
Create "CREATE TABLE whatever" ( ... )
Maybe INSERT data
import csv
import os.path

my_csv_file = '/home/somewhere/file.csv'
table_name = os.path.splitext(os.path.split(my_csv_file)[1])[0]
cols = next(csv.reader(open(my_csv_file)))
You can go from there...
Create a SQL query (possibly using a templating engine for the fields) and then issue the INSERT if need be.
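Putting those pieces together, here is a minimal, untested sketch using psycopg2's sql module for safe identifier quoting. The file path and connection parameters are placeholders, and every column is created as TEXT because a CSV header carries no type information:

import csv
import os.path

import psycopg2
from psycopg2 import sql

my_csv_file = '/home/somewhere/file.csv'  # placeholder path
table_name = os.path.splitext(os.path.split(my_csv_file)[1])[0]

with open(my_csv_file, newline='') as f:
    reader = csv.reader(f)
    cols = next(reader)   # header row becomes the column names
    rows = list(reader)   # remaining rows become the data

conn = psycopg2.connect("dbname=test user=postgres")  # adjust connection parameters
with conn, conn.cursor() as cur:
    # Build CREATE TABLE with safely quoted identifiers; all columns are TEXT.
    create_stmt = sql.SQL("CREATE TABLE {} ({})").format(
        sql.Identifier(table_name),
        sql.SQL(", ").join(
            [sql.SQL("{} TEXT").format(sql.Identifier(c)) for c in cols]
        ),
    )
    cur.execute(create_stmt)

    # Insert the data with an ordinary parameterised INSERT.
    insert_stmt = sql.SQL("INSERT INTO {} VALUES ({})").format(
        sql.Identifier(table_name),
        sql.SQL(", ").join(sql.Placeholder() * len(cols)),
    )
    cur.executemany(insert_stmt, rows)
conn.close()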

Related

Snowpark Snowflake Python to run a sql statement and export to Excel

I'm creating a Snowflake procedure using the Snowpark (Python) package that executes a query into a Snowflake dataframe, and I would like to export that into Excel. How can I accomplish that? Is there a better approach? The end goal is to export the query results into Excel. It needs to be in a Snowflake procedure since we already have other "parent" procedures. Thanks!
CREATE OR REPLACE PROCEDURE EXPORT_SP()
RETURNS string not null
LANGUAGE PYTHON
RUNTIME_VERSION = '3.8'
PACKAGES = ('snowflake-snowpark-python', 'pandas')
HANDLER = 'run'
AS
$$
import pandas
def run(snowpark_session):
    ## Execute the query into a Snowflake dataframe
    results_df = snowpark_session.sql('''
        SELECT * FROM
        MY TABLES
        ;
    ''').collect()
    return results_df
$$
;
In general, you can do this by:
"Unloading" the data from the table using the COPY INTO <location> command.
Using the GET command to copy the data to your local filesystem.
Opening the file with Excel. If you used the CSV format and the appropriate format options in step 1, you should be able to open the resulting data easily.
Snowpark directly supports step 1 in the DataFrameWriter.copy_into_location method. An instance of DataFrameWriter is available through the DataFrame.write attribute.
Snowpark also directly supports step 2 in the FileOperation.get method. As per the example in that documentation page, you can access this method using the .file attribute of your Snowpark session object.
Putting this all together, you should be able to do something like this in Snowpark to save a single exported file into the current working directory:
source_table = "my_table"
unload_location = "@my_stage/export.csv"

def run(session):
    df = session.table(source_table)
    df.write.copy_into_location(
        unload_location,
        file_format_type="csv",
        format_type_options=dict(
            compression="none",
            field_delimiter="\t",
        ),
        single=True,
        header=True,
    )
    session.file.get(unload_location, ".")
You can of course use session.sql() instead of session.table() as needed. You might also want to consider unloading the data to the stage associated with the source table instead of creating a separate stage, i.e. if the data comes from the table my_table then you would unload to the stage @%my_table.
For more details, refer to the documentation pages I linked, which contain important reference information as well as several examples.
Note that I am not sure if session.file is accessible from inside a stored procedure; you will have to experiment to see what works in your specific situation.
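If the goal is specifically an .xlsx workbook rather than a CSV, an alternative, untested sketch (my own addition, not part of the steps above) is to collect the results into pandas inside the handler, write the workbook to /tmp, and PUT it to a stage. This assumes openpyxl can be added to the procedure's PACKAGES list and that local writes to /tmp are permitted in your environment; the stage name and table name are placeholders:

import pandas as pd

def run(session):
    # Collect the query results into a pandas DataFrame.
    pdf = session.sql("SELECT * FROM my_table").to_pandas()
    # Write the workbook locally (requires openpyxl), then upload it to a stage
    # so it can be downloaded later with GET.
    local_path = "/tmp/export.xlsx"
    pdf.to_excel(local_path, index=False)
    session.file.put(local_path, "@my_stage", auto_compress=False, overwrite=True)
    return "exported to @my_stage/export.xlsx"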
As always, remember that this is untested code written by an unpaid volunteer. Always triple-check and test any code that is provided here. Please do ask questions in the comments if anything is still unclear.

Inserting specific columns of csv file into mongodb collection using python script

I have a Python script to insert a CSV file into a MongoDB collection:
import pymongo
import pandas as pd
import json
client = pymongo.MongoClient("mongodb://localhost:27017")
df = pd.read_csv("iris.csv")
data = df.to_dict(orient="records")
db = client["Database name"]
db.CollectionName.insert_many(data)
Here all the columns of the CSV file get inserted into the Mongo collection. How can I insert only specific columns of the CSV file into the collection?
What changes can I make to the existing code?
Let's say I also have the database already created in Mongo. Will this command work even if the database is already present (db = client["Database name"])?
Have you checked out pymongoarrow? The latest release has write support, so you can import a CSV file into MongoDB. Here are the release notes and documentation. You can also use mongoimport to import a CSV file (documentation here), but I can't see any way to exclude fields with it the way you can with pymongoarrow.
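If you would rather stay close to the pandas code in the question, read_csv also accepts a usecols argument, so only the listed columns end up in the documents you insert. This is an untested sketch; the column names are guesses based on a typical iris.csv:

import pandas as pd
import pymongo

client = pymongo.MongoClient("mongodb://localhost:27017")
db = client["Database name"]   # an existing database is simply reused

# Read only the columns you want to store; the names here are illustrative.
df = pd.read_csv("iris.csv", usecols=["sepal_length", "sepal_width", "species"])
data = df.to_dict(orient="records")

db.CollectionName.insert_many(data)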

Update a SQLite3 database using CSVs and script automation

I have a sqlite database that is populated with values from csv files. I would like to create a script that when run:
deletes the old tables
creates new tables with the same schema (with newly updated values)
I noticed that sqlite script files don't accept ".mode csv" or ".import". Is there a way to automate this with a script of some sort?
If you want a Python approach, you can use the to_sql method from the pandas package to write to SQLite. pandas can replace existing tables and automatically generate the schema from the CSV file it reads.
import sqlite3
import pandas as pd
conn = sqlite3.connect('my.db')
# read the csv file
df = pd.read_csv("my.csv")
# write to SQLite
df.to_sql("my_tbl", conn, if_exists="replace")
conn.close()
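To cover the "delete the old tables and rebuild them from the latest CSVs" part of the question, the same idea can be looped over a folder of CSV files, deriving one table name per file. This is an untested sketch; the folder and database paths are placeholders:

import glob
import os.path
import sqlite3

import pandas as pd

conn = sqlite3.connect("my.db")

# One table per CSV file; the table name is the file name without its extension.
for csv_path in glob.glob("/path/to/csv_folder/*.csv"):
    table_name = os.path.splitext(os.path.basename(csv_path))[0]
    df = pd.read_csv(csv_path)
    # if_exists="replace" drops the old table and recreates it with the new data.
    df.to_sql(table_name, conn, if_exists="replace", index=False)

conn.close()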

python + sqlite3 - Can't get blob data to work

I know this has been touched on several times, but I cannot seem to get this working. I am writing a python program that will take in an sqlite3 database dump file, analyse it and recreate it using a database migration tool (called yoyo-migrations)
I am running into an issue with blob data in sqlite3 and how to correctly format it.
Here is a basic outline of what my program does:
- read in dump file, separate into CREATE statements, INSERT statements and other
- generate migration files for CREATEs
- generate a migration file for each tables inserts
- run the migration to rebuild the database ( except now it is built off of migrations)
Basically I was given a database, and need to get it under control using migrations. This is just the first step (getting the thing rebuilt using the migration tool)
Here is the table creation of the blob table:
CREATE TABLE blob_table(
    blockid INTEGER PRIMARY KEY,
    block blob
)
I then create the migration file:
#
# file: migrations/0001.create_table.py
# Migration to build tables (autogenerated by parse_dump.py)
#
from yoyo import step
step('CREATE TABLE blob_table( blockid INTEGER PRIMARY KEY, block blob);')
Note that I just write that to a file, and then at the end run the migrations. Next I need to write a "seed" migration that inserts the data. This is where I run into trouble!
# here is an example insert line from the dump
INSERT INTO blob_table VALUES(765,X'00063030F180800FE1C');
So the X'' stuff is the blob data, and I need to write a Python file which INSERTs this data back into the table. I have a large amount of data so I am using the executemany syntax. Here is what the seed migration file looks like (an example):
#
# file: migrations/0011.seed_blob_table.py
# Insert seed data for blob table
#
from yoyo import step
import sqlite3
def do_step(conn):
    rows = [
        (765, sqlite3.Binary('00063030303031340494100')),
        (766, sqlite3.Binary('00063030303331341FC5150')),
        (767, sqlite3.Binary('00063030303838381FC0210'))
    ]
    cursor = conn.cursor()
    cursor.executemany('INSERT INTO blob_table VALUES (?,?)', rows)

# run the insert
step(do_step)
I have tried using sqlite3.Binary(), the Python built-in buffer(), combinations of the two, as well as int('string', base=16), hex() and many others. No matter what I do it will not match up with the database from the dump. What I mean is:
If I open up the new and old databases side by side and execute this query:
# in the new database, it comes out as a string
SELECT * FROM blob_table WHERE blockid=765;
> 765|00063030303031340494100
# in the old database, it displays nothing
SELECT * FROM blob_table WHERE blockid=765;
> 765|
# if I do this in the old one, I get the x'' from the dump
SELECT blockid, quote(block) FROM blob_table WHERE blockid=765;
765|X'00063030303031340494100'
# if I use quote() in the new database I get something different
SELECT blockid, quote(block) FROM blob_table WHERE blockid=765;
765|X'303030363330333033303330... (truncated; this is longer than the original and contains only the digits 0-9)
My end goal is to rebuild the database and have it be identical to the starting one (from which the dump was made). Any tips on getting the blob stuff to work are much appreciated!
The buffer class is capable of handling binary data. However, it takes care to preserve the data you give to it, and '00063030303031340494100' is not binary data; it is a string that contains the digits zero, zero, zero, six, etc.
To construct a string containing binary data, use decode:
import codecs
blob = buffer(codecs.decode(b'00063030303031340494100', 'hex_codec'))
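If you end up running this under Python 3, where buffer no longer exists, the same idea can be expressed with bytes.fromhex, and sqlite3 will store the bytes object as a BLOB. This is an untested sketch; the hex strings are shortened, even-length placeholders rather than your actual dump values (bytes.fromhex requires an even number of hex digits):

import sqlite3

# Shortened, even-length placeholder hex strings; in practice use the contents
# of the X'...' literals from the dump (without the X and the quotes).
rows = [
    (765, bytes.fromhex("00063030303031340494")),
    (766, bytes.fromhex("00063030303331341fc5")),
    (767, bytes.fromhex("00063030303838381fc0")),
]

conn = sqlite3.connect("rebuilt.db")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS blob_table (blockid INTEGER PRIMARY KEY, block BLOB)")
# bytes objects are stored as BLOBs, so quote(block) reproduces an X'...' literal.
cur.executemany("INSERT INTO blob_table VALUES (?, ?)", rows)
conn.commit()

print(cur.execute("SELECT blockid, quote(block) FROM blob_table").fetchall())
conn.close()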

Python with Mysql - pdf file insertion during runtime

I have a script that stores results as PDF files in a particular folder. I want to create a MySQL database (which works with the code below) and populate it with the PDF results. What would be the best way: storing the file itself, or a reference to its location? The file size would be around 2 MB. Could someone explain this with some working examples? I am new to both Python and MySQL. Thanks in advance.
To clarify further: I tried using LOAD DATA INFILE and the BLOB type for the result-file column, but it doesn't seem to work. I am using the pymysql module to connect to the database. The code below connects to the database successfully.
import pymysql
conn = pymysql.connect(host='hostname', port=3306, user='root', passwd='abcdef', db='mydb')
cur = conn.cursor()
cur.execute("SELECT * FROM userlogin")
for r in cur.fetchall():
print(r)
cur.close()
conn.close()
Since you seem to be close to getting MySQL to store strings for you (user names), your best bet is to stick with that approach and store the file path, just as you stored the strings in your userlogin table (but in a different table with a foreign key to userlogin). It will probably be the most efficient approach in the long run anyway, especially if you store useful metadata along with the file path (like keywords or even complete n-gram sets)... though at that point you are talking about a file indexing system like Google Desktop or Xapian, so be aware of what you are up against if you want to do this the "best" way.
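Following that advice, here is an untested sketch of a table that stores only the path to each PDF, linked back to userlogin. The table and column names are invented for illustration, and it assumes userlogin has an integer id primary key:

import pymysql

conn = pymysql.connect(host='hostname', port=3306, user='root',
                       passwd='abcdef', db='mydb')
cur = conn.cursor()

# A table that references userlogin and stores only the path to the PDF on disk.
# The FOREIGN KEY assumes userlogin(id) exists; adjust to your actual schema.
cur.execute("""
    CREATE TABLE IF NOT EXISTS result_files (
        id INT AUTO_INCREMENT PRIMARY KEY,
        user_id INT NOT NULL,
        pdf_path VARCHAR(512) NOT NULL,
        FOREIGN KEY (user_id) REFERENCES userlogin(id)
    )
""")

# Store a reference to the file rather than the ~2 MB blob itself.
cur.execute("INSERT INTO result_files (user_id, pdf_path) VALUES (%s, %s)",
            (1, '/results/report.pdf'))
conn.commit()
cur.close()
conn.close()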
