I would like to execute this query:
select datetime(date/1000,'unixepoch','localtime') as DATE, address as RECEIVED, body as BODY from sms;
And save its output to a .csv file in a specified directory. In the Ubuntu terminal it is usually much easier to give commands by hand to save the output of the above query to a file, but I am not familiar with Python's sqlite3 module. I would like to know how to execute this query and save its output as a .csv file in a custom directory. Please help me out!
Quick and dirty:
import sqlite3
db = sqlite3.connect('database_file')
cursor = db.cursor()
cursor.execute("SELECT ...")
rows = cursor.fetchall()
# Iterate rows and write your CSV
cursor.close()
db.close()
rows will be a list of all matching records, which you can then iterate over and write into your CSV file.
If you just want to make a CSV file, look at the csv module. The following page should get you going: https://docs.python.org/2/library/csv.html
You can also look at the pandas module to help create the file.
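Putting the pieces together, here is a minimal sketch using the csv module. The in-memory database and the sample row are stand-ins for your real database file, and the output path is just an example:

```python
import csv
import os
import sqlite3
import tempfile

# Stand-in for sqlite3.connect('database_file'): a throwaway DB with one row.
db = sqlite3.connect(':memory:')
db.execute("CREATE TABLE sms (date INTEGER, address TEXT, body TEXT)")
db.execute("INSERT INTO sms VALUES (1380461040000, '+100000000', 'hello')")

cursor = db.cursor()
cursor.execute(
    "SELECT datetime(date/1000,'unixepoch','localtime') AS DATE, "
    "address AS RECEIVED, body AS BODY FROM sms"
)

# Any directory you like; the temp dir is used here so the sketch runs anywhere.
out_path = os.path.join(tempfile.gettempdir(), 'sms_export.csv')
with open(out_path, 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cursor.description])  # header row
    writer.writerows(cursor.fetchall())

cursor.close()
db.close()
```

cursor.description carries the column aliases (DATE, RECEIVED, BODY), so the CSV gets a header row for free.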
Related
Let's assume I have a table in the DB created by reading a CSV file line by line. The CSV file is updated from time to time, and I download it from a remote resource.
Now code looks like this:
import asyncio
import aiohttp
import aiopg
import zipfile

async def update_db():
    zipfile.ZipFile(ZIP_FILENAME).extract(CSV_FILENAME)
    async with aiopg.create_pool(dsn) as pool:
        async with pool.acquire() as conn:
            async with conn.cursor() as cur:
                with open(CSV_FILENAME) as file:
                    headers = next(file)
                    for line in file:
                        region, city = [col for col in line.split(';', COLUMN_QUANTITY)]
                        await cur.execute("select id from region where title=%s;", (region,))
                        response_list = list(cur)
                        if len(response_list) == 0:
                            await cur.execute("insert into region (title) values (%s) returning id", (region,))
                            region_id = list(cur)[0][0]
                        else:
                            region_id = response_list[0][0]
                        ... ... ...
How can I detect new rows in the CSV and insert new rows into the table without making extra queries to the DB (without checking each line from the CSV against the table)?
I'm using Windows, Python 3 and PostgreSQL. When I download the updated CSV I have no access to the old file.
A simple way would be to save the last line read from the CSV in a file or in the database. That way you would know which lines of the CSV are new.
There are several ways, depending on your situation. For example, if changes to the file are made by one script, you can update that script to keep a copy of the CSV before applying updates and then diff the two versions; the Linux "diff" command, for instance, will tell you which lines were added.
However, it sounds like you might just need to know whether the CSV file was updated at all, so that you know to update the DB table. You can accomplish that (Linux again being the example) by running a cron job at regular intervals and checking whether the file has changed in the last x minutes, e.g. with the "ls -l" command.
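A sketch of the "remember where you stopped" idea: keep a small state file with the number of CSV lines already imported and skip that many on the next run. The filenames are hypothetical, and this assumes the remote CSV is append-only:

```python
import os

def new_lines(csv_path, state_path):
    """Return the lines added to csv_path since the last call, assuming the file only grows."""
    done = 0
    if os.path.exists(state_path):
        with open(state_path) as f:
            done = int(f.read().strip() or 0)
    with open(csv_path) as f:
        lines = f.readlines()
    with open(state_path, 'w') as f:
        f.write(str(len(lines)))  # remember how far we got this time
    return lines[done:]
```

Only the returned lines need INSERTs; everything before the saved offset was handled by a previous run.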
I'm attempting to import an .sql file that already has tables into Python. However, it doesn't seem to import what I had hoped. Everything I've seen so far shows how to create a new .sql file with a table, but I'm looking to import an already completed .sql file into Python. So far, I've written this.
# Python code to demonstrate SQL to fetch data.
# importing the module
import sqlite3
# connect with the myTable database
connection = sqlite3.connect("CEM3_Slice_20180622.sql")
# cursor object
crsr = connection.cursor()
# execute the command to fetch all the data from the table emp
crsr.execute("SELECT * FROM 'Trade Details'")
# store all the fetched data in the ans variable
ans = crsr.fetchall()
# loop to print all the data
for i in ans:
    print(i)
However, it keeps claiming that the Trade Details table, which is a table inside the file I've connected it to, does not exist. Nowhere I've looked shows me how to do this with an already created file and table, so please don't just redirect me to an answer about that
As suggested by Rakesh above, you create a connection to the DB, not to the .sql file. The .sql file contains SQL scripts to rebuild the DB from which it was generated.
After creating the connection, you can implement the following:
cursor = connection.cursor()  # cursor object
with open('CEM3_Slice_20180622.sql', 'r') as f:  # 'r' is the default mode, so it is optional here
    cursor.executescript(f.read())
Documentation on executescript found here
To read the file into pandas DataFrame:
import pandas as pd
df = pd.read_sql('SELECT * FROM table LIMIT 10', connection)
There are two possibilities:
Your file is not in the correct format and therefore cannot be opened.
The SQLite file can exist anywhere on disk, e.g. /Users/Username/Desktop/my_db.sqlite. This means you have to tell Python exactly where your file is; otherwise it will look inside the script's directory, see that there is no file with that name, and create a new file with the provided filename.
sqlite3.connect expects the full path to your database file, or ':memory:' to create a database that exists only in RAM. You don't pass it a SQL file. E.g.
connection = sqlite3.connect('example.db')
You can then read the contents of CEM3_Slice_20180622.sql as you would a normal file and execute the SQL commands against the database.
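For example, with the .sql contents inlined and an in-memory database standing in for a real file (the table and rows here are made up for illustration):

```python
import sqlite3

connection = sqlite3.connect(':memory:')  # or a real path such as 'example.db'

# Stand-in for open('CEM3_Slice_20180622.sql').read()
sql_script = """
CREATE TABLE 'Trade Details' (id INTEGER, qty INTEGER);
INSERT INTO 'Trade Details' VALUES (1, 100);
"""
connection.executescript(sql_script)

# Now the table actually exists in the database and can be queried.
rows = connection.execute("SELECT * FROM 'Trade Details'").fetchall()
```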
I have the following lines as part of Python code when working with .db SQLite file:
sql = "SELECT * FROM calculations"
cursor.execute(sql)
results = cursor.fetchall()
where "calculations" is a table I previously created during the execution of my code. When I do
print results
I see
[(1,3.56,7,0.3), (7,0.4,18,1.45), (11,23.18,2,4.44)]
what I need to do is save this output as another .db file named "output_YYYY_MM_DD_HH_MM_SS.db" using the module "datetime" so that when I connect to "output_YYYY_MM_DD_HH_MM_SS.db" and select all I would see an output exactly equal the list above.
Any ideas on how to do this?
Many thanks in advance.
If I remember correctly, sqlite3 creates a database with connect() if the database does not exist in the directory of the Python script:
"""
1. connect to the database assigning the name you want (use ``datetime`` time-to-string method)
2. execute multiple inserts on the new db to dump the list you have
3. close connection
"""
Feel free to ask if something is unclear.
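A sketch of those three steps, with the fetched results hard-coded to the list shown above and made-up column names (adjust the schema to match your "calculations" table):

```python
import sqlite3
from datetime import datetime

results = [(1, 3.56, 7, 0.3), (7, 0.4, 18, 1.45), (11, 23.18, 2, 4.44)]

# 1. connect: sqlite3 creates the file if it does not exist,
#    and datetime supplies the timestamped name
out_name = datetime.now().strftime('output_%Y_%m_%d_%H_%M_%S.db')
out_db = sqlite3.connect(out_name)

# 2. recreate the table and dump the list in one executemany call
out_db.execute("CREATE TABLE calculations (a INTEGER, b REAL, c INTEGER, d REAL)")
out_db.executemany("INSERT INTO calculations VALUES (?, ?, ?, ?)", results)
out_db.commit()

# 3. close the connection
out_db.close()
```

Connecting to the new file and selecting everything should then return exactly the list above.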
I am working on an assignment in which we were provided a bunch of CSV files to work on and extract information from. I have successfully completed that part. As a bonus question we have one SQLite file with a .db extension. I wanted to know if any module exists to convert such files to .csv, or to read them directly?
In case such a method doesn't exist, I'll probably insert the file into a database and use the Python sqlite3 module to extract the data I need.
You can use the sqlite3 command-line tool to dump table data to CSV.
To export an SQLite table (or part of a table) as CSV, simply set the "mode" to "csv" and then run a query to extract the desired rows of the table.
sqlite> .header on
sqlite> .mode csv
sqlite> .once c:/work/dataout.csv
sqlite> SELECT * FROM tab1;
In the example above, the ".header on" line causes column labels to be
printed as the first row of output. This means that the first row of
the resulting CSV file will contain column labels. If column labels
are not desired, set ".header off" instead. (The ".header off" setting
is the default and can be omitted if the headers have not been
previously turned on.)
The line ".once FILENAME" causes all query output to go into the named
file instead of being printed on the console. In the example above,
that line causes the CSV content to be written into a file named
"C:/work/dataout.csv".
http://www.sqlite.org/cli.html
I need some help with my weather station. I would like to save all results into a MySQL database, but at the moment I've got all results in txt files.
Can you help me write a script in Python to read from the txt file and save into MySQL?
My txt file (temperature.txt) contains a date and a temperature. It looks like:
2013-09-29 13:24 22.60
I'm using python script to get temperature and current time from big "result.txt" file:
#!/usr/bin/python
import time

buffer = bytes()
fh = open("/home/style/pomiar/result.txt")
for line in fh:
    pass
last = line
items = last.strip().split()
fh.close()
print time.strftime("%Y-%m-%d %H:%M"), items[1]
But I would like to "print" that into a MySQL table. I know how to connect, but I don't know how to save the data into the table.
I know I need to use:
#!/usr/bin/python
import MySQLdb
# Open database connection
db = MySQLdb.connect("localhost","user","password","weather" )
And I've got my database "weather" with table "temperature". I don't know if I designed the table well (first column DATETIME, second VARCHAR(5)). And now I need a Python script to read from this file and save into MySQL.
Thanks a lot for your support.
Next step is simple:
from contextlib import closing

with closing(db.cursor()) as cur:
    cur.execute("INSERT INTO temperature (`measured_at`, `temp`) VALUES (%s, %s)",
                (measured_at, temp))
    db.commit()
P.S. It looks like you're asking this because you haven't yet worked through any Python tutorial on using MySQL; any of them covers this pattern.
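For completeness, a sketch of the whole script, assuming lines like the temperature.txt sample above and a `temperature` table with `measured_at` and `temp` columns (the column names are assumptions; adjust them to your schema). MySQLdb is imported inside the function so the parsing part works even without MySQL installed:

```python
def parse_reading(line):
    """Split a line like '2013-09-29 13:24 22.60' into (measured_at, temp)."""
    date, clock, temp = line.strip().split()
    return "%s %s" % (date, clock), temp

def save_last_reading(path="/home/style/pomiar/temperature.txt"):
    """Read the most recent line of the results file and insert it into MySQL."""
    import MySQLdb  # imported lazily; connection parameters as in the question
    last = None
    with open(path) as fh:
        for line in fh:  # keep only the final line, as in your script
            last = line
    measured_at, temp = parse_reading(last)
    db = MySQLdb.connect("localhost", "user", "password", "weather")
    cur = db.cursor()
    cur.execute("INSERT INTO temperature (`measured_at`, `temp`) VALUES (%s, %s)",
                (measured_at, temp))
    db.commit()
    db.close()
```

With a DATETIME first column, MySQL will happily accept the "YYYY-MM-DD HH:MM" string produced here.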