Python: read from txt and save into MySQL

I need some help with my weather station. I would like to save all results into a MySQL database, but at the moment I've got all the results in txt files.
Can you help me write a script in Python to read from a txt file and save into MySQL?
My txt file (temperature.txt) contains the date and temperature. It looks like:
2013-09-29 13:24 22.60
I'm using a Python script to get the temperature and current time from the big "result.txt" file:
#!/usr/bin/python
import time
buffer = bytes()
fh = open("/home/style/pomiar/result.txt")
for line in fh:
    pass
last = line
items = last.strip().split()
fh.close()
print time.strftime("%Y-%m-%d %H:%M"), items[1]
But I would like to "print" that into a MySQL table instead. I know how to connect, but I don't know how to save the data into the table.
I know I need to use:
#!/usr/bin/python
import MySQLdb
# Open database connection
db = MySQLdb.connect("localhost","user","password","weather" )
And I've got my database "weather" with a table "temperature". I'm not sure I defined the table correctly (first column DATETIME, second VARCHAR(5)). Now I need a Python script to read from this file and save into MySQL.
Thanks a lot for your support.

The next step is simple:
from contextlib import closing
with closing(db.cursor()) as cur:
    # substitute your own table and column names
    cur.execute("INSERT INTO table1(`measured_at`,`temp`) VALUES(%s, %s)", (measured_at, temp))
    db.commit()
P.S. It looks like you asked this question because you didn't do your homework and didn't read any Python tutorial on how to work with MySQL.
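Putting the question's file-reading code and the answer's INSERT together, a minimal end-to-end sketch could look like the following. It assumes the line format from temperature.txt shown above, and the column names measured_at and temp are made up; use whatever your table actually defines.
#!/usr/bin/python
import MySQLdb

# Grab the last line of the temperature file (same trick as above).
fh = open("temperature.txt")              # adjust the path to your file
for line in fh:
    pass
fh.close()
items = line.strip().split()              # ['2013-09-29', '13:24', '22.60']
measured_at = items[0] + " " + items[1]   # '2013-09-29 13:24'
temp = items[2]                           # '22.60'

# Insert the reading into the weather.temperature table.
db = MySQLdb.connect("localhost", "user", "password", "weather")
cur = db.cursor()
cur.execute("INSERT INTO temperature (measured_at, temp) VALUES (%s, %s)",
            (measured_at, temp))
db.commit()
db.close()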

Related

How to update db by reading constantly updated csv without multiple selects in python?

Let's assume I have a table in the database created by reading a CSV file line by line. The CSV file is updated from time to time, and I download it from a remote resource.
The code currently looks like this:
import asyncio
import aiohttp
import aiopg
import zipfile

async def update_db():
    zipfile.ZipFile(ZIP_FILENAME).extract(CSV_FILENAME)
    async with aiopg.create_pool(dsn) as pool:
        async with pool.acquire() as conn:
            async with conn.cursor() as cur:
                with open(CSV_FILENAME) as file:
                    headers = next(file)
                    for line in file:
                        region, city = [col for col in line.split(';', COLUMN_QUANTITY)]
                        await cur.execute(f"select id from region where title=%s;", (region,))
                        response_list = list(cur)
                        if len(response_list) == 0:
                            await cur.execute(f"insert into region (title) values (%s) returning id", (region,))
                            region_id = list(cur)[0][0]
                        else:
                            region_id = response_list[0][0]
                        ...
How can I detect new rows in the CSV and create the corresponding rows in the table without making extra queries to the database (i.e. without checking, for each line of the CSV, whether it already exists in the table)?
I'm using Windows, Python 3 and PostgreSQL. When I download the updated CSV I no longer have access to the old file.
A simple way would be saving the position of the last line read from the CSV to a file or to the database. This way you would know which lines of the CSV are new.
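For example, a rough sketch of that idea, keeping the number of already-processed lines in a small bookmark file (last_line.txt is an assumed name), under the assumption that the remote CSV is append-only:
import os

STATE_FILE = "last_line.txt"   # assumed bookmark file holding a line count

def read_last_processed():
    # How many data lines were handled on the previous run (0 if first run).
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return int(f.read().strip() or 0)
    return 0

def new_lines(csv_filename):
    # Return only the data lines added since the last run.
    last = read_last_processed()
    with open(csv_filename) as f:
        next(f, None)          # skip the header row
        lines = f.readlines()
    with open(STATE_FILE, "w") as state:
        state.write(str(len(lines)))
    return lines[last:]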
There are several ways, depending on your situation. For example, if changes to the file are made by one script, you can update that script to make a copy of the CSV file before applying updates and then get the differences; the Linux "diff" command, for example, will tell you which lines have been added.
However, it sounds like you might just need to know whether the CSV file was updated, so that you know to update the DB table. You can accomplish that (Linux again being the example) by running a cron job at certain intervals and determining whether the file has changed in the last x minutes with the "ls -l" command.
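On Windows (which the question mentions) the same check can be scheduled with Task Scheduler and done in Python itself; a rough sketch, assuming a small bookmark file named .last_mtime next to the script:
import os

CSV_FILENAME = "data.csv"        # assumed name of the downloaded file
MTIME_FILE = ".last_mtime"       # assumed bookmark file

def csv_has_changed():
    # Compare the file's modification time with the one recorded last run.
    current = os.path.getmtime(CSV_FILENAME)
    previous = None
    if os.path.exists(MTIME_FILE):
        with open(MTIME_FILE) as f:
            previous = float(f.read())
    with open(MTIME_FILE, "w") as f:
        f.write(str(current))
    return previous is None or current > previous

if csv_has_changed():
    print("CSV was updated - refresh the table")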

How to Import a SQL file to Python

I'm attempting to import an SQL file that already has tables into Python. However, it doesn't seem to import what I had hoped. The only things I've seen so far are how to create a new SQL file with a table, but I'm looking to have an already completed SQL file imported into Python. So far, I've written this.
# Python code to demonstrate SQL to fetch data.
# importing the module
import sqlite3
# connect with the myTable database
connection = sqlite3.connect("CEM3_Slice_20180622.sql")
# cursor object
crsr = connection.cursor()
# execute the command to fetch all the data from the table emp
crsr.execute("SELECT * FROM 'Trade Details'")
# store all the fetched data in the ans variable
ans = crsr.fetchall()
# loop to print all the data
for i in ans:
    print(i)
However, it keeps claiming that the Trade Details table, which is a table inside the file I've connected to, does not exist. Nowhere I've looked shows me how to do this with an already created file and table, so please don't just redirect me to an answer about that.
As suggested by Rakesh above, you create a connection to the DB, not to the .sql file. The .sql file contains SQL scripts to rebuild the DB from which it was generated.
After creating the connection, you can implement the following:
cursor = connection.cursor()  # cursor object
with open('CEM3_Slice_20180622.sql', 'r') as f:  # Not sure if the 'r' is necessary, but recommended.
    cursor.executescript(f.read())
Documentation on executescript can be found in the sqlite3 module docs.
To read a table into a pandas DataFrame:
import pandas as pd
df = pd.read_sql('SELECT * FROM table LIMIT 10', connection)
There are two possibilities:
1. Your file is not in the correct format and therefore cannot be opened.
2. The SQLite file can exist anywhere on the disk, e.g. /Users/Username/Desktop/my_db.sqlite. This means that you have to tell Python exactly where your file is; otherwise it will look inside the script's directory, see that there is no file with the same name, and therefore create a new file with the provided filename.
sqlite3.connect expects the full path to your database file, or ':memory:' to create a database that exists in RAM. You don't pass it a SQL file. E.g.
connection = sqlite3.connect('example.db')
You can then read the contents of CEM3_Slice_20180622.sql as you would a normal file and execute the SQL commands against the database.
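Combining both answers, a minimal sketch might look like this; it assumes the dump in CEM3_Slice_20180622.sql contains SQLite-compatible statements that create and populate the 'Trade Details' table:
import sqlite3

# Connect to a real database file (created if it does not exist yet).
connection = sqlite3.connect("example.db")

# Rebuild the database from the SQL dump.
with open("CEM3_Slice_20180622.sql", "r") as f:
    connection.executescript(f.read())

# The tables defined in the dump now exist and can be queried.
crsr = connection.cursor()
crsr.execute("SELECT * FROM 'Trade Details'")
for row in crsr.fetchall():
    print(row)
connection.close()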

Saving a Python-produced list as .db file

I have the following lines as part of Python code when working with .db SQLite file:
sql = "SELECT * FROM calculations"
cursor.execute(sql)
results = cursor.fetchall()
where "calculations" is a table I previously created during the execution of my code. When I do
print results
I see
[(1,3.56,7,0.3), (7,0.4,18,1.45), (11,23.18,2,4.44)]
What I need to do is save this output as another .db file named "output_YYYY_MM_DD_HH_MM_SS.db" using the "datetime" module, so that when I connect to "output_YYYY_MM_DD_HH_MM_SS.db" and select all, I see output exactly equal to the list above.
Any ideas on how to do this?
Many thanks in advance.
If I remember correctly, sqlite3 creates the database with connect() if it does not already exist at the given path:
1. Connect to the database, assigning the name you want (use the datetime time-to-string method).
2. Execute multiple inserts on the new DB to dump the list you have.
3. Close the connection.
Feel free to ask if something is unclear.
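A rough sketch of those three steps, assuming the rows are already in a list called results and inventing column names a, b, c, d for the new table:
import sqlite3
from datetime import datetime

results = [(1, 3.56, 7, 0.3), (7, 0.4, 18, 1.45), (11, 23.18, 2, 4.44)]

# 1. Connect using a timestamped filename; sqlite3 creates the file if needed.
db_name = datetime.now().strftime("output_%Y_%m_%d_%H_%M_%S.db")
out = sqlite3.connect(db_name)

# 2. Recreate the table and dump the list into it.
out.execute("CREATE TABLE calculations (a INTEGER, b REAL, c INTEGER, d REAL)")
out.executemany("INSERT INTO calculations VALUES (?, ?, ?, ?)", results)
out.commit()

# 3. Close the connection.
out.close()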

Python: Sqlite3 query output to .csv file

I would like to execute this query:
select datetime(date/1000,'unixepoch','localtime') as DATE, address as RECEIVED, body as BODY from sms;
And save its output to a .csv file in a specified directory. Usually in the Ubuntu terminal it is far easier to manually give commands to save the output of the above query to a file, but I am not familiar with Python's sqlite3. I would like to know how to execute this query and save its output to a custom directory in a .csv file. Please help me out!
Quick and dirty:
import sqlite3
db = sqlite3.connect('database_file')
cursor = db.cursor()
cursor.execute("SELECT ...")
rows = cursor.fetchall()
# Iterate over rows and write your CSV
cursor.close()
db.close()
rows will be a list with all matching records, which you can then iterate over and write into your CSV file.
If you just want to make a CSV file, look at the csv module. The following page should get you going: https://docs.python.org/2/library/csv.html
You can also look at the pandas module to help create the file.
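For example, a short Python 3 sketch that runs the query from the question and writes the rows to a CSV file; the database filename and the output path are assumptions:
import csv
import sqlite3

connection = sqlite3.connect("mmssms.db")        # assumed database file
cursor = connection.cursor()
cursor.execute("select datetime(date/1000,'unixepoch','localtime') as DATE, "
               "address as RECEIVED, body as BODY from sms")

with open("/home/user/exports/sms.csv", "w", newline="") as f:   # assumed output directory
    writer = csv.writer(f)
    writer.writerow(["DATE", "RECEIVED", "BODY"])                # header row
    writer.writerows(cursor.fetchall())

cursor.close()
connection.close()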

Python DBF module is adding extra rows to a table export

I am doing a bulk import of dbf files to SQLite. I wrote a simple script in Python using the dbf module at http://dbfpy.sourceforge.net/. It works fine and as expected except in a small number of cases, where the module seems to have added a few erroneous records to the table it was reading.
I know this sounds crazy, but it really seems to be the case. I have exported the dBase file in question to CSV using OpenOffice and imported it directly into SQLite using .import, and the 3 extra records are not there.
But if I iterate through the file using Python and the dbfpy module, the 3 extra records are added.
I am wondering: is it possible that these three records were flagged as deleted in the dbf file and, while invisible to OpenOffice, are being picked up by the dbf module? I could be way off with this possibility, but I am really scratching my head on this one.
Any help is appreciated.
What follows is a sample of my method for reading the dbf file. I have removed the loop and used one single case instead.
conn = lite.connect('../data/my_dbf.db3')
# used to get rid of the 8 byte string error from sqlite3
conn.text_factory = str
cur = conn.cursor()
rows_list = []
db = dbf.Dbf("../data/test.dbf")
for rec in db:
    if not rec.deleted:   # line added per the accepted solution below
        row_tuple = (rec["name"], rec["address"], rec["age"])
        rows_list.append(row_tuple)
print file_name + " processed"
db.close()
cur.executemany("INSERT INTO exported_data VALUES(?, ?, ?)", rows_list)
#pprint.pprint(rows_list)
conn.commit()
Solution
OK, after about another half hour of testing before lunch, I discovered that my hypothesis was in fact correct: some files had not been packed and as such still contained records that had been flagged as deleted. They should not have been in an unpacked state after export, which caused more confusion.
I manually packed one file and tested it and it immediately returned the proper results.
A big thanks for the help on this. I have added the solution given below to ignore the deleted records. I had searched and searched for this deleted flag in the module but could not find an API doc for it; I even looked in the code, but in the fog of it all it must have slipped by. Thanks a million for the solution and help, guys.
If you want to discard records marked as deleted, you can write:
for rec in db:
    if not rec.deleted:
        row_tuple = (rec["name"], rec["address"], rec["age"])
        rows_list.append(row_tuple)
