Unicode Problem with SQLAlchemy - python

I know I'm having a problem with a conversion from Unicode but I'm not sure where it's happening.
I'm extracting data about a recent European trip from a directory of HTML files. Some of the location names have non-ASCII characters (such as é, ô, ü). I'm getting the data from a string representation of the file using regex.
If I print the locations as I find them, they print with the characters, so the encoding must be OK:
Le Pré-Saint-Gervais, France
Hôtel-de-Ville, France
I'm storing the data in a SQLite table using SQLAlchemy:
Base = declarative_base()

class Point(Base):
    __tablename__ = 'points'

    id = Column(Integer, primary_key=True)
    pdate = Column(Date)
    ptime = Column(Time)
    location = Column(Unicode(32))
    weather = Column(String(16))
    high = Column(Float)
    low = Column(Float)
    lat = Column(String(16))
    lon = Column(String(16))
    image = Column(String(64))
    caption = Column(String(64))

    def __init__(self, filename, pdate, ptime, location, weather, high, low, lat, lon, image, caption):
        self.filename = filename
        self.pdate = pdate
        self.ptime = ptime
        self.location = location
        self.weather = weather
        self.high = high
        self.low = low
        self.lat = lat
        self.lon = lon
        self.image = image
        self.caption = caption

    def __repr__(self):
        return "<Point('%s','%s','%s')>" % (self.filename, self.pdate, self.ptime)

engine = create_engine('sqlite:///:memory:', echo=False)
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = Session()
I loop through the files and insert the data from each one into the database:
for filename in filelist:
    # open the file and extract the information using regex such as:
    location_re = re.compile("<h2>(.*)</h2>", re.M)
    # extract other data
    newpoint = Point(filename, pdate, ptime, location, weather, high, low, lat, lon, image, caption)
    session.add(newpoint)
    session.commit()
I see the following warning on each insert:
/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.4p2-py2.5.egg/sqlalchemy/engine/default.py:230: SAWarning: Unicode type received non-unicode bind param value 'Spitalfields, United Kingdom'
param.append(processors[key](compiled_params[key]))
And when I try to do anything with the table such as:
session.query(Point).all()
I get:
Traceback (most recent call last):
File "./extract_trips.py", line 131, in <module>
session.query(Point).all()
File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.4p2-py2.5.egg/sqlalchemy/orm/query.py", line 1193, in all
return list(self)
File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.4p2-py2.5.egg/sqlalchemy/orm/query.py", line 1341, in instances
fetch = cursor.fetchall()
File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.4p2-py2.5.egg/sqlalchemy/engine/base.py", line 1642, in fetchall
self.connection._handle_dbapi_exception(e, None, None, self.cursor, self.context)
File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.4p2-py2.5.egg/sqlalchemy/engine/base.py", line 931, in _handle_dbapi_exception
raise exc.DBAPIError.instance(statement, parameters, e, connection_invalidated=is_disconnect)
sqlalchemy.exc.OperationalError: (OperationalError) Could not decode to UTF-8 column 'points_location' with text 'Le Pré-Saint-Gervais, France' None None
I would like to be able to correctly store and then return the location names with the original characters intact. Any help would be much appreciated.

I found this article that helped explain my troubles somewhat:
http://www.amk.ca/python/howto/unicode#reading-and-writing-unicode-data
I was able to get the desired results by using the 'codecs' module and then changing my program as follows:
When opening the file:
infile = codecs.open(filename, 'r', encoding='iso-8859-1')
When printing the location:
print location.encode('ISO-8859-1')
I can now query and manipulate the data from the table without the error from before. I just have to specify the encoding when I output the text.
(I still don't entirely understand how this is working so I guess it's time to learn more about Python's unicode handling...)
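For reference, a minimal sketch of the whole flow (this assumes the HTML files really are ISO-8859-1; adjust the encoding if they declare something else):

import codecs
import re

location_re = re.compile("<h2>(.*)</h2>", re.M)

infile = codecs.open(filename, 'r', encoding='iso-8859-1')
html = infile.read()        # html is a unicode object, decoded at read time
infile.close()

location = location_re.search(html).group(1)   # regex on unicode yields unicode
print location.encode('iso-8859-1')            # encode only at the output boundary

Because location is already a unicode object by the time it reaches the Unicode column, the SAWarning goes away and SQLite stores valid UTF-8.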

From sqlalchemy.org, see the 0.4.2 changelog:

added new flag to String and create_engine(), assert_unicode=(True|False|'warn'|None). Defaults to False or None on create_engine() and String, 'warn' on the Unicode type. When True, results in all unicode conversion operations raising an exception when a non-unicode bytestring is passed as a bind parameter. 'warn' results in a warning. It is strongly advised that all unicode-aware applications make proper use of Python unicode objects (i.e. u'hello' and not 'hello') so that data round trips accurately.

I think you are trying to pass in a non-unicode bytestring. Perhaps this might lead you on the right track? Some form of conversion is needed; compare 'hello' and u'hello'.
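For example, in a Python 2 shell (the 'iso-8859-1' here is an assumption about the source files' encoding):

>>> s = 'Le Pr\xe9-Saint-Gervais, France'   # bytestring as read from the file
>>> type(s)
<type 'str'>
>>> u = s.decode('iso-8859-1')               # explicit conversion to unicode
>>> type(u)
<type 'unicode'>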
Cheers

Try using a column type of Unicode rather than String for the unicode columns:
Base = declarative_base()

class Point(Base):
    __tablename__ = 'points'

    id = Column(Integer, primary_key=True)
    pdate = Column(Date)
    ptime = Column(Time)
    location = Column(Unicode(32))
    weather = Column(String(16))
    high = Column(Float)
    low = Column(Float)
    lat = Column(String(16))
    lon = Column(String(16))
    image = Column(String(64))
    caption = Column(String(64))
Edit: Response to comment:
If you're getting warnings about unicode encodings then there are two things you can try:
Convert your location to unicode. This would mean having your Point created like this:
newpoint = Point(filename, pdate, ptime, unicode(location), weather, high, low, lat, lon, image, caption)
The unicode() conversion will pass a unicode string through unchanged and will decode a byte string, but note that with no encoding argument it assumes ASCII, so it will raise UnicodeDecodeError on bytes like é. For non-ASCII data pass the source encoding explicitly, e.g. unicode(location, 'iso-8859-1').
If that doesn't solve the encoding issues, try calling encode on your unicode objects. That would mean using code like:
newpoint = Point(filename, pdate, ptime, unicode(location).encode('utf-8'), weather, high, low, lat, lon, image, caption)
This step probably won't be necessary, but it essentially converts a unicode object from unicode code points to a specific byte representation (in this case, UTF-8). I'd expect SQLAlchemy to do this for you when you pass in unicode objects, but it may not.
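To make the difference concrete, a small sketch (the 'iso-8859-1' source encoding is an assumption about the HTML files):

raw = 'H\xf4tel-de-Ville, France'     # bytestring; \xf4 is a Latin-1 'ô'
# unicode(raw) would raise UnicodeDecodeError: it implicitly decodes as ASCII
loc = unicode(raw, 'iso-8859-1')      # decode with an explicit source encoding
loc = raw.decode('iso-8859-1')        # equivalent spelling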

Related

Problems querying with python to BigQuery (Python String Format)

I am trying to make a query to BigQuery in order to modify all the values of a row (in Python). When I use a simple string to query, I have no problems. Nevertheless, when I introduce string formatting the query does not work. Below I present the same query, but with fewer of the columns I am modifying.
I already made the connection to BigQuery by defining the Client, etc., and it works properly.
I tried:
"UPDATE `riscos-dev.survey_test.data-test-bdrn` SET informaci_meteorol_gica = {inf}, risc = {ri} WHERE objectid = {obj_id}".format(inf = df.informaci_meteorol_gica[index], ri = df.risc[index], obj_id = df.objectid[index])
The input values to format are:
df.informaci_meteorol_gica[index] = 'Neu' (a string, as is df.risc[index]), and df.objectid[index] = 3
I am obtaining the following error message:
BadRequest: 400 Braced constructors are not supported at [1:77]
Instead of using the format method of string, I propose another approach with f-string formatting in Python:
def build_query():
    inf = "'test_inf'"
    ri = "'test_ri'"
    obj_id = "'test_obj_id'"
    return f"UPDATE `riscos-dev.survey_test.data-test-bdrn` SET informaci_meteorol_gica = {inf}, risc = {ri} WHERE objectid = {obj_id}"

if __name__ == '__main__':
    query = build_query()
    print(query)

The result is:
UPDATE `riscos-dev.survey_test.data-test-bdrn` SET informaci_meteorol_gica = 'test_inf', risc = 'test_ri' WHERE objectid = 'test_obj_id'
I mocked the query params in my example with:
inf = "'test_inf'"
ri = "'test_ri'"
obj_id = "'test_obj_id'"
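As an aside, building UPDATE statements by string interpolation invites quoting bugs and SQL injection; the BigQuery client also supports query parameters. A sketch, assuming the google-cloud-bigquery library (the parameter values below are placeholders):

from google.cloud import bigquery

client = bigquery.Client()
query = """
    UPDATE `riscos-dev.survey_test.data-test-bdrn`
    SET informaci_meteorol_gica = @inf, risc = @ri
    WHERE objectid = @obj_id
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("inf", "STRING", "Neu"),
        bigquery.ScalarQueryParameter("ri", "STRING", "test_ri"),
        bigquery.ScalarQueryParameter("obj_id", "INT64", 3),
    ]
)
client.query(query, job_config=job_config).result()   # wait for the DML to finish

This sidesteps the quoting problem entirely, since BigQuery substitutes the @-parameters itself.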

'SQLite Time type only accepts Python time objects as input.' error for a database

I am writing a program with a booking system backed by SQLite, and one of the CSV files has time as a variable. I need it as a time type because I do operations on the time, but it gives this error message:
SQLite Time type only accepts Python time objects as input.
How do I get around this?
My code is below.
class Flight(Base):
    __tablename__ = 'flights'

    id = Column(Integer, primary_key=True)
    planeid = Column(Integer, ForeignKey('planes.id'))
    leave = Column(Time)
    arrive = Column(Time)
    date = Column(Date)
    passengers = Column(Integer)
    destination = Column(String)
    bookings = relationship("Booking", back_populates='flights')
    plane = relationship("Plane", back_populates='flight')

...

if session.query(Flight).count() == 0:
    with open("flights.csv", "r") as flights_file:
        lines = flights_file.readlines()
        for line in lines:
            _, planeid, leave, arrive, date, passengers, destination = line.rstrip().split(",")
            new_flight = Flight(planeid=planeid,
                                leave=leave,
                                arrive=arrive,
                                date=date,
                                passengers=passengers,
                                destination=destination)
            objects_to_add.append(new_flight)
    session.add_all(objects_to_add)
    session.commit()
Are you importing Time from sqlalchemy? Please check this:
from sqlalchemy import (Column, Time)
arrive = Column(Time)
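If Time is already imported, the likelier cause is that leave and arrive come out of the CSV as plain strings, while SQLAlchemy's Time (and Date) types want datetime.time and datetime.date objects. A sketch of the conversion inside the loop (the "%H:%M" and "%d/%m/%Y" format strings are assumptions about your CSV):

from datetime import datetime

leave_t = datetime.strptime(leave, "%H:%M").time()     # e.g. "14:30" -> time(14, 30)
arrive_t = datetime.strptime(arrive, "%H:%M").time()
date_d = datetime.strptime(date, "%d/%m/%Y").date()    # Date columns want date objects

new_flight = Flight(planeid=int(planeid), leave=leave_t, arrive=arrive_t,
                    date=date_d, passengers=int(passengers),
                    destination=destination)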

Django w/ sqlite3 CharField crashing on special character: "Rüppell's Vulture"

Django version 1.9, DB backend: sqlite3.
I am having a hard time figuring out how to handle this error. I am importing the master bird species list (available here) into a set of Django models. I had the import going well, but it is crashing when I try to save the value Rüppell's Vulture into the model. The target field is defined like this:
species_english = models.CharField(max_length=100, default=None, blank=True, null=True)
Here is the error:
ProgrammingError: You must not use 8-bit bytestrings unless you use a
text_factory that can interpret 8-bit bytestrings (like text_factory =
str). It is highly recommended that you instead just switch your
application to Unicode strings.
I was reading through Django's documentation about unicode strings. Which starts off beautifully like this:
Django natively supports Unicode data everywhere. Providing your
database can somehow store the data, you can safely pass around
Unicode strings to templates, models and the database.
Also, looking up information about this character, ü, it has representations in both Unicode and UTF-8.
The method for saving this string to the DB is very straight-forward, I am simply parsing the CSV file using csv.reader:
new_species = Species(genus=new_genus, species=row[4], species_english=row[7])
Where the error-throwing string is contained in row[7]. What am I missing about why the database will not allow this character?
UPDATE
Here is the content of the whole script importing the data:
import csv
from birds.models import SpeciesFile, Order, Family, Genus, Species, Subspecies

csv_file = str(SpeciesFile.objects.all()[0].species_list)

# COLUMNS
# 0 - Order
# 1 - Family Scientific
# 2 - Family (English)
# 3 - Genus
# 4 - Species
# 5 - SubSpecies

with open("birds/media/"+csv_file.split('/')[1], 'rU') as c:
    Order.objects.all().delete()
    Family.objects.all().delete()
    Genus.objects.all().delete()
    Species.objects.all().delete()
    Subspecies.objects.all().delete()
    reader = csv.reader(c, delimiter=';', quotechar='"')
    ini_rows = 4
    for row in reader:
        if ini_rows > 0:
            ini_rows -= 1
            continue
        if row[0]:
            new_order = Order(order=row[0])
            new_order.save()
        elif row[1]:
            new_fam = Family(order=new_order, family_scientific=row[1], family_english=row[2])
            new_fam.save()
        elif row[3]:
            new_genus = Genus(family=new_fam, genus=row[3])
            new_genus.save()
        elif row[4]:
            print row[4]
            new_species = Species(genus=new_genus, species=row[4], species_english=row[7])
            new_species.save()
        elif row[5]:
            print row[5]
            new_subspecies = Subspecies(species=new_species, subspecies=row[5])
            new_subspecies.save()
And here are the models.py file definitions:
from __future__ import unicode_literals
from django.db import models

class SpeciesFile(models.Model):
    species_list = models.FileField()

class Order(models.Model):
    order = models.CharField(max_length=100)
    def __str__(self):
        return self.order

class Family(models.Model):
    order = models.ForeignKey(Order)
    family_scientific = models.CharField(max_length=100)
    family_english = models.CharField(max_length=100)
    def __str__(self):
        return self.family_english+" "+self.family_scientific

class Genus(models.Model):
    family = models.ForeignKey(Family)
    genus = models.CharField(max_length=100)
    def __str__(self):
        return self.genus

class Species(models.Model):
    genus = models.ForeignKey(Genus, default=None)
    species = models.CharField(max_length=100, default=None)
    species_english = models.CharField(max_length=100, default=None, blank=True, null=True)
    def __str__(self):
        return self.species+" "+self.species_english

class Subspecies(models.Model):
    species = models.ForeignKey(Species)
    subspecies = models.CharField(max_length=100)
    def __str__(self):
        return self.subspecies
Django CharField is a character-oriented format. You need to pass it Unicode strings.
CSV is a byte-oriented format. When you read data out of a CSV file you get byte strings.
To get from bytes to characters you have to know what encoding was used when the original characters were turned into bytes as the CSV file was exported. Ideally that would be UTF-8, but if the file has come out of Excel it probably won't be. Maybe it's Windows-1252 (‘ANSI’ code page for Western European installations). Maybe it's something else.
(Django/Python 2 lets you get away with writing byte strings to Unicode properties when they contain only ASCII bytes (0–127), because those have the same mapping in a lot of encodings. ASCII is a 'best guess' at Do What I Mean, but it's not reliable, and Python 3 prefers to raise errors if you try.)
So:
new_order = Order(order=row[0].decode('windows-1252'))
or, to decode the whole row at once:
row = [s.decode('windows-1252') for s in row]
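If you aren't sure which encoding the export used, one pragmatic sketch (this fallback heuristic is my suggestion, not something the file guarantees):

def to_text(value):
    # Try UTF-8 first; fall back to Windows-1252 for Excel-style exports
    try:
        return value.decode('utf-8')
    except UnicodeDecodeError:
        return value.decode('windows-1252')

row = [to_text(s) for s in row]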

How to get meta data with MMPython for images and video

I'm trying to get the creation date for all the photos and videos in a folder, and having mixed success. I have .jpg, .mov, and .mp4 videos in this folder.
I spent a long time looking at other posts, and I saw quite a few references to the MMPython library here: http://sourceforge.net/projects/mmpython/
Looking through the MMPython source I think this will give me what I need, but the problem is that I don't know how to invoke it. In other words, I have my file, but I don't know how to interface with MMPython and I can't see any examples.
Here is my script:
import os
import sys
import exifread
import hashlib
import ExifTool

if len(sys.argv) > 1:
    var = sys.argv[1]
else:
    var = raw_input("Please enter the directory: ")

direct = '/Users/bbarr233/Documents/Personal/projects/photoOrg/photos'
print "direct: " + direct
print "var: " + var
var = var.rstrip()

for root, dirs, filenames in os.walk(var):
    print "root " + root
    for f in filenames:
        # make sure that we are dealing with images or videos
        if f.find(".jpg") > -1 or f.find(".jpeg") > -1 or f.find(".mov") > -1 or f.find(".mp4") > -1:
            print "file " + root + "/" + f
            f = open(root + "/" + f, 'rb')
            # Now I want to do something like this, but don't know which method to call:
            # tags = mmpython.process_file(f)
            # do something with the creation date
Can someone give me a hint on how to use the MMPython library?
Thanks!!!
PS. I've looked at some other threads on this, such as:
Link to thread: this one didn't make sense to me.
Link to thread: This one worked great for mov but not for my mp4s, it said the creation date was 1946
Link to thread: This thread is one of the ones that suggested MMPython, but like I said I don't know how to use it.
Here is a well-commented code example I found which will show you how to use mmpython.
This module extracts metadata from new media files, using mmpython,
and provides utilities for converting metadata between formats.
# Copyright (C) 2005 Micah Dowty <micah#navi.cx>
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
import md5, os, cPickle
import mmpython
from mmpython.audio import mp3info
import sqlite
from RioKarma import Paths
class RidCalculator:
    """This object calculates the RID of a file- a sparse digest used by Rio Karma.
    For files <= 64K, this is the file's md5sum. For larger files, this is the XOR
    of three md5sums, from 64k blocks in the beginning, middle, and end.
    """

    def fromSection(self, fileObj, start, end, blockSize=0x10000):
        """This needs a file-like object, as well as the offset and length of the portion
        the RID is generated from. Beware that there is a special case for MP3 files.
        """
        # It's a short file, compute only one digest
        if end-start <= blockSize:
            fileObj.seek(start)
            return md5.md5(fileObj.read(end-start)).hexdigest()
        # Three digests for longer files
        fileObj.seek(start)
        a = md5.md5(fileObj.read(blockSize)).digest()
        fileObj.seek(end - blockSize)
        b = md5.md5(fileObj.read(blockSize)).digest()
        fileObj.seek((start + end - blockSize) / 2)
        c = md5.md5(fileObj.read(blockSize)).digest()
        # Combine the three digests
        return ''.join(["%02x" % (ord(a[i]) ^ ord(b[i]) ^ ord(c[i])) for i in range(16)])

    def fromFile(self, filename, length=None, mminfo=None):
        """Calculate the RID from a file, given its name. The file's length and
        mmpython results may be provided if they're known, to avoid duplicating work.
        """
        if mminfo is None:
            mminfo = mmpython.parse(filename)
        f = open(filename, "rb")
        if length is None:
            f.seek(0, 2)
            length = f.tell()
            f.seek(0)
        # Is this an MP3 file? For some silliness we have to skip the header
        # and the last 128 bytes of the file. mmpython can tell us where the
        # header starts, but only in a somewhat ugly way.
        if isinstance(mminfo, mmpython.audio.eyed3info.eyeD3Info):
            try:
                offset = mp3info.MPEG(f)._find_header(f)[0]
            except ZeroDivisionError:
                # This is a bit of a kludge, since mmpython seems to crash
                # here on some MP3s for a currently-unknown reason.
                print "WARNING, mmpython got a div0 error on %r" % filename
                offset = 0
            if offset < 0:
                # Hmm, it couldn't find the header? Set this to zero
                # so we still get a usable RID, but it probably
                # won't strictly be a correct RID.
                offset = 0
            f.seek(0)
            return self.fromSection(f, offset, length-128)
        # Otherwise, use the whole file
        else:
            return self.fromSection(f, 0, length)

class BaseCache:
    """This is an abstract base class for objects that cache metadata
    dictionaries on disk. The cache is implemented as a sqlite database,
    with a 'dict' table holding administrative key-value data, and a
    'files' table holding both a pickled representation of the metadata
    and separate columns for all searchable keys.
    """

    # This must be defined by subclasses as a small integer that changes
    # when any part of the database schema or our storage format changes.
    schemaVersion = None

    # This is the template for our SQL schema. All searchable keys are
    # filled in automatically, but other items may be added by subclasses.
    schemaTemplate = """
    CREATE TABLE dict
    (
        name VARCHAR(64) PRIMARY KEY,
        value TEXT
    );
    CREATE TABLE files
    (
        %(keys)s,
        _pickled TEXT NOT NULL
    );
    """

    # A list of searchable keys, used to build the schema and validate queries
    searchableKeys = None
    keyType = "VARCHAR(255)"

    # The primary key is what ensures a file's uniqueness. Inserting a file
    # with a primary key identical to an existing one will update that
    # file rather than creating a new one.
    primaryKey = None

    def __init__(self, name):
        self.name = name
        self.connection = None

    def open(self):
        """Open the cache, creating it if necessary"""
        if self.connection is not None:
            return
        self.connection = sqlite.connect(Paths.getCache(self.name))
        self.cursor = self.connection.cursor()
        # See what version of the database we got. If it's empty
        # or it's old, we need to reset it.
        try:
            version = self._dictGet('schemaVersion')
        except sqlite.DatabaseError:
            version = None
        if version != str(self.schemaVersion):
            self.empty()

    def close(self):
        if self.connection is not None:
            self.sync()
            self.connection.close()
            self.connection = None

    def _getSchema(self):
        """Create a complete schema from our schema template and searchableKeys"""
        keys = []
        for key in self.searchableKeys:
            type = self.keyType
            if key == self.primaryKey:
                type += " PRIMARY KEY"
            keys.append("%s %s" % (key, type))
        return self.schemaTemplate % dict(keys=', '.join(keys))

    def _encode(self, obj):
        """Encode an object that may not be a plain string"""
        if type(obj) is unicode:
            obj = obj.encode('utf-8')
        elif type(obj) is not str:
            obj = str(obj)
        return "'%s'" % sqlite.encode(obj)

    def _dictGet(self, key):
        """Return a value stored in the persistent dictionary. Returns None if
        the key has no matching value.
        """
        self.cursor.execute("SELECT value FROM dict WHERE name = '%s'" % key)
        row = self.cursor.fetchone()
        if row:
            return sqlite.decode(row[0])

    def _dictSet(self, key, value):
        """Create or update a value stored in the persistent dictionary"""
        encodedValue = self._encode(value)
        # First try inserting a new item
        try:
            self.cursor.execute("INSERT INTO dict (name, value) VALUES ('%s', %s)" %
                                (key, encodedValue))
        except sqlite.IntegrityError:
            # Violated the primary key constraint, update an existing item
            self.cursor.execute("UPDATE dict SET value = %s WHERE name = '%s'" % (
                encodedValue, key))

    def sync(self):
        """Synchronize in-memory parts of the cache with disk"""
        self.connection.commit()

    def empty(self):
        """Reset the database to a default empty state"""
        # Find and destroy every table in the database
        self.cursor.execute("SELECT tbl_name FROM sqlite_master WHERE type='table'")
        tables = [row.tbl_name for row in self.cursor.fetchall()]
        for table in tables:
            self.cursor.execute("DROP TABLE %s" % table)
        # Apply the schema
        self.cursor.execute(self._getSchema())
        self._dictSet('schemaVersion', self.schemaVersion)

    def _insertFile(self, d):
        """Insert a new file into the cache, given a dictionary of its metadata"""
        # Make name/value lists for everything we want to update
        dbItems = {'_pickled': self._encode(cPickle.dumps(d, -1))}
        for column in self.searchableKeys:
            if column in d:
                dbItems[column] = self._encode(d[column])
        # First try inserting a new row
        try:
            names = dbItems.keys()
            self.cursor.execute("INSERT INTO files (%s) VALUES (%s)" %
                                (",".join(names), ",".join([dbItems[k] for k in names])))
        except sqlite.IntegrityError:
            # Violated the primary key constraint, update an existing item
            self.cursor.execute("UPDATE files SET %s WHERE %s = %s" % (
                ", ".join(["%s = %s" % i for i in dbItems.iteritems()]),
                self.primaryKey, self._encode(d[self.primaryKey])))

    def _deleteFile(self, key):
        """Delete a File from the cache, given its primary key"""
        self.cursor.execute("DELETE FROM files WHERE %s = %s" % (
            self.primaryKey, self._encode(key)))

    def _getFile(self, key):
        """Return a metadata dictionary given its primary key"""
        self.cursor.execute("SELECT _pickled FROM files WHERE %s = %s" % (
            self.primaryKey, self._encode(key)))
        row = self.cursor.fetchone()
        if row:
            return cPickle.loads(sqlite.decode(row[0]))

    def _findFiles(self, **kw):
        """Search for files. The provided keywords must be searchable.
        Yields a list of details dictionaries, one for each match.
        Any keyword can be None (matches anything) or it can be a
        string to match. Keywords that aren't provided are assumed
        to be None.
        """
        constraints = []
        for key, value in kw.iteritems():
            if key not in self.searchableKeys:
                raise ValueError("Key name %r is not searchable" % key)
            constraints.append("%s = %s" % (key, self._encode(value)))
        if not constraints:
            constraints.append("1")
        self.cursor.execute("SELECT _pickled FROM files WHERE %s" %
                            " AND ".join(constraints))
        row = None
        while 1:
            row = self.cursor.fetchone()
            if not row:
                break
            yield cPickle.loads(sqlite.decode(row[0]))

    def countFiles(self):
        """Return the number of files cached"""
        self.cursor.execute("SELECT COUNT(_pickled) FROM files")
        return int(self.cursor.fetchone()[0])

    def updateStamp(self, stamp):
        """The stamp for this cache is any arbitrary value that is expected to
        change when the actual data on the device changes. It is used to
        check the cache's validity. This function update's the stamp from
        a value that is known to match the cache's current contents.
        """
        self._dictSet('stamp', stamp)

    def checkStamp(self, stamp):
        """Check whether a provided stamp matches the cache's stored stamp.
        This should be used when you have a stamp that matches the actual
        data on the device, and you want to see if the cache is still valid.
        """
        return self._dictGet('stamp') == str(stamp)

class LocalCache(BaseCache):
    """This is a searchable metadata cache for files on the local disk.
    It can be used to speed up repeated metadata lookups for local files,
    but more interestingly it can be used to provide full metadata searching
    on local music files.
    """
    schemaVersion = 1
    searchableKeys = ('type', 'rid', 'title', 'artist', 'source', 'filename')
    primaryKey = 'filename'

    def lookup(self, filename):
        """Return a details dictionary for the given filename, using the cache if possible"""
        filename = os.path.realpath(filename)
        # Use the mtime as a stamp to see if our cache is still valid
        mtime = os.stat(filename).st_mtime
        cached = self._getFile(filename)
        if cached and int(cached.get('mtime')) == int(mtime):
            # Yay, still valid
            return cached['details']
        # Nope, generate a new dict and cache it
        details = {}
        Converter().detailsFromDisk(filename, details)
        generated = dict(
            type = details.get('type'),
            rid = details.get('rid'),
            title = details.get('title'),
            artist = details.get('artist'),
            source = details.get('source'),
            mtime = mtime,
            filename = filename,
            details = details,
        )
        self._insertFile(generated)
        return details

    def findFiles(self, **kw):
        """Search for files that match all given search keys. This returns an iterator
        over filenames, skipping any files that aren't currently valid in the cache.
        """
        for cached in self._findFiles(**kw):
            try:
                mtime = os.stat(cached['filename']).st_mtime
            except OSError:
                pass
            else:
                if cached.get('mtime') == mtime:
                    yield cached['filename']

    def scan(self, path):
        """Recursively scan all files within the specified path, creating
        or updating their cache entries.
        """
        for root, dirs, files in os.walk(path):
            for name in files:
                filename = os.path.join(root, name)
                self.lookup(filename)
            # checkpoint this after every directory
            self.sync()

_defaultLocalCache = None

def getLocalCache(create=True):
    """Get the default instance of LocalCache"""
    global _defaultLocalCache
    if (not _defaultLocalCache) and create:
        _defaultLocalCache = LocalCache("local")
        _defaultLocalCache.open()
    return _defaultLocalCache

class Converter:
    """This object manages the connection between different kinds of
    metadata- the data stored within a file on disk, mmpython attributes,
    Rio attributes, and file extensions.
    """

    # Maps mmpython classes to codec names for all formats the player
    # hardware supports.
    codecNames = {
        mmpython.audio.eyed3info.eyeD3Info: 'mp3',
        mmpython.audio.mp3info.MP3Info: 'mp3',
        mmpython.audio.flacinfo.FlacInfo: 'flac',
        mmpython.audio.pcminfo.PCMInfo: 'wave',
        mmpython.video.asfinfo.AsfInfo: 'wma',
        mmpython.audio.ogginfo.OggInfo: 'vorbis',
    }

    # Maps codec names to extensions. Identity mappings are the
    # default, so they are omitted.
    codecExtensions = {
        'wave': 'wav',
        'vorbis': 'ogg',
    }

    def filenameFromDetails(self, details, unicodeEncoding='utf-8'):
        """Determine a good filename to use for a file with the given metadata
        in the Rio 'details' format. If it's a data file, this will use the
        original file as stored in 'title'.
        Otherwise, it uses Navi's naming convention: Artist_Name/album_name/##_track_name.extension
        """
        if details.get('type') == 'taxi':
            return details['title']
        # Start with just the artist...
        name = details.get('artist', 'None').replace(os.sep, "").replace(" ", "_") + os.sep
        album = details.get('source')
        if album:
            name += album.replace(os.sep, "").replace(" ", "_").lower() + os.sep
        track = details.get('tracknr')
        if track:
            name += "%02d_" % track
        name += details.get('title', 'None').replace(os.sep, "").replace(" ", "_").lower()
        codec = details.get('codec')
        extension = self.codecExtensions.get(codec, codec)
        if extension:
            name += '.' + extension
        return unicode(name).encode(unicodeEncoding, 'replace')

    def detailsFromDisk(self, filename, details):
        """Automagically load media metadata out of the provided filename,
        adding entries to details. This works on any file type
        mmpython recognizes, and other files should be tagged
        appropriately for Rio Taxi.
        """
        info = mmpython.parse(filename)
        st = os.stat(filename)
        # Generic details for any file. Note that we start out assuming
        # all files are unreadable, and label everything for Rio Taxi.
        # Later we'll mark supported formats as music.
        details['length'] = st.st_size
        details['type'] = 'taxi'
        details['rid'] = RidCalculator().fromFile(filename, st.st_size, info)
        # We get the bulk of our metadata via mmpython if possible
        if info:
            self.detailsFromMM(info, details)
        if details['type'] == 'taxi':
            # All taxi files get their filename as their title, regardless of what mmpython said
            details['title'] = os.path.basename(filename)
            # Taxi files also always get a codec of 'taxi'
            details['codec'] = 'taxi'
        # Music files that still don't get a title get their filename minus the extension
        if not details.get('title'):
            details['title'] = os.path.splitext(os.path.basename(filename))[0]

    def detailsFromMM(self, info, details):
        """Update Rio-style 'details' metadata from MMPython info"""
        # Mime types aren't implemented consistently in mmpython, but
        # we can look at the type of the returned object to decide
        # whether this is a format that the Rio probably supports.
        # This dictionary maps mmpython clases to Rio codec names.
        for cls, codec in self.codecNames.iteritems():
            if isinstance(info, cls):
                details['type'] = 'tune'
                details['codec'] = codec
                break
        # Map simple keys that don't require and hackery
        for fromKey, toKey in (
            ('artist', 'artist'),
            ('title', 'title'),
            ('album', 'source'),
            ('date', 'year'),
            ('samplerate', 'samplerate'),
        ):
            v = info[fromKey]
            if v is not None:
                details[toKey] = v
        # The rio uses a two-letter prefix on bit rates- the first letter
        # is 'f' or 'v', presumably for fixed or variable. The second is
        # 'm' for mono or 's' for stereo. There doesn't seem to be a good
        # way to get VBR info out of mmpython, so currently this always
        # reports a fixed bit rate. We also have to kludge a bit because
        # some metdata sources give us bits/second while some give us
        # kilobits/second. And of course, there are multiple ways of
        # reporting stereo...
        kbps = info['bitrate']
        if type(kbps) in (int, float) and kbps > 0:
            stereo = bool( (info['channels'] and info['channels'] >= 2) or
                           (info['mode'] and info['mode'].find('stereo') >= 0) )
            if kbps > 8000:
                kbps = kbps // 1000
            details['bitrate'] = ('fm', 'fs')[stereo] + str(kbps)
        # If mmpython gives us a length it seems to always be in seconds,
        # whereas the Rio expects milliseconds.
        length = info['length']
        if length:
            details['duration'] = int(length * 1000)
        # mmpython often gives track numbers as a fraction- current/total.
        # The Rio only wants the current track, and we might as well also
        # strip off leading zeros and such.
        trackNo = info['trackno']
        if trackNo:
            details['tracknr'] = int(trackNo.split("/", 1)[0])
Reference: http://svn.navi.cx/misc/trunk/rio-karma/python/RioKarma/Metadata.py
Further, you should look at the os.stat functions:
https://docs.python.org/2/library/os.html
os.stat returns file creation and modification times (ctime, mtime).
It should be something like this:
import os

st = os.stat(full_file_path)
file_ctime = st.st_ctime
print(file_ctime)
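Distilled from the reference code above, the minimal mmpython invocation is mmpython.parse(path): it returns an info object (or None) that supports dictionary-style access to fields such as 'date'. A sketch (whether 'date' is populated depends on the container format, so treat that as an assumption):

import mmpython

info = mmpython.parse("/path/to/clip.mp4")
if info is not None:
    print info['date']     # embedded creation date, when the format provides one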

GeoAlchemy2 store point and query results

The documentation on GeoAlchemy2 doesn't seem fully featured (as compared to the previous version).
I have a model:
class AddressCode(Base):
    __tablename__ = 'address_codes'
    id = Column(Integer, primary_key=True)
    code = Column(Unicode(34))
    geometry = Column(Geometry('POINT'))
And I want to store lat/long data, which I tried to save in the above model, example
"51.42553,-0.666085"
Which gives me the error:
"Parse error at position 9 within Geometry (the "," char")
Anyone able to shed some light on where I am going wrong here?
Also on the subject, how would I perform a query to, say, show the nearest 20 users?
class AddressCode(Base):
    __tablename__ = 'address_codes'
    id = Column(Integer, primary_key=True)
    name = Column(Unicode(34))
    geometry = Column(Geometry('POINT'))
Something like?
geom_var = "51.42553,-0.666085"
Session.query(User).filter(func.ST_DWithin, 20, geom_var).all()
In both GeoAlchemy and GeoAlchemy2 you need to specify geometries in well-known text (WKT) or well-known binary (WKB) format. For a point the syntax is 'POINT(X Y)', thus 'POINT(-0.666085 51.42553)'; notice that the longitude comes first, then the latitude.
The shapely module contains useful functions for handling geometries outside relational databases, along with easy conversions between Python geometry classes and WKT, WKB formats.
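For example, a sketch using shapely to build the point and hand it to the model (from_shape lives in geoalchemy2.shape; the code value is a placeholder):

from shapely.geometry import Point
from geoalchemy2.shape import from_shape

p = Point(-0.666085, 51.42553)          # Point(x=longitude, y=latitude)
print(p.wkt)                             # POINT (-0.666085 51.42553)

record = AddressCode(code='AB1 2CD', geometry=from_shape(p))
session.add(record)
session.commit()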
Here's how you do it:
This region table is defined as:
regionTable = Table('region', metadata,
    Column('region_id', Integer, Sequence('region_region_id_seq'), primary_key=True),
    Column('type_cd', String(30)),
    Column('region_nm', String(255)),
    Column('geo_loc', Geography)
)
How to query it (give me all regions within 50 miles of my current location):
sqlstring = select([regionTable],
    func.ST_DWithin(
        regionTable.c.geo_loc,
        'POINT(-74.78886216922375 40.32829276931833)',
        1609*50))

result = connection.execute(sqlstring)
for row in result:
    print "region name:", row['region_nm']
