implement a simple NoSQL database interface based on a text file - python

In this test, you have to implement a simple NoSQL database interface based on a text file. The get_() function takes the file path and a key (any string) as input and returns the value for this key; if the value is missing, None is returned. The set_() function also accepts the path to the file, the key, and the value (any string), and writes the value to the database under that key.
The sizes of the key and value are set by the constants KEY_LEN and VALUE_LEN.
What I should make:
db = open('./nosql.db', 'w')
path = db.name # ./nosql.db
set_(path, 'key1', 'value1')
get_(path, 'key1') # 'value1'
set_(path, 'key1', 'value2')
get_(path, 'key1') # 'value2'
My code (yes, I haven't done much, but I think the idea is clear):
KEY_LEN = 10
VALUE_LEN = 20
# BEGIN (write your solution here)
def set_(path, key, value):
    fr = open(path)
    fw = open(path, 'w')
    for l in fr:
        ls = l.split(' ')
        if ls[0] == key:
            ls[1] = value
        fr.writelines(' '.join(ls))
# END
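
For comparison, here is a minimal sketch of how the two functions could be written. The storage layout (one fixed-width record per line, key padded to KEY_LEN and value padded to VALUE_LEN) is my own assumption, not something given by the exercise:

KEY_LEN = 10
VALUE_LEN = 20

def set_(path, key, value):
    # Read all records, replace the one with a matching key or append a new
    # one, then rewrite the whole file (simple rather than efficient).
    try:
        with open(path) as f:
            lines = f.readlines()
    except FileNotFoundError:
        lines = []
    record = key.ljust(KEY_LEN) + value.ljust(VALUE_LEN) + '\n'
    for i, line in enumerate(lines):
        if line[:KEY_LEN].rstrip() == key:
            lines[i] = record
            break
    else:
        lines.append(record)
    with open(path, 'w') as f:
        f.writelines(lines)

def get_(path, key):
    # Return the value stored for key, or None if the key is absent.
    try:
        with open(path) as f:
            for line in f:
                if line[:KEY_LEN].rstrip() == key:
                    return line[KEY_LEN:KEY_LEN + VALUE_LEN].rstrip()
    except FileNotFoundError:
        pass
    return None

With this layout the example calls above behave as expected: set_(path, 'key1', 'value2') overwrites the record for 'key1' in place, and get_() on a missing key returns None.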

Related

Storing SQL output as python variables and storing in text files

I'm pretty new to Python and SQL and I am trying some coding tasks. I have a SQL query in the format below, where I return a set of values using Python and SQL. What I would like to do using Python is to take the values returned as "X as User_Name" and write them to a text file within my local Linux directory (for example in a file called Usernames.txt).
query = """\
Select
X as User_Name,
Y,
Z
FROM
tbl1
WHERE ...
AND ... """
In the snippet below I attempt to write this to the text file, but it does not seem to work for me:
cursor = connection.cursor()
....
fo = open('/localDrive/Usernames.txt', 'a')
for row in cursor:
    rows = list(row)
    fo.write(rows[0])
....
fo.close()
The issue is that sometimes more than one row is returned, so I'd need to store all usernames in that text file. I'd then like to be able to check against this text file and not return SQL output if the "X as User_Name" already exists within the text file (Usernames.txt). This is something I'm not sure how to do.
Just use Pickle to save your data, with dictionaries and sets to compare them. Pickle can save and load Python objects with no parsing required.
If you want a human readable output as well, just print the objects to the screen or file.
e.g. (untested)
import pickle
from pathlib import Path

pickle_path = Path("data.pickle")
fields = ('field_1', 'field_2', 'field_3')

def add_fields(data_list):
    # Return a list of dictionaries
    return [dict(zip(fields, row)) for row in data_list]

def get_unique_values(dict_list, key):
    # Return a set of key field values
    return set(dl[key] for dl in dict_list)

def get_data_subset(dict_list, key_field, keys):
    # Return records where key_field contains values in keys
    return [dl for dl in dict_list if dl[key_field] in keys]

# ...
# Create DB connection etc.
# ...
cursor = connection.cursor()
cursor.execute(query)
results = cursor.fetchall()

# De-serialise the local data if it exists
if pickle_path.exists():
    with pickle_path.open("rb") as pp:
        prev_results = pickle.load(pp)
else:
    prev_results = []

results = add_fields(results)
keys = get_unique_values(results, 'field_1')
prev_keys = get_unique_values(prev_results, 'field_1')

# All keys
all_keys = keys | prev_keys
# in both sets
existing_keys = keys & prev_keys
# just in prev
deleted_keys = prev_keys - keys
# just the new values in keys
new_keys = keys - prev_keys

# example: handle deleted data
temp_dl = []
for row in prev_results:
    if row['field_1'] not in deleted_keys:
        temp_dl.append(row)
prev_results = temp_dl

# example: handle new keys
new_data = get_data_subset(results, 'field_1', new_keys)
prev_results.extend(new_data)

# Serialise the local data
if pickle_path.exists():
    pickle_path.unlink()
with pickle_path.open("wb") as pp:
    pickle.dump(prev_results, pp)

if len(new_data):
    print("New records added")
    for row in new_data:
        print(row)
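
If you would rather keep the plain Usernames.txt file from the question instead of a pickle, the same set-based idea can be sketched like this. The path and the assumption that User_Name is the first selected column come from the question, and cursor is assumed to have already executed the query:

from pathlib import Path

names_path = Path('/localDrive/Usernames.txt')
# Load the usernames already written to the file into a set
existing = set(names_path.read_text().splitlines()) if names_path.exists() else set()

with names_path.open('a') as fo:
    for row in cursor.fetchall():
        user_name = row[0]               # X as User_Name is the first selected column
        if user_name not in existing:
            fo.write(user_name + '\n')   # newline so each name sits on its own line
            existing.add(user_name)
        # rows whose username is already in the file can be skipped here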

Reading the Value Attribute of a Checkbox in Flask/WTF

I have a form with a column of checkboxes corresponding to the columns in my database. I'm setting the value of each checkbox (in javascript) to the name of the column, but when I try to read the checkbox value in Flask/Python all I can get is True or False. How do I read the text value of the value attribute of the checkboxes?
Just to complicate things, I'm generating the form as a FieldList of FormFields, so I can't simply hardcode the field names. (Well, I could, but that would make it fragile to schema changes.)
My form code is
class ImportFilterSubForm(Form):
    use = BooleanField(
        'Use',
        render_kw={'class': 'use'}
    )
    sel = SelectField(
        'Maps to:',
        choices=[],
        render_kw={'class': 'sel'},
        validators=[Optional()]
    )

class ImportFilterForm(FlaskForm):
    rows = FieldList(FormField(ImportFilterSubForm))
My view code, with error handling removed, is
def prefilter_import():
    db_columns = Contact.__table__.columns.keys()
    filename = request.cookies.get('workfile')
    with open(filename) as fh:
        reader = csv.DictReader(fh)
        file_columns = reader.fieldnames
    form = ImportFilterForm()
    for col in db_columns:
        new_row = form.rows.append_entry()
        new_row.use.label = col
        new_row.sel.choices = file_columns
    input_file = request.cookies.get('input_file')
    return render_template('filter_import.html', form=form, filename=input_file)

def postfilter_import():
    form = ImportFilterForm()
    db_columns = Contact.__table__.columns.keys()
    filename = request.cookies.get('workfile')
    with open(filename) as fh:
        reader = csv.DictReader(fh)
        file_columns = reader.fieldnames
    missing_columns = db_columns
    extra_columns = file_columns
    mappings = dict()
    for i, row in enumerate(form.rows):
        if row.use.data:  # Problem arises here
            mappings[row.use.data] = row.sel.data
    for key, value in mappings.items():
        missing_columns.remove(key)  # Problem manifests here
        extra_columns.remove(value)
I'm trying to create a dict mapping the values of the checkboxes to the values of the selects, but I'm only getting True and False for the checkboxes, even though I've verified that the checkboxes' value attributes are correctly returned as the names of the corresponding columns.
How can I get Flask/WTForms to return the text of the value attributes?
After some investigation, I discovered the raw_data attribute of the field object, which contains a list of the values of the value attributes of the HTML control. Thus, the code
mappings = dict()
for row in form.rows:
    if row.use.data:
        mappings[row.sel.data] = row.use.raw_data[0]
for key, value in mappings.items():
    missing_columns.remove(value)
    extra_columns.remove(key)
does what I need it to do.
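As a standalone illustration of the difference between data and raw_data on a BooleanField, here is a small sketch; the form and field names are made up, and a Werkzeug MultiDict stands in for the submitted POST data:

from wtforms import Form, BooleanField
from werkzeug.datastructures import MultiDict

class DemoForm(Form):
    use = BooleanField('Use')

# Simulate a submission where the checkbox's value attribute was "first_name"
form = DemoForm(MultiDict([('use', 'first_name')]))
print(form.use.data)         # True  (BooleanField coerces the submitted value to a boolean)
print(form.use.raw_data[0])  # 'first_name' (the raw text of the value attribute)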

Python new section in config class

I am trying to write a dynamic config .ini file, where I can add new sections with keys and values, and also add key-less values.
I have written code which creates a .ini file, but the section comes out as 'default'. Also, it just overwrites the file every time instead of adding a new section.
I wrote the code below in Python 3 to create the .ini file.
import configparser

"""Generates the configuration file with the config class.
The file is a .ini file"""

class Config:
    """Class for data in uuids.ini file management"""
    def __init__(self):
        self.config = configparser.ConfigParser()
        self.config_file = "conf.ini"
        # self.config.read(self.config_file)

    def wrt(self, config_name={}):
        condict = {
            "test": "testval",
            'test1': 'testval1',
            'test2': 'testval2'
        }
        for name, val in condict.items():
            self.config.set(config_name, name, val)
        # self.config.read(self.config_file)
        with open(self.config_file, 'w+') as out:
            self.config.write(out)

if __name__ == "__main__":
    Config().wrt()
I should be able to add new sections with or without keys, and append keys or values. The sections should have proper names.
Some problems with your code:
- Using mutable objects as default parameters can be a little tricky and you may see unexpected behavior (see the short example after this list).
- You are using config.set(), which is legacy.
- You are defaulting config_name to a dictionary; why?
- Too much white space :p
- You don't need to iterate through the dictionary items to write them when using the newer (non-legacy) interface, as shown below.
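To illustrate the first point, a quick sketch of the mutable-default pitfall (the function names are invented for the example):

def add_item(item, bucket=[]):     # the same list object is reused on every call
    bucket.append(item)
    return bucket

print(add_item('a'))   # ['a']
print(add_item('b'))   # ['a', 'b']  -- the default silently kept its state

def add_item_safe(item, bucket=None):
    if bucket is None:             # create a fresh list on each call instead
        bucket = []
    bucket.append(item)
    return bucket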
This should work:
"""Generates the configuration file with the config class.
The file is a .ini file
"""
import configparser
import re
class Config:
"""Class for data in uuids.ini file management."""
def __init__(self):
self.config = configparser.ConfigParser()
self.config_file = "conf.ini"
# self.config.read(self.config_file)
def wrt(self, config_name='DEFAULT', condict=None):
if condict is None:
self.config.add_section(config_name)
return
self.config[config_name] = condict
with open(self.config_file, 'w') as out:
self.config.write(out)
# after writing to file check if keys have no value in the ini file (e.g: key0 = )
# the last character is '=', let us strip it off to only have the key
with open(self.config_file) as out:
ini_data = out.read()
with open(self.config_file, 'w') as out:
new_data = re.sub(r'^(.*?)=\s+$', r'\1', ini_data, 0, re.M)
out.write(new_data)
out.write('\n')
condict = {"test": "testval", 'test1': 'testval1', 'test2': 'testval2'}
c = Config()
c.wrt('my section', condict)
c.wrt('EMPTY')
c.wrt(condict={'key': 'val'})
c.wrt(config_name='NO_VALUE_SECTION', condict={'key0': '', 'key1': ''})
This outputs:
[DEFAULT]
key = val
[my section]
test = testval
test1 = testval1
test2 = testval2
[EMPTY]
[NO_VALUE_SECTION]
key1
key0
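
If sections also need to accumulate across separate runs of the script (the "overwriting the file every time" part of the question), one option is to read the existing file before writing, as the commented-out self.config.read line hints at. A small sketch:

import configparser

# allow_no_value so the key-only entries written above can be read back
config = configparser.ConfigParser(allow_no_value=True)
config.read('conf.ini')                     # no-op if the file does not exist yet
config['ANOTHER_SECTION'] = {'key': 'val'}
with open('conf.ini', 'w') as out:
    config.write(out)                       # rewrites the file with old + new sections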

Create named variables in the local scope from JSON keys

Is there a way I can create named variables in the local scope from a JSON file?
My JSON file is document.json; I would like to create variables in the local scope named after the key paths of my JSON dictionary.
This is how I manually create them; I would like to do it automatically for the whole JSON file. Is that possible?
class board(object):
    def __init__(self, json, image):
        self.json = json
        self.image = image

    def extract_json(self, *args):
        with open(self.json) as data_file:
            data = json.load(data_file)
        jsonpath_expr = parse(".".join(args))
        return jsonpath_expr.find(data)[0].value

MyAgonism = board('document.json', './tabellone.jpg')
boxes_time_minutes_coord = MyAgonism.extract_json("boxes", "time_minutes", "coord")
boxes_time_seconds_coord = MyAgonism.extract_json("boxes", "time_seconds", "coord")
boxes_score_home_coord = MyAgonism.extract_json("boxes", "score_home", "coord")
I think you're making this much more complicated than it needs to be.
with open('document.json') as f:
    d = json.load(f)

time_minutes_coords = d['boxes']['time_minutes']['coord']
time_seconds_coords = d['boxes']['time_seconds']['coord']
score_home_coords = d['boxes']['score_home']['coord']
If you actually want to create named variables in the local scope from the keys in your json file, you can use the locals() dictionary (but this is a terrible idea, it's far better just to reference them from the json dictionary).
# Flatten the dictionary keys.
# This turns ['boxes']['time_minutes']['coord']
# into "boxes_time_minutes_coord"
def flatten_dict(d, k_pre=None, delim='_', fd=None):
    if fd is None:
        fd = {}
    for k, v in d.iteritems():
        if k_pre is not None:
            k = '{0}{1}{2}'.format(k_pre, delim, k)
        if isinstance(v, dict):
            flatten_dict(v, k, delim, fd)
        else:
            fd[k] = v
    return fd

fd = flatten_dict(d)
locals().update(fd)
print boxes_time_minutes_coord
Lots of caveats, like the possibility of overwriting some other variable in your local scope, or the possibility that two dictionary keys could end up identical after flattening unless you choose a delimiter that doesn't appear in any of the keys. It also won't work if your keys contain characters that are invalid in variable names (spaces, for example), and updating locals() like this is only reliable at module level; inside a function, writes to locals() are ignored by CPython.
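If the goal is really just dotted access to the flattened keys, a less risky sketch is to attach them to a namespace object instead of injecting them into locals(). This reuses flatten_dict and d from the snippet above:

# Python 3 sketch: dotted access without touching locals()
# (in Python 3 the d.iteritems() call above becomes d.items())
from types import SimpleNamespace

ns = SimpleNamespace(**flatten_dict(d))
print(ns.boxes_time_minutes_coord)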

How to get meta data with MMPython for images and video

I'm trying to get the creation date for all the photos and videos in a folder, and having mixed success. I have .jpg, .mov, and .mp4 videos in this folder.
I spent a long time looking at other posts, and I saw quite a few references to the MMPython library here: http://sourceforge.net/projects/mmpython/
Looking through the MMPython source I think this will give me what I need, but the problem is that I don't know how to invoke it. In other words, I have my file, but I don't know how to interface with MMPython and I can't see any examples.
Here is my script:
import os
import sys
import exifread
import hashlib
import ExifTool

if len(sys.argv) > 1:
    var = sys.argv[1]
else:
    var = raw_input("Please enter the directory: ")

direct = '/Users/bbarr233/Documents/Personal/projects/photoOrg/photos'
print "direct: " + direct
print "var: " + var
var = var.rstrip()

for root, dirs, filenames in os.walk(var):
    print "root " + root
    for f in filenames:
        # make sure that we are dealing with images or videos
        if f.find(".jpg") > -1 or f.find(".jpeg") > -1 or f.find(".mov") > -1 or f.find(".mp4") > -1:
            print "file " + root + "/" + f
            f = open(root + "/" + f, 'rb')
            # Now I want to do something like this, but don't know which method to call:
            # tags = mmpython.process_file(f)
            # do something with the creation date
Can someone give me a hint on how I can use the MMPython library?
Thanks!!!
PS. I've looked at some other threads on this, such as:
Link to thread: This one didn't make sense to me.
Link to thread: This one worked great for .mov but not for my .mp4s; it said the creation date was 1946.
Link to thread: This thread is one of the ones that suggested MMPython, but like I said, I don't know how to use it.
Here is a well-commented code example I found which will show you how to use mmpython.
This module extracts metadata from new media files, using mmpython,
and provides utilities for converting metadata between formats.
# Copyright (C) 2005 Micah Dowty <micah#navi.cx>
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
import md5, os, cPickle
import mmpython
from mmpython.audio import mp3info
import sqlite
from RioKarma import Paths
class RidCalculator:
"""This object calculates the RID of a file- a sparse digest used by Rio Karma.
For files <= 64K, this is the file's md5sum. For larger files, this is the XOR
of three md5sums, from 64k blocks in the beginning, middle, and end.
"""
def fromSection(self, fileObj, start, end, blockSize=0x10000):
"""This needs a file-like object, as well as the offset and length of the portion
the RID is generated from. Beware that there is a special case for MP3 files.
"""
# It's a short file, compute only one digest
if end-start <= blockSize:
fileObj.seek(start)
return md5.md5(fileObj.read(end-start)).hexdigest()
# Three digests for longer files
fileObj.seek(start)
a = md5.md5(fileObj.read(blockSize)).digest()
fileObj.seek(end - blockSize)
b = md5.md5(fileObj.read(blockSize)).digest()
fileObj.seek((start + end - blockSize) / 2)
c = md5.md5(fileObj.read(blockSize)).digest()
# Combine the three digests
return ''.join(["%02x" % (ord(a[i]) ^ ord(b[i]) ^ ord(c[i])) for i in range(16)])
def fromFile(self, filename, length=None, mminfo=None):
"""Calculate the RID from a file, given its name. The file's length and
mmpython results may be provided if they're known, to avoid duplicating work.
"""
if mminfo is None:
mminfo = mmpython.parse(filename)
f = open(filename, "rb")
if length is None:
f.seek(0, 2)
length = f.tell()
f.seek(0)
# Is this an MP3 file? For some silliness we have to skip the header
# and the last 128 bytes of the file. mmpython can tell us where the
# header starts, but only in a somewhat ugly way.
if isinstance(mminfo, mmpython.audio.eyed3info.eyeD3Info):
try:
offset = mp3info.MPEG(f)._find_header(f)[0]
except ZeroDivisionError:
# This is a bit of a kludge, since mmpython seems to crash
# here on some MP3s for a currently-unknown reason.
print "WARNING, mmpython got a div0 error on %r" % filename
offset = 0
if offset < 0:
# Hmm, it couldn't find the header? Set this to zero
# so we still get a usable RID, but it probably
# won't strictly be a correct RID.
offset = 0
f.seek(0)
return self.fromSection(f, offset, length-128)
# Otherwise, use the whole file
else:
return self.fromSection(f, 0, length)
class BaseCache:
"""This is an abstract base class for objects that cache metadata
dictionaries on disk. The cache is implemented as a sqlite database,
with a 'dict' table holding administrative key-value data, and a
'files' table holding both a pickled representation of the metadata
and separate columns for all searchable keys.
"""
# This must be defined by subclasses as a small integer that changes
# when any part of the database schema or our storage format changes.
schemaVersion = None
# This is the template for our SQL schema. All searchable keys are
# filled in automatically, but other items may be added by subclasses.
schemaTemplate = """
CREATE TABLE dict
(
name VARCHAR(64) PRIMARY KEY,
value TEXT
);
CREATE TABLE files
(
%(keys)s,
_pickled TEXT NOT NULL
);
"""
# A list of searchable keys, used to build the schema and validate queries
searchableKeys = None
keyType = "VARCHAR(255)"
# The primary key is what ensures a file's uniqueness. Inserting a file
# with a primary key identical to an existing one will update that
# file rather than creating a new one.
primaryKey = None
def __init__(self, name):
self.name = name
self.connection = None
def open(self):
"""Open the cache, creating it if necessary"""
if self.connection is not None:
return
self.connection = sqlite.connect(Paths.getCache(self.name))
self.cursor = self.connection.cursor()
# See what version of the database we got. If it's empty
# or it's old, we need to reset it.
try:
version = self._dictGet('schemaVersion')
except sqlite.DatabaseError:
version = None
if version != str(self.schemaVersion):
self.empty()
def close(self):
if self.connection is not None:
self.sync()
self.connection.close()
self.connection = None
def _getSchema(self):
"""Create a complete schema from our schema template and searchableKeys"""
keys = []
for key in self.searchableKeys:
type = self.keyType
if key == self.primaryKey:
type += " PRIMARY KEY"
keys.append("%s %s" % (key, type))
return self.schemaTemplate % dict(keys=', '.join(keys))
def _encode(self, obj):
"""Encode an object that may not be a plain string"""
if type(obj) is unicode:
obj = obj.encode('utf-8')
elif type(obj) is not str:
obj = str(obj)
return "'%s'" % sqlite.encode(obj)
def _dictGet(self, key):
"""Return a value stored in the persistent dictionary. Returns None if
the key has no matching value.
"""
self.cursor.execute("SELECT value FROM dict WHERE name = '%s'" % key)
row = self.cursor.fetchone()
if row:
return sqlite.decode(row[0])
def _dictSet(self, key, value):
"""Create or update a value stored in the persistent dictionary"""
encodedValue = self._encode(value)
# First try inserting a new item
try:
self.cursor.execute("INSERT INTO dict (name, value) VALUES ('%s', %s)" %
(key, encodedValue))
except sqlite.IntegrityError:
# Violated the primary key constraint, update an existing item
self.cursor.execute("UPDATE dict SET value = %s WHERE name = '%s'" % (
encodedValue, key))
def sync(self):
"""Synchronize in-memory parts of the cache with disk"""
self.connection.commit()
def empty(self):
"""Reset the database to a default empty state"""
# Find and destroy every table in the database
self.cursor.execute("SELECT tbl_name FROM sqlite_master WHERE type='table'")
tables = [row.tbl_name for row in self.cursor.fetchall()]
for table in tables:
self.cursor.execute("DROP TABLE %s" % table)
# Apply the schema
self.cursor.execute(self._getSchema())
self._dictSet('schemaVersion', self.schemaVersion)
def _insertFile(self, d):
"""Insert a new file into the cache, given a dictionary of its metadata"""
# Make name/value lists for everything we want to update
dbItems = {'_pickled': self._encode(cPickle.dumps(d, -1))}
for column in self.searchableKeys:
if column in d:
dbItems[column] = self._encode(d[column])
# First try inserting a new row
try:
names = dbItems.keys()
self.cursor.execute("INSERT INTO files (%s) VALUES (%s)" %
(",".join(names), ",".join([dbItems[k] for k in names])))
except sqlite.IntegrityError:
# Violated the primary key constraint, update an existing item
self.cursor.execute("UPDATE files SET %s WHERE %s = %s" % (
", ".join(["%s = %s" % i for i in dbItems.iteritems()]),
self.primaryKey, self._encode(d[self.primaryKey])))
def _deleteFile(self, key):
"""Delete a File from the cache, given its primary key"""
self.cursor.execute("DELETE FROM files WHERE %s = %s" % (
self.primaryKey, self._encode(key)))
def _getFile(self, key):
"""Return a metadata dictionary given its primary key"""
self.cursor.execute("SELECT _pickled FROM files WHERE %s = %s" % (
self.primaryKey, self._encode(key)))
row = self.cursor.fetchone()
if row:
return cPickle.loads(sqlite.decode(row[0]))
def _findFiles(self, **kw):
"""Search for files. The provided keywords must be searchable.
Yields a list of details dictionaries, one for each match.
Any keyword can be None (matches anything) or it can be a
string to match. Keywords that aren't provided are assumed
to be None.
"""
constraints = []
for key, value in kw.iteritems():
if key not in self.searchableKeys:
raise ValueError("Key name %r is not searchable" % key)
constraints.append("%s = %s" % (key, self._encode(value)))
if not constraints:
constraints.append("1")
self.cursor.execute("SELECT _pickled FROM files WHERE %s" %
" AND ".join(constraints))
row = None
while 1:
row = self.cursor.fetchone()
if not row:
break
yield cPickle.loads(sqlite.decode(row[0]))
def countFiles(self):
"""Return the number of files cached"""
self.cursor.execute("SELECT COUNT(_pickled) FROM files")
return int(self.cursor.fetchone()[0])
def updateStamp(self, stamp):
"""The stamp for this cache is any arbitrary value that is expected to
change when the actual data on the device changes. It is used to
check the cache's validity. This function update's the stamp from
a value that is known to match the cache's current contents.
"""
self._dictSet('stamp', stamp)
def checkStamp(self, stamp):
"""Check whether a provided stamp matches the cache's stored stamp.
This should be used when you have a stamp that matches the actual
data on the device, and you want to see if the cache is still valid.
"""
return self._dictGet('stamp') == str(stamp)
class LocalCache(BaseCache):
"""This is a searchable metadata cache for files on the local disk.
It can be used to speed up repeated metadata lookups for local files,
but more interestingly it can be used to provide full metadata searching
on local music files.
"""
schemaVersion = 1
searchableKeys = ('type', 'rid', 'title', 'artist', 'source', 'filename')
primaryKey = 'filename'
def lookup(self, filename):
"""Return a details dictionary for the given filename, using the cache if possible"""
filename = os.path.realpath(filename)
# Use the mtime as a stamp to see if our cache is still valid
mtime = os.stat(filename).st_mtime
cached = self._getFile(filename)
if cached and int(cached.get('mtime')) == int(mtime):
# Yay, still valid
return cached['details']
# Nope, generate a new dict and cache it
details = {}
Converter().detailsFromDisk(filename, details)
generated = dict(
type = details.get('type'),
rid = details.get('rid'),
title = details.get('title'),
artist = details.get('artist'),
source = details.get('source'),
mtime = mtime,
filename = filename,
details = details,
)
self._insertFile(generated)
return details
def findFiles(self, **kw):
"""Search for files that match all given search keys. This returns an iterator
over filenames, skipping any files that aren't currently valid in the cache.
"""
for cached in self._findFiles(**kw):
try:
mtime = os.stat(cached['filename']).st_mtime
except OSError:
pass
else:
if cached.get('mtime') == mtime:
yield cached['filename']
def scan(self, path):
"""Recursively scan all files within the specified path, creating
or updating their cache entries.
"""
for root, dirs, files in os.walk(path):
for name in files:
filename = os.path.join(root, name)
self.lookup(filename)
# checkpoint this after every directory
self.sync()
_defaultLocalCache = None
def getLocalCache(create=True):
"""Get the default instance of LocalCache"""
global _defaultLocalCache
if (not _defaultLocalCache) and create:
_defaultLocalCache = LocalCache("local")
_defaultLocalCache.open()
return _defaultLocalCache
class Converter:
"""This object manages the connection between different kinds of
metadata- the data stored within a file on disk, mmpython attributes,
Rio attributes, and file extensions.
"""
# Maps mmpython classes to codec names for all formats the player
# hardware supports.
codecNames = {
mmpython.audio.eyed3info.eyeD3Info: 'mp3',
mmpython.audio.mp3info.MP3Info: 'mp3',
mmpython.audio.flacinfo.FlacInfo: 'flac',
mmpython.audio.pcminfo.PCMInfo: 'wave',
mmpython.video.asfinfo.AsfInfo: 'wma',
mmpython.audio.ogginfo.OggInfo: 'vorbis',
}
# Maps codec names to extensions. Identity mappings are the
# default, so they are omitted.
codecExtensions = {
'wave': 'wav',
'vorbis': 'ogg',
}
def filenameFromDetails(self, details,
unicodeEncoding = 'utf-8'):
"""Determine a good filename to use for a file with the given metadata
in the Rio 'details' format. If it's a data file, this will use the
original file as stored in 'title'.
Otherwise, it uses Navi's naming convention: Artist_Name/album_name/##_track_name.extension
"""
if details.get('type') == 'taxi':
return details['title']
# Start with just the artist...
name = details.get('artist', 'None').replace(os.sep, "").replace(" ", "_") + os.sep
album = details.get('source')
if album:
name += album.replace(os.sep, "").replace(" ", "_").lower() + os.sep
track = details.get('tracknr')
if track:
name += "%02d_" % track
name += details.get('title', 'None').replace(os.sep, "").replace(" ", "_").lower()
codec = details.get('codec')
extension = self.codecExtensions.get(codec, codec)
if extension:
name += '.' + extension
return unicode(name).encode(unicodeEncoding, 'replace')
def detailsFromDisk(self, filename, details):
"""Automagically load media metadata out of the provided filename,
adding entries to details. This works on any file type
mmpython recognizes, and other files should be tagged
appropriately for Rio Taxi.
"""
info = mmpython.parse(filename)
st = os.stat(filename)
# Generic details for any file. Note that we start out assuming
# all files are unreadable, and label everything for Rio Taxi.
# Later we'll mark supported formats as music.
details['length'] = st.st_size
details['type'] = 'taxi'
details['rid'] = RidCalculator().fromFile(filename, st.st_size, info)
# We get the bulk of our metadata via mmpython if possible
if info:
self.detailsFromMM(info, details)
if details['type'] == 'taxi':
# All taxi files get their filename as their title, regardless of what mmpython said
details['title'] = os.path.basename(filename)
# Taxi files also always get a codec of 'taxi'
details['codec'] = 'taxi'
# Music files that still don't get a title get their filename minus the extension
if not details.get('title'):
details['title'] = os.path.splitext(os.path.basename(filename))[0]
def detailsFromMM(self, info, details):
"""Update Rio-style 'details' metadata from MMPython info"""
# Mime types aren't implemented consistently in mmpython, but
# we can look at the type of the returned object to decide
# whether this is a format that the Rio probably supports.
# This dictionary maps mmpython clases to Rio codec names.
for cls, codec in self.codecNames.iteritems():
if isinstance(info, cls):
details['type'] = 'tune'
details['codec'] = codec
break
# Map simple keys that don't require and hackery
for fromKey, toKey in (
('artist', 'artist'),
('title', 'title'),
('album', 'source'),
('date', 'year'),
('samplerate', 'samplerate'),
):
v = info[fromKey]
if v is not None:
details[toKey] = v
# The rio uses a two-letter prefix on bit rates- the first letter
# is 'f' or 'v', presumably for fixed or variable. The second is
# 'm' for mono or 's' for stereo. There doesn't seem to be a good
# way to get VBR info out of mmpython, so currently this always
# reports a fixed bit rate. We also have to kludge a bit because
# some metdata sources give us bits/second while some give us
# kilobits/second. And of course, there are multiple ways of
# reporting stereo...
kbps = info['bitrate']
if type(kbps) in (int, float) and kbps > 0:
stereo = bool( (info['channels'] and info['channels'] >= 2) or
(info['mode'] and info['mode'].find('stereo') >= 0) )
if kbps > 8000:
kbps = kbps // 1000
details['bitrate'] = ('fm', 'fs')[stereo] + str(kbps)
# If mmpython gives us a length it seems to always be in seconds,
# whereas the Rio expects milliseconds.
length = info['length']
if length:
details['duration'] = int(length * 1000)
# mmpython often gives track numbers as a fraction- current/total.
# The Rio only wants the current track, and we might as well also
# strip off leading zeros and such.
trackNo = info['trackno']
if trackNo:
details['tracknr'] = int(trackNo.split("/", 1)[0])
Reference: http://svn.navi.cx/misc/trunk/rio-karma/python/RioKarma/Metadata.py
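To answer the original "how do I invoke it" part directly: the entry point used throughout the code above is mmpython.parse(filename), which returns an info object whose fields can be read like a dictionary (or None if the file isn't recognised). A minimal sketch, based on how the reference code above uses it; which keys are populated depends on the file format:

import mmpython

info = mmpython.parse('/path/to/clip.mov')   # path is just an example
if info:
    print(info['date'])     # recording/creation date, when the container exposes it
    print(info['length'])   # duration in seconds
else:
    print('mmpython could not recognise this file')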
Further:
Including Python modules
You should look at the os.stat function: https://docs.python.org/2/library/os.html
os.stat returns the file's timestamps: st_mtime is the last modification time, and st_ctime is the creation time on Windows but the inode change time on Unix.
It should be something like this:
import os

st = os.stat(full_file_path)
file_ctime = st.st_ctime
print(file_ctime)
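
For the .jpg files specifically, the exifread module the question already imports can pull the original capture date out of the EXIF data. A sketch, assuming the image actually carries EXIF tags:

import exifread

with open('/path/to/photo.jpg', 'rb') as fh:
    tags = exifread.process_file(fh, details=False)

taken = tags.get('EXIF DateTimeOriginal')
if taken:
    print(taken)   # e.g. 2014:03:01 12:34:56
else:
    print('no EXIF capture date in this file')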
