I need a Python script to analyze the contents of a log file. The log files (named like: log.txt.2014-01-01) are made up as follows:
....<different structure>
2013-05-09 19:09:20,112 [1] DEBUG Management.Handle - Action: Amount=005,00; Date=25.04.2013 19:25:04
2013-05-09 19:09:20,112 [1] DEBUG Management.Handle - Action: Amount=005,00; Date=25.04.2013 19:27:05
2013-05-09 19:09:20,112 [1] DEBUG Management.Handle - Action: Amount=005,00; Date=25.04.2013 19:28:05
...<different structure>
I need to sum the Amount and print the total.
This is a job for regular expressions:
import re
from io import StringIO

def extractAmount(file_like):
    amountRe = re.compile(r'^.* Management\.Handle - Action: Amount=(\d+),(\d+);')
    for line in file_like:
        result = amountRe.match(line)
        if result:
            whole, cents = result.groups()
            yield float(whole) + float(cents) / 100.0

data = StringIO("""....<different structure>
2013-05-09 19:09:20,112 [1] DEBUG Management.Handle - Action: Amount=005,00; Date=25.04.2013 19:25:04
2013-05-09 19:09:20,112 [1] DEBUG Management.Handle - Action: Amount=005,00; Date=25.04.2013 19:27:05
2013-05-09 19:09:20,112 [1] DEBUG Management.Handle - Action: Amount=005,00; Date=25.04.2013 19:28:05
...<different structure>""")

print(sum(extractAmount(data)))
In the example I've used a StringIO object to load the data, but this approach works with any iterable that yields strings (such as the file object returned by open).
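If exact money arithmetic matters, the same regex idea works with decimal.Decimal instead of float, which avoids binary-float rounding when summing amounts. A minimal sketch, reusing the pattern and sample lines from the question:

```python
import re
from decimal import Decimal

# Same pattern as above: capture the whole and fractional parts of Amount
amount_re = re.compile(r'Management\.Handle - Action: Amount=(\d+),(\d+);')

log = """2013-05-09 19:09:20,112 [1] DEBUG Management.Handle - Action: Amount=005,00; Date=25.04.2013 19:25:04
2013-05-09 19:09:20,112 [1] DEBUG Management.Handle - Action: Amount=005,00; Date=25.04.2013 19:27:05"""

# Join the captured parts with a dot and sum exactly
total = sum(Decimal(f"{whole}.{cents}") for whole, cents in amount_re.findall(log))
print(total)  # -> 10.00
```

Decimal also preserves the two-decimal formatting of the input, which float does not.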
import re

x = <your_test_string>
z = [float(re.sub(",", ".", i)) for i in re.findall(r"(?<=DEBUG Management\.Handle - Action: Amount=)[^;]+", x)]
print(sum(z))
You can try this.
Try this Python Tutor visualization:
http://www.pythontutor.com/visualize.html#code=import+re%0Ax%3D%22%22%22....%3Cdifferent+structure%3E%0A%0A2013-05-09+19%3A09%3A20,112+%5B1%5D+DEBUG+Management.Handle+-+Action%3A+Amount%3D005,00%3B+Date%3D25.04.2013+19%3A25%3A04%0A%0A2013-05-09+19%3A09%3A20,112+%5B1%5D+DEBUG+Management.Handle+-+Action%3A+Amount%3D005,00%3B+Date%3D25.04.2013+19%3A27%3A05%0A%0A2013-05-09+19%3A09%3A20,112+%5B1%5D+DEBUG+Management.Handle+-+Action%3A+Amount%3D005,00%3B+Date%3D25.04.2013+19%3A28%3A05%0A%0A...%3Cdifferent+structure%3E%22%22%22%0Az%3D+%5Bfloat(re.sub(r%22,%22,%22.%22,i))+for+i+in+re.findall(r%22(%3F%3C%3DDEBUG+Management%5C.Handle+-+Action%3A+Amount%3D)(%5B%5E%3B%5D%2B)%22,x)%5D%0Aprint+sum(z)&mode=display&origin=opt-frontend.js&cumulative=false&heapPrimitives=false&drawParentPointers=false&textReferences=false&showOnlyOutputs=false&py=2&rawInputLstJSON=%5B%5D&curInstr=7
It seems to be a problem with where Python is searching for the library and not finding it.
Still, I'm very new at this, so maybe it's something else.
This is the error (I separated out the middle part where I think the problem shows):
ftuser#a5a1d3ed08d3:/freqtrade$ freqtrade backtesting --strategy canal
2022-08-26 03:51:37,394 - freqtrade.configuration.load_config - INFO - Using config: user_data/config.json ...
2022-08-26 03:51:37,483 - freqtrade.loggers - INFO - Verbosity set to 0
2022-08-26 03:51:37,484 - freqtrade.configuration.configuration - INFO - Using max_open_trades: 1 ...
2022-08-26 03:51:37,716 - freqtrade.configuration.configuration - INFO - Using user-data directory: /freqtrade/user_data ...
2022-08-26 03:51:37,718 - freqtrade.configuration.configuration - INFO - Using data directory: /freqtrade/user_data/data/binance ...
2022-08-26 03:51:37,719 - freqtrade.configuration.configuration - INFO - Parameter --cache=day detected ...
2022-08-26 03:51:37,719 - freqtrade.configuration.check_exchange - INFO - Checking exchange...
2022-08-26 03:51:37,741 - freqtrade.configuration.check_exchange - INFO - Exchange "binance" is officially supported by the Freqtrade development team.
2022-08-26 03:51:37,741 - freqtrade.configuration.configuration - INFO - Using pairlist from configuration.
2022-08-26 03:51:37,741 - freqtrade.configuration.config_validation - INFO - Validating configuration ...
2022-08-26 03:51:37,746 - freqtrade.commands.optimize_commands - INFO - Starting freqtrade in Backtesting mode
2022-08-26 03:51:37,746 - freqtrade.exchange.exchange - INFO - Instance is running with dry_run enabled
2022-08-26 03:51:37,746 - freqtrade.exchange.exchange - INFO - Using CCXT 1.92.20
2022-08-26 03:51:37,746 - freqtrade.exchange.exchange - INFO - Applying additional ccxt config: {'options': {'defaultType': 'future'}}
2022-08-26 03:51:37,766 - freqtrade.exchange.exchange - INFO - Applying additional ccxt config: {'options': {'defaultType': 'future'}}
2022-08-26 03:51:37,782 - freqtrade.exchange.exchange - INFO - Using Exchange "Binance"
2022-08-26 03:51:39,052 - freqtrade.resolvers.exchange_resolver - INFO - Using resolved exchange 'Binance'...
2022-08-26 03:51:39,097 - freqtrade.resolvers.iresolver - WARNING - Could not import /freqtrade/user_data/strategies/canal.py due to 'cannot import name 'SSLchannels' from 'technical.indicators' (/home/ftuser/.local/lib/python3.10/site-packages/technical/indicators/__init__.py)'
2022-08-26 03:51:39,182 - freqtrade - ERROR - Impossible to load Strategy 'canal'. This class does not exist or contains Python code errors.
2022-08-26 03:51:39,182 - freqtrade.exchange.exchange - INFO - Closing async ccxt session.
This is the code in VS Code:
import numpy as np  # noqa
import pandas as pd  # noqa
from pandas import DataFrame
from freqtrade.strategy import (BooleanParameter, CategoricalParameter, DecimalParameter,
                                IStrategy, IntParameter)

# --------------------------------
# Add your lib to import here
import talib.abstract as ta
import freqtrade.vendor.qtpylib.indicators as qtpylib
from technical.indicators import SSLchannels


# This class is a sample. Feel free to customize it.
class canal(IStrategy):
    INTERFACE_VERSION = 3

    # Can this strategy go short?
    can_short: bool = False

    # Minimal ROI designed for the strategy.
    # This attribute will be overridden if the config file contains "minimal_roi".
    minimal_roi = {
        "60": 0.01,
        "30": 0.02,
        "0": 0.04
    }

    # Optimal stoploss designed for the strategy.
    # This attribute will be overridden if the config file contains "stoploss".
    stoploss = -0.10

    # Trailing stoploss
    trailing_stop = False
    # trailing_only_offset_is_reached = False
    # trailing_stop_positive = 0.01
    # trailing_stop_positive_offset = 0.0  # Disabled / not configured

    # Optimal timeframe for the strategy.
    timeframe = '5m'

    # Run "populate_indicators()" only for new candle.
    process_only_new_candles = True

    # These values can be overridden in the config.
    use_exit_signal = True
    exit_profit_only = False
    ignore_roi_if_entry_signal = False

    buy_rsi = IntParameter(low=1, high=50, default=30, space='buy', optimize=True, load=True)
    sell_rsi = IntParameter(low=50, high=100, default=70, space='sell', optimize=True, load=True)
    short_rsi = IntParameter(low=51, high=100, default=70, space='sell', optimize=True, load=True)
    exit_short_rsi = IntParameter(low=1, high=50, default=30, space='buy', optimize=True, load=True)

    # Number of candles the strategy requires before producing valid signals
    startup_candle_count: int = 30

    # Optional order type mapping.
    order_types = {
        'entry': 'limit',
        'exit': 'limit',
        'stoploss': 'market',
        'stoploss_on_exchange': False
    }

    # Optional order time in force.
    order_time_in_force = {
        'entry': 'gtc',
        'exit': 'gtc'
    }

    plot_config = {
        'main_plot': {
            'tema': {},
            'sar': {'color': 'white'},
        },
        'subplots': {
            "MACD": {
                'macd': {'color': 'blue'},
                'macdsignal': {'color': 'orange'},
            },
            "RSI": {
                'rsi': {'color': 'red'},
            }
        }
    }

    def informative_pairs(self):
        """
        Define additional, informative pair/interval combinations to be cached from the exchange.
        These pair/interval combinations are non-tradeable, unless they are part
        of the whitelist as well.
        For more information, please consult the documentation
        :return: List of tuples in the format (pair, interval)
            Sample: return [("ETH/USDT", "5m"),
                            ("BTC/USDT", "15m"),
                            ]
        """
        return []

    def populate_indicators(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        # RSI
        dataframe['rsi'] = ta.RSI(dataframe)
        return dataframe

    def populate_entry_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        dataframe.loc[
            (
                # Signal: RSI crosses above 30
                (qtpylib.crossed_above(dataframe['rsi'], self.buy_rsi.value)) &
                (dataframe['volume'] > 0)  # Make sure Volume is not 0
            ),
            'enter_long'] = 1
        dataframe.loc[
            (
                # Signal: RSI crosses above 70
                (qtpylib.crossed_above(dataframe['rsi'], self.short_rsi.value)) &
                (dataframe['volume'] > 0)  # Make sure Volume is not 0
            ),
            'enter_short'] = 1
        return dataframe

    def populate_exit_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        dataframe.loc[
            (
                # Signal: RSI crosses above 70
                (qtpylib.crossed_above(dataframe['rsi'], self.sell_rsi.value)) &
                (dataframe['volume'] > 0)  # Make sure Volume is not 0
            ),
            'exit_long'] = 1
        dataframe.loc[
            (
                # Signal: RSI crosses above 30
                (qtpylib.crossed_above(dataframe['rsi'], self.exit_short_rsi.value)) &
                # Guard: tema below BB middle
                (dataframe['volume'] > 0)  # Make sure Volume is not 0
            ),
            'exit_short'] = 1
        return dataframe
I left the RSI indicator in so that I could comment out:
#from technical.indicators import SSLchannels
and test that the rest of the code is OK, and it works: the backtest runs fine.
Here's how I have the folders on my PC.
I also tried choosing Python 3.8 and 3.10 in VS Code just to try; both work well if I take out the technical import, and both show the error if I put it back.
Any help would be appreciated.
Thanks!
I would think that you either need to pip install the technical package or use Docker. I prefer using Docker to execute freqtrade commands, as the images already have all the dependencies installed.
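A quick way to check which interpreter is actually running and whether the technical package is visible to it; a mismatch between the interpreter VS Code selects and the one freqtrade runs under is a common cause of this kind of "works here, fails there" import error. A minimal sketch:

```python
import importlib.util
import sys

# Which Python is this? Compare against the one freqtrade/VS Code uses.
print(sys.executable)

# find_spec returns None when the top-level package cannot be found
spec = importlib.util.find_spec("technical")
print("technical importable:", spec is not None)
```

Run this both from the VS Code terminal and inside the freqtrade container; if the two disagree, install the package into the environment freqtrade actually uses.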
We are getting output in this format:
2020-11-19 12:00:01,414 - INFO - clusterDC -Backup started
2020-11-19 12:00:01,415 - Debug - executing command: /opt/couchbase/bin/cbbackupmgr backup --archive /backup/clusterDC --repo clusterDC_date --cluster nodedc --username user --threads 16
2020-11-19 12:00:01,414 - INFO - clusterDC - Backup Succeeded. Backup successfully completed.
But now we want it in the JSON format below:
"backup":[
{
"server_name":"nodedc",
"status":"Success/Failed",
"backup_start_time":"yyyy-mm-dd hh:mm:ss.mmmuu",
"cluster_name":"clusterDC",
"location":"/backup/clusterDc/clusterDC_date",
"backup_end_time":"yyyy-mm-dd hh:mm:ss:mmmuu"
}
There are multiple ways you could do this. One would be to parse your output into a list, then build a dict and serialize it with the standard json module. For parsing, plain string manipulation (or regex if needed) is enough to populate your list.
If the output follows the same format each time:
#output = raw output
text = [line.split() for line in output.splitlines()]
backup = {}
backup["server_name"] = text[1][14]
backup["status"] = text[2][8]
backup["backup_start_time"] = f"{text[0][0]} {text[0][1].replace(',', ':')}"
backup["cluster_name"] = text[0][5]
backup["location"] = f"{text[1][10]}/{text[1][12]}"
backup["backup_end_time"] = f"{text[2][0]} {text[2][1].replace(',', ':')}"
backup = {'backup': [backup]}
print(backup)
Output:
{'backup': [{'server_name': 'nodedc', 'status': 'Succeeded.', 'backup_start_time': '2020-11-19 12:00:01:414', 'cluster_name': 'clusterDC', 'location': '/backup/clusterDC/clusterDC_date', 'backup_end_time': '2020-11-19 12:00:01:414'}]}
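To emit actual JSON text rather than a Python dict repr, the standard json module (not numpy) serializes the structure. A sketch using the field values from the sample output above:

```python
import json

# Field values taken from the parsed sample output above
backup = {
    "server_name": "nodedc",
    "status": "Succeeded.",
    "backup_start_time": "2020-11-19 12:00:01:414",
    "cluster_name": "clusterDC",
    "location": "/backup/clusterDC/clusterDC_date",
    "backup_end_time": "2020-11-19 12:00:01:414",
}

# json.dumps produces valid JSON text; indent=2 pretty-prints it
print(json.dumps({"backup": [backup]}, indent=2))
```

Unlike print(backup), this output uses double quotes and is parseable by any JSON consumer.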
I'm trying to convert a string
From "{ip: 10.213.151.76, mask: 255.255.252.0},{ip: 10.213.151.799, mask: 255.255.252.0}"
to [{ip: 10.213.151.76, mask: 255.255.252.0}, {ip: 10.213.151.76, mask: 255.255.252.0}].
Playbook code
- hosts: localhost
vars:
- vari: "[{ip: 10.213.151.76, mask: 255.255.252.0},{ip: 10.213.151.799, mask: 255.255.252.0}]"
- foo: []
tasks:
- set_fact: testing={{vari[1:-1] | regex_findall('\{(.*?)\}')}}
# - set_fact: testing={{vari[1:-1]}}
- debug: var=testing
- name: run my script!
command: python ../test.py "{{item}}"
delegate_to: 127.0.0.1
register: hash
with_items: "{{testing}}"
- debug: var=hash
- set_fact: foo={{foo + item.stdout_lines}}
with_items: "{{hash.results}}"
- debug: var=foo
Python script which converts a string to a dictionary:
#!/usr/bin/python
import sys
import json
ip_str = str(sys.argv[1])
print dict(s.split(': ') for s in (ip_str.split(', ')))
Currently the foo variable value comes out like this:
ok: [localhost] => {
"foo": [
"{'ip': '10.213.151.76', 'mask': '255.255.252.0'}",
"{'ip': '10.213.151.799', 'mask': '255.255.252.0'}"
]
}
Basically, I want this value as a list of hashes: [{ip: 10.213.151.76, mask: 255.255.252.0}, {ip: 10.213.151.76, mask: 255.255.252.0}].
The Python script returns the value as a dictionary, but register stores it as a string, and Ansible is not able to convert it back to a dictionary.
Any help? Thanks in advance.
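One possible fix (a sketch, not a tested patch to the playbook above) is to have the script print real JSON instead of a Python dict repr, so Ansible can parse it back into a structure:

```python
import json

# Same parsing as the original test.py, but emitting JSON instead of a
# Python dict repr. Sample argument shown inline for illustration.
ip_str = "ip: 10.213.151.76, mask: 255.255.252.0"
d = dict(s.split(": ") for s in ip_str.split(", "))
print(json.dumps(d))
```

On the Ansible side, each registered result can then be converted with the from_json filter, e.g. set_fact: foo="{{ foo + [item.stdout | from_json] }}", giving a list of real dicts rather than strings.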
For the record, if it's this complex, I think you should consider redesigning. However...
You can write a variables file in one play, and read it in the next.
- name: dynamic vars file write test
hosts: localhost
gather_facts: no
tasks:
- name: write it
shell: ' echo "foo: bar">tst.yml '
- name: dynamic vars file read test
hosts: localhost
gather_facts: no
vars_files:
- tst.yml
tasks:
- name: read it
debug:
msg: "{{foo}}"
Which outputs:
PLAY [dynamic vars file write test]
********************************************
TASK [write it]
********************************************
changed: [localhost]
PLAY [dynamic vars file read test]
********************************************
TASK [read it]
********************************************
ok: [localhost] => {
"changed": false,
"msg": "bar"
}
This makes it a heck of a lot easier to format and parse, at least for me.
I am trying to configure CherryPy's logging format. CherryPy uses Python's logging module, so this has been easy to do; however, it appears that CherryPy still inserts its own timestamp into the actual "message" of the log. How can I get CherryPy not to insert its own timestamp into the "message"?
Below is a small, incomplete example of the code that demonstrates what I'm trying to do and the undesired output.
main.py
...
cherrypy_logger = logging.getLogger('cherrypy.error')
cherrypy_logger.handlers = []  # remove any previous handlers the logger had
new_handler = logging.StreamHandler()
new_formatter = logging.Formatter('blah blah blah ....: %(message)s')
new_handler.setFormatter(new_formatter)
cherrypy_logger.addHandler(new_handler)
....
Then when the CherryPy lib/module logs something I get the following:
"blah blah blah ...: [Jan/17/07 23:59:59 ] Engine Started ..... "
I could be doing something wrong, but it seems like CherryPy is inserting a timestamp into the string it submits to the logger, with no regard for how the developer may want to show the time in the logs. How can I fix this?
NOTE: the above code is from memory and is the bare minimum to get my point across (hopefully). It will not compile/run.
Thanks in advance.
Another way to make small changes to the CherryPy log without changing the source code is monkey-patching a method or property, for example:
To modify the datetime formatter you can replace the time() method:
import datetime

cherrypy._cplogging.LogManager.time = lambda self: \
    datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S,%f")[:23]
To modify the string formatter of access_log you can use:
cherrypy._cplogging.LogManager.access_log_format = (
'{t} MYLOG {h} "{r}" {s} {b} "{f}" "{a}"'
if six.PY3 else
'%(t)s MYLOG %(h)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"'
)
And finally you can modify string formatter of error_log using:
new_formatter = logging.Formatter("%(asctime)s MYLOG %(message)s")
for h in cherrypy.log.error_log.handlers:
h.setFormatter(new_formatter)
Hope it helps!
Answered my own question:
It turns out that CherryPy was in fact inserting a timestamp into the "message"
The following code can be found in _cplogging.py
self.error_log.log(severity, ' '.join((self.time(), context, msg)), exc_info=exc_info)
This, IMHO, is a poor way to insert a timestamp into the log because of the inflexibility it causes in changing the logging format. For now I've changed the line to read like this:
self.error_log.log(severity, ' '.join((context, msg)), exc_info=exc_info)
which fixes the problem for me however other bits of code will need a few more tweaks to make it a proper patch, which I'll see if I can do and submit.
PS. The CherryPy access log suffers from a very similar issue.
Anyways, hope this helps out somebody else!
I had some trouble finding a description of the access_log_format template vars, so I've retrieved them from the cherrypy source (site-packages/cherrypy/_cplogging.py) of 18.6.0 for convenience.
Default format:
access_log_format = '{h} {l} {u} {t} "{r}" {s} {b} "{f}" "{a}"'
Template var defaults:
atoms = {'h': remote.name or remote.ip,
         'l': '-',
         'u': getattr(request, 'login', None) or '-',
         't': self.time(),
         'r': request.request_line,
         's': status,
         'b': dict.get(outheaders, 'Content-Length', '') or '-',
         'f': dict.get(inheaders, 'Referer', ''),
         'a': dict.get(inheaders, 'User-Agent', ''),
         'o': dict.get(inheaders, 'Host', '-'),
         'i': request.unique_id,
         'z': LazyRfc3339UtcTime(),
         }
For example, here is the function I call to configure CherryPy logging for my project. This function is called before cherrypy.engine.start() and cherrypy.engine.block()
def configure_logger():
    pst = pytz.timezone("US/Pacific")
    cherrypy._cplogging.LogManager.time = lambda self: datetime.now().astimezone(pst).strftime("%Y-%m-%d %H:%M:%S.%f %Z")
    # Default access_log_format: '{h} {l} {u} {t} "{r}" {s} {b} "{f}" "{a}"'
    # h - remote.ip, l - "-", u - login (or "-"), t - time, r - request line, s - status, b - content length
    # f - referer, a - User Agent, o - Host or -, i - request.unique_id, z - UTC time
    cherrypy._cplogging.LogManager.access_log_format = '{t} ACCESS {s} {r} {h} {b} bytes'
I am trying to format the logging output so that the levelname always sits at the right edge of the terminal. I currently have a script that looks like:
import logging, os, time

fn = 'FN'
start = time.time()

def getTerminalSize():
    import os
    env = os.environ
    def ioctl_GWINSZ(fd):
        try:
            import fcntl, termios, struct, os
            cr = struct.unpack('hh', fcntl.ioctl(fd, termios.TIOCGWINSZ, '1234'))
        except:
            return
        return cr
    cr = ioctl_GWINSZ(0) or ioctl_GWINSZ(1) or ioctl_GWINSZ(2)
    if not cr:
        try:
            fd = os.open(os.ctermid(), os.O_RDONLY)
            cr = ioctl_GWINSZ(fd)
            os.close(fd)
        except:
            pass
    if not cr:
        cr = (env.get('LINES', 25), env.get('COLUMNS', 80))
    return int(cr[1]), int(cr[0])

(width, _) = getTerminalSize()
level_width = 8
message_width = width - level_width - 4
FORMAT = '%(message)-{len1:{width1}d}s [%(levelname){len2:{width2}d}s]'.format(
    len1=message_width,
    len2=level_width,
    width1=len(str(message_width)),
    width2=len(str(level_width)))

logging.basicConfig(format=FORMAT, level="DEBUG")
logging.debug("Debug Message")
logging.info("Info Message")
logging.warning("Warning Message")
logging.error("Error Message")
logging.critical("Critical Message")
logging.info("Starting File: " + os.path.basename(fn) + "\n-----------------------------------------")
logging.info("\tTo read data: %s"%(time.time() - start))
The output looks like:
Debug Message [ DEBUG]
Info Message [ INFO]
Warning Message [ WARNING]
Error Message [ ERROR]
Critical Message [CRITICAL]
Starting File: Channel209.Raw32
----------------------------------------- [ INFO]
To read data: 0.281999826431 [
INFO]
I would like the output to look something like this instead and can't quite figure it out:
Debug Message [ DEBUG]
Info Message [ INFO]
Warning Message [ WARNING]
Error Message [ ERROR]
Critical Message [CRITICAL]
Starting File: Channel209.Raw32
----------------------------------------- [ INFO]
To read data: 0.281999826431 [ INFO]
As @Carpetsmoker said, doing what I truly desired required creating a new formatter class that overrides the default.
The following class worked well for this process:
import logging
import textwrap
import itertools

'''
MyFormatter class
Adapted from: https://stackoverflow.com/questions/6847862/how-to-change-the-format-of-logged-messages-temporarily-in-python
https://stackoverflow.com/questions/3096402/python-string-formatter-for-paragraphs
Authors: Vinay Sajip, unutbu
'''

class MyFormatter(logging.Formatter):
    # This function overrides logging.Formatter.format.
    # We convert the msg into the overall format we want to see.
    def format(self, record):
        widths = [getTerminalSize()[0] - 12, 10]
        form = '{row[0]:<{width[0]}} {row[1]:<{width[1]}}'
        # Instead of formatting... rewrite the message as desired here
        record.msg = self.Create_Columns(form, widths, [record.msg], ["[%8s]" % record.levelname])
        # Return the basic formatter
        return super(MyFormatter, self).format(record)

    def Create_Columns(self, format_str, widths, *columns):
        '''
        format_str describes the format of the report.
        {row[i]} is replaced by data from the ith element of columns.
        widths is expected to be a list of integers.
        {width[i]} is replaced by the ith element of the list widths.
        All the power of Python's string format spec is available for you to use
        in format_str. You can use it to define fill characters, alignment, width, type, etc.
        formatter takes an arbitrary number of arguments.
        Every argument after format_str and widths should be a list of strings.
        Each list contains the data for one column of the report.
        formatter returns the report as one big string.
        '''
        result = []
        for row in zip(*columns):
            # Create indents for each row...
            sub = []
            # Loop through
            for r in row:
                # Expand tabs to spaces to make our lives easier
                r = r.expandtabs()
                # Find the leading spaces and create the indent string
                if r.find(" ") == 0:
                    i = 0
                    for letters in r:
                        if not letters == " ":
                            break
                        i += 1
                    sub.append(" " * i)
                else:
                    sub.append("")
            # Actually wrap and create the string to return... adapted from the links above
            lines = [textwrap.wrap(elt, width=num, subsequent_indent=ind) for elt, num, ind in zip(row, widths, sub)]
            for line in itertools.zip_longest(*lines, fillvalue=''):
                result.append(format_str.format(width=widths, row=line))
        return '\n'.join(result)
It relies on getting the terminal size in some function called getTerminalSize. I used Harco Kuppens' Method that I will not repost here.
An example driver program is as follows, where MyFormatter and getTerminalSize are located in Colorer:
import logging
import Colorer
logger = logging.getLogger()
logger_handler = logging.StreamHandler()
logger.addHandler(logger_handler)
logger_handler.setFormatter(Colorer.MyFormatter("%(message)s"))
logger.setLevel("DEBUG")
logging.debug("\t\tTHIS IS A REALY long DEBUG Message that works and wraps around great........")
logging.info(" THIS IS A REALY long INFO Message that works and wraps around great........")
logging.warning("THIS IS A REALY long WARNING Message that works and wraps around great........")
logging.error("\tTHIS IS A REALY long ERROR Message that works and wraps around great........")
logging.critical("THIS IS A REALY long CRITICAL Message that works and wraps around great........")
Where the output is (commented for readability):
# THIS IS A REALY long DEBUG Message that works and [ DEBUG]
# wraps around great........
# THIS IS A REALY long INFO Message that works and wraps around [ INFO]
# great........
# THIS IS A REALY long WARNING Message that works and wraps around [ WARNING]
# great........
# THIS IS A REALY long ERROR Message that works and wraps [ ERROR]
# around great........
# THIS IS A REALY long CRITICAL Message that works and wraps around [CRITICAL]
# great........
I modified the last lines to look like:
logging.info("Starting File: %s" % os.path.basename(fn))
logging.info("%s" % ('-' * 15))
logging.info(" To read data: %s" % (time.time() - start))
Your error was using a newline (\n) and tab (\t) character.
Or, if you must keep the newline (which seems rather odd to me), you could manually add the spaces, like so:
logging.info("Starting File: %s\n%s%s" % (
os.path.basename(fn),
('-' * 15),
' ' * (width - 15 - 12)))
Other notes
You should create a Minimal, Complete, and Verifiable example. Your code wasn't working; I needed to modify a few things just to get the example running. See your message's edit history for what I had to edit.
Since Python 3.3, there's os.get_terminal_size. If that isn't available, then something like subprocess.check_output(['tput', 'cols']) seems a whole lot simpler to me...
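For reference, a minimal sketch of the modern approach: shutil.get_terminal_size (also Python 3.3+) wraps os.get_terminal_size and adds the environment-variable and fallback handling, so it works even when stdout is not a terminal:

```python
import shutil

# Checks the COLUMNS/LINES environment variables first, then queries the
# terminal attached to stdout, and finally uses the fallback when neither
# is available (e.g. when output is piped).
cols, rows = shutil.get_terminal_size(fallback=(80, 24))
print(cols, rows)
```

This replaces the entire ioctl-based getTerminalSize function from the question on Python 3.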