I have data stored in comma delimited txt files. One of the columns represents a datetime.
I need to load each column into separate numpy arrays (and decode the date into a python datetime object).
What is the fastest way to do this (in terms of run time)?
NB. the files are several hundred MB of data and currently take several minutes to load in.
e.g. mydata.txt
15,3,0,2003-01-01 00:00:00,12.2
15,4.5,0,2003-01-01 00:00:00,13.7
15,6,0,2003-01-01 00:00:00,18.4
15,7.5,0,2003-01-01 00:00:00,17.9
15,9,0,2003-01-01 00:00:00,17.7
15,10.5,0,2003-01-01 00:00:00,16.3
15,12,0,2003-01-01 00:00:00,17.2
Here is my current code (it works, but is slow):
import csv
import datetime
import time
import numpy
a=[]
b=[]
c=[]
d=[]
timestmp=[]
myfile = open('mydata.txt',"r")
# Read in the data
csv_reader = csv.reader(myfile)
for row in csv_reader:
    a.append(row[0])
    b.append(row[1])
    c.append(row[2])
    timestmp.append(row[3])
    d.append(row[4])
a = numpy.array(a)
b = numpy.array(b)
c = numpy.array(c)
d = numpy.array(d)
# Convert Time string list into list of Python datetime objects
times = []
time_format = "%Y-%m-%d %H:%M:%S"
for i in xrange(len(timestmp)):
    times.append(datetime.datetime.fromtimestamp(time.mktime(time.strptime(timestmp[i], time_format))))
Is there a more efficient way to do this?
Any help is very much appreciated -thanks!
(edit: In the end the bottleneck turned out to be with the datetime conversion, and not reading the file as I originally assumed.)
First, you should run your sample script with Python's built-in profiler to see where the time is actually being spent. You can do this from the command line:
python -m cProfile myscript.py
Secondly, what jumps out at me is that loop at the bottom: why is it necessary? Is there a technical reason it can't be done while reading mydata.txt in the loop you already have above the instantiation of the numpy arrays?
Thirdly, you should create the datetime objects directly: datetime also supports strptime, so you don't need to build a time struct with time.strptime, turn it into a timestamp with time.mktime, and then make a datetime from that timestamp.
Your loop at the bottom can just be re-written like this:
times = []
timestamps = []
TIME_FORMAT = "%Y-%m-%d %H:%M:%S"
for t in timestmp:
    parsed_time = datetime.datetime.strptime(t, TIME_FORMAT)
    times.append(parsed_time)
    timestamps.append(time.mktime(parsed_time.timetuple()))
I took the liberty of PEP-8ing your code a bit, such as changing your constant to all caps. Also, you can iterate over a list directly with the in operator instead of indexing into it with xrange.
Try numpy.loadtxt(), the doc string has a good example.
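Not the doc-string example, but a minimal sketch assuming the mydata.txt layout shown in the question: let loadtxt handle the numeric columns, and parse the date column in a second pass, since loadtxt is happiest with purely numeric data.
import datetime
import numpy

# Numeric columns go straight into separate arrays
a, b, c, d = numpy.loadtxt('mydata.txt', delimiter=',', usecols=(0, 1, 2, 4), unpack=True)

# The datetime column is parsed in a second pass over the file
time_format = "%Y-%m-%d %H:%M:%S"
with open('mydata.txt') as f:
    times = [datetime.datetime.strptime(line.split(',')[3], time_format) for line in f]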
You can also try passing copy=False when calling numpy.array, since the default behavior is to copy the data; this can speed up the script (especially since you said it processes a lot of data).
npa = numpy.array(ar, copy=False)
If you follow Mahmoud Abdelkader's advice and use the profiler, and find out that the bottleneck is in the csv loader, you could always try replacing your csv_reader with this:
for line in open("mydata.txt"):
    row = line.split(',')
    a.append(int(row[0]))
    b.append(float(row[1]))
    c.append(int(row[2]))
    timestmp.append(row[3])
    d.append(float(row[4]))
More probable, I think, is that the cost is in the data conversions. The last loop for the time conversion in particular will take a long time if you have millions of rows! If you succeed in doing it all in one step (read + convert), plus take Terseus' advice on not copying the arrays, you will reduce execution times.
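For illustration, a rough sketch of that one-pass read-and-convert idea, assuming the column layout from the question:
import datetime
import numpy

a, b, c, d, times = [], [], [], [], []
time_format = "%Y-%m-%d %H:%M:%S"
with open('mydata.txt') as f:
    for line in f:
        row = line.rstrip('\n').split(',')
        a.append(float(row[0]))
        b.append(float(row[1]))
        c.append(float(row[2]))
        times.append(datetime.datetime.strptime(row[3], time_format))
        d.append(float(row[4]))
a, b, c, d = (numpy.array(x) for x in (a, b, c, d))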
I'm not completely sure if this will help but you may be able to speed up the reading of the file by using ast.literal_eval. For example:
from ast import literal_eval
myfile = open('mydata.txt',"r")
mylist = []
for line in myfile:
    line = line.strip()
    e = line.rindex(",")
    row = literal_eval('[%s"%s"%s]' % (line[:e-19], line[e-19:e], line[e:]))
    mylist.append(row)
a, b, c, timestamp, d = zip(*mylist)
# a, b, c, timestamp, and d are what they were after your csv_reader loop
Related
So I have a text document with a lot of values from calculations. I have extracted all the data and stored it in an array, but they are not numbers that I can use for anything. I want to use the numbers to plot a graph, but the elements in the array are text strings. How would I turn them into numbers and remove unnecessary characters like commas and n=, for instance?
Here is my code, and below it is my print output.
import numpy as np
['n=1', 'n=2', 'n=3', 'n=4', 'n=5', 'n=6', 'n=7', 'n=8', 'n=9', 'n=10', 'n=11', 'n=12', 'n=13', 'n=14', 'n=15', 'n=16', 'n=17', 'n=18', 'n=19']
I'd use the conversion method presented in this post within the extract function, so e.g.
...
delta_x.append(strtofloat(words[1]))
...
where you might as well do the conversion inline (my strtofloat is a function you'd have to write based on the mentioned post) and within a try/except block, so failed conversions are simply left out of your list.
To make it more consistent, any conversion error should discard the whole line affected, so you might want to use intermediate variables and a check for each field.
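As an illustration, a hypothetical strtofloat along those lines (the exact cleaning rules depend on your data, so treat this as a sketch):
def strtofloat(word):
    cleaned = word.strip().replace('n=', '').rstrip(',')
    try:
        return float(cleaned)
    except ValueError:
        return None  # signal to the caller that the whole line should be discarded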
Btw, I noticed the argument to the extract function; it would seem logical to make it a string containing the file name from which to extract the data.
EDIT: as a side note, you might want to look into pandas, which is a library specialised in numerical data handling. Depending on the format of your data file there are probably standard functions to read your whole file into a DataFrame (which is a kind of super-charged array class which can handle a lot of data processing as well) in a single command.
I would consider using a regular expression:
import re
match_number = re.compile(r'-?[0-9]+\.?[0-9]*(?:[Ee]-?[0-9]+)?')

for line in infile:
    words = line.split()
    new_delta_x = float(re.search(match_number, words[1]).group())
    new_abs_error = float(re.search(match_number, words[7]).group())
    new_n = int(re.search(match_number, words[10]).group())
    delta_x.append(new_delta_x)
    abs_error.append(new_abs_error)
    n.append(new_n)
But it seems like your data is already in csv format. So try using pandas.
Then read the data into a dataframe without a header (the column names will be integers).
import numpy as np
import pandas as pd
df = pd.read_csv('approx_derivative_sine.txt', header=None)
delta_x = df[1].to_numpy()
abs_error = df[7].to_numpy()
# if n is always number of the row
n = df.index.to_numpy(dtype=int)
# if n is always in the form 'n=<integer>'
n = df[10].apply(lambda x: x.strip()[2:]).to_numpy(dtype=int)
If you could post a few rows of your approx_derivative_sine.txt file, that would be useful.
From the given array in the question, if you would like to remove the 'n=' and convert each element to an integer, you may try the following.
import numpy as np
array = np.array(['n=1', 'n=2', 'n=3', 'n=4', 'n=5', 'n=6', 'n=7', 'n=8', 'n=9',
                  'n=10', 'n=11', 'n=12', 'n=13', 'n=14', 'n=15', 'n=16', 'n=17', 'n=18', 'n=19'])
array = [int(i.replace('n=', '')) for i in array]
print(array)
My script cleans arrays of unwanted strings like "##$!" and other junk.
The script works as intended, but it becomes extremely slow when the Excel file has many rows.
I tried to use numpy to see if it could speed things up, but I'm not too familiar with it, so I might be using it incorrectly.
xls = pd.ExcelFile(path)
df = xls.parse("Sheet2")
TeleNum = np.array(df['telephone'].values)
def replace(orignstr):  # removes the unwanted string from numbers
    for elem in badstr:
        if elem in orignstr:
            orignstr = orignstr.replace(elem, '')
    return orignstr

for UncleanNum in tqdm(TeleNum):
    newnum = replace(str(UncleanNum))  # calling replace function
    df['telephone'] = df['telephone'].replace(UncleanNum, newnum)  # store string back in data frame
I also tried removing the method to see if that would help, placing everything as one block of code, but the speed remained the same.
for UncleanNum in tqdm(TeleNum):
    orignstr = str(UncleanNum)
    for elem in badstr:
        if elem in orignstr:
            orignstr = orignstr.replace(elem, '')
    print(orignstr)
    df['telephone'] = df['telephone'].replace(UncleanNum, orignstr)
TeleNum = np.array(df['telephone'].values)
Currently the script runs at around 70 it/s on an Excel file of 200,000 rows and takes around an hour to finish, which is not great since this is just one function of many.
I'm not too advanced in Python; I'm just learning as I script, so any pointers would be appreciated.
Edit:
Most of the array elements I'm dealing with are numbers, but some have strings in them. I'm trying to remove all non-numeric characters from each element.
Ex.
FD3459002912
*345*9002912$
If you are trying to clear everything that isn't a digit from the strings, you can use re.sub directly like this:
import re
string = "FD3459002912"
regex_result = re.sub(r"\D", "", string)
print(regex_result) # 3459002912
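Since the real issue is speed, a vectorized sketch using pandas' string methods may also help (assuming df['telephone'] holds the raw values as in the question; the file name here is a placeholder). It avoids the per-row Python loop and the repeated DataFrame.replace calls entirely:
import pandas as pd

df = pd.read_excel('phones.xlsx', sheet_name='Sheet2')  # placeholder for ExcelFile(path).parse("Sheet2")
df['telephone'] = df['telephone'].astype(str).str.replace(r'\D', '', regex=True)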
I was given the following problem, where I had to compute the expected_result from the log_data. The code is as follows, edited with my solution and comments containing the feedback I received:
import collections
log_data = """1.1.2014 12:01,111-222-333,454-333-222,COMPLETED
1.1.2014 13:01,111-222-333,111-333,FAILED
1.1.2014 13:04,111-222-333,454-333-222,FAILED
1.1.2014 13:05,111-222-333,454-333-222,COMPLETED
2.1.2014 13:01,111-333,111-222-333,FAILED
"""
expected_result = {
    "111-222-333": "40.00%",
    "454-333-222": "66.67%",
    "111-333": "0.00%"
}
def compute_success_ratio(logdata):
    #! better option to use .splitlines()
    #! or even better recognize the CSV structure and use csv.reader
    entries = logdata.split('\n')
    #! interesting choice to collect the data first
    #! which could result in explosive growth of memory hunger, are there
    #! alternatives to this structure?
    complst = []
    faillst = []
    #! probably no need for attaching `lst` to the variable name, no?
    for entry in entries:
        #! variable naming could be clearer here
        #! a good way might involve destructuring the entry like:
        #! _, caller, callee, result
        #! which also avoids using magic indices further down (-1, 1, 2)
        ent = entry.split(',')
        if ent[-1] == 'COMPLETED':
            #! complst.extend(ent[1:3]) for even more brevity
            complst.append(ent[1])
            complst.append(ent[2])
        elif ent[-1] == 'FAILED':
            faillst.append(ent[1])
            faillst.append(ent[2])
    #! variable postfix `lst` could let us falsely assume that the result of set()
    #! is a list.
    numlst = set(complst + faillst)
    #! good use of collections.Counter,
    #! but: Counter() already is a dictionary, there is no need to convert it to one
    comps = dict(collections.Counter(complst))
    fails = dict(collections.Counter(faillst))
    #! variable naming overlaps with global, and doesn't make sense in this context
    expected_result = {}
    for e in numlst:
        #! good: dealt with possibility of a number not showing up in `comps` or `fails`
        #! bad: using a try/except block to deal with this when a simpler .get(e, 0)
        #! would've allowed dealing with this more elegantly
        try:
            #! variable naming not very expressive
            rat = float(comps[e]) / float(comps[e] + fails[e]) * 100
            perc = round(rat, 2)
            #! here we are rounding twice, and then don't use the formatting string
            #! to attach the % -- '{:.2f}%'.format(perc) would've been the right
            #! way if one doesn't know percentage formatting (see below)
            expected_result[e] = "{:.2f}".format(perc) + '%'
            #! a generally better way would be to either
            #! from __future__ import division
            #! or to compute the ratio as
            #! ratio = float(comps[e]) / (comps[e] + fails[e])
            #! and then use percentage formatting for the ratio
            #! "{:.2%}".format(ratio)
        except KeyError:
            expected_result[e] = '0.00%'
    return expected_result
if __name__ == "__main__":
    assert(compute_success_ratio(log_data) == expected_result)
#! overall
#! + correct
#! ~ implementation not optimal, relatively wasteful in terms of memory
#! - variable naming inconsistent, overly shortened, not expressive
#! - some redundant operations
#! + good use of standard library collections.Counter
#! ~ code could be a tad bit more idiomatic
I have understood some of the problems, such as the variable naming conventions and avoiding the try/except block where possible.
However, I fail to understand how using csv.reader improves the code. Also, how am I supposed to understand the comment about collecting the data first? What could the alternatives be? Could anybody shed some light on these two issues?
When you do entries = logdata.split('\n') you will create a list with the split strings. Since log files can be quite large, this can consume a large amount of memory.
The way that csv.reader works is that it will open the file and only read one line at a time (approximately). This means that the data remains in the file and only one row is ever in memory.
Forgetting about the csv parsing for a minute, the issue is illustrated by the difference between these approaches:
In approach 1 we read the whole file into memory:
data = open('logfile').read().split('\n')
for line in data:
    # do something with the line
In approach 2 we read one line at a time:
data = open('logfile')
for line in data:
    # do something with the line
Approach 1 will consume more memory as the whole file needs to be read into memory. It also traverses the data twice - once when we read it and once to split into lines. The downside of approach 2 is that we can only do one loop through data.
For the particular case here, where we're not reading from a file but rather from a variable which is already in memory, the big difference will be that we will consume about twice as much memory by using the split approach.
split('\n') and splitlines will create a copy of your data where each line is a separate item in the list. Since you only need to pass over the data once rather than access lines randomly, this is wasteful compared to a CSV reader, which can return one line at a time. The other benefit of using the reader is that you don't have to split the data into lines and the lines into columns manually.
The comment about data collection refers to the fact that you add all the completed and failed items to two lists. Let's say that item 111-333 completes five times and fails twice. Your data would look something like this:
complst = ['111-333', '111-333', '111-333', '111-333', '111-333']
faillst = ['111-333', '111-333']
You don't need those repeated items, so you could have used Counter directly without collecting the items into lists first, and save a lot of memory.
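For instance, one way to act on that comment (a sketch of mine, not the reviewer's code, reusing the log_data string from the question) is to update two Counters line by line instead of building the intermediate lists:
from collections import Counter

comps, fails = Counter(), Counter()
for entry in log_data.splitlines():
    _, caller, callee, result = entry.split(',')
    if result == 'COMPLETED':
        comps.update((caller, callee))
    elif result == 'FAILED':
        fails.update((caller, callee))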
Here's an alternative implementation that uses csv.reader and collects success and failure counts into a dict where the item name is the key and the value is a list [success count, failure count]:
from collections import defaultdict
import csv
from io import StringIO
log_data = """1.1.2014 12:01,111-222-333,454-333-222,COMPLETED
1.1.2014 13:01,111-222-333,111-333,FAILED
1.1.2014 13:04,111-222-333,454-333-222,FAILED
1.1.2014 13:05,111-222-333,454-333-222,COMPLETED
2.1.2014 13:01,111-333,111-222-333,FAILED
"""
RESULT_STRINGS = ['COMPLETED', 'FAILED']
counts = defaultdict(lambda: [0, 0])
for _, *params, result in csv.reader(StringIO(log_data)):
    try:
        index = RESULT_STRINGS.index(result)
        for param in params:
            counts[param][index] += 1
    except ValueError:
        pass  # Skip line in case last column is not in RESULT_STRINGS
result = {k: '{0:.2f}%'.format(v[0] / sum(v) * 100) for k, v in counts.items()}
Note that the above will work only on Python 3.
Alternatively, Pandas looks like a good solution for this purpose if you are OK with using it.
import pandas as pd
log_data = pd.read_csv('data.csv',header=None)
log_data.columns = ['date', 'key1','key2','outcome']
meltedData = pd.melt(log_data, id_vars=['date', 'outcome'], value_vars=['key1', 'key2'],
                     value_name='key')  # we transpose the keys here
meltedData['result'] = [int(x.lower() == 'completed') for x in meltedData['outcome']] # add summary variable
groupedData = meltedData.groupby(['key'])['result'].mean()
groupedDict = groupedData.to_dict()
print groupedDict
Result:
{'111-333': 0.0, '111-222-333': 0.40000000000000002, '454-333-222': 0.66666666666666663}
I have a large CSV file full of stock-related data formatted as such:
Ticker Symbol, Date, [some variables...]
So each line starts of with the symbol (like "AMZN"), then has the date, then has 12 variables related to price or volume on the selected date. There are about 10,000 different securities represented in this file and I have a line for each day that the stock has been publicly traded for each of them. The file is ordered first alphabetically by ticker symbol and second chronologically by date. The entire file is about 3.3 GB.
The sort of task I want to solve is extracting the most recent n lines of data for a given ticker symbol, relative to the current date. I have code that does this, but based on my observations it seems to take, on average, around 8-10 seconds per retrieval (all tests have been extracting 100 lines).
I have functions I'd like to run that require me to grab such chunks for hundreds or thousands of symbols, and I would really like to reduce the time. My code is inefficient, but I am not sure how to make it run faster.
First, I have a function called getData:
def getData(symbol, filename):
    out = ["Symbol","Date","Open","High","Low","Close","Volume","Dividend",
           "Split","Adj_Open","Adj_High","Adj_Low","Adj_Close","Adj_Volume"]
    l = len(symbol)
    beforeMatch = True
    with open(filename, 'r') as f:
        for line in f:
            match = checkMatch(symbol, l, line)
            if beforeMatch and match:
                beforeMatch = False
                out.append(formatLineData(line[:-1].split(",")))
            elif not beforeMatch and match:
                out.append(formatLineData(line[:-1].split(",")))
            elif not beforeMatch and not match:
                break
    return out
(This code has a couple of helper functions, checkMatch and formatLineData, which I will show below.) Then, there is another function called getDataColumn that gets the column I want with the correct number of days represented:
def getDataColumn(symbol, col=12, numDays=100, changeRateTransform=False):
    dataset = getData(symbol)
    if not changeRateTransform:
        column = [day[col] for day in dataset[-numDays:]]
    else:
        n = len(dataset)
        column = [(dataset[i][col] - dataset[i-1][col])/dataset[i-1][col] for i in range(n - numDays, n)]
    return column
(changeRateTransform converts raw numbers into daily change rate numbers if True.) The helper functions:
def checkMatch(symbol, symbolLength, line):
    out = False
    if line[:symbolLength+1] == symbol + ",":
        out = True
    return out

def formatLineData(lineData):
    out = [lineData[0]]
    out.append(datetime.strptime(lineData[1], '%Y-%m-%d').date())
    out += [float(d) for d in lineData[2:6]]
    out += [int(float(d)) for d in lineData[6:9]]
    out += [float(d) for d in lineData[9:13]]
    out.append(int(float(lineData[13])))
    return out
Does anyone have any insight on what parts of my code run slow and how I can make this perform better? I can't do the sort of analysis I want to do without speeding this up.
EDIT:
In response to the comments, I made some changes to the code in order to utilize the existing methods in the csv module:
def getData(symbol, database):
    out = ["Symbol","Date","Open","High","Low","Close","Volume","Dividend",
           "Split","Adj_Open","Adj_High","Adj_Low","Adj_Close","Adj_Volume"]
    l = len(symbol)
    beforeMatch = True
    with open(database, 'r') as f:
        databaseReader = csv.reader(f, delimiter=",")
        for row in databaseReader:
            match = (row[0] == symbol)
            if beforeMatch and match:
                beforeMatch = False
                out.append(formatLineData(row))
            elif not beforeMatch and match:
                out.append(formatLineData(row))
            elif not beforeMatch and not match:
                break
    return out
def getDataColumn(dataset, col=12, numDays=100, changeRateTransform=False):
    if not changeRateTransform:
        out = [day[col] for day in dataset[-numDays:]]
    else:
        n = len(dataset)
        out = [(dataset[i][col] - dataset[i-1][col])/dataset[i-1][col] for i in range(n - numDays, n)]
    return out
Performance was worse using the csv.reader class. I tested on two stocks, AMZN (near top of file) and ZNGA (near bottom of file). With the original method, the run times were 0.99 seconds and 18.37 seconds, respectively. With the new method leveraging the csv module, the run times were 3.04 seconds and 64.94 seconds, respectively. Both return the correct results.
My thought is that the time is being taken up more from finding the stock than from the parsing. If I try these methods on the first stock in the file, A, the methods both run in about 0.12 seconds.
When you're going to do lots of analysis on the same dataset, the pragmatic approach would be to read it all into a database. It is made for fast querying; CSV isn't. Use the sqlite command line tools, for example, which can directly import from CSV. Then add a single index on (Symbol, Date) and lookups will be practically instantaneous.
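If you'd rather stay in Python than use the sqlite command line tools, the same idea can be sketched with the standard sqlite3 module (the table layout, file names, and the handful of columns shown are placeholders for illustration):
import csv
import sqlite3

conn = sqlite3.connect('prices.db')
conn.execute('CREATE TABLE IF NOT EXISTS prices (symbol TEXT, date TEXT, open REAL, close REAL)')
with open('prices.csv') as f:
    rows = csv.reader(f)
    next(rows)  # skip the header line
    conn.executemany('INSERT INTO prices VALUES (?, ?, ?, ?)', (row[:4] for row in rows))
conn.execute('CREATE INDEX IF NOT EXISTS ix_symbol_date ON prices (symbol, date)')
conn.commit()

# A "most recent n rows for one symbol" lookup is then a single indexed query
recent = conn.execute('SELECT * FROM prices WHERE symbol = ? ORDER BY date DESC LIMIT ?', ('AMZN', 100)).fetchall()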
If for some reason that is not feasible, for example because new files can come in at any moment and you cannot afford the preparation time before starting your analysis of them, you'll have to make the best of dealing with CSV directly, which is what the rest of my answer will focus on. Remember that it's a balancing act, though. Either you pay a lot upfront, or a bit extra for every lookup. Eventually, for some amount of lookups it would have been cheaper to pay upfront.
Optimization is about maximizing the amount of work not done. Using generators and the built-in csv module aren't going to help much with that in this case. You'd still be reading the whole file and parsing all of it, at least for line breaks. With that amount of data, it's a no-go.
Parsing requires reading, so you'll have to find a way around it first. Best practices of leaving all intricacies of the CSV format to the specialized module bear no meaning when they can't give you the performance you want. Some cheating must be done, but as little as possible. In this case, I suppose it is safe to assume that the start of a new line can be identified as b'\n"AMZN",' (sticking with your example). Yes, binary here, because remember: no parsing yet. You could scan the file as binary from the beginning until you find the first line. From there read the amount of lines you need, decode and parse them the proper way, etc. No need for optimization there, because 100 lines are nothing to worry about compared to the hundreds of thousands of irrelevant lines you're not doing that work for.
Dropping all that parsing buys you a lot, but the reading needs to be optimized as well. Don't load the whole file into memory first and skip as many layers of Python as you can. Using mmap lets the OS decide what to load into memory transparently and lets you work with the data directly.
Still you're potentially reading the whole file, if the symbol is near the end. It's a linear search, which means the time it takes is linearly proportional to the number of lines in the file. You can do better though. Because the file is sorted, you could improve the function to instead perform a kind of binary search. The number of steps that will take (where a step is reading a line) is close to the binary logarithm of the number of lines. In other words: the number of times you can divide your file into two (almost) equally sized parts. When there are one million lines, that's a difference of five orders of magnitude!
Here's what I came up with, based on Python's own bisect_left with some measures to account for the fact that your "values" span more than one index:
import csv
from itertools import islice
import mmap
def iter_symbol_lines(f, symbol):
    # How to recognize the start of a line of interest
    ident = b'"' + symbol.encode() + b'",'
    # The memory-mapped file
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # Skip the header
    mm.readline()
    # The inclusive lower bound of the byte range we're still interested in
    lo = mm.tell()
    # The exclusive upper bound of the byte range we're still interested in
    hi = mm.size()
    # As long as the range isn't empty
    while lo < hi:
        # Find the position of the beginning of a line near the middle of the range
        mid = mm.rfind(b'\n', 0, (lo+hi)//2) + 1
        # Go to that position
        mm.seek(mid)
        # Is it a line that comes before lines we're interested in?
        if mm.readline() < ident:
            # If so, ignore everything up to right after this line
            lo = mm.tell()
        else:
            # Otherwise, ignore everything from right before this line
            hi = mid
    # We found where the first line of interest would be expected; go there
    mm.seek(lo)
    while True:
        line = mm.readline()
        if not line.startswith(ident):
            break
        yield line.decode()

with open(filename) as f:
    r = csv.reader(islice(iter_symbol_lines(f, 'AMZN'), 10))
    for line in r:
        print(line)
No guarantees about this code; I didn't pay much attention to edge cases, and I couldn't test with (any of) your file(s), so consider it a proof of concept. It is plenty fast, however – think tens of milliseconds on an SSD!
So I have an alternative solution, which I ran and tested on my own with a sample data set that I got from Quandl and that appears to have the same headers and similar data (assuming I haven't misunderstood the end result you're trying to achieve).
I have this command line tool that one of our engineers built for us for parsing massive CSVs, since I deal with absurd amounts of data on a day-to-day basis. It is open sourced and you can get it here: https://github.com/DataFoxCo/gocsv
I also wrote a short bash script for it in case you don't want to pipe the commands yourself, but it does support pipelining as well.
The command to run the following short script follows a super simple convention:
bash tickers.sh wikiprices.csv 'AMZN' '2016-12-\d+|2016-11-\d+'
#!/bin/bash
dates="$3"
cat "$1" \
| gocsv filter --columns 'ticker' --regex "$2" \
| gocsv filter --columns 'date' --regex "$dates" > "$2"'-out.csv'
Both arguments, the ticker and the dates, are regexes.
You can add as many variations as you want into that one regex, separating them by |.
So if you wanted AMZN and MSFT then you would simply modify it to this: AMZN|MSFT
I did something very similar with the dates, but I limited my sample run to dates from this month or last month.
End Result
Starting data:
myusername$ gocsv dims wikiprices.csv
Dimensions:
Rows: 23946
Columns: 14
myusername$ bash tickers.sh wikiprices.csv 'AMZN|MSFT' '2016-12-\d+'
myusername$ gocsv dims AMZN|MSFT-out.csv
Dimensions:
Rows: 24
Columns: 14
In the sample above I limited the run to only those two tickers and then to December only.
Voila: in a matter of seconds you have a second file saved with just the data you care about.
The gocsv program has great documentation, by the way, and a ton of other functions, e.g. running a vlookup at basically any scale (which is what inspired the creator to make the tool).
In addition to using csv.reader, I think itertools.groupby would speed up looking for the wanted sections, so the actual iteration could look something like this:
import csv
from itertools import groupby
from operator import itemgetter #for the keyfunc for groupby
def getData(wanted_symbol, filename):
    with open(filename) as file:
        reader = csv.reader(file)
        #so each line in reader is basically line[:-1].split(",") from the plain file
        for symb, lines in groupby(reader, itemgetter(0)):
            #so here symb is the symbol at the start of each line of lines
            #and lines is the lines that all have that symbol in common
            if symb != wanted_symbol:
                continue #skip this whole section if it has a different symbol
            for line in lines:
                #here we have each line as a list of fields
                #for only the lines that have `wanted_symbol` as the first element
                <DO STUFF HERE>
In the place of <DO STUFF HERE> you could have out.append(formatLineData(line)) to do what your current code does, but that function has a lot of unnecessary slicing and += operators, which I think are pretty expensive for lists (I might be wrong). Another way to apply the conversions is to have a list of all the conversions:
def conv_date(date_str):
    return datetime.strptime(date_str, '%Y-%m-%d').date()

#the conversions applied to each element (taken from original formatLineData)
castings = [str, conv_date,              #0, 1
            float, float, float, float,  #2:6
            int, int, int,               #6:9
            float, float, float, float,  #9:13
            int]                         #13
then use zip to apply these to each field in a line in a list comprehension:
[conv(val) for conv, val in zip(castings, line)]
So you would replace <DO STUFF HERE> with an out.append of that comprehension.
I'd also wonder if switching the order of groupby and reader would be better, since you don't need to parse most of the file as csv, just the parts you are actually iterating over. You could use a keyfunc that separates just the first field of the string:
def getData(wanted_symbol, filename):
    out = [] #why are you starting this with strings in it?
    def checkMatch(line): #define the function to only take the line
        #this would be the keyfunc for groupby in this example
        return line.split(",",1)[0] #only split once, return the first element
    with open(filename) as file:
        for symb, lines in groupby(file, checkMatch):
            #so here symb is the symbol at the start of each line of lines
            if symb != wanted_symbol:
                continue #skip this whole section if it has a different symbol
            for line in csv.reader(lines):
                out.append( [typ(val) for typ,val in zip(castings,line)] )
    return out
Consider a few rather long numpy arrays:
import random

import numpy as np

long_array1 = np.array([random.random() for i in range(10000)])
long_array2 = np.array([random.random() for i in range(10000)])
long_array3 = np.array([random.random() for i in range(10000)])
I would like to save the arrays into the file file.dat, one row per numpy array.
The text representation of an array should be in a Python list-like format, i.e. in the case of the following numpy array:
a = np.array([0.3213,0.145323,0.852,0.723,0.421452])
I want to save following line in the file.
[0.3213,0.145323,0.852,0.723,0.421452]
Here is what I do:
array1_str = ",".join([str(item) for item in long_array1])
array2_str = ",".join([str(item) for item in long_array2])
array3_str = ",".join([str(item) for item in long_array3])

with open("file.dat", "w") as file_arrays:
    file_arrays.write("[" + array1_str + "]\n")
    file_arrays.write("[" + array2_str + "]\n")
    file_arrays.write("[" + array3_str + "]\n")
Everything works fine, actually; I am just doubtful about the efficiency of my code. I am almost sure there has to be a better and more efficient way to do this.
I welcome comments on the random list generation as well.
This is the fastest way:
','.join(map(str, long_array1.tolist()))
If you want to keep the text more compact, this is fast too:
','.join(map(lambda x: '%.7g' % x, long_array1.tolist()))
Source: I benchmarked every possible method for this as the maintainer of the pycollada library.
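If you want to verify that on your own arrays, a quick sketch with timeit (the exact numbers will vary with machine, numpy version, and array size):
import timeit

import numpy as np

long_array1 = np.random.random(10000)
print(timeit.timeit(lambda: ','.join(map(str, long_array1.tolist())), number=100))
print(timeit.timeit(lambda: ','.join(str(x) for x in long_array1), number=100))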
Since you want a Python-list-like format, how about actually using the Python list format?
array1_str = repr(list(long_array1))
That's going to stay mostly in C-land and performance should be much better.
If you don't want the spaces, take 'em out after:
array1_str = repr(list(long_array1)).translate(None, " ")
Memory usage may be an issue, however.
Sounds like you might be able to use numpy.savetxt() for this; something like:
def dump_array(outfile, arraylike):
    outfile.write('[')
    numpy.savetxt(outfile, arraylike, newline=',', fmt="%s")
    outfile.write(']\n')
Although I don't think the corresponding numpy.loadtxt() will be able to read this format back in.
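If the rows ever do need to be read back, ast.literal_eval can handle the Python-list-like format directly (a sketch of mine, not part of the answer above; it also tolerates the trailing comma that the newline=',' trick leaves before the closing bracket):
import ast

import numpy

with open('file.dat') as f:
    arrays = [numpy.array(ast.literal_eval(line)) for line in f]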