Pulling out data from specific columns of CSV files in Python
I need some quick help with reading CSV files in Python and storing the data in a 'data-type' file, so I can use it later to make graphs after storing all the data in different files.
I have searched for this, but in every case I found, the data had headers. My data has no header row; the values are tab separated. And I need to store only specific columns of the data. For example:
12345601 2345678#abcdef 1 2 365 places
In this case, as an example, I would want to store only "2345678#abcdef" and "365" in the new python file in order to use it in the future to create a graph.
Also, I have more than one csv file in a folder and I need to do this for each of them. The sources I found did not cover that and only referred to:
# open csv file
with open(csv_file, 'rb') as csvfile:
Could anyone refer me to an already-answered question or help me out with it?
. . . and storing it in a PY file to use the data to graph after storing all the data in different files . . .
. . . I would want to store only "2345678#abcdef" and "365" in the new python file . . .
Are you sure that you want to store the data in a python file? Python files are supposed to hold python code and they should be executable by the python interpreter. It would be a better idea to store your data in a data-type file (say, preprocessed_data.csv).
To get a list of files matching a pattern, you can use python's built-in glob library.
Here's an example of how you could read multiple csv files in a directory and extract the desired columns from each one:
import glob

# indices of columns you want to preserve
desired_columns = [1, 4]
# change this to the directory that holds your data files
csv_directory = '/path/to/csv/files/*.csv'

# iterate over files holding data
extracted_data = []
for file_name in glob.glob(csv_directory):
    with open(file_name, 'r') as data_file:
        while True:
            line = data_file.readline()
            # stop at the end of the file
            if len(line) == 0:
                break
            # split the line by whitespace
            tokens = line.split()
            # only grab the columns we care about
            desired_data = [tokens[i] for i in desired_columns]
            extracted_data.append(desired_data)
It would be easy to write the extracted data to a new file. The following example shows how you might save the data to a csv file.
output_string = ''
for row in extracted_data:
    output_string += ','.join(row) + '\n'

with open('./preprocessed_data.csv', 'w') as csv_file:
    csv_file.write(output_string)
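Since the files are tab separated, the csv module could also handle both the extraction and the writing. This is just an alternative sketch, not part of the original answer, reusing the same hypothetical paths and column indices as above:

import csv
import glob

desired_columns = [1, 4]

with open('./preprocessed_data.csv', 'w', newline='') as out_file:
    writer = csv.writer(out_file)
    for file_name in glob.glob('/path/to/csv/files/*.csv'):
        with open(file_name, 'r', newline='') as data_file:
            # delimiter='\t' because the source files are tab separated
            for tokens in csv.reader(data_file, delimiter='\t'):
                if not tokens:
                    continue  # skip blank lines
                # keep only the desired columns
                writer.writerow([tokens[i] for i in desired_columns])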
Edit:
If you don't want to combine all the csv files, here's a version that can process one at a time:
def process_file(input_path, output_path, selected_columns):
    extracted_data = []
    with open(input_path, 'r') as in_file:
        while True:
            line = in_file.readline()
            if len(line) == 0: break
            tokens = line.split()
            extracted_data.append([tokens[i] for i in selected_columns])

    output_string = ''
    for row in extracted_data:
        output_string += ','.join(row) + '\n'

    with open(output_path, 'w') as out_file:
        out_file.write(output_string)

# whenever you need to process a file:
process_file(
    '/path/to/input.csv',
    '/path/to/processed/output.csv',
    [1, 4])

# if you want to process every file in a directory:
target_directory = '/path/to/my/files/*.csv'
for file in glob.glob(target_directory):
    process_file(file, file + '.out', [1, 4])
Edit 2:
The following example will process every file in a directory and write the results to a similarly-named output file in another directory:
import os
import glob

input_directory = '/path/to/my/files/*.csv'
output_directory = '/path/to/output'

for file in glob.glob(input_directory):
    file_name = os.path.basename(file) + '.out'
    out_file = os.path.join(output_directory, file_name)
    process_file(file, out_file, [1, 4])
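One practical note, not from the original answer: open() will not create missing directories, so if the output directory does not exist yet, creating it first avoids a FileNotFoundError. A short sketch using the same output_directory variable as above:

import os

# create the output directory (and any missing parents) before writing
os.makedirs(output_directory, exist_ok=True)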
If you want to add headers to the output, then process_file could be modified like this:
def process_file(input_path, output_path, selected_columns, column_headers=[]):
    extracted_data = []
    with open(input_path, 'r') as in_file:
        while True:
            line = in_file.readline()
            if len(line) == 0: break
            tokens = line.split()
            extracted_data.append([tokens[i] for i in selected_columns])

    output_string = ','.join(column_headers) + '\n'
    for row in extracted_data:
        output_string += ','.join(row) + '\n'

    with open(output_path, 'w') as out_file:
        out_file.write(output_string)
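For completeness, a call to this header-aware version might look like the following; the header names here are made up purely for illustration and should be replaced with whatever fits your data:

process_file(
    '/path/to/input.csv',
    '/path/to/processed/output.csv',
    [1, 4],
    column_headers=['identifier', 'days'])  # hypothetical header names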
Here's another approach using a namedtuple that will help extract selected fields from a csv file and then let you write them out to a new csv file.
from collections import namedtuple
import csv

# Set up a named tuple to receive the csv data
# p1 to p6 are arbitrary field names associated with the csv file's columns
SomeData = namedtuple('SomeData', 'p1, p2, p3, p4, p5, p6')

# Read data from the csv file and create a generator object to hold a reference to the data
# We use a generator object rather than a list to reduce the amount of memory our program will use
# The captured data will only have data from the 2nd & 5th columns of the csv file
datagen = ((d.p2, d.p5) for d in map(SomeData._make, csv.reader(open("mydata.csv", "r"))))

# Write the data to a new csv file
with open("newdata.csv", "w", newline='') as csvfile:
    cvswriter = csv.writer(csvfile, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
    # Use the generator created earlier to access the filtered data and write it out to a new csv file
    for d in datagen:
        cvswriter.writerow(d)
Original Data in "mydata.csv":
12345601,2345678#abcdef,1,2,365,places
4567,876#def,0,5,200,noplaces
Output Data in "newdata.csv":
2345678#abcdef,365
876#def,200
EDIT 1:
For tab delimited data make the following changes to the code:
change
datagen = ((d.p2, d.p5) for d in map(SomeData._make, csv.reader(open("mydata.csv", "r"))))
to
datagen = ((d.p2, d.p5) for d in map(SomeData._make, csv.reader(open("mydata2.csv", "r"), delimiter='\t', quotechar='"')))
and
cvswriter = csv.writer(csvfile, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
to
cvswriter = csv.writer(csvfile, delimiter='\t', quotechar='"', quoting=csv.QUOTE_MINIMAL)
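One small caveat with the snippets above: the open("mydata.csv", "r") inside the generator expression is never explicitly closed. If that matters, the whole thing can be wrapped in with blocks; a minimal sketch under that assumption, reusing the SomeData namedtuple defined above:

with open("mydata.csv", "r", newline='') as infile, \
        open("newdata.csv", "w", newline='') as csvfile:
    cvswriter = csv.writer(csvfile, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
    for d in map(SomeData._make, csv.reader(infile)):
        # write only the 2nd and 5th columns, as before
        cvswriter.writerow((d.p2, d.p5))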
Related
How to read a csv file and create a new csv file after every nth number of rows?
I'm trying to write a function that reads a sheet of an existing .csv file and copies every 20 rows to a newly created csv file. Therefore, it needs to work like a file counter "file_01, file_02, file_03, ...", where the first 20 rows are copied to file_01, the next 20 to file_02.csv, and so on. Currently I have this code, which hasn't worked for me so far:

import csv
import os.path
from itertools import islice

N = 20
new_filename = ""
filename = ""

with open(filename, "rb") as file:
    # the a opens it in append mode
    reader = csv.reader(file)
    for i in range(N):
        line = next(file).strip()
        #print(line)

with open(new_filename, 'wb') as outfh:
    writer = csv.writer(outfh)
    writer.writerow(line)
    writer.writerows(islice(reader, 2))

I have attached a file for testing: https://1drv.ms/u/s!AhdJmaLEPcR8htYqFooEoYUwDzdZbg  The contents of the file are:

32.01,18.42,58.98,33.02,55.37,63.25,12.82,-32.42,33.99,179.53,
41.11,33.94,67.85,57.61,59.23,94.69,19.43,-19.15,21.71,-161.13,
49.80,54.12,72.78,100.74,56.97,128.84,26.95,-6.76,10.07,-142.62,
55.49,81.02,68.93,148.17,49.25,157.32,34.94,5.39,0.44,-123.32,
56.01,112.81,59.27,177.87,38.50,179.63,43.43,18.42,-5.81,-102.24,
50.79,142.87,48.06,-162.32,26.60,-161.21,52.38,34.37,-7.42,-79.64,
41.54,167.36,37.12,-145.93,15.01,-142.84,60.90,57.05,-4.47,-56.54,
30.28,-172.09,27.36,-130.24,5.11,-123.66,66.24,91.12,-0.76,-35.44,
18.64,-153.20,19.52,-114.09,-1.54,-102.96,64.77,131.32,5.12,-21.68,
7.92,-134.07,14.24,-96.93,-3.79,-80.91,57.10,162.35,12.51,-9.21,
-0.34,-113.74,11.80,-78.73,-2.49,-58.46,46.75,-175.86,20.81,2.87,
-4.81,-91.85,11.78,-60.28,0.59,-39.26,35.75,-158.12,29.79,15.71,
-4.76,-68.67,13.79,-43.84,6.82,-24.69,25.27,-141.56,39.05,30.71,
-1.33,-46.42,18.44,-30.23,14.53,-11.95,16.21,-124.45,47.91,50.25,
4.14,-29.61,24.89,-18.02,23.01,0.10,9.59,-106.05,54.46,77.07,
11.04,-15.39,32.33,-6.66,31.92,12.48,6.24,-86.34,55.72,110.53,
18.69,-2.32,40.46,4.57,41.11,26.87,6.07,-65.68,50.25,142.78,
26.94,10.56,49.18,16.67,49.92,45.39,8.06,-46.86,40.13,168.29,
35.80,24.58,58.45,31.99,56.83,70.92,12.96,-31.90,28.10,-171.07,
44.90,41.72,67.41,55.89,59.21,103.94,19.63,-18.67,15.97,-152.40,
-5.41,-77.62,11.40,-63.21,4.80,-29.06,31.33,-151.44,43.00,37.25,
-2.88,-54.38,13.08,-46.00,12.16,-15.86,21.21,-134.62,51.25,59.16,
1.69,-35.73,17.44,-32.01,20.37,-3.78,13.06,-117.10,56.18,88.98,
8.15,-20.80,23.70,-19.66,29.11,8.29,7.74,-98.22,54.91,123.30,
15.52,-7.45,31.04,-8.22,38.22,21.78,5.76,-77.99,47.34,153.31,
23.53,5.38,39.07,2.98,47.29,38.71,6.58,-57.45,36.18,176.74,
32.16,18.76,47.71,14.88,55.08,61.71,9.76,-40.52,23.99,-163.75,
41.27,34.36,56.93,29.53,59.23,92.75,15.53,-26.40,12.16,-145.27,
49.92,54.65,66.04,51.59,57.34,126.97,22.59,-13.65,2.14,-126.20,
55.50,81.56,72.21,90.19,49.88,155.84,30.32,-1.48,-4.71,-105.49,
55.92,113.45,70.26,139.40,39.23,178.48,38.55,10.92,-7.09,-83.11,
50.58,143.40,61.40,172.50,27.38,-162.27,47.25,24.86,-4.77,-60.15,
41.30,167.74,50.34,-166.33,15.74,-143.93,56.21,43.14,-0.54,-38.22,
30.03,-171.78,39.24,-149.48,5.71,-124.87,63.77,70.19,4.75,-24.15,
18.40,-152.91,29.17,-133.78,-1.18,-104.31,66.51,108.81,11.86,-11.51,
7.69,-133.71,20.84,-117.74,-3.72,-82.28,61.95,146.15,20.05,0.65,
-0.52,-113.33,14.97,-100.79,-2.58,-59.75,52.78,172.46,28.91,13.29,
-4.91,-91.36,11.92,-82.84,0.34,-40.12,41.93,-167.91,38.21,27.90,
These are some of the problems with your current solution:
- You created a csv.reader object but then you did not use it.
- You read each line but then you did not store them anywhere.
- You are not keeping track of 20 rows, which was supposed to be your requirement.
- You created the output file in a separate with block, which no longer has access to the read lines or the csv.reader object.

Here's a working solution:

import csv

inp_file = "input.csv"
out_file_pattern = "file_{:{fill}2}.csv"
max_rows = 20

with open(inp_file, "r") as inp_f:
    reader = csv.reader(inp_f)
    all_rows = []
    cur_file = 1
    for row in reader:
        all_rows.append(row)
        if len(all_rows) == max_rows:
            with open(out_file_pattern.format(cur_file, fill="0"), "w") as out_f:
                writer = csv.writer(out_f)
                writer.writerows(all_rows)
            all_rows = []
            cur_file += 1

The flow is as follows:
- Read each row of the CSV using a csv.reader.
- Store each row in an all_rows list.
- Once that list gets 20 rows, open a file and write all the rows to it.
- Use the csv.writer's writerows method.
- Use a cur_file counter to format the filename.
- Every time 20 rows are dumped to a file, empty out the list and increment the file counter.

This solution counts blank lines as part of the 20 rows. Your test file actually has 19 rows of CSV data and 1 blank row. If you need to skip the blank line, just add a simple check of if not row: continue (see the sketch below).

Also, as I mentioned in a comment, I assume that the input file is an actual CSV file, meaning a plain text file with CSV-formatted data. If the input is actually an Excel file, then solutions like this won't work, because you'll need special libraries to read Excel files, even if the contents visually look like CSV or even if you rename the file to .csv.
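For completeness, here is where that blank-line check would sit; only the top of the row loop above changes, and the rest of the body stays exactly the same:

for row in reader:
    # skip blank lines so they don't count toward the 20 rows
    if not row:
        continue
    all_rows.append(row)
    # ... rest of the loop body unchanged ...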
Without using any special CSV libraries (e.g. csv -- though you could; I just don't know how to use them, and I don't think it is necessary for this case), you could:

excel_csv_fp = open(r"<file_name>", "r", encoding="utf-8")  # Check the proper encoding for your file
csv_data = excel_csv_fp.readlines()

file_counter = 0
new_file_name = ""
new_fp = ""

for line in csv_data:
    if line.strip() == "":  # a blank line (readlines keeps the trailing newline, so compare after stripping)
        if new_fp != "":
            new_fp.close()
        file_counter += 1
        new_file_name = "file_" + "{:02d}".format(file_counter)  # 1 turns into 01 and 10 stays 10, i.e. remains the same
        new_fp = open("<some_path>/" + new_file_name + ".csv", "w", encoding="utf-8")  # Makes a new CSV file to start writing to
    elif new_fp != "":  # Make sure new_fp is a file pointer and not a string
        new_fp.write(line)  # Write each line after a blank line

If you have any questions about any of the code (how it works, why I chose what, etc.), just ask in the comments and I'll try to reply as soon as possible.
os.walk-ing through a directory structure to read many CSV headers and write them to an output CSV
I have a folder that contains 60 folders, each of which contains about 60 CSVs (and 1 or 2 non-CSVs). I need to compare the header rows of all of these CSVs, so I am trying to go through the directories and write to an output CSV (1) the filepath of the file in question and (2) the header row in the subsequent cells of that row in the output CSV. Then go to the next file, and write the same information in the next row of the output CSV. I am lost in the part where I am writing the header rows to the CSV -- and am too lost to have even generated an error message. Can anyone advise on what to do next?

import os
import sys
import csv

csvfile = '/Users/username/Documents/output.csv'

def main(args):
    # Open a CSV for writing outputs to
    with open(csvfile, 'w') as out:
        writer = csv.writer(out, lineterminator='\n')
        # Walk through the directory specified in cmd line
        for root, dirs, files in os.walk(args):
            for item in files:
                # Check if the item is a CSV
                if item.endswith('.csv'):
                    # If yes, read the first row
                    with open(item, newline='') as f:
                        reader = csv.reader(f)
                        row1 = next(reader)
                        # Write the first cell as the file name
                        f.write(os.path.realpath(item))
                        f.write(f.readline())
                        f.write('\n')
                        # Write this row to a new line in the csvfile var
                        # Go to next file
                # If not a CSV, go to next file
                else:
                    continue
        # Write each file to the CSV
        # writer.writerow([item])

if __name__ == '__main__':
    main(sys.argv[1])
IIUC you need a new csv file with 2 columns: file_path and headers. If the header you need is just the list of column names from each csv, it is easier to use a pandas DataFrame to store these values first and then write the DataFrame to a csv.

import os
import pandas as pd

res = []
for root, dirs, files in os.walk(args):
    for item in files:
        # Check if the item is a CSV
        if item.endswith('.csv'):
            # If yes, read the first row
            df = pd.read_csv(item)
            row = {}
            row['file_path'] = os.path.realpath(item)
            row['headers'] = df.columns
            res.append(row)

res_df = pd.DataFrame(res)
res_df.to_csv(csvfile)
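If the CSVs are large, reading whole files just to get their headers is wasteful; pandas can read only the header row with nrows=0. A sketch of that variation, using the same variables as above:

# read just the header row instead of the whole file
df = pd.read_csv(item, nrows=0)
row['headers'] = list(df.columns)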
You seem to be getting confused about which file you're reading and which you're writing to. Confusion is normal when you try to do everything in one big function. The whole point of functions is to break things down so they are easy to follow, understand and debug. Here is some code (not tested, but you can easily print out what each function returns, and once you know that's correct, you feed it to the next function). Each function is small, with very few variables, so not much can go wrong. And most importantly, the variables in each function are local to it, meaning they cannot interfere with what's happening elsewhere, or even confuse you into thinking they might be interfering (and that makes a huge difference).

import csv
import os
import sys

def collect_csv_data(directory):
    results = []
    for root, dirs, files in os.walk(directory):
        for file in files:
            if file.endswith('.csv'):
                headers = extract_headers(os.path.join(root, file))
                results.append((file, headers))
    return results

def extract_headers(filepath):
    with open(filepath) as f:
        reader = csv.reader(f)
        headers = next(reader)
    return headers

def write_results(results, filepath):
    with open(filepath, 'w') as f:
        writer = csv.writer(f)
        for result in results:
            writer.writerow(result)

if __name__ == '__main__':
    directory = sys.argv[1]
    results = collect_csv_data(directory)
    write_results(results, 'results.csv')
Deleting "string" containing last rows from CSV file using regex
I am new to Python. I have thousands of CSV files in which a block of text follows the numeric data that gets logged, and I would like to remove all the rows from the first row that begins with text onwards. For example:

col 1 col 2 col 3
--------------------
10 20 30
--------------------
45 34 56
--------------------
Start 8837sec 9items
--------------------
Total 6342sec 755items

The good thing is that the text in all the csv files begins with "Start" in column 1. I would prefer removing all the rows afterwards, including the row that says "Start". Here is what I have so far:

import csv, os, re, sys

fileList = []
pattern = [r"\b(Start).*", r"\b(Total).*"]

for file in files:
    fullname = os.path.join(cwd, file)
    if not os.path.isdir(fullname) and not os.path.islink(fullname):
        fileList.append(fullname)

for file in fileList:
    try:
        ifile = open(file, "r")
    except IOError:
        sys.stderr.write("File %s not found! Please check the filename." %(file))
        sys.exit()
    else:
        with ifile:
            reader = csv.reader(ifile)
            writer = csv.writer(ifile)
            rowList = []
            for row in reader:
                rowList.append((", ".join(row)))
            for pattern in word_pattern:
                if not (re.match(pattern, rowList)
                    writer.writerow(elem)

After running this script, it gives me a blank csv file. Any idea what to change?
You don't need the CSV reader for this. You could simply find the offset and truncate the file. Open the file in binary mode and use a multi-line regex to find the pattern in the text and use its index.

import os
import re

# multiline, ascii-only regex: matches Start or Total at the start of a line
start_tag_finder = re.compile(rb'(?am)\nStart|\nTotal').search

for filename in files:  # TODO: I'm not sure where "files" comes from...
    # NOTE: no need to join cwd, relative paths do that automatically
    if not os.path.isdir(filename) and not os.path.islink(filename):
        with open(filename, 'rb+') as f:
            # NOTE: you can cap file size if you'd like
            if os.stat(filename).st_size > 1000000:
                print(filename, "overflowed 1M size limit")
                continue
            search = start_tag_finder(f.read())
            if search:
                f.truncate(search.start())
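On the TODO above about where files comes from: one simple way to build that list is glob; a sketch with a made-up pattern:

import glob

# every .csv file in the current directory; adjust the pattern as needed
files = glob.glob('*.csv')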
I would try this for everything after you get your fileList together:

for file in fileList:
    keepRows = []
    with open(file, 'r') as oFile:
        for row in oFile:
            if row[0] != "Start":
                keepRows += row
            else:
                oFile.close()
    with open(file, 'wb+') as nFile:
        writer = csv.writer(nFile, delimiter=',')
        writer.writerow([keepRows])

This opens your original file, gets the lines you want, closes it and reopens it with w+. That overwrites the file, keeping the name, but clears it out via truncate, and then writes each of the rows you wanted to keep to the cleared-out file.

Alternatively, you could create a new file for each csv doing:

for file in fileList:
    keepRows = []
    with open(file, 'r') as oFile, open('new_file.csv', 'a') as nFile:
        for row in oFile:
            if row[0] != "Start":
                keepRows += row
            else:
                oFile.close()
        for row in keepRows:
            nFile.write(row)

Opening with a puts the cursor on the next row each time, since this is append mode. The .writerow method used before takes an iterable, which is why the object is wrapped in [], whereas each group, or row, in keepRows in the append version does not need an iterable and will write each item within the grouping to its own column, move to the next row and do the same thing.

EDIT: Updated syntax for binary file mode and .writer().
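For what it's worth, the "keep everything before the first Start row" idea can also be written with itertools.takewhile. This is a separate sketch, not a fix of the code above, and it assumes the marker really is a line beginning with Start; file is the same loop variable used above:

from itertools import takewhile

with open(file, 'r') as oFile:
    # keep lines until the first one that starts with "Start"
    keepRows = list(takewhile(lambda line: not line.startswith("Start"), oFile))

with open(file, 'w') as nFile:
    nFile.writelines(keepRows)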
Use Python to split a CSV file with multiple headers
I have a CSV file that is being constantly appended. It has multiple headers and the only common thing among the headers is that the first column is always "NAME". How do I split the single CSV file into separate CSV files, one for each header row?

Here is a sample file:

"NAME","AGE","SEX","WEIGHT","CITY"
"Bob",20,"M",120,"New York"
"Peter",33,"M",220,"Toronto"
"Mary",43,"F",130,"Miami"
"NAME","COUNTRY","SPORT","NUMBER","SPORT","NUMBER"
"Larry","USA","Football",14,"Baseball",22
"Jenny","UK","Rugby",5,"Field Hockey",11
"Jacques","Canada","Hockey",19,"Volleyball",4
"NAME","DRINK","QTY"
"Jesse","Beer",6
"Wendel","Juice",1
"Angela","Milk",3
If the size of the csv files is not huge -- so all can be in memory at once -- just use read() to read the file into a string and then use a regex on this string:

import re

with open(ur_csv) as f:
    data = f.read()

chunks = re.finditer(r'(^"NAME".*?)(?=^"NAME"|\Z)', data, re.S | re.M)
for i, chunk in enumerate(chunks, 1):
    with open('/path/{}.csv'.format(i), 'w') as fout:
        fout.write(chunk.group(1))

If the size of the file is a concern, you can use mmap to create something that looks like a big string but is not all in memory at the same time. Then use the mmap string with a regex to separate the csv chunks like so:

import mmap
import re

with open(ur_csv) as f:
    mf = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    chunks = re.finditer(r'(^"NAME".*?)(?=^"NAME"|\Z)', mf, re.S | re.M)
    for i, chunk in enumerate(chunks, 1):
        with open('/path/{}.csv'.format(i), 'w') as fout:
            fout.write(chunk.group(1))

In either case, this will write all the chunks in files named 1.csv, 2.csv etc.
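One caveat if the mmap version is run on Python 3: an mmap behaves like a bytes object there, so the pattern has to be a bytes pattern and the chunks have to be written in binary mode. A sketch of that adjustment, keeping the rest of the approach unchanged:

chunks = re.finditer(rb'(^"NAME".*?)(?=^"NAME"|\Z)', mf, re.S | re.M)
for i, chunk in enumerate(chunks, 1):
    with open('/path/{}.csv'.format(i), 'wb') as fout:
        fout.write(chunk.group(1))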
Copy the input to a new output file each time you see a header line. Something like this (not checked for errors):

partNum = 1
outHandle = None
for line in open("yourfile.csv", "r").readlines():
    if line.startswith('"NAME"'):
        if outHandle is not None:
            outHandle.close()
        outHandle = open("part%d.csv" % (partNum,), "w")
        partNum += 1
    outHandle.write(line)
outHandle.close()

The above will break if the input does not begin with a header line or if the input is empty.
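If that caveat matters, one hedged tweak is to guard the write inside the loop and the final close; a sketch of just those lines, not the full script:

    # only write once the first header line has been seen
    if outHandle is not None:
        outHandle.write(line)

and, after the loop:

if outHandle is not None:
    outHandle.close()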
You can use the python csv package to read your source file and write multiple csv files based on the rule that if element 0 in your row == "NAME", spawn off a new file. Something like this...

import csv

outfile_name = "out_%d.csv"
out_num = 1

with open('nameslist.csv', 'rb') as csvfile:
    csvreader = csv.reader(csvfile, delimiter=',')
    csv_buffer = []
    for row in csvreader:
        if row[0] != "NAME":
            csv_buffer.append(row)
        else:
            with open(outfile_name % out_num, 'wb') as csvout:
                writer = csv.writer(csvout)
                for b_row in csv_buffer:
                    writer.writerow(b_row)
            out_num += 1
            csv_buffer = [row]

P.S. I haven't actually tested this but that's the general concept.
Given the other answers, the only modification that I would suggest would be to open using csv.DictReader. Pseudo code would be like this, assuming that the first line in the file is the first header.

Note that this assumes there is no blank line or other indicator between the entries, so that a 'NAME' header occurs right after data. If there were a blank line between appended files, you could use that as an indicator to use infile.fieldnames() on the next row. If you need to handle the inputs as a list, then the previous answers are better.

ifile = open(filename, 'rb')
infile = csv.DictReader(ifile)
infields = infile.fieldnames
filenum = 1

ofile = open('outfile' + str(filenum), 'wb')
outfields = infields  # This allows you to change the header field
outfile = csv.DictWriter(ofile, fieldnames=outfields, extrasaction='ignore')
outfile.writerow(dict((fn, fn) for fn in outfields))

for row in infile:
    if row['NAME'] != 'NAME':
        # process this row here and do whatever is needed
        pass
    else:
        ofile.close()
        # build infields again from this row
        infields = [row["NAME"], ...]  # This assumes you know the names & order
        # A dict cannot be pulled as a list and keep the order that you want.
        filenum += 1
        ofile = open('outfile' + str(filenum), 'wb')
        outfields = infields  # This allows you to change the header field
        outfile = csv.DictWriter(ofile, fieldnames=outfields, extrasaction='ignore')
        outfile.writerow(dict((fn, fn) for fn in outfields))

# This is the end of the loop. All data has been read and processed
ofile.close()
ifile.close()

If the exact order of the new header does not matter except for the name in the first entry, then you can transfer the new list as follows:

infields = [row['NAME']]
for k in row.keys():
    if k != 'NAME':
        infields.append(row[k])

This will create the new header with NAME in entry 0, but the others will not be in any particular order.
Building list of lists from CSV file
I have an Excel file (that I am exporting as a csv) that I want to parse, but I am having trouble finding the best way to do it. The csv is a list of computers on my network, and which accounts are in the local administrator group on each one. I have done something similar with tuples, but the number of accounts for each computer ranges from 1 to 30. I want to build a list of lists, then go through each list to find the accounts that should be there (Administrator, etc.) and delete them, so that I can then export a list of only the accounts that shouldn't be a local admin, but are.

The csv file is formatted as follows:

"computer1" Administrator localadmin useraccount
"computer2" localadmin Administrator
"computer3" localadmin Administrator user2account

Any help would be appreciated.

EDIT: Here is the code I am working with:

import csv
import sys #used for passing in the argument

file_name = sys.argv[1] #filename is argument 1

with open(file_name, 'rU') as f: #opens PW file
    reader = csv.reader(f)
    data = list(list(rec) for rec in csv.reader(f, delimiter=',')) #reads csv into a list of lists

f.close() #close the csv

for i in range(len(data)):
    print data[i][0] #this alone will print all the computer names
    for j in range(len(data[i])) #Trying to run another for loop to print the usernames
        print data[i][j]

The issue is with the second for loop. I want to be able to read across each line and, for now, just print them.
This should get you on the right track:

import csv
import sys #used for passing in the argument

file_name = sys.argv[1] #filename is argument 1

with open(file_name, 'rU') as f: #opens PW file
    reader = csv.reader(f)
    data = list(list(rec) for rec in csv.reader(f, delimiter=',')) #reads csv into a list of lists

for row in data:
    print row[0] #this alone will print all the computer names
    for username in row: #Trying to run another for loop to print the usernames
        print username

The last two lines will print all of the row (including the "computer"). Do

for x in range(1, len(row)):
    print row[x]

... to avoid printing the computer twice.

Note that f.close() is not required when using the "with" construct, because the resource will automatically be closed when the "with" block is exited.

Personally, I would just do:

import csv
import sys #used for passing in the argument

file_name = sys.argv[1] #filename is argument 1

with open(file_name, 'rU') as f: #opens PW file
    reader = csv.reader(f)
    # Print every value of every row.
    for row in reader:
        for value in row:
            print value

That's a reasonable way to iterate through the data and should give you a firm basis to add whatever further logic is required.
This is how I opened a .csv file and imported columns of data as numpy arrays -- naturally, you don't need numpy arrays, but...

# imports assumed by the snippet below (PyQt4 / Python 2, given the use of unicode())
import sys
import numpy as np
from PyQt4.QtGui import QApplication, QFileDialog

data = {}

app = QApplication(sys.argv)
fname = unicode(QFileDialog.getOpenFileName())
app.quit()

filename = fname.strip('.csv') + ' for release.csv'

#open the file and skip the first two rows of data
imported_array = np.loadtxt(fname, delimiter=',', skiprows=2)

data = {'time_s': imported_array[:, 0]}
data['Speed_RPM'] = imported_array[:, 1]
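One quirk worth flagging in the snippet above: fname.strip('.csv') strips any of the characters '.', 'c', 's', 'v' from both ends of the name rather than removing the literal extension, so a name like 'scan.csv' would lose more than intended. os.path.splitext avoids that; a short sketch:

import os

# drop only the extension, keeping the rest of the name intact
base, _ext = os.path.splitext(fname)
filename = base + ' for release.csv'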
It can be done using the pandas library:

import pandas as pd

df = pd.read_csv(filename)
list_of_lists = df.values.tolist()

This approach applies to other kinds of data like .tsv, etc.
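For the tab-separated, headerless data described in the original question, the same pandas call just needs a separator and header=None; a sketch:

import pandas as pd

# sep='\t' for tab-separated data, header=None because the files have no header row
df = pd.read_csv(filename, sep='\t', header=None)
list_of_lists = df.values.tolist()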