Parsing Data, Excel to Python - python
So I have an Excel (.csv) data file that looks like:
Frequency Frequency error
0.00575678 17
0.315 2
0.003536329 13
0.00481 1
0.004040379 4
where the second column is the error in the first data column, e.g. the value of the first entry is 0.00575678 +/- 0.0000000017 and the second is 0.315 +/- 0.002. Is there a way to parse the data using Python so that I get two data arrays, the first being frequency and the second the frequency error, where the first entry in the second array is in the format 0.0000000017? If this were a small data file I'd do it manually, but it has a few thousand entries, so that's not really an option. Thanks
Maybe not the fastest, but looks close.
sample = """\
0.00575678,17
0.315,2
0.003536329,13
0.00481,1
0.004040379,4"""
for line in sample.splitlines():
    value, errordigits = line.split(',')
    # Replace every digit of the value with '0' (keeping the decimal point),
    # drop the last character, then append the error digits.
    error = ''.join(c if c in '0.' else '0' for c in value)[:-1]
    error += errordigits
    print("%s,%s" % (value, error))
prints:
0.00575678,0.000000017
0.315,0.002
0.003536329,0.0000000013
0.00481,0.00001
0.004040379,0.000000004
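For a file with thousands of rows, the same digit-replacement rule can also be computed arithmetically and collected into two lists in one pass. A rough sketch (parse_pairs and the inline sample lines are made up for illustration; in practice you would pass an open file object instead of a list):

```python
import csv

def parse_pairs(lines):
    """Split value/error-digit pairs into two lists. The error digits are
    scaled by the value's decimal places, matching the string version above:
    error = digits * 10 ** -(decimals - 1 + number_of_error_digits)."""
    values, errors = [], []
    for value, digits in csv.reader(lines):
        decimals = len(value.split('.')[1])  # digits after the decimal point
        values.append(float(value))
        errors.append(int(digits) * 10 ** -(decimals - 1 + len(digits)))
    return values, errors

values, errors = parse_pairs(["0.315,2", "0.00481,1"])
# errors is then approximately [0.002, 0.00001], as in the output above
```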
I found pandas useful for getting data from a CSV.
import pandas as pd

df = pd.read_csv("YOURFILE.csv")
df = pd.DataFrame(data=df, columns=['COLUMNNAME1', 'COLUMNNAME2'])
y = df.COLUMNNAME1.values
x = df.COLUMNNAME2.values
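As a self-contained check of that approach (the column names and inline file contents below are made up to match the question's layout):

```python
import io
import pandas as pd

# Hypothetical file contents matching the question's two-column layout.
csv_text = "Frequency,Frequency_error\n0.315,2\n0.00481,1\n"

df = pd.read_csv(io.StringIO(csv_text))
frequency = df["Frequency"].values          # NumPy array of the values
frequency_error = df["Frequency_error"].values
```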
Related
Python Pandas Dataframe Carriage Return and Line Feed Problems
I am reading data from an API that I can't share. I have a dataframe that looks like this after reading it from the API:

col_1  col_2
1      data
2      data
3      data
Steve
4      data
5      data

I want everything in row "Steve" to be concatenated with the previous line. How can I do this? There is some sort of carriage return/line feed problem when I import the data. Any suggestions? Expected output:

col_1  col_2
1      data
2      data
3      data + Steve
4      data
5      data

I am converting my result from the API to a dataframe by doing this:

results = requests.get(url, auth, headers, data)
results_data = results.content
rawData = pd.read_csv(io.StringIO(results_data.decode("utf-8")))
My assumption is that the condition for merging a line with the previous one is that the value in col_2 is null; that condition can be changed according to your specific case.

f = pd.isnull(data.loc[:, "col_2"])
data.loc[:, "col_2"] = [
    "{:s} + {:s}".format(str(x), str(y)) if z else str(x)
    for x, y, z in zip(data.loc[:, "col_2"],
                       data.loc[:, "col_1"].shift(-1),
                       f.shift(-1, fill_value=False))
]
data = data.loc[~f, :].reset_index(drop=True)

I need to create the series f because it is going to be used both for merging the "incomplete" lines with the previous ones, and for filtering out the incomplete lines once the merge is done.
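Run on toy data shaped like the question's frame (column names are the question's, the values are made up), the recipe behaves like this:

```python
import pandas as pd

# Toy frame: the stray "Steve" row has a null col_2, so it gets
# folded into the previous row.
data = pd.DataFrame({
    "col_1": ["1", "2", "3", "Steve", "4", "5"],
    "col_2": ["data", "data", "data", None, "data", "data"],
})

f = pd.isnull(data["col_2"])
data["col_2"] = [
    "{} + {}".format(x, y) if z else str(x)
    for x, y, z in zip(data["col_2"],
                       data["col_1"].shift(-1),
                       f.shift(-1, fill_value=False))
]
data = data[~f].reset_index(drop=True)
# data["col_2"] is now: data, data, data + Steve, data, data
```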
How can I take values from a csv file and print the values that are within + or - 1 of a given value?
I am quite new to Python, so please bear with me. I am trying to pick one of the values printed, find it in the csv file, and print the values within + or - 1 around it. Here is the code that picks the values:

import pandas as pd
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

df = pd.read_csv(r"/Users/aaronhuang/Desktop/ffp/exfileCLEAN2.csv", skiprows=[1])
magnitudes = df['Magnitude '].values
times = df['Time '].values
zscores = np.abs(stats.zscore(magnitudes, ddof=1))
outlier_indicies = np.argwhere(zscores > 3).flatten()
numbers = print(times[outlier_indicies])

The values printed are below.

2455338.895 2455350.644 2455391.557 2455404.776 2455413.734 2455451.661 2455473.49 2455477.521 2455507.505 2455702.662 2455734.597 2455765.765 2455776.575 2455826.593 2455842.512 2455866.508 2455996.796 2456017.767 2456047.694 2456058.732 2456062.722 2456071.924 2456082.802 2456116.494 2456116.535 2456116.576 2456116.624 2456116.673 2456116.714 2456116.799 2456123.527 2456164.507 2456166.634 2456391.703 2456455.535 2456455.6 2456501.763 2456511.616 2456519.731 2456525.49 2456547.588 2456570.526 2456595.515 2456776.661 2456853.543 2456920.511 2456953.496 2457234.643 2457250.68 2457252.672 2457278.526 2457451.89 2457485.722 2457497.93 2457500.674 2457566.874 2457567.877 2457644.495 2457661.553 2457675.513

An example of the csv file is below.

Time         Magnitude  Magnitude error
2455260.853  19.472     0.150
2455260.900  19.445     0.126
2455261.792  19.484     0.168
2455262.830  19.157     0.261
2455264.814  19.376     0.150
...          ...        ...
2457686.478  19.063     0.176
2457689.480  19.178     0.128
2457690.475  19.386     0.171
2457690.480  19.092     0.112
2457691.476  19.191     0.122

For example, if I pick the first value, 2455338.895, I would like to print all the values within + or - 1 of it (in the Time column) (and later graph them). Some help would be greatly appreciated. Thank you in advance.
I think this is what you are looking for (assuming you want a single-number query, which you mentioned in the question):

numbers = times[outlier_indicies]
print(df[(df['Time'] < numbers[0] + 1) & (df['Time'] > numbers[0] - 1)]['Time'])

Looping over numbers to get all intervals is straightforward, if that is what you are interested in.

EDIT: The for loop looks like this:

print(pd.concat([df[(df['Time'] < i + 1) & (df['Time'] > i - 1)]['Time'] for i in numbers]))

The non-loop version, if there are no overlapping intervals in (numbers[i]-1, numbers[i]+1):

intervals = pd.DataFrame(data={'start': numbers - 1, 'finish': numbers + 1})
starts = pd.DataFrame(data={'start': 1}, index=intervals.start.tolist())
finishs = pd.DataFrame(data={'finish': -1}, index=intervals.finish.tolist())
transitions = pd.merge(starts, finishs, how='outer', left_index=True, right_index=True).fillna(0)
transitions['transition'] = (transitions.pop('finish') + transitions.pop('start')).cumsum()
B = pd.DataFrame(index=numbers)
pd.merge(transitions, B, how='outer', left_index=True, right_index=True).fillna(method='ffill').loc[B.index].astype(bool)
print(transitions[transitions.transition == 1].index)

In case of overlapping intervals, you can merge consecutive overlapping intervals in the intervals dataframe with the help of the following column, and then run the above code (it needs maybe a couple more lines to complete):

intervals['overlapping'] = (intervals.finish - intervals.start.shift(-1)) > 0
You can simply iterate over the numbers:

all_nums = numbers.split(" ")
first = all_nums[0]
threshold = 1
result = []
for num in all_nums:
    if abs(float(first) - float(num)) < threshold:
        result.append(num)  # float(num) if you want a number instead of a str
print(result)
Save each Excel-spreadsheet-row with header in separate .txt-file (saved as a parameter-sample to be read by simulation programs)
I'm a building energy simulation modeller with an Excel question about enabling automated large-scale simulations using parameter samples (generated using Monte Carlo). I want to save each row of an Excel spreadsheet in a separate .txt file, in a 'special' way, to be read by simulation programs. Let's say I have the following Excel file with 4 parameters (a, b, c, d) and 20 values underneath:

a b c d
2 3 5 7
6 7 9 1
3 2 6 2
5 8 7 6
6 2 3 4

Each row of this spreadsheet represents a simulation parameter sample. I want to store each row in a separate .txt file as follows (so 5 .txt files for this spreadsheet):

'1.txt' should contain:
a=2;
b=3;
c=5;
d=7;

'2.txt' should contain:
a=6;
b=7;
c=9;
d=1;

and so on for files '3.txt', '4.txt' and '5.txt'. So basically, match the header with its corresponding value underneath, for each row, in a separate .txt file ('header equals value;'). Is there an Excel add-in that does this, or is it better to use some VBA code? Anybody have an idea? (I'm quite experienced in simulation modelling but not in programming, hence this rather easy parameter-sample-saving question. Solutions in Python are also welcome if that's easier for you.)
My idea would be to use Python along with pandas, as it's one of the most flexible solutions and your use case might expand in the future. I'm going to try to keep this as simple as possible. I'm assuming that you have Python, that you know how to install packages via pip or conda, and that you are ready to run a Python script on whatever system you are using.

First your script needs to import pandas and read the file into a DataFrame:

import pandas as pd
df = pd.read_excel('path/to/your/file.xlsx')

(Note that you might need to install the xlrd package in addition to pandas.)

Now you have a powerful data structure that you can manipulate in plenty of ways. I guess the most intuitive approach would be to loop over all items. Use string formatting to put the strings together the way you need them:

for row in df.index:
    s = ""
    for col in df.columns:
        s += "{}={};\n".format(col, df[col][row])
    print(s)

Now you just need to write each string to a file using Python's built-in open(). I'll name the files by the index of the row, but note that this will overwrite text files created by earlier runs of the script. You might want to add something unique, like the date and time or the name of the input file, or increment the file names across multiple runs. All together we get:

import pandas as pd

df = pd.read_excel('path/to/your/file.xlsx')
file_count = 0
for row in df.index:
    s = ""
    for col in df.columns:
        s += "{}={};\n".format(col, df[col][row])
    file = open('test_{:03}.txt'.format(file_count), "w")
    file.write(s)
    file.close()
    file_count += 1

Note that it's probably not the most elegant way and that there are one-liners out there, but since you are not a programmer I thought you might prefer a more intuitive way that you can easily tweak yourself.
I got this to work in Excel. You can expand the length of the variables x, y and z to match your situation, and use LastRow/LastColumn methods to find the dimensions of your data set. I named the original worksheet "Data", as shown below.

Sub TestExportText()
    Dim Hdr(1 To 4) As String
    Dim x As Long
    Dim y As Long
    Dim z As Long

    For x = 1 To 4
        Hdr(x) = Cells(1, x)
    Next x

    x = 1
    For y = 1 To 5
        ThisWorkbook.Sheets.Add After:=Sheets(Sheets.Count)
        ActiveSheet.Name = y
        For z = 1 To 4
            With ActiveSheet
                .Cells(z, 1) = Hdr(z) & "=" & Sheets("Data").Cells(x + 1, z) & ";"
            End With
        Next z
        x = x + 1
        ActiveSheet.Move
        ActiveWorkbook.ActiveSheet.SaveAs Filename:="File" & y & ".txt", FileFormat:=xlTextWindows
        ActiveWorkbook.Close SaveChanges:=False
    Next y
End Sub
If you can save your Excel spreadsheet as a CSV file, then this Python script will do what you want.

with open('data.csv') as file:
    data_list = [l.rstrip('\n').split(',') for l in file]

header = data_list[0]
for counter, row in enumerate(data_list[1:], start=1):
    with open(str(counter) + '.txt', 'w') as file:
        for name, value in zip(header, row):
            file.write(name + '=' + value + ';\n')
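The same CSV route can also be written with csv.DictReader, which pairs each value with its header automatically and handles quoted fields. A sketch (the function name and file names are placeholders):

```python
import csv

def rows_to_files(csv_path):
    """Write one '<n>.txt' per data row, each line formatted 'header=value;'."""
    with open(csv_path, newline="") as f:
        # DictReader yields each row as a mapping of header -> value,
        # in the same order as the header line.
        for i, row in enumerate(csv.DictReader(f), start=1):
            with open("{}.txt".format(i), "w") as out:
                for header, value in row.items():
                    out.write("{}={};\n".format(header, value))
```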
Python: Remove all lines of csv files that do not match within a range
I have been given two sets of data in the form of csv files which have 23 columns and thousands of lines of data. The data in column 14 corresponds to the positions of stars in an image of a galaxy. The issue is that one set of data contains values for positions that do not exist in the second set of data. They need to both contain the same positions, but the positions are off by a value of 0.0002 between the two data sets: F435W.csv has values which are 0.0002 greater than the values in F550M.csv. I am trying to find the matches between the two files, but within a certain range, because all values are off by a certain amount. Then, I need to delete all lines of data that correspond to values that do not match. Below is a sample of the data from each of the two files.

F435W.csv:

NUMBER,FLUX_APER,FLUXERR_APER,MAG_APER,MAGERR_APER,FLUX_BEST,FLUXERR_BEST,MAG_BEST,MAGERR_BEST,BACKGROUND,X_IMAGE,Y_IMAGE,ALPHA_J2000,DELTA_J2000,X2_IMAGE,Y2_IMAGE,XY_IMAGE,A_IMAGE,B_IMAGE,THETA_IMAGE,ERRA_IMAGE,ERRB_IMAGE,ERRTHETA_IMAGE
1,2017.013,0.01242859,-8.2618,0,51434.12,0.3269918,-11.7781,0,0.01957931,1387.9406,541.916,49.9898514,41.5266996,8.81E+01,1.63E+03,1.44E+02,40.535,8.65,84.72,0.00061,0.00035,62.14
2,84.73392,0.01245409,-4.8201,0.0002,112.9723,0.04012135,-5.1324,0.0004,-0.002142646,150.306,146.7986,49.9942613,41.5444109,4.92E+00,5.60E+00,-2.02E-01,2.379,2.206,-74.69,0.00339,0.0029,88.88
3,215.1939,0.01242859,-5.8321,0.0001,262.2751,0.03840466,-6.0469,0.0002,-0.002961465,3248.686,52.8478,50.003155,41.5019044,4.77E+00,5.05E+00,-1.63E-01,2.263,2.166,-65.29,0.002,0.0019,-66.78
4,0.3796681,0.01240305,1.0515,0.0355,0.5823653,0.05487975,0.587,0.1023,-0.00425157,3760.344,11.113,50.0051049,41.4949256,1.93E+00,1.02E+00,-7.42E-02,1.393,1.007,-4.61,0.05461,0.03818,-6.68
5,0.9584663,0.01249223,0.0461,0.0142,1.043696,0.0175857,-0.0464,0.0183,-0.004156116,4013.2063,9.1225,50.0057256,41.4914444,1.12E+00,9.75E-01,1.09E-01,1.085,0.957,28.34,0.01934,0.01745,44.01

F550M.csv:

NUMBER,FLUX_APER,FLUXERR_APER,MAG_APER,MAGERR_APER,FLUX_BEST,FLUXERR_BEST,MAG_BEST,MAGERR_BEST,BACKGROUND,X_IMAGE,Y_IMAGE,ALPHA_J2000,DELTA_J2000,X2_IMAGE,Y2_IMAGE,XY_IMAGE,A_IMAGE,B_IMAGE,THETA_IMAGE,ERRA_IMAGE,ERRB_IMAGE,ERRTHETA_IMAGE,,FALSE
2,1921.566,0.01258874,-8.2091,0,37128.06,0.2618096,-11.4243,0,0.01455503,4617.5225,554.576,49.9887896,41.5264699,6.09E+01,8.09E+02,1.78E+01,28.459,7.779,88.63,0.00054,0.00036,77.04,,
3,1.055918,0.01256313,-0.0591,0.0129,9.834856,0.1109255,-2.4819,0.0122,-0.002955142,3936.4946,85.3255,49.9949149,41.5370016,3.98E+01,1.23E+01,1.54E+01,6.83,2.336,24.13,0.06362,0.01965,23.98,,
4,151.2355,0.01260153,-5.4491,0.0001,184.0693,0.03634057,-5.6625,0.0002,-0.002626019,3409.2642,76.9891,49.9931935,41.5442109,4.02E+00,4.35E+00,-1.47E-03,2.086,2.005,-89.75,0.00227,0.00198,66.61,,
5,0.3506025,0.01258874,1.138,0.039,0.3466277,0.01300407,1.1503,0.0407,-0.002441164,3351.9893,8.9147,49.9942299,41.5451727,4.97E-01,5.07E-01,7.21E-03,0.715,0.702,62.75,0.02,0.01989,82.88

Below is the code I have so far, but I'm unsure how to find matches based on that specific column. I am very new to Python, and this task is probably way beyond my knowledge of Python, but I desperately need to figure it out. I've been working on this single task for weeks, trying different methods. Thank you in advance!

import csv
with open('F435W.csv') as csvF435:
    readCSV = csv.reader(csvF435, delimiter=',')
with open('F550M.csv') as csvF550:
    readCSV = csv.reader(csvF550, delimiter=',')
for x in range(0, 6348):
    a = csvF435[x]
    for y in range(0, 6349):
        b = csvF550[y]
        if b < a + 0.0002 and b > a - 0.0002:
            newlist.append(b)
            break
You can use the following sample:

import csv

def isfloat(value):
    try:
        float(value)
        return True
    except ValueError:
        return False

interval = 0.0002

with open('F435W.csv') as csvF435:
    csvF435_in = csv.reader(csvF435, delimiter=',')
    # clean the output file content before processing
    with open("merge.csv", "w") as merge_out:
        pass
    with open("merge.csv", "a") as merge_out:
        # write the header of the output csv file
        for header in csvF435_in:
            merge_out.write(','.join(header) + '\n')
            break
        for l435 in csvF435_in:
            with open('F550M.csv') as csvF550:
                csvF550_in = csv.reader(csvF550, delimiter=',')
                for l550 in csvF550_in:
                    if isfloat(l435[13]) and isfloat(l550[13]) and abs(float(l435[13]) - float(l550[13])) < interval:
                        merge_out.write(','.join(l435) + '\n')

With the F435W.csv and F550M.csv samples from the question, merge.csv contains:

NUMBER,FLUX_APER,FLUXERR_APER,MAG_APER,MAGERR_APER,FLUX_BEST,FLUXERR_BEST,MAG_BEST,MAGERR_BEST,BACKGROUND,X_IMAGE,Y_IMAGE,ALPHA_J2000,DELTA_J2000,X2_IMAGE,Y2_IMAGE,XY_IMAGE,A_IMAGE,B_IMAGE,THETA_IMAGE,ERRA_IMAGE,ERRB_IMAGE,ERRTHETA_IMAGE
2,84.73392,0.01245409,-4.8201,0.0002,112.9723,0.04012135,-5.1324,0.0004,-0.002142646,150.306,146.7986,49.9942613,41.5444109,4.92E+00,5.60E+00,-2.02E-01,2.379,2.206,-74.69,0.00339,0.0029,88.88
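If pandas is an option, this nearest-match-within-tolerance join is also what pandas.merge_asof does directly. A sketch on toy positions (the column names and values below are made up for illustration; both frames must be sorted on the join key):

```python
import pandas as pd

# Toy positions: two of the left values sit ~0.0002 above a right value.
f435 = pd.DataFrame({"NUMBER_435": [1, 2, 3],
                     "ALPHA_J2000": [49.9898514, 49.9942613, 50.0031550]})
f550 = pd.DataFrame({"NUMBER_550": [2, 4],
                     "ALPHA_J2000": [49.9896514, 50.0029550]})

# tolerance is set slightly above 0.0002 to absorb floating-point error
# in the offset; unmatched rows come back with NaN and are dropped.
matched = pd.merge_asof(
    f435.sort_values("ALPHA_J2000"),
    f550.sort_values("ALPHA_J2000"),
    on="ALPHA_J2000",
    direction="nearest",
    tolerance=0.0003,
).dropna(subset=["NUMBER_550"])
```

This avoids the O(n*m) rescanning of the second file for every row of the first.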
Python data wrangling issues
I'm currently stumped by some basic issues with a small data set. Here are the first three lines to illustrate the format of the data:

"Sport","Entry","Contest_Date_EST","Place","Points","Winnings_Non_Ticket","Winnings_Ticket","Contest_Entries","Entry_Fee","Prize_Pool","Places_Paid"
"NBA","NBA 3K Crossover #3 [3,000 Guaranteed] (Early Only) (1/15)","2015-03-01 13:00:00",35,283.25,"13.33","0.00",171,"20.00","3,000.00",35
"NBA","NBA 1,500 Layup #4 [1,500 Guaranteed] (Early Only) (1/25)","2015-03-01 13:00:00",148,283.25,"3.00","0.00",862,"2.00","1,500.00",200

The issues I am having after using read_csv to create a DataFrame:

The presence of commas in certain values (such as Prize_Pool) results in pandas treating these entries as strings. I need to convert them to floats in order to make certain calculations. I've used Python's replace() function to get rid of the commas, but that's as far as I've gotten.

The Contest_Date_EST column contains timestamps, but some are repeated. I'd like to subset the entire dataset into one that has only unique timestamps. It would be nice to have a choice in which repeated entry or entries are removed, but at the moment I'd just like to be able to filter the data to unique timestamps.
Use the thousands=',' argument for numbers that contain a comma:

In [1]: from pandas import read_csv
In [2]: d = read_csv('data.csv', thousands=',')

You can check that Prize_Pool is numerical:

In [3]: type(d.loc[0, 'Prize_Pool'])
Out[3]: numpy.float64

To drop duplicate rows, keeping the first observed (you can also keep the last):

In [7]: d.drop_duplicates('Contest_Date_EST', keep='first')
Out[7]:
  Sport                                              Entry  \
0   NBA  NBA 3K Crossover #3 [3,000 Guaranteed] (Early ...

     Contest_Date_EST  Place  Points Winnings_Non_Ticket Winnings_Ticket  \
0 2015-03-01 13:00:00     35  283.25               13.33               0

   Contest_Entries Entry_Fee Prize_Pool Places_Paid
0              171        20       3000          35

(The original answer used d.ix and take_last=False, which have since been removed from pandas; loc and keep= are the current spellings.)
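A self-contained run of those two calls, with the sample trimmed to three columns and fed through io.StringIO instead of a file on disk:

```python
import io
import pandas as pd

csv_text = (
    '"Sport","Contest_Date_EST","Prize_Pool"\n'
    '"NBA","2015-03-01 13:00:00","3,000.00"\n'
    '"NBA","2015-03-01 13:00:00","1,500.00"\n'
)

# thousands=',' strips the grouping commas during parsing,
# so Prize_Pool comes out as float64 rather than object/str.
d = pd.read_csv(io.StringIO(csv_text), thousands=",")

# keep='first' retains the first row for each repeated timestamp.
unique = d.drop_duplicates("Contest_Date_EST", keep="first")
```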
Edit: Just realized you're using pandas - should have looked at that. I'll leave this here for now in case it's applicable, but if it gets downvoted I'll take it down by virtue of peer pressure :) I'll try and update it to use pandas later tonight.

Seems like itertools.groupby() is the tool for this job. Something like this?

import csv
import itertools

class CsvImport():
    def Run(self, filename):
        # Get the formatted rows from the CSV file
        rows = self.readCsv(filename)
        for key in rows.keys():
            print("\nKey: " + key)
            i = 1
            for value in rows[key]:
                print("\nValue {index} : {value}".format(index=i, value=value))
                i += 1

    def readCsv(self, fileName):
        with open(fileName, newline='') as csvfile:
            reader = csv.DictReader(csvfile)
            # Keys may or may not be pulled in with extra space by DictReader().
            # The next line simply creates a small dict of stripped keys to original padded keys.
            keys = {key.strip(): key for key in reader.fieldnames}
            # Format each row into the final string. Note that groupby() only
            # groups consecutive rows, so the file should already be sorted
            # by Contest_Date_EST.
            groupedRows = {}
            for k, g in itertools.groupby(reader, lambda x: x["Contest_Date_EST"]):
                groupedRows[k] = [self.normalizeRow(list(v.values())) for v in g]
            return groupedRows

    def normalizeRow(self, row):
        row[1] = float(row[1].replace(',', ''))  # "Prize_Pool"
        # and so on
        return row

if __name__ == "__main__":
    CsvImport().Run("./Test1.csv")

Output:
More info: https://docs.python.org/2/library/itertools.html
Hope this helps :)