Selecting DataFrame column names from a csv file - python
I have a .csv file to read into a DataFrame, and the names of the columns are in the same .csv file in the previous rows. Usually I drop all the 'unnecessary' rows to create the DataFrame and then hardcode the names of each column:
Trigger time,2017-07-31,10:45:38
CH,Signal name,Input,Range,Filter,Span
CH1, "Tin_MIX_Air",TEMP,PT,Off,2000.000000,-200.000000,degC
CH2, "Tout_Fan2b",TEMP,PT,Off,2000.000000,-200.000000,degC
CH3, "Tout_Fan2a",TEMP,PT,Off,2000.000000,-200.000000,degC
CH4, "Tout_Fan1a",TEMP,PT,Off,2000.000000,-200.000000,degC
Here you can see the rows where the column names appear in double quotes ("Tin_MIX_Air", "Tout_Fan2b", etc.); there are exactly 16 rows with names.
Logic/Pulse,Off
Data
Number,Date&Time,ms,CH1,CH2,CH3,CH4,CH5,CH7,CH8,CH9,CH10,CH11,CH12,CH13,CH14,CH15,CH16,CH20,Alarm1-10,Alarm11-20,AlarmOut
NO.,Time,ms,degC,degC,degC,degC,degC,degC,%RH,%RH,degC,degC,degC,degC,degC,Pa,Pa,A,A1234567890,A1234567890,A1234
1,2017-07-31 10:45:38,000,+25.6,+26.2,+26.1,+26.0,+26.3,+25.7,+43.70,+37.22,+25.6,+25.3,+25.1,+25.3,+25.3,+0.25,+0.15,+0.00,LLLLLLLLLL,LLLLLLLLLL,LLLL
And here is where the values of each variable start.
What I need to do is create a DataFrame from this .csv and use these names as the column names. I'm new to Python and I'm not very sure how to do it.
import pandas as pd

path = r'path-to-file.csv'

data = pd.DataFrame()
with open(path, 'r') as f:
    for line in f:
        data = pd.concat([data, pd.DataFrame([tuple(line.strip().split(','))])], ignore_index=True)

data.drop(data.index[range(0, 29)], inplace=True)  # drop the 29 preamble rows
x = len(data.iloc[0])
data.drop(data.columns[[0, 1, 2, x-1, x-2, x-3]], axis=1, inplace=True)  # drop No./time/ms and alarm columns
data.reset_index(drop=True, inplace=True)
data = data.T.reset_index(drop=True).T
data = data.apply(pd.to_numeric)
This is what I've done so far to get my DataFrame with the useful data: I drop all the other columns that aren't useful to me and keep only the values. The last three lines reset the row/column indexes and convert the whole df to floats. What I would like is to name the columns with each of the names shown in the first snippet; as I said before, I'm doing this manually as:
data.columns = ['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p']
But I would like to get them from the .csv file, since there's a possibility of the CH# - "Name" combination changing.
Thank you very much for the help!
Comment: Would it be possible for this to work within the other "open" loop that I have?
Assume Column Names from Row 2 up to 6, Data from Row 7 up to EOF.
For instance (untested code)
data = None
columns = []
with open(path) as fh:
    for row, line in enumerate(fh, 1):
        if 2 < row <= 6:
            # header block: grab the name from the second field
            ch, name = line.split(',')[:2]
            columns.append(name)
        elif row > 6:
            # data block
            row_data = [tuple(line.strip().split(','))]
            if data is None:
                data = pd.DataFrame(row_data, columns=columns)
            else:
                data = pd.concat([data, pd.DataFrame(row_data, columns=columns)], ignore_index=True)
Question: ... I would like to get them from the .csv file
Start with:
with open(path) as fh:
    for row, line in enumerate(fh, 1):
        if row > 2:
            ch, name = line.split(',')[:2]
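Putting the pieces together, here is a minimal sketch for the exact layout shown in the question, assuming the 16 CHn,"Name",... lines sit on file lines 3-18 and the data block starts right after the 29 preamble lines the question's own code drops (adjust both numbers if the layout shifts):

import pandas as pd

path = r'path-to-file.csv'  # placeholder path, as in the question

# collect the 16 channel names from the CHn,"Name",... lines
names = []
with open(path) as fh:
    for row, line in enumerate(fh, 1):
        if 3 <= row <= 18:
            names.append(line.split(',')[1].strip().strip('"'))
        elif row > 18:
            break

# let pandas skip the whole preamble (29 lines here, matching the question's
# drop of rows 0-28) and keep only the 16 value columns at positions 3-18
data = pd.read_csv(path, skiprows=29, header=None, usecols=range(3, 19))
data.columns = names
data = data.apply(pd.to_numeric, errors='coerce')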
Related
Load csv files with multiple columns into several dataframes
I am trying to load some large csv files which appear to have multiple columns, and I am struggling with it. I don't know who designed these csv files, but they appear to have event data as well as log data in each csv. At the start of each csv file there are also some initial status lines. Everything is in separate rows. The Event data uses 2 columns (Date and Event comment). The Log data has multiple columns (Date plus 20+ columns). I give an example of the type of data setup below:

Initial; [Status] The Zoo is Closed;
Initial; [Status] The Sun is Down;
Initial; [Status] Monkeys are sleeping;
Time;No._Of_Monkeys;Monkeys_inside;Monkeys_Outside;Number_of_Bananas
06:00; 5;5;0;10
07:00; 5;5;0;10
07:10;[Event] Sun is up
08:00; 5;5;0;10
08:30; [Event] Monkey Doors open and Zoo Opens
09:00; 5;5;0;10
08:30; [Event] Monkey Goes out
09:00; 5;4;1;10
08:30; [Event] Monkey Eats Banana
09:00; 5;4;1;9
08:30; [Event] Monkey Goes out
09:00; 5;5;2;9

Now what I want to do is put the Log data into one dataframe and the Initial and Event data into another. I can read the csv files with csv_reader and go row by row, but this is proving very slow, especially when going through multiple files, each containing about 40k rows. Below is the code I am using:

csv_files = [f for f in os.listdir('.') if f.endswith('.log')]

for file in csv_files:
    # Open the CSV file in read mode
    with open(file, 'r') as csv_file:
        # Use the csv module to parse the file
        csv_reader = csv.reader(csv_file, delimiter=';')
        # Loop through the rows of the file
        for row in csv_reader:
            # If the row has event data
            if len(row) == 2:
                # Add the row to the EventLog
                EventLog = EventLog.append(pd.Series(row), ignore_index=True)
            # If the row has more than one separator
            elif len(row) > 2:
                # First row entered into the data log will be the column headers
                if DataLog.empty:
                    DataLog = pd.DataFrame(columns=row)
                else:
                    # Add the row to the DataLog DataFrame
                    DataLog = DataLog.append(pd.Series(row), ignore_index=True)

Is there a better way to do this, preferably faster? If I use pandas read_csv, it seems to only load the Initial data, i.e. the first 3 lines of my data above. I can use skiprows to skip down to where the data is, and then it will load the rest, but I can't seem to figure out how to separate out the event and log data from there, so I'm looking for ideas before I lose what little hair I have left.
If I understood your data format correctly, I would do something like this:

# simply read the data as one column, without headers and indexes
df = pd.read_csv("your_file_name.log", header=None, sep=',')

# split the values in this column by ; (each row becomes a list of values)
tmp_df = df[0].str.split(";")

# drop the empty strings produced by the trailing ; on the first 3 rows
tmp_df = tmp_df.map(lambda x: [y for y in x if y != ''])

# rows with 2 values go into one dataframe
EventLog = pd.DataFrame(tmp_df[tmp_df.str.len() == 2].to_list())

# the other rows go into another dataframe (the first of them holds the column names)
data_log_tmp = tmp_df[tmp_df.str.len() != 2].to_list()
DataLog = pd.DataFrame(data_log_tmp[1:], columns=data_log_tmp[0])
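A possible follow-up, not part of the original answer: after the split, every value in DataLog is still a string, so the numeric Log columns may need an explicit conversion, for instance:

# hypothetical extra step: convert everything except the Time column to numbers
DataLog.iloc[:, 1:] = DataLog.iloc[:, 1:].apply(pd.to_numeric, errors='coerce')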
Here is an example of loading a CSV file, assuming that the Monkeys_inside field is always NaN in Event data and assigned in Log data, because I used it as the condition to retrieve the event data:

df = pd.read_csv('huge_data.csv', skiprows=3, sep=';')
log_df = df.dropna().reset_index(drop=True)
event_df = df[~df['Monkeys_inside'].notnull()].reset_index(drop=True)

This also assumes that all your CSV files contain those 3 Status lines. Keep in mind that the dataframe will hold duplicated rows if you have some in your csv files; to remove them, you just need to call the drop_duplicates function and you're good:

event_df = event_df.drop_duplicates()
Changing Headers in .csv files
Right now I am trying to read in data which is provided in a messy format. Here is an example:

#Name,
#Comment,""
#ExtComment,""
#Source,
[Data]
1,2
3,4
5,6
#[END_OF_FILE]

When working with one or two of these files, I have manually changed the ['DATA'] header to ['x', 'y'] and am able to read in the data just fine by skipping the first few rows and not reading the last line. However, right now I have 30+ files, split between two different folders, and I am trying to figure out the best way to read in the files and change the header of each file from ['DATA'] to ['x', 'y']. The csv files are in a folder one level below the file that is supposed to read them (i.e. folder 1 contains the code below, and folder 2 contains the csv files; folder 1 contains folder 2). Here is what I have right now:

# sets - the set containing the name of each file (i.e. [file1, file2])
# df - the dataframe in which you are going to store the data
# dataLabels - the headers you want to search for within the .csv file
# skip - the number of rows you want to skip
# newHeader - what you want to change the column headers to be
# pathName - the path where the files are located

def reader(sets, df, dataLabels, skip, newHeader, pathName):
    for i in range(len(sets)):
        df_temp = pd.read_csv(glob.glob(pathName + sets[i] + ".csv"), sep=r'\s*,', skiprows=skip, engine='python')[:-1]
        df_temp.column.value[0] = [newHeader]
        for j in range(len(dataLabels)):
            df_temp[dataLabels[j]] = pd.to_numeric(df_temp[dataLabels[j]], errors='coerce')
        df.append(df_temp)
    return df

When I run my code, I run into the error:

No columns to parse from file

I am not quite sure why - I have tried skipping past the [DATA] header and I still receive that error. Note, for this example I would like the headers to be 'x', 'y' - I am trying to make a universal function so that I could change it to something more useful depending on what I am measuring.
If the #[DATA] row is to be replaced regardless, just ignore it. You can just tell pandas to ignore lines that start with # and then specify your own names:

import pandas as pd

df = pd.read_csv('test.csv', comment='#', names=['x', 'y'])

which gives

   x  y
0  1  2
1  3  4
2  5  6
Expanding Kraigolas's answer, to do this with multiple files you can use a list comprehension (note that glob.glob returns a list, so the matches are flattened while collecting them):

files = [f for set_num in sets for f in glob.glob(f"{pathName}{set_num}.csv")]
df = pd.concat([pd.read_csv(file, comment="#", names=["x", "y"]) for file in files])
If you're lucky, you can use Kraigolas's answer to treat those lines as comments. In other cases you may be able to use the skiprows argument to skip header rows:

df = pd.read_csv(path, skiprows=10, skipfooter=2, names=["x", "y"])

And yes, I do have an unfortunate file with a 10-row heading and 2 rows of totals. Unfortunately I also have very unfortunate files where the number of headings changes. In this case I used the following code to iterate until I find the first "good" row, then create a new dataframe from the rest of the rows. The names in this case are taken from the first "good" row and the types from the first data row. This is certainly not fast; it's a last-resort solution. If I had a better solution I'd use it:

data = df
if first_col not in df.columns:
    # Skip rows until we find the first col header
    for i, row in df.iterrows():
        if row[0] == first_col:
            data = df.iloc[(i + 1):].reset_index(drop=True)
            # Read the column names
            series = df.iloc[i]
            series = series.str.strip()
            data.columns = list(series)
            # Use only existing column types
            types = {k: v for k, v in dtype.items() if k in data.columns}
            # Apply the column types again
            data = data.astype(dtype=types)
            break
return data

In this case the condition is finding the first column name (first_col) in the first cell. This can be adapted to use different conditions, e.g. looking for the first numeric cell:

columns = ["x", "y"]
dtypes = {"x": "float64", "y": "float64"}

data = df
# Skip until we find the first numeric value
for i, row in df.iterrows():
    if row[0].isnumeric():
        # keep from the first numeric row onwards
        data = df.iloc[i:].reset_index(drop=True)
        # Apply names and types
        data.columns = columns
        data = data.astype(dtype=dtypes)
        break
return data
pd.read_csv question with two different tables on top of each other in .csv
I have got a csv file that is set up with information on top of information, and I'm struggling to read it into a dataframe. The raw CSV looks like: [screenshot of the raw CSV in the original post]. I am hoping to get essentially 3 different things:

1) Define the date and company name in the first row
2) Put the summary table (top table) into a dataframe
3) Put the detailed sales table into another dataframe

I tried

df = pd.read_csv(filepath, error_bad_lines=False)

which just gives me the summary table, but in only 3 columns, due to the first row having only 3 columns. Any ideas on how to read these files? The row count for the summary table is not fixed (it varies how many rows there are). Any help would be much appreciated! Thanks!
You can specify the number of rows you want to read with the nrows parameter, and you can use skiprows to skip reading certain rows in pd.read_excel. You can read the top table into a df like below; here, you skip the first row, which has some not-so-useful headers, and read the next 10 rows, which contain the top table's data:

df1 = pd.read_excel('test.xls', skiprows=1, nrows=10, usecols='A:D')

Then the second table goes into another df like this; here, you skip the rows already read into df1 and read the remaining data from the file:

df2 = pd.read_excel('test.xls', skiprows=6)
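For the actual .csv input from the question, the same idea should carry over, since pd.read_csv accepts the same skiprows/nrows parameters. The row counts below are placeholders, because the question notes that the summary table's length varies per file:

import pandas as pd

# hypothetical layout: 1 title row, a 10-row summary table with its header, then the detail table
summary_df = pd.read_csv(filepath, skiprows=1, nrows=10)
detail_df = pd.read_csv(filepath, skiprows=12)  # title row + summary header + 10 summary rows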
For those interested, this is what I used to solve the problem:

from csv import reader

with open('*.csv', 'r') as read_obj:
    csv_reader = reader(read_obj)
    list1 = []
    list2 = []
    list3 = []
    for row in csv_reader:
        if len(row) == 3:
            list1.append(row)
        if len(row) == 4:
            list2.append(row)
        if len(row) == 7:
            list3.append(row)

df1 = pd.DataFrame(list1)
df2 = pd.DataFrame(list2)
df3 = pd.DataFrame(list3)
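A hedged generalization of the same idea: instead of hard-coding one list per expected width, the rows can be bucketed by field count, which also covers files whose widths differ (the filename here is a placeholder):

from collections import defaultdict
from csv import reader
import pandas as pd

tables = defaultdict(list)                # field count -> list of rows
with open('sales.csv', 'r') as read_obj:  # placeholder filename
    for row in reader(read_obj):
        tables[len(row)].append(row)

# one dataframe per distinct row width
frames = {width: pd.DataFrame(rows) for width, rows in tables.items()}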
How to combine multiple columns into one long column using python and pandas
Hi everyone, I am currently working on data like the following: [example of the original data file, shown as an image in the original post]. There are a total of 51 files, each with more than 800 oscillating columns, e.g. (Time, ID, x1, x2, ID, x1, x2, ...); the columns are all unlabelled. Within each file, rows have different numbers of columns [shape of one data file, image]. I need to merge all 51 files into one file and then stack the columns vertically [example of output file, image], so that for each timestamp, each student has a specific row with their location x, y. Can anyone please help me with this? Thanks.

I used the following code to merge CSV files with different columns, but the output file is twice the size of the originals (e.g. 100MB vs 50MB). My approach was to combine the files using the maximum number of columns and expand each row to that width. However, this approach created a lot of missing values in the data and thus increased the size of the output files.

import os
import glob
import pandas as pd

def concatenate(indir="C:\Test Files", outfile="F:\Research Assitant\PROJECT_Position Data\Test File\Concatenate.csv"):
    os.chdir(indir)
    fileList = glob.glob("*.csv")
    dfList = []
    ### Loop over each file
    for filename in fileList:
        ### Skip the first four lines, then get the number of columns in each line
        with open(filename, 'r') as f:
            for _ in range(4):
                next(f)
            col_count = [len(l.split(",")) for l in f.readlines()]
        ### Read the current csv file
        df = pd.read_csv(filename, header=None, delimiter=",", names=range(max(col_count)), skiprows=4, keep_default_na=False, na_values=[""])
        ### Append to the list
        dfList.append(df)
    concatDf = pd.concat(dfList, axis=0)
    concatDf.to_csv(outfile, index=None)

Is there any way to reduce the size of the output files? Or a more efficient way to deal with heterogeneous CSV files in python? And how do I stack the columns vertically after merging all the CSV files?
with open(os.path.join(working_folder, file_name)) as f:
    student_data = []
    for line in f:
        row = line.strip().split(",")
        # not counting the time column, the data repeats every 4 columns
        number_of_results = round(len(row[1:]) / 4)
        time_column = row[0]
        results = row[1:]
        for i in range(number_of_results):
            data = [time_column] + results[i*4: (i+1)*4]
            student_data.append(data)

df = pd.DataFrame(student_data, columns=["Time", "ID", "Name", "x1", "x2"])
df
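A hedged pandas-native alternative: once a file has been read into a rectangular frame (as in the question's own read_csv call, where short rows are padded with NaN), the 4-column groups can be sliced and stacked without looping over lines. The group size and labels below are assumptions based on the layout described in the question:

import pandas as pd

def stack_groups(df, group_size=4, labels=("Time", "ID", "Name", "x1", "x2")):
    # column 0 is Time, followed by repeating (ID, Name, x1, x2) groups
    chunks = []
    n_groups = (df.shape[1] - 1) // group_size
    for i in range(n_groups):
        cols = [0] + list(range(1 + i * group_size, 1 + (i + 1) * group_size))
        block = df.iloc[:, cols].copy()
        block.columns = list(labels)
        chunks.append(block)
    # dropping rows whose ID is NaN discards the padding added for short rows,
    # which is also what was inflating the merged output file
    return pd.concat(chunks, ignore_index=True).dropna(subset=["ID"])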
Pandas - read_table read selected lines
I work with text files that contain some basic information in the first 6 rows, including empty rows. I have to import and process the data, then export it into another csv. Here is an example of the first 6 rows:

Foov7.9 - bar.raw created at 10:45:25 on 10.02.2015:
(empty row)
(empty row)
A B C D
a b c d
(empty row)

In pandas I use row 4 (A B C D) as the header for the dataframe:

data1 = pd.read_table(dataset1, header=1, skiprows=(4, 5), index_col=None, delimiter=r"\t", engine='python')

When writing to_csv after processing the data, I would now like to put back the first 6 rows, but I already fail when reading them. By solely writing the header from row 4 into the csv, I would lose all the additional information. How can I read these rows and later put them back into the csv without interfering with the dataframe header?
There is most likely a neater way to do it, but this works, and it only reads your data once, for performance:

# (1) Read the data
in_df = pd.read_excel("test.xls", header=0)

# (2) create a header for later
header = in_df[:5]  # only the first rows

# (3) save the header columns for the concat later
cols = list(header.columns.values)  # a list with the headers

# (4) create a copy for data processing
data = in_df
data.rename(columns=in_df.iloc[2, :], inplace=True)  # rename your columns
data = data[5:]                     # you want just the data body
data = data.reset_index(drop=True)  # reindex

# DO WHATEVER WITH DATA

# (5) output: concat [header & data], write the output
data.columns = cols                     # we need the old col names for the concat
out_df = pd.concat([header, data])      # do the concat
out_df = out_df.reset_index(drop=True)  # reset the index (if you want to)
out_df.to_csv("out.csv")                # write it; use to_csv("out.csv", index=False) if you don't want the index in the output
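An alternative sketch (untested, and not part of the answer above): capture the first 6 raw lines up front and paste them back on export, so they never enter the DataFrame at all. dataset1 is the variable from the question; out.csv is a placeholder name:

import pandas as pd

with open(dataset1) as f:
    preamble = [next(f) for _ in range(6)]  # the six metadata/header lines, verbatim

# same read call as in the question
data1 = pd.read_table(dataset1, header=1, skiprows=(4, 5), index_col=None,
                      delimiter=r"\t", engine='python')

# ... process data1 ...

with open("out.csv", "w", newline="") as f:
    f.writelines(preamble)                      # restore the original 6 lines
    data1.to_csv(f, index=False, header=False)  # the header already lives in the preamble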