How to read an excel file with nested columns in pandas - python

Using pandas, I'm trying to read an Excel file that looks like the following:
(screenshot of the Excel file with nested column headers omitted)
I tried to read the excel file using the regular approach by running: df = pd.read_excel('filename.xlsx', skiprows=6).
But the problem is that I don't get all the column names I need; most of them come back as Unnamed: 1, Unnamed: 2, and so on.
Is there a way to solve this and read all the columns? Or an approach where I can convert it to a JSON file?
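One way to handle nested headers is to let pandas build a MultiIndex from several header rows via the header parameter of read_excel. A minimal sketch, assuming the two header rows sit at positions 6 and 7 of the sheet (the filename and row positions are placeholders to adjust):
import pandas as pd
# Read two stacked header rows into a MultiIndex; the row positions
# (6 and 7) are assumptions -- match them to the actual sheet layout.
df = pd.read_excel('filename.xlsx', header=[6, 7])
# Flatten the two levels into single names like 'parent_child',
# skipping the 'Unnamed: ...' placeholders pandas adds for merged cells
df.columns = ['_'.join(str(p) for p in col if not str(p).startswith('Unnamed'))
              for col in df.columns]
# Optionally export to JSON, one record per row
df.to_json('filename.json', orient='records')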

Related

Skipping rows and columns when reading csv with Pandas

I need help reading a CSV file with pandas.
I have a .csv file that records machine parameters, and I want to read it with pandas and analyze it. The problem is that the file is not in a proper table format: there are a lot of empty rows and columns, and the parameter values only start at line 301 (for example).
How can I read this CSV file properly?
You can use skiprows:
pd.read_csv(csv_file, skiprows=301)
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html
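Since the question also mentions many empty rows and columns, a follow-up dropna pass can clean those out. A minimal sketch, assuming the table header sits on line 301 (the path and row count are placeholders):
import pandas as pd
# Skip the 300 non-table lines so line 301 becomes the header row
df = pd.read_csv('machine_parameters.csv', skiprows=300)
# Drop rows and columns that are entirely empty
df = df.dropna(axis=0, how='all').dropna(axis=1, how='all')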

How to create a dataframe by using column names and text/excel files using pyspark

I have text/Excel files in my blob storage, and I want to fetch only certain columns from those files and create a dataframe.
For example:
a, b, c, d, e are the columns present in my file.
I have the path for that file: /mnt/reservoir/files/file1.txt
I want to create a dataframe from this file, fetching only columns b, c, and d.
How can I achieve this using PySpark?
To work around this and create a dataframe with only specific columns, read the whole text/Excel file first, then create a second dataframe from the columns you need.
Code Example:
df = spark.read.format("csv").option("header", "true").option("delimiter", ",").load("/mnt/reservoir/files/file1.txt")
df2 = df.select(df.b, df.c, df.d)
OR
df3 = df.select(df.columns[1:4])
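If the column names are known up front, select also accepts plain strings, which avoids attribute access for names that are not valid Python identifiers:
df4 = df.select("b", "c", "d")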
(Screenshots omitted: the sample file being read and the resulting output.)

Parsing and saving the rows of excel file using pandas

I have an Excel file with a column that contains some text in each row.
I'm using pandas pd.read_excel() to open this .xlsx file. Then I would like to save every row of this column as a distinct .txt file (containing the text from that row). Is it possible to do this via pandas?
The basic idea would be to use an iterator to loop over the rows, opening a file for each one and writing the value in, something like:
import pandas as pd

df = pd.read_excel('test.xlsx')
for i, value in enumerate(df['column']):
    # one .txt file per row, named after the row's position
    with open(f'row-{i}.txt', 'w') as fd:
        fd.write(str(value))  # str() guards against non-string cells

Read CSV file with specific columns as String in PySpark

I have a large file which is dynamically generated, a small sample of which is given below:
ID,FEES,I_CLSS
11,5555,00000110
12,5555,654321
13,5555,000030
14,5555,07640
15,5555,14550
17,5555,99070
19,5555,090090
My issue is that this file will always have a column like I_CLSS whose values start with leading zeros. I'd like to read the file into a Spark dataframe with the I_CLSS column as StringType.
For this, in python I can do something like,
df = pandas.read_csv('INPUT2.csv',dtype={'I_CLSS': str})
But is there an alternative to this command in pyspark?
I understand that I can manually specify the schema of a file in PySpark, but that would be extremely difficult for a file whose columns are dynamically generated.
So I'd appreciate it if somebody could help me with this.
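One option, sketched below: Spark's CSV reader leaves every column as StringType unless inferSchema is enabled, so simply not inferring the schema preserves the leading zeros; any genuinely numeric columns can be cast back afterwards (the path and the cast column are placeholders):
from pyspark.sql.functions import col
# With inferSchema left at its default (false), every column is read
# as a string, so values like 00000110 keep their leading zeros
df = spark.read.format("csv").option("header", "true").load("INPUT2.csv")
# Cast only the columns that are known to be numeric
df = df.withColumn("FEES", col("FEES").cast("int"))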

How to read data from excel from a particular column in python

I have an Excel sheet that I am reading with pandas in Python.
Now I want to read the file based on one column: if that column has a value, skip the row; if it is empty, read the row and store the values in a list.
Here is a screenshot of the sheet (image omitted); it has a UniqueIdentifier column whose cells are either 'yes' or empty.
In the sheet above, when UniqueIdentifier is 'yes' that row should not be read, but when it is empty, reading should start from that row.
How do I do that in Python, and how do I get the row index so that, after performing some function, I can write back to the blank UniqueIdentifier column to mark the row as read?
This is possible for CSV files. There you could do:
iter_csv = pandas.read_csv('file.csv', iterator=True, chunksize=100000)
df = pd.concat([chunk[chunk['UniqueIdentifier'] == 'True'] for chunk in iter_csv])
But pd.read_excel does not offer to return an iterator object; maybe some other Excel readers can, but I don't know which ones. Nevertheless, you could export your Excel file as CSV and use the solution for CSV files.
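For the Excel file itself, the filtering can happen after a single read_excel call, and the frame can be written back to mark rows as processed. A minimal sketch, assuming the column is literally named 'UniqueIdentifier' and unprocessed rows have it empty (the file and data column names are placeholders):
import pandas as pd

df = pd.read_excel('file.xlsx')
# Rows with an empty UniqueIdentifier cell have not been read yet
mask = df['UniqueIdentifier'].isna()
values = df.loc[mask, 'SomeColumn'].tolist()  # 'SomeColumn' is a placeholder

# ... process `values` ...

# Mark those rows as read and save the sheet back out
df.loc[mask, 'UniqueIdentifier'] = 'yes'
df.to_excel('file.xlsx', index=False)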
