So I have a data file from which I must extract specific data, using:
import numpy

x = 15         # need a way for the code to work out how many metadata lines to skip
maxcol = 2000  # need a way to find the final row of data
data = numpy.genfromtxt('data.dat.csv', skip_header=x, delimiter=',')
column_one = data[0:maxcol, 0]
column_two = data[0:maxcol, 1]
This gives me the arrays for the specific case where there are (x =) 15 lines of metadata above the required data and the number of rows of data is (maxcol =) 2000. How do I change the code so that it works for any values of x and maxcol?
Use pandas. Its read_csv function does all that you want (I don't include its equivalent of delimiter, sep=',', because comma-delimited is the default):
import pandas as pd
data = pd.read_csv('data.dat.csv', skiprows=x, nrows=maxcol, header=None)  # header=None since the data itself has no column-name row, as in the genfromtxt call above
If you really want that as a numpy array, you can do this:
data = data.values
But you can probably just leave it as a pandas DataFrame.
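As for finding x and maxcol automatically: one possible approach (a sketch, assuming the data rows are exactly the lines whose comma-separated fields all parse as floats, and that the metadata lines do not) is to scan the file once first:
import pandas as pd

def looks_like_data(line):
    # A line counts as data if it is non-empty and every comma-separated field parses as a float.
    if not line.strip():
        return False
    try:
        [float(p) for p in line.strip().split(',')]
        return True
    except ValueError:
        return False

with open('data.dat.csv') as f:
    lines = f.readlines()

# x = index of the first data-looking line; maxcol = number of data-looking lines from there on
x = next(i for i, line in enumerate(lines) if looks_like_data(line))
maxcol = sum(1 for line in lines[x:] if looks_like_data(line))

data = pd.read_csv('data.dat.csv', skiprows=x, nrows=maxcol, header=None)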
I am trying to write code that reads a CSV file and saves each column as a specific variable. I am having difficulty because the header is 7 lines long (something I can control, but would like to just ignore if I can handle it in code), and my data is full of important decimal places, so it cannot be converted to int (or maybe string?). I've also tried saving each column by its position in the file, but I'm struggling to get that to run. Any ideas?
The image shows my current code, trimmed to the important parts, and circles the data that prints in my console.
save each columns as a specific variable
import pandas as pd
df = pd.read_csv('file.csv')
x_col = df['X']
https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html
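Since the question mentions seven lines of metadata before the actual data, you can probably skip them with skiprows (a sketch, assuming the column names sit on the line right after those seven; adjust the count if the names are part of them, and 'file.csv' is a placeholder name):
import pandas as pd

df = pd.read_csv('file.csv', skiprows=7)
x_col = df['X']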
If what you are looking for is how to iterate through the columns, no matter how many there are (which is what I think you are asking), then this code should do the trick:
import pandas as pd

data = pd.read_csv('optitest.csv', skiprows=6)
for column in data.columns:
    # You will need to define what this save() method is.
    # Just placing it here as an example.
    save(data[column])
The line about formatting your data as a number or a string was a little vague. But if it's decimal data, then you need to use float. See #9637665.
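For example, continuing from the data frame above, a text column can be converted explicitly (the column name 'X' here is just a placeholder):
import pandas as pd

data = pd.read_csv('optitest.csv', skiprows=6)
# Convert one column to float; errors='coerce' turns unparseable cells into NaN.
data['X'] = pd.to_numeric(data['X'], errors='coerce')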
I would like to write my Spark dataframe as a set of JSON files, and in particular to have each file contain an array of JSON objects.
Let me explain with a simple (reproducible) example.
We have:
import numpy as np
import pandas as pd
df = spark.createDataFrame(pd.DataFrame({'x': np.random.rand(100), 'y': np.random.rand(100)}))
Saving the dataframe as:
df.write.json('s3://path/to/json')
each file just created has one JSON object per line, something like:
{"x":0.9953802385540144,"y":0.476027611419198}
{"x":0.929599290575914,"y":0.72878523939521}
{"x":0.951701684432855,"y":0.8008064729546504}
but I would like to have an array of those JSON per file:
[
{"x":0.9953802385540144,"y":0.476027611419198},
{"x":0.929599290575914,"y":0.72878523939521},
{"x":0.951701684432855,"y":0.8008064729546504}
]
It is not currently possible to have spark "natively" write a single file in your desired format because spark works in a distributed (parallel) fashion, with each executor writing its part of the data independently.
However, since you are okay with having each file (rather than only one file) be an array of JSON objects, here is one workaround that you can use to achieve your desired output:
from pyspark.sql.functions import to_json, spark_partition_id, collect_list, col, struct
df.select(to_json(struct(*df.columns)).alias("json"))\
.groupBy(spark_partition_id())\
.agg(collect_list("json").alias("json_list"))\
.select(col("json_list").cast("string"))\
.write.text("s3://path/to/json")
First you create a JSON string from all of the columns in df. Then you group by the Spark partition ID and aggregate using collect_list, which puts all the JSON strings on that partition into one list. Since you're aggregating within the partition, there should be no shuffling of data required.
Now select the list column, convert to a string, and write it as a text file.
Here's an example of how one file looks:
[{"x":0.1420523746714616,"y":0.30876114874052263}, ... ]
Note you may get some empty files.
Presumably you can force Spark to write the data to ONE file if you specify an empty groupBy(), but this would force all of the data into a single partition, which could result in an out-of-memory error.
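For completeness, here is roughly what that single-file variant could look like (an untested sketch; the output path is a placeholder, and as noted above, collecting everything into one partition risks running out of memory on large data):
from pyspark.sql.functions import to_json, collect_list, col, struct

df.select(to_json(struct(*df.columns)).alias("json")) \
    .groupBy() \
    .agg(collect_list("json").alias("json_list")) \
    .select(col("json_list").cast("string")) \
    .coalesce(1) \
    .write.text("s3://path/to/json_single")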
If the data is not super huge and it's okay to have the list as one JSON file, the following workaround is also valid. First, convert the PySpark dataframe to pandas and then to a list of dicts. Then the list can be dumped as JSON.
import json

list_of_dicts = df.toPandas().to_dict('records')
with open('path/to/file.json', 'w') as json_file:
    json_file.write(json.dumps(list_of_dicts))
I'm writing a Python program that will import a square matrix from an Excel sheet and do some NumPy work with it. So far it looks like OpenPyXL is the best way to transfer the data from an XLSX file to the Python environment, but it's not clear what the best way is to turn that data from a tuple of tuples* of cell references into an array of the actual values in the Excel sheet.
*created by calling sheet_ranges = wb['Sheet1'] and then mat = sheet_ranges['A1:IQ251']
Of course I could check the size of the tuple, write a nested for loop, check every element of each tuple within the tuple, and fill up an array.
But is there really no better way?
As commented above, the ideal solution is to use a pandas dataframe. For example:
import pandas as pd
dataframe = pd.read_excel("name_of_my_excel_file.xlsx")
print(dataframe)
Just pip install pandas and then run the code above, replacing name_of_my_excel_file with the full path to your Excel file. Then you can proceed with pandas functions to analyse your data in depth. See the docs here!
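If you'd rather stay with OpenPyXL and NumPy, here is a minimal sketch (reusing the 'Sheet1' name and the A1:IQ251 range from the question, and assuming the cells hold numeric values; the file name is a placeholder):
import numpy as np
from openpyxl import load_workbook

wb = load_workbook('name_of_my_excel_file.xlsx', data_only=True)  # data_only=True returns cached values instead of formulas
ws = wb['Sheet1']

# iter_rows(values_only=True) yields plain value tuples, so no nested loop over cell objects is needed.
# Column 'IQ' is the 251st column, so A1:IQ251 is a 251 x 251 block.
mat = np.array(
    [row for row in ws.iter_rows(min_row=1, max_row=251, min_col=1, max_col=251, values_only=True)],
    dtype=float,
)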
I have a messy text file that I need to sort into columns in a dataframe so I can do the data analysis I need to do. Here is the messy looking file:
[image of the messy text file]
I can read it in as a CSV file, which looks a bit nicer, using:
import pandas as pd
data = pd.read_csv('phx_30kV_indepth_0_0_outfile.txt')
print(data)
This prints out the data aligned, but the issue is that the output is [640 rows x 1 column], and I need to separate it into multiple columns and manipulate it as a dataframe.
I have tried a number of solutions using StringIO that have worked here before, but nothing seems to be doing the trick.
Try passing delim_whitespace=True (link to the read_csv docs):
df = pd.read_csv('phx_30kV_indepth_0_0_outfile.txt', delim_whitespace=True)
Your input file is actually not in CSV format.
As you provided only a .png picture, it is not even clear whether this file is divided into rows or not.
If not, you have to start by "cutting" the content into individual lines and then read the content from the output file, i.e. the result of this cutting.
I think this is the first step before you can use either read_csv or read_table (of course, with delim_whitespace=True).
So I am using pandas to read in Excel files and CSV files. These files contain both strings and numbers, not just numbers. The problem is that all my strings get converted into NaN, which I do not want at all. I do not know what the types of the columns will be ahead of time (it is actually my job to handle the system that figures this out), so I can't tell pandas what they will be (that must come later). I just want to read in each cell as a string for now.
Here is my code:
if csv:  # check whether to read in an Excel file or a CSV
    frame = pandas.read_csv(io.StringIO(data))
else:
    frame = pandas.read_excel(io.StringIO(data))

tbl = []
print frame.dtypes
for (i, col) in enumerate(frame):
    tmp = [col]
    for (j, value) in enumerate(frame[col]):
        tmp.append(unicode(value))
    tbl.append(tmp)
I just need to be able to produce a column-wise 2D list, and I can do everything from there. I also need to be able to handle Unicode (the data is already in Unicode).
How do I construct 'tbl' so that cells that should be strings do not come out as 'NaN'?
In general cases where you can't know the dtypes or column names of a CSV ahead of time, using a CSV sniffer can be helpful.
import csv
[...]
# Sniff the dialect from the start of the data, then rewind before parsing.
buf = io.StringIO(data)
dialect = csv.Sniffer().sniff(buf.read(1024))
buf.seek(0)
frame = pandas.read_csv(buf, dialect=dialect)
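If the underlying goal is just to keep text cells as text instead of NaN, note that read_csv can also be told not to guess types or missing values at all (a sketch, reusing the same in-memory data as above):
frame = pandas.read_csv(io.StringIO(data), dtype=str, keep_default_na=False)  # every cell comes back as a string; empty cells stay ''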