Read Values from .csv file and convert them to float arrays - python

I stumbled upon a little coding problem. I have to basically read data from a .csv file which looks a lot like this:
2011-06-19 17:29:00.000,72,44,56,0.4772,0.3286,0.8497,31.3587,0.3235,0.9147,28.5751,0.3872,0.2803,0,0.2601,0.2073,0.1172,0,0.0,0,5.8922,1,0,0,0,1.2759
Now, I basically need to read an entire file consisting of rows like this and parse them into numpy arrays. Until now, I have been able to get them into a big string-typed structured array using code similar to this:
order_hist = np.loadtxt(filename_input,delimiter=',',dtype={'names': ('Year', 'Mon', 'Day', 'Stock', 'Action', 'Amount'), 'formats': ('i4', 'i4', 'i4', 'S10', 'S10', 'i4')})
The format for this file currently consists of a set of S20 data types. I basically need to extract all of the data in the big order_hist structured array into a separate array for each column. I do not know how to handle the date-time column (I've kept it as a string for now). I need to convert the rest to float, but the code below is giving me an error:
temparr=float[:len(order_hist)]
for x in range(len(order_hist['Stock'])):
    temparr[x] = float(order_hist['Stock'][x]);
Can someone show me how I can convert all the columns to the arrays that I need? Or possibly direct me to a link that does?

Boy, have I got a treat for you. numpy.genfromtxt has a converters parameter, which allows you to specify a function for each column as the file is parsed. The function is fed the CSV string value. Its return value becomes the corresponding value in the numpy array.
Moreover, the dtype = None parameter tells genfromtxt to make an intelligent guess as to the type of each column. In particular, numeric columns are automatically cast to an appropriate dtype.
For example, suppose your data file contains
2011-06-19 17:29:00.000,72,44,56
Then
import numpy as np
import datetime as DT
def make_date(datestr):
    return DT.datetime.strptime(datestr, '%Y-%m-%d %H:%M:%S.%f')
arr = np.genfromtxt(filename, delimiter=',',
                    converters={'Date': make_date},
                    names=('Date', 'Stock', 'Action', 'Amount'),
                    dtype=None)
print(arr)
print(arr.dtype)
yields
(datetime.datetime(2011, 6, 19, 17, 29), 72, 44, 56)
[('Date', '|O4'), ('Stock', '<i4'), ('Action', '<i4'), ('Amount', '<i4')]
Your real csv file has more columns, so you'd want to add more items to names, but otherwise, the example should still stand.
If you don't really care about the extra columns, you can assign a fluff-name like this:
arr = np.genfromtxt(filename, delimiter=',',
                    converters={'Date': make_date},
                    names=('Date', 'Stock', 'Action', 'Amount') +
                          tuple('col{i}'.format(i=i) for i in range(22)),
                    dtype=None)
yields
(datetime.datetime(2011, 6, 19, 17, 29), 72, 44, 56, 0.4772, 0.3286, 0.8497, 31.3587, 0.3235, 0.9147, 28.5751, 0.3872, 0.2803, 0, 0.2601, 0.2073, 0.1172, 0, 0.0, 0, 5.8922, 1, 0, 0, 0, 1.2759)
You might also be interested in checking out the pandas module which is built on top of numpy, and which takes parsing CSV to an even higher level of luxury: It has a pandas.read_csv function whose parse_dates = True parameter will automatically parse date strings (using dateutil).
Using pandas, your csv could be parsed with
df = pd.read_csv(filename, parse_dates=[0], header=None,
                 names=('Date', 'Stock', 'Action', 'Amount') +
                       tuple('col{i}'.format(i=i) for i in range(22)))
Note there is no need to specify the make_date function. Just to be clear: pd.read_csv returns a DataFrame, not a numpy array. The DataFrame may actually be more useful for your purpose, but you should be aware it is a different object with a whole new world of methods to exploit and explore.
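If what you ultimately want is a plain float array per column, as in the original question, here is a minimal sketch (assuming the df produced by pd.read_csv above, and that the numeric columns parsed cleanly):
# Pull each column out of the DataFrame as its own numpy float array
stock = df['Stock'].to_numpy(dtype=float)
amount = df['Amount'].to_numpy(dtype=float)

# The parsed date column comes back as a datetime64 array
dates = df['Date'].to_numpy()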

Related

How to check file is as per format in python

So I have an Excel sheet of data with 20-something columns. The customer has a requirement: they want to know if any column is missing from the Excel file. I'm using pandas to convert the data into dataframes. I used if statements for a few columns, but as that's a rigid solution, they want something better.
Any suggestions? Are there any libraries for this?
Thanks
I want to check whether the file has all the required columns and flag the file if any are missing.
Here I created a dataframe, but you would be using df = pd.read_excel('myfile.xlsx').
My dataframe has only the three following columns
data = {'Name': ['Tom', 'Nick', 'Sarah', 'Jack'],
        'Age': [20, 21, 19, 18],
        'Sex': ['M', 'M', 'F', 'M']}
df = pd.DataFrame(data)
Then I'll make a list of the required columns:
REQUIRED_COLUMNS = [
    'Name',
    'Age',
    'Occupation',
    'Sex'
]
# I'll make the columns a set to avoid O(n^2) looping.
dfColumns = set(df.columns)
for col in REQUIRED_COLUMNS:
    if col not in dfColumns:
        print(f"Column '{col}' is missing.")
Et voilà
>>> Column 'Occupation' is missing.
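A more compact variant of the same check (just a sketch of the same idea, expressed with set difference) would be:
missing = set(REQUIRED_COLUMNS) - set(df.columns)
if missing:
    print(f"Missing columns: {sorted(missing)}")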

Convert a dataframe column into a list of object

I am using pandas to read a CSV which contains a phone_number field (string). However, I need to convert this field into the JSON format below:
[{'phone_number':'+01 373643222'}]
and put it under a new column called phone_numbers. How can I do that?
I searched online, but the examples I found convert all the columns into JSON using to_json(), which apparently cannot solve my case.
Below is an example
import pandas as pd
df = pd.DataFrame({'user': ['Bob', 'Jane', 'Alice'],
                   'phone_number': ['+1 569-483-2388', '+1 555-555-1212', '+1 432-867-5309']})
Use the map function like this:
df["phone_numbers"] = df["phone_number"].map(lambda x: [{"phone_number": x}])
display(df)
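For reference, the resulting frame should look roughly like this (output sketched by hand, not copied from a run):
    user     phone_number                          phone_numbers
0    Bob  +1 569-483-2388  [{'phone_number': '+1 569-483-2388'}]
1   Jane  +1 555-555-1212  [{'phone_number': '+1 555-555-1212'}]
2  Alice  +1 432-867-5309  [{'phone_number': '+1 432-867-5309'}]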

What's the best way to convert string array into a table?

Given some array parsed from a CSV as follows (don't worry about the parsing part; just consider this array the starting point), say:
['name,age,city', 'tom,12,new york','john, 10, los angeles']
Such that the first index is the column names, what's the best way to convert this into a table? I was thinking of using numpy and pandas to create a dataframe, but what would be the most memory/time-efficient way to do this? I am then planning to do some data analysis and create some new features. Is there something in the standard Python library I can use, or is pandas the best way to go about this? If I were to use just built-in functions, how would I go about it? At the end I would need to combine the features back into the original array form.
Builtins only (aside from pprint for printing):
import pprint

data = [
    "name,age,city",
    "tom,12,new york",
    "john, 10, los angeles",
]

cols = None
out_data = []
for line in data:
    line = line.split(",")
    # We don't know the columns yet; must be the first line
    if not cols:
        cols = line
        continue
    out_data.append(dict(zip(cols, line)))

pprint.pprint(out_data)
Using the csv standard module:
import csv
import io
import pprint

data = [
    "name,age,city",
    "tom,12,new york",
    "john, 10, los angeles",
]

reader = csv.DictReader(io.StringIO('\n'.join(data)))
out_data = list(reader)
pprint.pprint(out_data)
Both approaches output the expected:
[{'age': '12', 'city': 'new york', 'name': 'tom'},
{'age': ' 10', 'city': ' los angeles', 'name': 'john'}]
Pandas is the way to go. You do not need to parse the values yourself. Instead, you can just use the read_csv functionality to create a dataframe out of your CSV file, and then do feature generation/extraction or data cleaning on that frame. The Python standard library does not (and arguably should not) offer such capability out of the box.
To gather your values as a Python list at the end of the day, use df.values.tolist().
pandas has C code in critical sections which makes it orders of magnitude faster.
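A minimal sketch of that round trip (assuming the same data list as above; io.StringIO is used because read_csv expects a file-like object, and skipinitialspace handles the stray spaces after commas in the sample data):
import io
import pandas as pd

data = [
    "name,age,city",
    "tom,12,new york",
    "john, 10, los angeles",
]

df = pd.read_csv(io.StringIO("\n".join(data)), skipinitialspace=True)

# ... feature generation / cleaning on df here ...

# Back to a plain list of rows at the end
rows = df.values.tolist()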
I can't speak to efficiency, but as far as an easy way to convert it to a table goes, using pandas would be the best option. I would use pandas.read_csv for it.

read_csv read in categorical values?

I was wondering if there was a way to read in Categorical values during the read_csv() process.
Normally you can do the conversion after the fact with something like:
df.zone = df.zone.astype('category')
At this point the df takes up more memory and I'm looking for a way to reduce that.
I've tried things like:
parking_meters = pd.read_csv('parking_meter_data.csv',
                             converters={'zone': pd.Categorical(),
                                         'sub_area': pd.Categorical(),
                                         'area': pd.Categorical(),
                                         'config_name': pd.Categorical(),
                                         'pole': str(),
                                         'longitude': np.float(),
                                         'latitude': np.float()
                                         })
parking_meters.memory_usage(deep=True).sum()
However, pd.Categorical needs to be initialised with the actual data, which is in the CSV file.
Let's try with dtype:
parking_meters = pd.read_csv('parking_meter_data.csv',
                             dtype={'zone': 'category',
                                    'sub_area': 'category',
                                    'area': 'category',
                                    'config_name': 'category'
                                    })
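To verify the saving (a sketch; it assumes the same file and column names from the question), compare the memory footprint of both versions:
# Plain read: the text columns stay as object dtype
baseline = pd.read_csv('parking_meter_data.csv')
print(baseline.memory_usage(deep=True).sum())

# Categorical read from above: should report a noticeably smaller number
print(parking_meters.memory_usage(deep=True).sum())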

Converting a set to a list with Pandas groupby agg function causes 'ValueError: Function does not reduce'

Sometimes, it seems that the more I use Python (and Pandas), the less I understand. So I apologise if I'm just not seeing the wood for the trees here but I've been going round in circles and just can't see what I'm doing wrong.
Basically, I have an example script (that I'd like to implement on a much larger dataframe) but I can't get it to work to my satisfaction.
The dataframe consists of columns of various datatypes. I'd like to group the dataframe on 2 columns and then produce a new dataframe that contains lists of all the unique values for each variable in each group. (Ultimately, I'd like to concatenate the list items into a single string – but that's a different question.)
The initial script I used was:
import numpy as np
import pandas as pd

def tempFuncAgg(tempVar):
    tempList = set(tempVar.dropna())  # Drop NaNs and create set of unique values
    print(tempList)
    return tempList

# Define dataframe
tempDF = pd.DataFrame({'id': [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],
                       'date': ["02/04/2015 02:34","06/04/2015 12:34","09/04/2015 23:03","12/04/2015 01:00","15/04/2015 07:12","21/04/2015 12:59","29/04/2015 17:33","04/05/2015 10:44","06/05/2015 11:12","10/05/2015 08:52","12/05/2015 14:19","19/05/2015 19:22","27/05/2015 22:31","01/06/2015 11:09","04/06/2015 12:57","10/06/2015 04:00","15/06/2015 03:23","19/06/2015 05:37","23/06/2015 13:41","27/06/2015 15:43"],
                       'gender': ["male","female","female","male","male","female","female",np.nan,"male","male","female","male","female","female","male","female","male","female",np.nan,"male"],
                       'age': ["young","old","old","old","old","old",np.nan,"old","old","young","young","old","young","young","old",np.nan,"old","young",np.nan,np.nan]})

# Groupby based on 2 categorical variables
tempGroupby = tempDF.groupby(['gender','age'])

# Aggregate for each variable in each group using function defined above
dfAgg = tempGroupby.agg(lambda x: tempFuncAgg(x))
print(dfAgg)
The output from this script is as expected: a series of lines containing the sets of values and a dataframe containing the returned sets:
{'09/04/2015 23:03', '21/04/2015 12:59', '06/04/2015 12:34'}
{'01/06/2015 11:09', '12/05/2015 14:19', '27/05/2015 22:31', '19/06/2015 05:37'}
{'15/04/2015 07:12', '19/05/2015 19:22', '06/05/2015 11:12', '04/06/2015 12:57', '15/06/2015 03:23', '12/04/2015 01:00'}
{'02/04/2015 02:34', '10/05/2015 08:52'}
{2, 3, 6}
{18, 11, 13, 14}
{4, 5, 9, 12, 15, 17}
{1, 10}
date \
gender age
female old set([09/04/2015 23:03, 21/04/2015 12:59, 06/04...
young set([01/06/2015 11:09, 12/05/2015 14:19, 27/05...
male old set([15/04/2015 07:12, 19/05/2015 19:22, 06/05...
young set([02/04/2015 02:34, 10/05/2015 08:52])
id
gender age
female old set([2, 3, 6])
young set([18, 11, 13, 14])
male old set([4, 5, 9, 12, 15, 17])
young set([1, 10])
The problem occurs when I try to convert the sets to lists. Bizarrely, it produces 2 duplicated rows containing identical lists but then fails with a 'ValueError: Function does not reduce' error.
def tempFuncAgg(tempVar):
    tempList = list(set(tempVar.dropna()))  # This is the only difference
    print(tempList)
    return tempList

tempDF = pd.DataFrame({'id': [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],
                       'date': ["02/04/2015 02:34","06/04/2015 12:34","09/04/2015 23:03","12/04/2015 01:00","15/04/2015 07:12","21/04/2015 12:59","29/04/2015 17:33","04/05/2015 10:44","06/05/2015 11:12","10/05/2015 08:52","12/05/2015 14:19","19/05/2015 19:22","27/05/2015 22:31","01/06/2015 11:09","04/06/2015 12:57","10/06/2015 04:00","15/06/2015 03:23","19/06/2015 05:37","23/06/2015 13:41","27/06/2015 15:43"],
                       'gender': ["male","female","female","male","male","female","female",np.nan,"male","male","female","male","female","female","male","female","male","female",np.nan,"male"],
                       'age': ["young","old","old","old","old","old",np.nan,"old","old","young","young","old","young","young","old",np.nan,"old","young",np.nan,np.nan]})

tempGroupby = tempDF.groupby(['gender','age'])
dfAgg = tempGroupby.agg(lambda x: tempFuncAgg(x))
print(dfAgg)
But now the output is:
['09/04/2015 23:03', '21/04/2015 12:59', '06/04/2015 12:34']
['09/04/2015 23:03', '21/04/2015 12:59', '06/04/2015 12:34']
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
...
ValueError: Function does not reduce
Any help to troubleshoot this problem would be appreciated and I apologise in advance if it's something obvious that I'm just not seeing.
EDIT
Incidentally, converting the set to a tuple rather than a list works with no problem.
Lists can sometimes cause weird problems in pandas. You can either:
Use tuples (as you've already noticed), or,
If you really need lists, do it in a second operation, like this:
dfAgg.applymap(lambda x: list(x))
Full example:
import numpy as np
import pandas as pd

def tempFuncAgg(tempVar):
    tempList = set(tempVar.dropna())  # Drop NaNs and create set of unique values
    print(tempList)
    return tempList

# Define dataframe
tempDF = pd.DataFrame({'id': [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],
                       'date': ["02/04/2015 02:34","06/04/2015 12:34","09/04/2015 23:03","12/04/2015 01:00","15/04/2015 07:12","21/04/2015 12:59","29/04/2015 17:33","04/05/2015 10:44","06/05/2015 11:12","10/05/2015 08:52","12/05/2015 14:19","19/05/2015 19:22","27/05/2015 22:31","01/06/2015 11:09","04/06/2015 12:57","10/06/2015 04:00","15/06/2015 03:23","19/06/2015 05:37","23/06/2015 13:41","27/06/2015 15:43"],
                       'gender': ["male","female","female","male","male","female","female",np.nan,"male","male","female","male","female","female","male","female","male","female",np.nan,"male"],
                       'age': ["young","old","old","old","old","old",np.nan,"old","old","young","young","old","young","young","old",np.nan,"old","young",np.nan,np.nan]})

# Groupby based on 2 categorical variables
tempGroupby = tempDF.groupby(['gender','age'])

# Aggregate for each variable in each group using function defined above
dfAgg = tempGroupby.agg(lambda x: tempFuncAgg(x))

# Transform the sets into lists (applymap returns a new frame, so assign the result)
dfAgg = dfAgg.applymap(lambda x: list(x))
print(dfAgg)
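Alternatively, since tuples reduce without complaint (as the question's EDIT notes), a one-step sketch of the same aggregation would be:
# Returning tuples directly from agg avoids the error entirely
dfAgg = tempGroupby.agg(lambda x: tuple(set(x.dropna())))
print(dfAgg)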
There are many such bizarre behaviours in pandas; it is generally better to go with a workaround like this than to hunt for a perfect solution.
