How to read bytes as bytes from csv? - python

I have a CSV with ~10 columns. One of the columns holds data in bytes, e.g. b'gAAAA234'. But when I read the file with pandas via .read_csv("file.csv"), I get everything in a DataFrame and that particular column comes back as the string "b'gAAAA234'" rather than as bytes.
How do I simply read it as bytes without having to read it as string and then reconverting?
Currently, I'm working with this:
b = df['column_with_data_in_bytes'][i]   # string such as "b'gAAAA234'"
bb = bytes(b[2:len(b)-1], 'utf-8')       # strip the b'...' wrapper and re-encode
#further processing of bytes
This works, but I was hoping to find a more elegant/Pythonic or more reliable way to do this.

You might consider parsing with ast.literal_eval:
import ast
df['column_with_data_in_bytes'] = df['column_with_data_in_bytes'].apply(ast.literal_eval)
Demo:
In [322]: df = pd.DataFrame({'Col' : ["b'asdfghj'", "b'ssdgdfgfv'", "b'asdsfg'"]})

In [325]: df
Out[325]:
            Col
0    b'asdfghj'
1  b'ssdgdfgfv'
2     b'asdsfg'

In [326]: df.Col.apply(ast.literal_eval)
Out[326]:
0      b'asdfghj'
1    b'ssdgdfgfv'
2       b'asdsfg'
Name: Col, dtype: object
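If you'd rather never materialise the string form at all, the same parser can be handed to read_csv as a converter so the column arrives as bytes. This is a minimal sketch of my own, not part of the answer above; the file and column names just follow the question:
import ast
import pandas as pd

df = pd.read_csv(
    "file.csv",
    converters={"column_with_data_in_bytes": ast.literal_eval},
)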

Related

Reading decimal representation floats from a CSV with pandas

I am trying to read in the contents of a CSV file containing what I believe are IEEE 754 single-precision floats, stored as their decimal integer bit patterns.
By default, they are read in as int64. If I specify the data type with something like dtype = {'col1': np.float32}, the dtype shows up correctly as float32, but the values are merely cast to float rather than reinterpreted, i.e. 1079762502 becomes 1.079763e+09 instead of 3.435441493988037.
I have managed to do the conversion on single values with either of the following:
from struct import unpack
v = 1079762502
# Reinterpret the integer's 4 bytes as a big-endian single-precision float
print(unpack('>f', v.to_bytes(4, byteorder="big")))
print(unpack('>f', bytes.fromhex(str(hex(v)).split('0x')[1])))
Which produces
(3.435441493988037,)
(3.435441493988037,)
However, I can't seem to implement this in a vectorised way with pandas:
import pandas as pd
from struct import unpack
df = pd.read_csv('experiments/test.csv')
print(df.dtypes)
print(df)
df['col1'] = unpack('>f', df['col1'].to_bytes(4, byteorder="big"))
#df['col1'] = unpack('>f', bytes.fromhex(str(hex(df['col1'])).split('0x')[1]))
print(df)
This throws the following error:
col1    int64
dtype: object
         col1
0  1079762502
1  1079345162
2  1078565306
3  1078738012
4  1078635652
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-8-c06d0986cc96> in <module>
      7 print(df)
      8
----> 9 df['col1'] = unpack('>f', df['col1'].to_bytes(4, byteorder="big"))
     10 #df['col1'] = unpack('>f', bytes.fromhex(str(hex(df['col1'])).split('0x')[1]))
     11

~/anaconda3/envs/test/lib/python3.7/site-packages/pandas/core/generic.py in __getattr__(self, name)
   5177             if self._info_axis._can_hold_identifiers_and_holds_name(name):
   5178                 return self[name]
-> 5179             return object.__getattribute__(self, name)
   5180
   5181         def __setattr__(self, name, value):

AttributeError: 'Series' object has no attribute 'to_bytes'
Or, if I try the second way, TypeError: 'Series' object cannot be interpreted as an integer.
I am at the limits of my Python knowledge here. I suppose I could iterate through every single row, cast to hex, then to string, then strip the 0x, unpack and store, but that seems very convoluted and already takes several seconds on smaller sample datasets, let alone for hundreds of thousands of entries. Am I missing something simple here? Is there any better way of doing this?
CSV is a text format; IEEE 754 single-precision floats are a binary numeric format. If you have a CSV, you have text, so it is not that format at all. If I understand you correctly, you have text representing integers (in decimal) whose values correspond to the 32-bit integer interpretation of your 32-bit floats.
So, for starters, when you read the data from a CSV, pandas uses 64-bit integers by default. Convert to 32-bit integers, then reinterpret the bytes using .view:
In [8]: df
Out[8]:
         col1
0  1079762502
1  1079345162
2  1078565306
3  1078738012
4  1078635652

In [9]: df.col1.astype(np.int32).view('f')
Out[9]:
0    3.435441
1    3.335940
2    3.150008
3    3.191184
4    3.166780
Name: col1, dtype: float32
Decomposed into steps to help understand:
In [10]: import numpy as np
In [11]: arr = df.col1.values
In [12]: arr
Out[12]: array([1079762502, 1079345162, 1078565306, 1078738012, 1078635652])
In [13]: arr.dtype
Out[13]: dtype('int64')
In [14]: arr_32 = arr.astype(np.int32)
In [15]: arr_32
Out[15]:
array([1079762502, 1079345162, 1078565306, 1078738012, 1078635652],
      dtype=int32)

In [16]: arr_32.view('f')
Out[16]:
array([3.4354415, 3.33594  , 3.1500077, 3.191184 , 3.1667795],
      dtype=float32)
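Putting it together with the CSV read from the question, a minimal sketch (only the file and column names come from the question, the rest is my assumption; to_numpy needs pandas 0.24+, on older versions .values works the same way):
import numpy as np
import pandas as pd

df = pd.read_csv('experiments/test.csv')
# Downcast the decimal bit patterns to 32-bit ints, then reinterpret those
# exact bytes as IEEE 754 single-precision floats.
df['col1'] = df['col1'].to_numpy(dtype=np.int32).view(np.float32)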

Extracting value from JSON column very slow

I've got a CSV with a bunch of data. One of the columns, ExtraParams, contains a JSON object. I want to extract a value using a specific key, but it's taking quite a while to get through the 60,000-odd rows in the CSV. Can it be sped up?
counter = 0  # just to see where I'm at
order_data['NewColumn'] = ''
for row in range(len(total_data)):
    s = total_data['ExtraParams'][row]
    try:
        data = json.loads(s)
        new_data = data['NewColumn']
        counter += 1
        print(counter)
        order_data['NewColumn'][row] = new_data
    except:
        print('NewColumn not in row')
I use a try-except because a few of the rows have what I assume is messed-up JSON, as they crash the program with an "expecting delimiter ','" error.
When I say "slow" I mean ~30 minutes for 60,000 rows.
EDIT: It might be worth noting that each JSON contains about 35 key/value pairs.
You could use something like pandas and make use of the apply method. For some simple sample data in test.csv
Col1,Col2,ExtraParams
1,"a",{"dog":10}
2,"b",{"dog":5}
3,"c",{"dog":6}
You could use something like
In [1]: import pandas as pd
In [2]: import json
In [3]: df = pd.read_csv("test.csv")
In [4]: df.ExtraParams.apply(json.loads)
Out[4]:
0    {'dog': 10}
1     {'dog': 5}
2     {'dog': 6}
Name: ExtraParams, dtype: object
If you need to extract a field from the json, assuming the field is present in each row you can write a lambda function like
In [5]: df.ExtraParams.apply(lambda x: json.loads(x)['dog'])
Out[5]:
0    10
1     5
2     6
Name: ExtraParams, dtype: int64
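Since a few rows in the question contain malformed JSON, here is a hedged variant of the same apply approach that tolerates bad rows; the key name 'dog' and the file name just follow the toy sample above, so adapt them to your real columns:
import json
import pandas as pd

def safe_extract(s, key='dog'):
    # Return the value for `key`, or None when the JSON fails to parse.
    try:
        return json.loads(s).get(key)
    except (ValueError, TypeError):
        return None

df = pd.read_csv("test.csv")
df['NewColumn'] = df['ExtraParams'].apply(safe_extract)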

Splitting Regex response column on python

I am receiving an object array after applying re.findall for links and hashtags on Tweets data. My data looks like:
b=['https://t.co/1u0dkzq2dV', 'https://t.co/3XIZ0SN05Q']
['https://t.co/CJZWjaBfJU']
['https://t.co/4GMhoXhBQO', 'https://t.co/0V']
['https://t.co/Erutsftlnq']
['https://t.co/86VvLJEzvG', 'https://t.co/zCYv5WcFDS']
Now I want to split it into columns. I am using the following:
df = pd.DataFrame(b.str.split(',',1).tolist(),columns = ['flips','row'])
But it is not working, because of the weird datatype I guess; I tried a few other solutions as well and nothing worked. This is what I am expecting, two separate columns:
https://t.co/1u0dkzq2dV https://t.co/3XIZ0SN05Q
https://t.co/CJZWjaBfJU
https://t.co/4GMhoXhBQO https://t.co/0V
https://t.co/Erutsftlnq
https://t.co/86VvLJEzvG
It's not clear from your question what exactly is part of your data. (Does it include the square brackets and single quotes?) In any case, the pandas read_csv function is very versatile and can handle ragged data:
from io import StringIO

import pandas as pd

raw_data = """
['https://t.co/1u0dkzq2dV', 'https://t.co/3XIZ0SN05Q']
['https://t.co/CJZWjaBfJU']
['https://t.co/4GMhoXhBQO', 'https://t.co/0V']
['https://t.co/Erutsftlnq']
['https://t.co/86VvLJEzvG', 'https://t.co/zCYv5WcFDS']
"""

# You'll probably replace the StringIO part with the filename of your data.
df = pd.read_csv(StringIO(raw_data), header=None, names=('flips', 'row'))

# Get rid of the square brackets, single quotes and the stray leading space
for col in ('flips', 'row'):
    df[col] = df[col].str.strip(" []'")

df
Output:
                     flips                      row
0  https://t.co/1u0dkzq2dV  https://t.co/3XIZ0SN05Q
1  https://t.co/CJZWjaBfJU                      NaN
2  https://t.co/4GMhoXhBQO          https://t.co/0V
3  https://t.co/Erutsftlnq                      NaN
4  https://t.co/86VvLJEzvG  https://t.co/zCYv5WcFDS
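If your data is still a Python list of lists (the direct result of re.findall per tweet) rather than text already written to a file, a simpler sketch, assuming that shape, is to build the DataFrame from it directly; short rows are padded with NaN:
import pandas as pd

# Assumed shape: one list of matched URLs per tweet.
b = [['https://t.co/1u0dkzq2dV', 'https://t.co/3XIZ0SN05Q'],
     ['https://t.co/CJZWjaBfJU'],
     ['https://t.co/4GMhoXhBQO', 'https://t.co/0V']]

df = pd.DataFrame(b, columns=['flips', 'row'])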

Convert DataFrame string complex i to j python

I have this type of DataFrame I wish to utilize. But because the data I imported uses the letter i for the imaginary part of the complex numbers, Python doesn't allow me to convert it to a float.
5.0 0.01511+0.0035769i
5.0298 0.015291+0.0075383i
5.0594 0.015655+0.0094534i
5.0874 0.012456+0.011908i
5.1156 0.015332+0.011174i
5.1458 0.015758+0.0095832i
How can I proceed to change the i to j in each row of the DataFrame?
Thank you.
If you have a string like this: complexStr = "0.015291+0.0075383i", you could do:
complexFloat = complex(complexStr[:-1] + 'j')
If your data is a string like this: line = "5.0 0.01511+0.0035769i", you have to separate the first part, like this:
number, complexStr = line.split()
complexFloat = complex(complexStr[:-1] + 'j')
>>> complexFloat
(0.015291+0.0075383j)
>>> type(complexFloat)
<class 'complex'>
I'm not sure how you obtain your dataframe, but if you're reading it from a text file with a suitable header, then you can use a converter function to sort out the 'i' -> 'j' replacement so that your dtype is created properly:
For file test.df:
a b
5.0 0.01511+0.0035769i
5.0298 0.015291+0.0075383i
5.0594 0.015655+0.0094534i
5.0874 0.012456+0.011908i
5.1156 0.015332+0.011174i
5.1458 0.015758+0.0095832i
the code
import pandas as pd
df = pd.read_table('test.df', delimiter=r'\s+',
                   converters={'b': lambda v: complex(str(v.replace('i', 'j')))})
gives df as:
        a                      b
0  5.0000   (0.01511+0.0035769j)
1  5.0298  (0.015291+0.0075383j)
2  5.0594  (0.015655+0.0094534j)
3  5.0874   (0.012456+0.011908j)
4  5.1156   (0.015332+0.011174j)
5  5.1458  (0.015758+0.0095832j)
with column dtypes:
a       float64
b    complex128
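If the DataFrame is already loaded and column b holds the strings, a hedged one-liner alternative (the column name just follows the example above) is to swap the i for a j across the whole column and cast:
import pandas as pd

df = pd.DataFrame({'a': [5.0, 5.0298],
                   'b': ['0.01511+0.0035769i', '0.015291+0.0075383i']})
# Replace the imaginary unit and convert each value to a Python complex.
df['b'] = df['b'].str.replace('i', 'j').apply(complex)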

Pandas Dataframe to JSON File with Separate Records

I'm attempting to dump data from a Pandas Dataframe into a JSON file to import into MongoDB. The format I require in a file has JSON records on each line of the form:
{<column 1>:<value>,<column 2>:<value>,...,<column N>:<value>}
df.to_json(orient='records') gets close to the result, but all the records are dumped within a single JSON array.
Any thoughts on an efficient way to get this result from a dataframe?
UPDATE: The best solution I've come up with is the following:
import json

dlist = df.to_dict('records')
dlist = [json.dumps(record) + "\n" for record in dlist]
with open('data.json', 'w') as f:
    f.writelines(dlist)
See the to_json docs; there are several orient options you can pass, and you need at least pandas 0.12:
In [2]: df = DataFrame(np.random.randn(10,2),columns=list('AB'))
In [3]: df
Out[3]:
          A         B
0 -0.350949 -0.428705
1 -1.732226  1.895324
2  0.314642 -1.494372
3 -0.492676  0.180832
4 -0.985848  0.070543
5 -0.689386 -0.213252
6  0.673370  0.045452
7 -1.403494 -1.591106
8 -1.836650 -0.494737
9 -0.105253  0.243730
In [4]: df.to_json()
Out[4]: '{"A":{"0":-0.3509492646,"1":-1.7322255701,"2":0.3146421374,"3":-0.4926764426,"4":-0.9858476787,"5":-0.6893856618,"6":0.673369954,"7":-1.4034942394,"8":-1.8366498622,"9":-0.1052531862},"B":{"0":-0.4287054732,"1":1.8953235554,"2":-1.4943721459,"3":0.1808322313,"4":0.0705432211,"5":-0.213252257,"6":0.045451995,"7":-1.5911060576,"8":-0.4947369551,"9":0.2437304866}}'
Alternatively, format your data in a Python dictionary to your liking and use simplejson:
json.dumps(your_dictionary)
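For completeness, newer pandas versions (0.19 and later) can write the newline-delimited records format directly, which is exactly what mongoimport expects; a minimal sketch:
import pandas as pd

df = pd.DataFrame({'A': [1, 2], 'B': ['x', 'y']})
# orient='records' with lines=True writes one JSON object per line.
df.to_json('data.json', orient='records', lines=True)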
