This is my first time writing code that processes files with a lot of data, so I am kind of stuck here.
What I'm trying to do is read a list of paths listing all of the csv files that need to be read, retrieve the HEAD and TAIL from each file, and put them inside a list.
I have 621 csv files in total, each consisting of 5800 rows and 251 columns.
This is a sample of the data:
[LOGGING],RD81DL96_1,3,4,5,2,,,,
LOG01,,,,,,,,,
DATETIME,INDEX,SHORT[DEC.0],SHORT[DEC.0],SHORT[DEC.0],SHORT[DEC.0],SHORT[DEC.0],SHORT[DEC.0],SHORT[DEC.0],SHORT[DEC.0]
TIME,INDEX,FF-1(1A) ,FF-1(1B) ,FF-1(1C) ,FF-1(2A),FF-2(1A) ,FF-2(1B) ,FF-2(1C),FF-2(2A)
47:29.6,1,172,0,139,1258,0,0,400,0
47:34.6,2,172,0,139,1258,0,0,400,0
47:39.6,3,172,0,139,1258,0,0,400,0
47:44.6,4,172,0,139,1263,0,0,400,0
47:49.6,5,172,0,139,1263,0,0,450,0
47:54.6,6,172,0,139,1263,0,0,450,0
The problem is that while it takes about 13 seconds to read all the files (still kind of slow, honestly), adding a single append line makes the process take far longer to finish, about 4 minutes.
Below is a snippet of the code:
import dask.dataframe as dd

startEndList = []
# CsvList: [File Path, Change Date, File Size, File Name]
for x, file in enumerate(CsvList):
    timeColumn = ['TIME']
    df = dd.read_csv(file[0], sep=',', skiprows=3, encoding='CP932', engine='python', usecols=timeColumn)
    # The process becomes slow when this line is added
    startEndList.append(list(df.head(1)) + list(df.tail(1)))
Why does that happen? I'm using dask.dataframe.
Currently, your code isn't really leveraging Dask's parallelizing capabilities because:
df.head and df.tail calls will trigger a "compute" (i.e., convert your Dask DataFrame into a pandas DataFrame -- which is what we try to minimize in lazy evaluations with Dask), and
the for-loop is running sequentially because you're creating Dask DataFrames and converting them to pandas DataFrames, all inside the loop.
So, your current example is similar to just using pandas within the for-loop, but with the added Dask-to-pandas-conversion overhead.
Since you need to work on each of your files, I'd suggest checking out Dask Delayed, which might be more elegant and useful here. The following (pseudo-code) will parallelize the pandas operation on each of your files:
import dask
import pandas as pd

result = []
for file in list_of_files:
    df = dask.delayed(pd.read_csv)(file)
    result.append(df.head(1) + df.tail(1))

dask.compute(*result)
The output of dask.visualize(*result) when I used 4 csv files confirms parallelism.
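One caveat: on delayed pandas objects, df.head(1) + df.tail(1) is element-wise addition rather than concatenation. A more concrete variant (a sketch that assumes you want the first and last row of each file stacked together, reusing the read_csv arguments from the question) could look like this:
import dask
import pandas as pd

results = []
for file in list_of_files:  # list_of_files as in the pseudo-code above
    df = dask.delayed(pd.read_csv)(file, sep=',', skiprows=3,
                                   encoding='CP932', usecols=['TIME'])
    # concatenate the first and last row of each file, still lazily
    results.append(dask.delayed(pd.concat)([df.head(1), df.tail(1)]))

first_last_pairs = dask.compute(*results)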
If you really want to use Dask DataFrame here, you may try to (see the sketch after this list):
read all files into a single Dask DataFrame,
make sure each Dask "partition" corresponds to one file,
use Dask DataFrame apply to get the head and tail values and append them to a new list,
call compute on the new list
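A minimal sketch of that approach (untested against your files, and using map_partitions rather than apply): blocksize=None keeps one partition per file, so reducing each partition yields one head/tail pair per csv.
import dask.dataframe as dd
import pandas as pd

files = [entry[0] for entry in CsvList]  # the file paths, as in your loop
ddf = dd.read_csv(files, sep=',', skiprows=3, encoding='CP932',
                  usecols=['TIME'], blocksize=None)  # one partition per file
first_last = ddf.map_partitions(
    lambda part: pd.concat([part.head(1), part.tail(1)]))
result = first_last.compute()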
A first approach, using only Python as a starting point:
import io
import pathlib
import pandas as pd

def read_first_and_last_lines(filename):
    with open(filename, 'rb') as fp:
        # skip the first 4 rows (headers)
        [next(fp) for _ in range(4)]
        # first data line
        first_line = fp.readline()
        # jump to 2x the length of the first line before the end of the file
        fp.seek(-2 * len(first_line), 2)
        # last line
        last_line = fp.readlines()[-1]
        return first_line + last_line

data = []
for filename in pathlib.Path('data').glob('*.csv'):
    data.append(read_first_and_last_lines(filename))

buf = io.BytesIO()
buf.writelines(data)
buf.seek(0)

df = pd.read_csv(buf, header=None, encoding='CP932')
I have a txt file with the following format:
{"results":[{"statement_id":0,"series":[{"name":"datalogger","columns":["time","ActivePower0","CosPhi0","CurrentRms0","DcAnalog5","FiringAngle0","IrTemperature0","Lage 1_Angle1","Lage 1_Angle2","PotentioMeter0","Rotation0","SNR","TNR","Temperature0","Temperature3","Temperature_MAX31855_0","Temperature_MAX31855_3","Vibra0_X","Vibra0_Y","Vibra0_Z","VoltageAccu0","VoltageRms0"],"values":[["2017-10-06T08:50:25.347Z",null,null,null,null,null,null,null,null,null,null,"41762721","Testcustomer",null,null,null,null,-196,196,-196,null,null],["2017-10-06T08:50:25.348Z",null,null,null,null,null,null,346.2964,76.11179,null,null,"41762721","Testcustomer",null,null,null,null,null,null,null,null,null],["2017-10-06T08:50:25.349Z",null,null,2596,null,null,null,null,null,null,null,"41762721","Testkunde",null,null,null,null,null,null,null,null,80700],["2017-10-06T08:50:25.35Z",null,null,null,null,null,null,null,null,null,1956,"41762721","Testkunde",null,null,null,null,null,null,null,null,null],["2017-10-06T09:20:05.742Z",null,null,null,null,null,67.98999,null,null,null,null,"41762721","Testkunde",null,null,null,null,null,null,null,null,null]]}]}]}
...
So in the text file everything is saved on one line. A CSV file is not available.
I would like to have it as a data frame in pandas. When I use read_csv:
df = pd.read_csv('time-series-data.txt', sep=",")
the output of print(df) is something like [0 rows x 3455.. columns]
So currently everything is read in as one line. However, I would like to have 22 columns (time, ActivePower0, CosPhi0, ...). Any tips would be appreciated, thank you very much.
Is a pandas dataframe even suitable for this? The text files are up to 2 GB in size.
Here's an example which can read the file you posted.
Here's the test file, named test.json:
{"results":[{"statement_id":0,"series":[{"name":"datalogger","columns":["time","ActivePower0","CosPhi0","CurrentRms0","DcAnalog5","FiringAngle0","IrTemperature0","Lage 1_Angle1","Lage 1_Angle2","PotentioMeter0","Rotation0","SNR","TNR","Temperature0","Temperature3","Temperature_MAX31855_0","Temperature_MAX31855_3","Vibra0_X","Vibra0_Y","Vibra0_Z","VoltageAccu0","VoltageRms0"],
"values":[
["2017-10-06T08:50:25.347Z",null,null,null,null,null,null,null,null,null,null,"41762721","Test-customer",null,null,null,null,-196,196,-196,null,null],
["2017-10-06T08:50:25.348Z",null,null,null,null,null,null,346.2964,76.11179,null,null,"41762721","Test-customer",null,null,null,null,null,null,null,null,null]]}]}]}
Here's the python code used to read it in:
import json
import pandas as pd
# Read test file.
# This reads the entire file into memory at once. If this is not
# possible for you, you may want to look into something like ijson:
# https://pypi.org/project/ijson/
with open("test.json", "rb") as f
data = json.load(f)
# Get the first element of results list, and first element of series list
# You may need a loop here, if your real data has more than one of these.
subset = data['results'][0]['series'][0]
values = subset['values']
columns = subset['columns']
df = pd.DataFrame(values, columns=columns)
print(df)
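If the whole file really is too large for json.load, a rough sketch with ijson might look like the following; the prefix strings are an assumption based on the structure shown above, and for very large files you would probably process the value rows in batches instead of collecting them all at once:
import ijson
import pandas as pd

with open("time-series-data.txt", "rb") as f:
    # the value at this prefix is the list of column names
    columns = next(ijson.items(f, "results.item.series.item.columns"))
with open("time-series-data.txt", "rb") as f:
    # this streams one row (one inner list) at a time; numbers arrive as Decimal
    values = list(ijson.items(f, "results.item.series.item.values.item"))

df = pd.DataFrame(values, columns=columns)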
Good day everyone.
I was hoping someone here could help me with a bit of a problem. I've run an experiment where data was gathered from 6 separate sensors simultaneously. The data was then exported to a single shared txt file. Now I need to import the data into Python to analyze it.
I know I can do this by copy-pasting the data output from each sensor into a separate document and then importing those in a loop, but that is a lot of work and introduces a high potential for human error.
Is there a way of using readline to read specific lines and port them into a pandas DataFrame? There is a fixed header spacing, and a fixed line spacing between the sensors.
I tried:
import pandas as pd

f = open('OR0024622_auto3200.txt')
lines = f.readlines()

base = 83
sensorlines = 6400

Sensor = lines[base:sensorlines + base]
df_sens = pd.DataFrame(Sensor)
df_sens
but the output isn't very useful.
Here's the file I am importing:
link.
Any suggestions?
Looks like tab-separated data.
Use:
>>> df = pd.read_csv('OR0024622_auto3200.txt', delimiter=r'\t', skiprows=83, header=None, nrows=38955-84)
>>> df.tail()
              0                   1             2
38686      6397   3.1980000000e+003  9.28819e-009
38687      6398   3.1985000000e+003  9.41507e-009
38688      6399   3.1990000000e+003  1.11703e-008
38689      6400   3.1995000000e+003  9.64276e-009
38690      6401   3.2000000000e+003  8.92203e-009
>>> df.head()
   0                   1             2
0  1   0.0000000000e+000  6.62579e+000
1  2   5.0000000000e-001  3.31289e+000
2  3   1.0000000000e+000  2.62362e-011
3  4   1.5000000000e+000  1.51130e-011
4  5   2.0000000000e+000  8.35723e-012
abhilb's answer is to the point and correct, but there is a lot to be said regarding loading/reading files. A quick browser search will take you a long way (I encourage you to read up on this!), but I'll add a few details here:
If you want to load multiple files that match a pattern you can do so iteratively via glob:
import pandas as pd
from glob import glob as gg

filePattern = "/path/to/file/*.txt"
for fileName in gg(filePattern):
    df = pd.read_csv(fileName, delimiter='\t')
This will load each file one-by-one. What if you want to put all data into a single dataframe? Do this:
masterDF = pd.DataFrame()
for fileName in gg(filePattern):
    df = pd.read_csv(fileName, delimiter='\t')
    masterDF = pd.concat([masterDF, df], axis=0)
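As a side note, concatenating inside the loop copies masterDF on every iteration; a slightly cheaper variant (same filePattern as above) collects the frames first and concatenates once:
import pandas as pd
from glob import glob as gg

filePattern = "/path/to/file/*.txt"
frames = [pd.read_csv(fileName, delimiter='\t') for fileName in gg(filePattern)]
masterDF = pd.concat(frames, axis=0, ignore_index=True)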
This works great for pandas, but what if you want to read into a numpy array?
import numpy as np
# using previous imports
base = 83
sensorlines = 6400

# create an empty array that has three columns
masterArray = np.full((0, 3), np.nan)
for fileName in gg(filePattern):
    # open the file (NOTE: this does not read the file, just puts it in a buffer)
    with open(fileName, "r") as tmp:
        # now read the file and split it into lines on the newline character
        # (could be "\r\n" on Windows); you now have a list of strings
        data = tmp.read().split("\n")
    # keep only the "data" portion of the file
    data = data[base:sensorlines + base]
    # convert the list of strings to an array of floats
    # here, I use a "list comprehension" for speed and simplicity
    data = np.array([r.split("\t") for r in data]).astype(float)
    # stack your new data onto your master array
    masterArray = np.vstack([masterArray, data])
Opening a file via the "with open(fileName, "r")" syntax is handy because Python automatically closes the file when you are done. If you don't use "with" then you must manually close the file (e.g. tmp.close()).
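For comparison, roughly the same read without "with" looks like this; the try/finally is what the context manager gives you for free:
tmp = open("OR0024622_auto3200.txt", "r")
try:
    data = tmp.read().split("\n")
finally:
    tmp.close()  # must be closed manually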
These are just some starting points to get you on your way. Feel free to ask for clarification.
The problem:
I have lists of genes expressed in 53 different tissues. Originally, this data was stored in a maximal array of the genes, with 'NaN' where there was no expression. I am trying to create new lists for each tissue that contain just the genes expressed, as it was very inefficient to search through this array every time I ran my script. I have code that finds the genes for each tissue as required, but I do not know how to store the output.
I was using a pandas data frame and then converting to csv. But this does not accept lists of varying length, unless I store each list as a single item. However, when I then save the data frame to a csv, it tries to squeeze this very long list (all genes expressed for one tissue) into a single cell. I get an error about the string length exceeding Excel's character-per-cell limit.
Therefore I need a way of either dealing with this limit, or storing my lists in a different way. I would rather have just one file for all the lists.
My code:
import csv
import pandas as pd
import math
import numpy as np
#Import list of tissues:
df = pd.read_csv(r'E-MTAB-5214-query-results.tsv', skiprows = [0,1,2,3], sep='\t')
tissuedict=df.to_dict()
tissuelist = list(tissuedict.keys())[2:]
all_genes = [gene for key,gene in tissuedict['Gene Name'].items()]
data = []
for tissue in tissuelist:
    # Create an array to keep track of the protein mRNAs in tissue that are not present in the network
    # initiate with the first tissue, protein
    nanInd = [key for key, value in tissuedict[tissue].items() if math.isnan(value)]
    tissueExpression = np.delete(all_genes, nanInd)
    datatis = [tissue, tissueExpression.tolist()]
    print(datatis)
    data.append(datatis)
print(data)
df = pd.DataFrame(data)
df.to_csv(r'tissue_expression_data.csv')
Link to data (either one):
https://github.com/joanna-lada/gene_data/blob/master/E-MTAB-5214-query-results.tsv
https://raw.githubusercontent.com/joanna-lada/gene_data/master/E-MTAB-5214-query-results.tsv
IIUC you need lists of the gene names found in each tissue. This writes these lists as columns into a csv:
import pandas as pd
df = pd.read_csv('E-MTAB-5214-query-results.tsv', skiprows = [0,1,2,3], sep='\t')
df = df.drop(columns='Gene ID').set_index('Gene Name')
res = pd.DataFrame()
for c in df.columns:
    res = pd.concat([res, pd.Series(df[c].dropna().index, name=c)], axis=1)
res.to_csv('E-MTAB-5214-query-results.csv', index=False)
(Writing them as rows would have been easier, but Excel can't import so many columns)
Don't open the csv in Excel directly, but use a blank worksheet and import the csv (Data - External data, From text), otherwise you can't separate them into Excel columns in one run (at least in Excel 2010).
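To get the per-tissue lists back into Python later, one option (a sketch, assuming the csv written above) is to drop the padding NaNs column by column:
import pandas as pd

res = pd.read_csv('E-MTAB-5214-query-results.csv')
genes_per_tissue = {c: res[c].dropna().tolist() for c in res.columns}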
Create your data variable as a dictionary.
You can then save the dictionary to a json file using json.dump (see the json module documentation):
import json

data = {}
for tissue in tissuelist:
    nanInd = [key for key, value in tissuedict[tissue].items() if math.isnan(value)]
    tissueExpression = np.delete(all_genes, nanInd)
    data[tissue] = tissueExpression.tolist()

with open('filename.json', 'w') as fp:
    json.dump(data, fp)
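For completeness, loading the lists back later is just as simple:
import json

with open('filename.json') as fp:
    data = json.load(fp)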
I have a 10gb CSV file that contains some information that I need to use.
As I have limited memory on my PC, I cannot read the whole file into memory in one single batch. Instead, I would like to iteratively read only some rows of this file.
Say that at the first iteration I want to read the first 100 rows, at the second rows 101 to 200, and so on.
Is there an efficient way to perform this task in Python?
Does pandas provide something useful for this? Or are there better methods (in terms of memory and speed)?
Here is the short answer.
import pandas as pd

chunksize = 10 ** 6
for chunk in pd.read_csv(filename, chunksize=chunksize):
    process(chunk)  # process() stands in for whatever you do with each chunk
Here is the very long answer.
To get started, you’ll need to import pandas and sqlalchemy. The commands below will do that.
import pandas as pd
from sqlalchemy import create_engine
Next, set up a variable that points to your csv file. This isn’t necessary but it does help in re-usability.
file = '/path/to/csv/file'
With these three lines of code, we are ready to start analyzing our data. Let’s take a look at the ‘head’ of the csv file to see what the contents might look like.
print(pd.read_csv(file, nrows=5))
This command uses pandas’ “read_csv” command to read in only 5 rows (nrows=5) and then print those rows to the screen. This lets you understand the structure of the csv file and make sure the data is formatted in a way that makes sense for your work.
Before we can actually work with the data, we need to do something with it so we can begin to filter it and work with subsets. This is usually what I would use a pandas dataframe for, but with large data files we need to store the data somewhere else. In this case, we'll set up a local SQLite database, read the csv file in chunks and then write those chunks to SQLite.
To do this, we'll first need to create the SQLite database using the following command.
csv_database = create_engine('sqlite:///csv_database.db')
Next, we need to iterate through the CSV file in chunks and store the data in SQLite.
chunksize = 100000
i = 0
j = 1
for df in pd.read_csv(file, chunksize=chunksize, iterator=True):
    df = df.rename(columns={c: c.replace(' ', '') for c in df.columns})
    df.index += j
    i += 1
    df.to_sql('table', csv_database, if_exists='append')
    j = df.index[-1] + 1
With this code, we are setting the chunksize to 100,000 to keep the size of the chunks manageable, initializing a couple of counters (i=0, j=1), and then running a for loop. The for loop reads a chunk of data from the CSV file, removes spaces from the column names, then stores the chunk in the SQLite database (df.to_sql(…)).
This might take a while if your CSV file is sufficiently large, but the time spent waiting is worth it because you can now use pandas' SQL tools to pull data from the database without worrying about memory constraints.
To access the data now, you can run commands like the following:
df = pd.read_sql_query('SELECT * FROM table', csv_database)
Of course, using 'SELECT *…' will load all data into memory, which is the problem we are trying to get away from, so you should add filters to your select statements to limit the data. For example:
df = pd.read_sql_query('SELECT COL1, COL2 FROM table WHERE COL1 = SOMEVALUE', csv_database)
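If even the filtered result is too large for memory, read_sql_query can also return the rows in chunks (a sketch reusing csv_database and the placeholder query from above):
for chunk in pd.read_sql_query('SELECT COL1, COL2 FROM table WHERE COL1 = SOMEVALUE',
                               csv_database, chunksize=100000):
    print(len(chunk))  # replace with your own processing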
You can use pandas.read_csv() with the chunksize parameter:
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html#pandas.read_csv
for chunk_df in pd.read_csv('yourfile.csv', chunksize=100):
    # each chunk_df contains a part of the whole CSV
    pass
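For example (an assumption about what you want to do with each piece), you could keep only the rows you need from every chunk and combine them at the end:
import pandas as pd

parts = []
for chunk_df in pd.read_csv('yourfile.csv', chunksize=100):
    # 'some_column' is a hypothetical column name; filter however you need
    parts.append(chunk_df[chunk_df['some_column'] > 0])
result = pd.concat(parts, ignore_index=True)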
This code may help you with this task. It navigates through a large .csv file and does not consume a lot of memory, so you can run it on a standard laptop.
import pandas as pd
import os
The chunksize here sets the number of rows of the csv file that you want to read at a time:
chunksize2 = 2000
path = './'
data2 = pd.read_csv('ukb35190.csv',
chunksize=chunksize2,
encoding = "ISO-8859-1")
df2 = data2.get_chunk(chunksize2)
headers = list(df2.keys())
del data2
start_chunk = 0
data2 = pd.read_csv('ukb35190.csv',
chunksize=chunksize2,
encoding = "ISO-8859-1",
skiprows=chunksize2*start_chunk)
headers = []
for i, df2 in enumerate(data2):
    try:
        print('reading csv....')
        print(df2)
        print('header: ', list(df2.keys()))
        print('our header: ', headers)
        # Access chunks within data
        # for chunk in data:
        # You can now export all outcomes in new csv files
        file_name = 'export_csv_' + str(start_chunk + i) + '.csv'
        save_path = os.path.abspath(
            os.path.join(
                path, file_name
            )
        )
        print('saving ...')
        df2.to_csv(save_path, index=False)
    except Exception:
        print('reached the end')
        break
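A more compact variant of the same idea, assuming the goal is simply to split the big csv into numbered smaller csv files:
import os
import pandas as pd

path = './'
for i, chunk in enumerate(pd.read_csv('ukb35190.csv', chunksize=2000,
                                      encoding="ISO-8859-1")):
    chunk.to_csv(os.path.join(path, 'export_csv_' + str(i) + '.csv'), index=False)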
Transferring a huge CSV into a database is a good approach because we can then easily use SQL queries.
We also have to take two things into account.
FIRST POINT: SQL is not elastic either; it will not be able to stretch the memory.
For example, I converted this dataset to a db file:
https://nycopendata.socrata.com/Social-Services/311-Service-Requests-from-2010-to-Present/erm2-nwe9
For this db file, the SQL query
pd.read_sql_query("SELECT * FROM 'table' LIMIT 600000", Mydatabase)
can read at most about 0.6 million records, no more, with 16 GB of PC RAM (operation time: 15.8 seconds).
It may be mischievous to add that reading directly from the csv file is a bit more efficient:
giga_plik = 'c:/1/311_Service_Requests_from_2010_to_Present.csv'
Abdul = pd.read_csv(giga_plik, nrows=1100000)
(operation time: 16.5 seconds)
SECOND POINT: to use SQL effectively on data series converted from CSV, we ought to remember to store dates in a suitable form. So I propose adding this to ryguy72's code:
df['ColumnWithQuasiDate'] = pd.to_datetime(df['Date'])
The full code for the 311 file I mentioned above:
import time
import pandas as pd
from sqlalchemy import create_engine

start_time = time.time()
### sqlalchemy create_engine
plikcsv = 'c:/1/311_Service_Requests_from_2010_to_Present.csv'
WM_csv_datab7 = create_engine('sqlite:///C:/1/WM_csv_db77.db')
#----------------------------------------------------------------------
chunksize = 100000
i = 0
j = 1
## --------------------------------------------------------------------
for df in pd.read_csv(plikcsv, chunksize=chunksize, iterator=True, encoding='utf-8', low_memory=False):
    df = df.rename(columns={c: c.replace(' ', '') for c in df.columns})
    ## -----------------------------------------------------------------------
    df['CreatedDate'] = pd.to_datetime(df['CreatedDate'])  # to datetimes
    df['ClosedDate'] = pd.to_datetime(df['ClosedDate'])
    ## --------------------------------------------------------------------------
    df.index += j
    i += 1
    df.to_sql('table', WM_csv_datab7, if_exists='append')
    j = df.index[-1] + 1

print(time.time() - start_time)
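As a side note, read_csv can also parse the date columns while reading via parse_dates, which makes the separate to_datetime calls unnecessary. This sketch reuses plikcsv, chunksize and WM_csv_datab7 from the code above and assumes the raw headers are 'Created Date' and 'Closed Date' before the spaces are stripped:
for df in pd.read_csv(plikcsv, chunksize=chunksize, iterator=True, encoding='utf-8',
                      low_memory=False, parse_dates=['Created Date', 'Closed Date']):
    # assumption: these are the original column names, still containing spaces
    df = df.rename(columns={c: c.replace(' ', '') for c in df.columns})
    df.to_sql('table', WM_csv_datab7, if_exists='append')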
At the end I would like to add: converting a csv file directly from the Internet to a db seems to me a bad idea. I suggest downloading the file first and converting it locally.