Importing data with multiple delimiters in Python

In Python, how can I import data that looks like this:
waveform [0]
t0 26/11/2014 10:53:03.639218
delta t 2.000000E-5
time[0] Y[0]
26/11/2014 10:53:03.639218 1.700977E-2
26/11/2014 10:53:03.639238 2.835937E-4
26/11/2014 10:53:03.639258 2.835937E-4
26/11/2014 10:53:03.639278 -8.079492E-3
There are two delimiters: a colon and whitespace. I want to get rid of the date 26/11/2014 and delete the colons so that the time array looks like 105303.639218, etc. So is there a way to specify two delimiters in the code, or is there a better way to parse the data?
So far I have got:
import numpy as np
import matplotlib.pyplot as plt
_, time, y = np.loadtxt('data.txt', delimiter=':', skiprows=5)
plt.plot(time,y)
plt.show()

You can do this:
time = '10:34:20.454068'
list_ = time.split(':')
''.join(list_)
# '103420.454068'
for each row.
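Applied while reading the file, that might look like this (a sketch; the rows are inlined here, but in practice they would come from 'data.txt' with the header lines skipped):

```python
import numpy as np

# sample rows as they appear in the file: date, time, value
rows = [
    "26/11/2014 10:53:03.639218 1.700977E-2",
    "26/11/2014 10:53:03.639238 2.835937E-4",
    "26/11/2014 10:53:03.639258 2.835937E-4",
]

time, y = [], []
for row in rows:
    _, t, val = row.split()                     # whitespace is the first delimiter
    time.append(float(''.join(t.split(':'))))   # drop the colons: '105303.639218'
    y.append(float(val))

time, y = np.array(time), np.array(y)
print(time[0])  # 105303.639218
```

The same two arrays can then go straight to plt.plot(time, y).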

Maybe it's sort of a roundabout way of doing this, but...
import numpy as np
import matplotlib.pyplot as plt
mydata = np.loadtxt('data.txt', dtype=str, skiprows=5)
time = np.array([s.replace(':', '') for s in mydata[:, 1]], dtype=float)
y = mydata[:, 2].astype(float)
plt.plot(time, y)
plt.show()

Related

Plotting graph in python using matplotlib-pyplot by taking data from multiple csv files

I want to plot a graph by taking data from multiple csv files and plotting it against the round number (the serial number of the csv file within the set). Suppose I have to take the max value of a particular column from every csv and plot those maxima, with the serial number of the csvs on the x axis. I am able to read csv files and plot from a single file, but I can't get the multi-file version described above to work.
Below is what I did-
import pandas as pd
import matplotlib.pyplot as plt
import glob
import numpy as np
csv = glob.glob("path" + "*csv")
csv.sort()
N = len(csv)
r = 0
for rno in range(1, N + 1):
    r += 1
for f in csv:
    df = pd.read_csv(f)
    col1 = pd.DataFrame(df, columns=['col. name'])
    a = col1[:].to_numpy()
    Max = col1.max()
    plt.plot(r, Max)
plt.show()
If anyone has an idea it'd be helpful. Thank you.
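One way to get there is to collect the max of the column from each file first and plot the collected list once. A sketch; 'col. name' and the demo files built here stand in for your own column name and directory:

```python
import glob
import os
import tempfile

import pandas as pd

# build two tiny demo csvs so the sketch is self-contained;
# in practice, point glob at your own directory of files
tmp = tempfile.mkdtemp()
pd.DataFrame({"col. name": [1, 5, 3]}).to_csv(os.path.join(tmp, "a.csv"), index=False)
pd.DataFrame({"col. name": [2, 9, 4]}).to_csv(os.path.join(tmp, "b.csv"), index=False)

files = sorted(glob.glob(os.path.join(tmp, "*.csv")))
# one max per file, in sorted-filename order
maxima = [pd.read_csv(f)["col. name"].max() for f in files]
rounds = list(range(1, len(files) + 1))
print(maxima)  # [5, 9]
```

A single plt.plot(rounds, maxima, marker='o') after the loop then draws one point per csv, rather than calling plot separately inside the loop.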

Python - ValueError: could not convert string to float

I am a beginner in python and I'm trying to graph some data from a file. The code is the following:
import matplotlib.pyplot as plt
import pandas as pd
from scipy.signal import find_peaks
import os
dataFrame = pd.read_csv('soporte.txt', sep='\t',skiprows=1, encoding = 'utf-8-sig')
x = dataFrame['Wavelength nm.']
y = dataFrame['Abs.']
indices, _ = find_peaks(y, threshold=1)
plt.plot(x, y)
plt.show()
And I get the following error:
ValueError: could not convert string to float: '-0,04008'
I'll show you a piece of the file I am trying to work with:
"soporte.spc - RawData"
"Wavelength nm." "Abs."
180,0 -0,04008
181,0 -0,00084
182,0 -0,00746
183,0 0,00854
184,0 -0,01525
185,0 -0,00354
Thank you very much!!!
Use the decimal=',' option in pandas, i.e.,
dataFrame = pd.read_csv('soporte.txt', sep='\t', skiprows=1, encoding='utf-8-sig', decimal=',')
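A quick self-contained check of that option, with the snippet from the question inlined (tabs are assumed as the separator, matching sep='\t'):

```python
import io
import pandas as pd

raw = ('"soporte.spc - RawData"\n'
       '"Wavelength nm."\t"Abs."\n'
       '180,0\t-0,04008\n'
       '181,0\t-0,00084\n')

# decimal=',' tells pandas to read '180,0' as the float 180.0
df = pd.read_csv(io.StringIO(raw), sep='\t', skiprows=1, decimal=',')
print(df['Abs.'].iloc[0])  # -0.04008
```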

Plot csv file with min/max/avg using python

I have a csv file with measuring points every 100 seconds.
5 different measurements have been made.
I am looking for a simple way to create, in Python, a line plot of the average value at each measuring point, with the min and max values shown as error bars.
The CSV file looks like this:
0,0.000622,0.000027,0.000033,0.000149,0.000170
100,0.014208,0.017168,0.017271,0.015541,0.027972
200,0.042873,0.067629,0.035837,0.033160,0.018006
300,0.030700,0.018563,0.016640,0.020294,0.020338
400,0.018906,0.016507,0.015445,0.018734,0.017593
500,0.027344,0.045668,0.015214,0.016045,0.015520
600,0.021233,0.098135,0.016511,0.015892,0.018342
First column is in seconds.
Maybe someone can help me with a quick idea.
thanks in advance
--------------------added
What i have so far:
import pandas as pd
input_df = pd.read_csv("input.csv")
input_df['max_value'] = input_df.iloc[:,1:6].max(axis=1)
input_df['min_value'] = input_df.iloc[:,1:6].min(axis=1)
input_df['avg_value'] = input_df.iloc[:,1:6].mean(axis=1)
input_df.plot(x=input_df["0"], y='avg_value')
How can I add error bars (min_value, max_value)?
You can use matplotlib. For your problem:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data=np.array([ [0,0.000622,0.000027,0.000033,0.000149,0.000170],
[100,0.014208,0.017168,0.017271,0.015541,0.027972],
[200,0.042873,0.067629,0.035837,0.033160,0.018006],
[300,0.030700,0.018563,0.016640,0.020294,0.020338],
[400,0.018906,0.016507,0.015445,0.018734,0.017593],
[500,0.027344,0.045668,0.015214,0.016045,0.015520],
[600,0.021233,0.098135,0.016511,0.015892,0.018342] ])
mean = np.mean(data[:,1:], axis=1)
mins = np.min(data[:,1:], axis=1)
maxs = np.max(data[:,1:], axis=1)
# errorbar wants a (2, N) array: lower distances first, upper distances second
errs = np.vstack((mean - mins, maxs - mean))
plt.figure()
plt.errorbar(data[:,0], mean, yerr=errs)
plt.show()
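If you would rather stay with the DataFrame from the question, the same error arrays can be built with pandas. A sketch with the data inlined (in practice it would come from read_csv with header=None, since the file has no header row):

```python
import pandas as pd

# first three rows of the csv, inlined so the sketch is self-contained
df = pd.DataFrame([
    [0,   0.000622, 0.000027, 0.000033, 0.000149, 0.000170],
    [100, 0.014208, 0.017168, 0.017271, 0.015541, 0.027972],
    [200, 0.042873, 0.067629, 0.035837, 0.033160, 0.018006],
])

vals = df.iloc[:, 1:6]
avg = vals.mean(axis=1)
# errorbar expects distances from the mean, not the raw min/max values
yerr = [avg - vals.min(axis=1), vals.max(axis=1) - avg]
```

plt.errorbar(df.iloc[:, 0], avg, yerr=yerr) then draws the average line with min/max bars.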

Drawing a graph using matplotlib

I have a text file that looks the following way:
14:49:15
0.00152897834778
14:49:22
0.00193500518799
14:49:29
0.00154614448547
14:49:36
0.0024299621582
14:49:43
0.00161910057068
14:49:50
0.00165987014771
14:49:57
0.00150108337402
I want to create a graph using the plot() method in which every odd line from the text file becomes a coordinate on the x axis and every even line becomes the y value for the x line that precedes it.
In this particular case 14:49:15 would be the first x and 0.00152897834778 the first y
You could convert the datetimes to numeric values and plot them:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import datetime
string = """14:49:15
0.00152897834778
14:49:22
0.00193500518799
14:49:29
0.00154614448547
14:49:36
0.0024299621582
14:49:43
0.00161910057068
14:49:50
0.00165987014771"""
x = string.split('\n')[::2]
x = matplotlib.dates.date2num([datetime.datetime.strptime(xi, '%H:%M:%S') for xi in x])
y = np.array(string.split('\n')[1::2], dtype=float)
plt.plot(x, y)
You may split the input by linebreaks .split("\n") and convert every second one to a datetime object. Then using plt.plot_date() gives you a plot showing the times.
import datetime
import numpy as np
import matplotlib.pyplot as plt
u = u"""14:49:15
0.00152897834778
14:49:22
0.00193500518799
14:49:29
0.00154614448547
14:49:36
0.0024299621582
14:49:43
0.00161910057068
14:49:50
0.00165987014771"""
# split the string by linebreaks
l = u.split("\n")
# take every second substring and convert it to datetime
x = [datetime.datetime.strptime(i, "%H:%M:%S") for i in l[::2] ]
# take every second substring starting at the second one and convert it to float
y = [float(yi) for yi in l[1::2]]
plt.plot_date(x,y)
plt.show()

Python: faster way of counting occurrences in numpy arrays (large dataset)

I am new to Python. I have a numpy.array whose size is 66049x1 (66049 rows and 1 column). The values are sorted smallest to largest and are of float type, with some of them repeated.
I need to determine the frequency of occurrence of each value (the number of times a given value is equalled but not surpassed, i.e. X<=x in statistical terms), in order to later plot the sample cumulative distribution function.
The code I am currently using is below, but it is extremely slow, as it has to loop 66049x66049 = 4362470401 times. Is there any way to speed up this piece of code? Would a dictionary perhaps help? Unfortunately I cannot change the size of the arrays I am working with.
+++Function header+++
...
...
directoryPath=raw_input('Directory path for native csv file: ')
csvfile = numpy.genfromtxt(directoryPath, delimiter=",")
x=csvfile[:,2]
x1=numpy.delete(x, 0, 0)
x2=numpy.zeros((x1.shape[0]))
x2=sorted(x1)
x3=numpy.around(x2, decimals=3)
count=numpy.zeros(len(x3))
#Iterates over the x3 array to find the number of occurrences of each value
for i in range(len(x3)):
    temp=x3[i]
    for j in range(len(x3)):
        if (temp<=x3[j]):
            count[j]=count[j]+1
#Creates a 2D array with (value, occurrences)
x4=numpy.zeros((len(x3), 2))
for i in range(len(x3)):
    x4[i,0]=x3[i]
    x4[i,1]=numpy.around((count[i]/x1.shape[0]),decimals=3)
...
...
+++Function continues+++
import numpy as np
import pandas as pd
from collections import Counter
import matplotlib.pyplot as plt
arr = np.random.randint(0, 100, (100000,1))
df = pd.DataFrame(arr)
cnt = Counter(df[0])
df_p = pd.DataFrame(cnt, index=['data'])
df_p.T.plot(kind='hist')
plt.show()
That whole script took a very short time to execute (~2 s) for a (100,000x1) array. I didn't time it precisely, but if you share how long yours takes we can compare.
I used Counter from collections to count the number of occurrences; my experience with it has always been great (time-wise). I converted the result into a DataFrame to plot it and used T to transpose.
Your data does repeat a bit, so you may be able to refine this further, but as it is, it's pretty fast.
Edit
Create CDF using cumsum()
import numpy as np
import pandas as pd
from collections import Counter
import matplotlib.pyplot as plt
arr = np.random.randint(0, 100, (100000,1))
df = pd.DataFrame(arr)
cnt = Counter(df[0])
df_p = pd.DataFrame(cnt, index=['data']).T
df_p['cumu'] = df_p['data'].cumsum()
df_p['cumu'].plot(kind='line')
plt.show()
Edit 2
For scatter() plot you must specify the (x,y) explicitly. Also, calling df_p['cumu'] will result in a Series, not a DataFrame.
To properly display a scatter plot you'll need the following:
import numpy as np
import pandas as pd
from collections import Counter
import matplotlib.pyplot as plt
arr = np.random.randint(0, 100, (100000,1))
df = pd.DataFrame(arr)
cnt = Counter(df[0])
df_p = pd.DataFrame(cnt, index=['data']).T
df_p['cumu'] = df_p['data'].cumsum()
df_p.plot(kind='scatter', x='data', y='cumu')
plt.show()
You should use np.where and then count the length of the obtained vector of indices:
indices = np.where(x3 <= value)
count = len(indices[0])
If efficiency counts, you can use the numpy function bincount, which needs integers:
import numpy as np
a=np.random.rand(66049).reshape((66049,1)).round(3)
z=np.bincount(np.int32(1000*a[:,0]))
it takes about 1ms.
Regards.
# for counting a single value
mask = (my_np_array == value_to_count).astype('uint8')
# or a condition
mask = (my_np_array <= max_value).astype('uint8')
count = np.sum(mask)
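For the cumulative counts the question actually asks about (how many samples are <= each value), np.unique with return_counts plus a cumsum gives the whole sample CDF in one vectorized pass, with no Python loop. A sketch on random data of the same shape as the question's x3:

```python
import numpy as np

# stand-in for x3: 66049 sorted floats rounded to 3 decimals
x = np.sort(np.round(np.random.rand(66049), 3))

values, counts = np.unique(x, return_counts=True)
cum = np.cumsum(counts)   # number of samples <= each unique value
ecdf = cum / x.size       # sample cumulative distribution function
```

plt.step(values, ecdf) then plots it; on 66049 values this takes milliseconds rather than looping 66049x66049 times.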
