Get string from numpy array - python

I'm looking for an easy way to extract filename1.npy from the following numpy array test:
array([['filename1.npy'],
['filename2.npy'],
['filename3.npy']],
dtype=object)
I can easily do str(test[0]), but then I still have the brackets around it: ['filename1.npy']. I just want the bare name so I can insert it into np.load(path+'filename1.npy').

Thanks for the quick response:
test[0][0]
works fine to get filename1.npy.
To loop through all the names, I now use:
for index in range(3):
    name = test[index]
    name = name[0]
    # load a single file
    single_file = np.load(os.path.join(path, name))
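If the array always has this single-column shape, you can also flatten it and iterate over the names directly. A minimal sketch, assuming test and path are as in the question:
import os
import numpy as np

# test has shape (3, 1), so ravel() yields a flat array of the filename strings
for name in test.ravel():
    single_file = np.load(os.path.join(path, name))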

Related

PySpark / Python Slicing and Indexing Issue

Can someone let me know how to pull certain values out of a Python output?
I would like to retrieve the value 'ocweeklyreports' from the following output using either indexing or slicing:
'config': '{"hiveView":"ocweeklycur.ocweeklyreports"}'
This should be relatively easy; however, I'm having trouble defining the slicing / indexing configuration.
The following successfully gives me 'ocweeklyreports':
myslice = config['hiveView'][12:30]
However, I need the indexing or slicing modified so that I will get whatever value comes after 'ocweeklycur.'.
I'm not sure what output you're dealing with or how robust you want it to be, but if it's just a string you can do something similar to this (for a quick and dirty solution):
text = '{"hiveView":"ocweeklycur.ocweeklyreports"}'  # renamed from input to avoid shadowing the builtin
index_start = text.index('.') + 1  # index just past the '.', where the value starts
final_response = text[index_start:-2]  # drop the trailing '"}'
print(final_response)  # prints ocweeklyreports
Again, not the most elegant solution, but hopefully it helps or at least offers a starting point. Another, more robust solution would be to use regex, but I'm not that skilled in regex at the moment.
You could do almost all of it using regex.
See if this helps:
import re

def search_word(di):
    st = di["config"]["hiveView"]
    p = re.compile(r'^ocweeklycur\.(?P<word>\w+)')
    m = p.search(st)
    return m.group('word')

if __name__ == "__main__":
    d = {'config': {"hiveView": "ocweeklycur.ocweeklyreports"}}
    print(search_word(d))
The following worked best for me:
# Extract the value of the "hiveView" key
hive_view = config['hiveView']
# Split the string on the '.' character
parts = hive_view.split('.')
# The value you want is the second part of the split string
desired_value = parts[1]
print(desired_value) # Output: "ocweeklyreports"
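Since the output in the question shows the value as a JSON string, another option is to parse it with the json module first and then split. A small sketch, assuming the config value really is JSON text as shown:
import json

raw = {'config': '{"hiveView":"ocweeklycur.ocweeklyreports"}'}  # shape of the output in the question
hive_view = json.loads(raw['config'])['hiveView']
print(hive_view.split('.', 1)[1])  # ocweeklyreports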

How to turn items from extracted data to numbers for plotting in python?

So I have a text document with a lot of values from calculations. I have extracted all the data and stored it in an array, but the elements are text strings, not numbers I can use for anything. I want to plot the numbers in a graph; how would I turn the strings into numbers and remove unnecessary characters like commas and the 'n=' prefix, for instance?
Here is code, and under is my print statement.
import numpy as np
['n=1', 'n=2', 'n=3', 'n=4', 'n=5', 'n=6', 'n=7', 'n=8', 'n=9', 'n=10', 'n=11', 'n=12', 'n=13', 'n=14', 'n=15', 'n=16', 'n=17', 'n=18', 'n=19']
I'd use the conversion method presented in this post within the extract function, so e.g.
...
delta_x.append(strtofloat(words[1]))
...
where you might as well do the conversion inline (my strtofloat is a function you'd have to write based on the mentioned post) and within a try/except block, so failed conversions are simply left out of your list.
To make it more consistent, any conversion error should discard the whole line affected, so you might want to use intermediate variables and a check for each field.
Btw., regarding the argument to the extract function: it would seem logical to make it a string containing the file name from which to extract the data.
EDIT: as a side note, you might want to look into pandas, which is a library specialised in numerical data handling. Depending on the format of your data file there are probably standard functions to read your whole file into a DataFrame (which is a kind of super-charged array class which can handle a lot of data processing as well) in a single command.
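A minimal sketch of that idea (strtofloat and the file name are hypothetical stand-ins, and the try/except discards the whole line on any conversion error, as suggested above):
def strtofloat(s):
    # hypothetical helper: strip commas and an 'n=' style label, e.g. 'n=3,' -> 3.0
    return float(s.strip(',').split('=')[-1])

delta_x = []
with open('data.txt') as infile:  # placeholder file name
    for line in infile:
        words = line.split()
        try:
            delta_x.append(strtofloat(words[1]))
        except (ValueError, IndexError):
            continue  # ignore lines that fail to convert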
I would consider using a regular expression:
import re

match_number = re.compile(r'-?[0-9]+\.?[0-9]*(?:[Ee]-?[0-9]+)?')
for line in infile:
    words = line.split()
    new_delta_x = float(re.search(match_number, words[1]).group())
    new_abs_error = float(re.search(match_number, words[7]).group())
    new_n = int(re.search(match_number, words[10]).group())
    delta_x.append(new_delta_x)
    abs_error.append(new_abs_error)
    n.append(new_n)
But it seems like your data is already in CSV format, so try using pandas.
Read the data into a dataframe without a header (the column names will be integers):
import numpy as np
import pandas as pd
df = pd.read_csv('approx_derivative_sine.txt', header=None)
delta_x = df[1].to_numpy()
abs_error = df[7].to_numpy()
# if n is always number of the row
n = df.index.to_numpy(dtype=int)
# if n is always in the form 'n=<integer>'
n = df[10].apply(lambda x: x.strip()[2:]).to_numpy(dtype=int)
If you could post a few rows of your approx_derivative_sine.txt file, that would be useful.
From the given array in the question, if you would like to remove the 'n=' prefix and convert each element to an integer, you may try the following.
import numpy as np
array = np.array(['n=1', 'n=2', 'n=3', 'n=4', 'n=5', 'n=6', 'n=7', 'n=8', 'n=9',
'n=10', 'n=11', 'n=12', 'n=13', 'n=14', 'n=15', 'n=16', 'n=17', 'n=18', 'n=19'])
array = [int(i.replace('n=', '')) for i in array]
print(array)
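If you would rather stay in numpy and keep the result as an array, np.char.lstrip can strip the leading characters in one vectorized call. A small sketch on the same kind of array:
import numpy as np

labels = np.array(['n=1', 'n=2', 'n=3', 'n=19'])
# strip the leading 'n' and '=' characters, then cast the remainder to int
numbers = np.char.lstrip(labels, 'n=').astype(int)
print(numbers)  # [ 1  2  3 19]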

How can I speed up iterating over an array during the run time of my script?

My script cleans arrays of unwanted strings like "##$!" and other stuff.
The script works as intended, but it is extremely slow when the Excel file has many rows.
I tried numpy to see if it could speed things up, but I'm not too familiar with it, so I might be using it incorrectly.
xls = pd.ExcelFile(path)
df = xls.parse("Sheet2")
TeleNum = np.array(df['telephone'].values)

def replace(orignstr):  # removes the unwanted strings from numbers
    for elem in badstr:
        if elem in orignstr:
            orignstr = orignstr.replace(elem, '')
    return orignstr

for UncleanNum in tqdm(TeleNum):
    newnum = replace(str(UncleanNum))  # call the replace function
    df['telephone'] = df['telephone'].replace(UncleanNum, newnum)  # store the string back in the data frame
I also tried removing the function to see if that would help, placing everything in one block of code, but the speed remained the same.
for UncleanNum in tqdm(TeleNum):
    orignstr = str(UncleanNum)
    for elem in badstr:
        if elem in orignstr:
            orignstr = orignstr.replace(elem, '')
    print(orignstr)
    df['telephone'] = df['telephone'].replace(UncleanNum, orignstr)
TeleNum = np.array(df['telephone'].values)
The current speed of the script on an Excel file of 200,000 rows is around 70 it/s, and it takes around an hour to finish, which is not great since this is just one function of many.
I'm not too advanced in Python; I'm just learning as I script, so any pointers would be appreciated.
Edit:
Most of the array elements I'm dealing with are numbers, but some have strings in them. I'm trying to remove all string characters from the array elements.
Ex.
FD3459002912
*345*9002912$
If you are trying to clear everything that isn't a digit from the strings, you can use re.sub directly, like this:
import re

string = "FD3459002912"
regex_result = re.sub(r"\D", "", string)  # \D matches any non-digit character
print(regex_result)  # 3459002912
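Note that the row-by-row df['telephone'].replace call in the question is likely what dominates the runtime; pandas can apply the same regex to the whole column at once. A sketch, assuming the df from the question:
import pandas as pd

# vectorized: strip every non-digit character from the entire column in one pass
df['telephone'] = df['telephone'].astype(str).str.replace(r'\D', '', regex=True)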

Python list-like string representation of numpy array

Consider there is a few rather long numpy arrays:
import random
import numpy as np

long_array1 = np.array([random.random() for i in range(10000)])
long_array2 = np.array([random.random() for i in range(10000)])
long_array3 = np.array([random.random() for i in range(10000)])
I would like to save the arrays into the file file.dat, one row per numpy array.
The text representation of each array should be in a Python-list-like format, i.e. in the case of the following numpy array:
a = np.array([0.3213,0.145323,0.852,0.723,0.421452])
I want to save the following line in the file.
[0.3213,0.145323,0.852,0.723,0.421452]
Here is what I do:
array1_str = ",".join([str(item) for item in long_array1])
array2_str = ",".join([str(item) for item in long_array2])
array3_str = ",".join([str(item) for item in long_array3])
with open("file.dat", "w") as file_arrays:
    file_arrays.write("[" + array1_str + "]\n")
    file_arrays.write("[" + array2_str + "]\n")
    file_arrays.write("[" + array3_str + "]\n")
Everything actually works fine; I am just doubtful about the efficiency of my code. I am almost sure there is a better and more efficient way to do this.
I welcome comments on the random list generation as well.
This is the fastest way:
','.join(map(str, long_array1.tolist()))
If you want to keep the text more compact, this is fast too:
','.join(map(lambda x: '%.7g' % x, long_array1.tolist()))
Source: I benchmarked every possible method for this as the maintainer of the pycollada library.
Since you want a Python-list-like format, how about actually using the Python list format?
array1_str = repr(list(long_array1))
That's going to stay mostly in C-land and performance should be much better.
If you don't want the spaces, take 'em out after:
array1_str = repr(list(long_array1)).replace(" ", "")
Memory usage may be an issue, however.
Sounds like you might be able to use numpy.savetxt() for this; something like:
def dump_array(outfile, arraylike):
    outfile.write('[')
    numpy.savetxt(outfile, arraylike, newline=',', fmt="%s")
    outfile.write(']\n')
although I don't think the corresponding numpy.loadtxt() will be able to read in this format.
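If you later need to read such lines back, Python's ast.literal_eval can parse the list-like rows. A minimal sketch, assuming file.dat was written as above:
import ast
import numpy as np

with open("file.dat") as file_arrays:
    # each line is a literal Python list, e.g. [0.3213,0.145323,...]
    arrays = [np.array(ast.literal_eval(line)) for line in file_arrays]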

ValueError: could not convert string to float: id

I'm running the following python script:
#!/usr/bin/python
import os,sys
from scipy import stats
import numpy as np
f = open('data2.txt', 'r').readlines()
N = len(f) - 1
for i in range(0, N):
    w = f[i].split()
    l1 = w[1:8]
    l2 = w[8:15]
    list1 = [float(x) for x in l1]
    list2 = [float(x) for x in l2]
    result = stats.ttest_ind(list1, list2)
    print result[1]
However, I get errors like:
ValueError: could not convert string to float: id
I'm confused by this.
When I try this for only one line in an interactive session, instead of the for loop in the script:
>>> from scipy import stats
>>> import numpy as np
>>> f=open('data2.txt','r').readlines()
>>> w=f[1].split()
>>> l1=w[1:8]
>>> l2=w[8:15]
>>> list1=[float(x) for x in l1]
>>> list1
[5.3209183842, 4.6422726719, 4.3788135547, 5.9299061614, 5.9331108706, 5.0287087832, 4.57...]
It works well.
Can anyone explain a little bit about this?
Thank you.
Obviously some of your lines don't have valid float data; specifically, some lines have the text id, which can't be converted to float.
When you try it at the interactive prompt you are trying only the first line, so the best way is to print the line where you are getting this error; then you will know the wrong line, e.g.:
#!/usr/bin/python
import os, sys
from scipy import stats
import numpy as np

f = open('data2.txt', 'r').readlines()
N = len(f) - 1
for i in range(0, N):
    w = f[i].split()
    l1 = w[1:8]
    l2 = w[8:15]
    try:
        list1 = [float(x) for x in l1]
        list2 = [float(x) for x in l2]
    except ValueError, e:
        print "error", e, "on line", i
        continue  # skip the bad line instead of reusing stale lists
    result = stats.ttest_ind(list1, list2)
    print result[1]
My error was very simple: the text file containing the data had some space (hence invisible) characters on the last line.
As an output of grep, I had '45 ' instead of just '45'.
This error is pretty verbose:
ValueError: could not convert string to float: id
Somewhere in your text file, a line has the word id in it, which can't really be converted to a number.
Your test code works because the word id isn't present in line 2.
If you want to catch that line, try this code. I cleaned your code up a tad:
#!/usr/bin/python
import os, sys
from scipy import stats
import numpy as np

for index, line in enumerate(open('data2.txt', 'r').readlines()):
    w = line.split(' ')
    l1 = w[1:8]
    l2 = w[8:15]
    try:
        list1 = map(float, l1)
        list2 = map(float, l2)
    except ValueError:
        print 'Line {i} is corrupt!'.format(i=index)
        break
    result = stats.ttest_ind(list1, list2)
    print result[1]
For a Pandas dataframe with a column of numbers with commas, use this:
df["Numbers"] = [float(str(i).replace(",", "")) for i in df["Numbers"]]
So values like 4,200.42 would be converted to 4200.42 as a float.
Bonus 1: This is fast.
Bonus 2: More space efficient if saving that dataframe in something like Apache Parquet format.
Perhaps your numbers aren't actually numbers, but letters masquerading as numbers?
In my case, the font I was using meant that "l" and "1" looked very similar. I had a string like 'l1919' which I thought was '11919' and that messed things up.
Your data may not be what you expect; it seems you're expecting, but not getting, floats.
A simple way to figure out where this occurs would be to add a try/except to the for loop:
for i in range(0, N):
    w = f[i].split()
    l1 = w[1:8]
    l2 = w[8:15]
    try:
        list1 = [float(x) for x in l1]
        list2 = [float(x) for x in l2]
    except ValueError, e:
        # report the error in some way that is helpful -- maybe print out i
        continue
    result = stats.ttest_ind(list1, list2)
    print result[1]
Shortest way:
df["id"] = df['id'].str.replace(',', '').astype(float)  # if ',' is the problem
df["id"] = df['id'].str.replace(' ', '').astype(float)  # if a blank space is the problem
If you know the possible non-float values, update them first. For example, to replace empty string values with 0.0:
df.loc[df['score'] == '', 'score'] = 0.0
df['score']=df['score'].astype(float)
I solved a similar situation with a basic technique using pandas. First load the file with pandas; it's pretty simple:
data = pd.read_excel('link to the file')
Then set the index of data to the respective column that needs to be changed. For example, if your data has ID as one attribute or column, set the index to ID:
data = data.set_index("ID")
Then delete all the rows with "id" as the value instead of a number, using the following command:
data = data.drop("id", axis=0)
Hope this helps.
