I'm trying to convert a CSV file to a numpy array. When I try to specify the required column, it says the column doesn't exist in the CSV file. So I checked the CSV file and found that a # is being added to the header name I specified.
Is there any way I can avoid that?
Code below:
np.savetxt(path13 + 'bkg_sample.csv', predictions, fmt='%5.5f', delimiter=',', header='predict')
Header name: # predict
Error in Jupyter: 'DataFrame' object has no attribute 'predict'
predict = pd.read_csv('/home/user1/AAAA/Predictions/Code testingbkg.csv', usecols=[2,3,4,5,6,7,8,9,10])
predictions = model.predict(standardscaler.transform(predict))
np.savetxt(path13 + 'bkg_sample.csv', predictions, fmt='%5.5f', delimiter=',', header='predict')
true = pd.read_csv('/home/user1/AAAA/Predictions/bkg_sample.csv')
true[true.predict > 0.67]  # This is where the error occurs
Links to images:
bkgsample : https://imgur.com/a/tzh0o2M
predict.csv : https://imgur.com/a/DhPAzqa
Try listing out the columns of your DataFrame:
print(true.columns)
It looks like your bkg_sample.csv has no column named predict or even # predict.
Found the answer here: https://stackoverflow.com/a/17361181/16355784
So apparently np.savetxt inserts # because it writes the header as a comment line; to suppress it, pass comments='' to savetxt.
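For example, the call from the question with the comment prefix suppressed:
np.savetxt(path13 + 'bkg_sample.csv', predictions, fmt='%5.5f', delimiter=',', header='predict', comments='')  # header is now written as 'predict', with no leading '# '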
I'm trying to build a multi-class text classifier using spaCy. I have built the model, but I'm facing a problem applying it to my full dataset. The model I have built so far is in the screenshot:
Screenshot
Below is the code I used to apply to my full dataset using Pandas:
Messages = pd.read_csv('Messages.csv', encoding='cp1252')
Messages['Body'] = Messages['Body'].astype(str)
Messages['NLP_Result'] = nlp(Messages['Body'])._.cats
But it gives me the error:
ValueError: [E1041] Expected a string, Doc, or bytes as input, but got: <class 'pandas.core.series.Series'>
The reason I wanted to use Pandas in this case is that the dataset has 2 columns: ID and Body. I want to apply the NLP model only to the Body column, but I want the final dataset to have 3 columns: ID, Body and the NLP result, like in the screenshot above.
Thanks so much
I tried the Pandas apply method too, but had no luck. Code used:
Messages['NLP_Result'] = Messages['Body'].apply(nlp)._.cats
The error I got: AttributeError: 'Series' object has no attribute '_'
Expectation is to generate 3 columns as described above
You should pass a callable to the Series.apply call:
Messages['NLP_Result'] = Messages['Body'].apply(lambda x: nlp(x)._.cats)
Here, each value of the Body column will be assigned to the x variable.
nlp(x) will create a processed document that carries the properties you'd like to access, so nlp(x)._.cats will return the expected value.
import spacy
import classy_classification
import csv
import pandas as pd

with open('Deliveries.txt', 'r') as d:
    Deliveries = d.read().splitlines()
with open("Not Spam.txt", "r") as n:
    Not_Spam = n.read().splitlines()

data = {}
data["Deliveries"] = Deliveries
data["Not_Spam"] = Not_Spam

# NLP model
nlp = spacy.blank("en")
nlp.add_pipe("text_categorizer",
    config={
        "data": data,
        "model": "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
        "device": "gpu"
    }
)
Messages['NLP_Result'] = Messages['Body'].apply(lambda x: nlp(x)._.cats)
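With that in place, Messages should keep its original ID and Body columns and gain the new NLP_Result column, which you can check with, for example:
print(Messages[['ID', 'Body', 'NLP_Result']].head())  # assumes the column names from the question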
I have a Tensor dataset that is a list of file names and a Pandas dataframe that contains metadata for each file.
filename_ds = tf.data.Dataset.list_files(path + "/*.bmp")
metadata_df = pandas.read_csv(path + "/metadata.csv")
File names contain an idx that references a line in the metadata dataframe, like "3_data.bmp" where 3 is the idx. I hoped to call filename_ds.map(combine_data).
It appears to be not as simple as parsing the file name and doing a dataframe lookup. The following fails because filename is a Tensor: inside a Dataset.map() call execution is not eager, so methods like .numpy() aren't available and I cannot get a string value from the filename to do my regex and dataframe lookup.
def combine_data(filename):
    idx = re.findall(r"(\d+)_data\.bmp", filename)[0]
    val = metadata_df.loc[metadata_df["idx"] == idx]["test-col"]
    ...
New to Tensorflow, and I suspect I'm going about this in an odd way. What would be the correct way to go about this? I could list my files and concatenate a dataset for each file, but I'm wondering if I'm just missing the "Tensorflow way" of doing it.
One way of iterating is through as_numpy_iterator():
import re

dataset_list = list(filename_ds.as_numpy_iterator())
for each_file in dataset_list:
    file_name = each_file.decode('utf-8')  # this will contain the abs path /user/me/so/file_1.png
    try:
        idx = re.findall(r"(\d+).*\.png", file_name)[0]  # changed for my case
    except IndexError:
        print("Exception==>")
        continue
    print(f"File:{file_name}, idx:{idx}")
I am reading in a netCDF file and want to keep data from other variables based on the filter condition (vwc < 5), where vwc is a variable.
My errors include AttributeError: NetCDF: Attribute not found and AttributeError: 'numpy.ndarray' object has no attribute 'where', since I tried using the where function but may have used incorrect syntax.
import netCDF4 as nc4
import numpy as np

# read in netCDF file
f = nc4.Dataset(fn[0], 'r')
# read in group from file
sm_am = f.groups['Retrieval_Data_AM']
# extract variables
vwc = np.asarray(sm_am.variables['water_content'][:,:])
any_variable = np.asarray(sm_am.variables['generic_variables'][:,:])
Then I want to extract other variables only when vwc<5.
This works for filtering out my data.
x = vwc > 5.0 # Boolean condition
new_var = old_var.copy() # copying old data as new variable
new_var[x] = np.nan # replace elements that meet the condition with NaN
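Applied to the variables from the question, np.where gives an equivalent one-step alternative (note it also masks vwc == 5, unlike the > 5.0 mask above):
filtered = np.where(vwc < 5.0, any_variable, np.nan)  # keep any_variable only where vwc < 5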
I'm reading wind components (u and v) data from a netCDF file from NCEP/NCAR Reanalysis 1 to make some computations. I'm using xarray to read the file.
In one of the computations, I'd like to mask out all data below some threshold, setting them equal to the missing_value attribute. I don't want to use NaNs.
However, when reading the data with xarray, the missing_value attribute (present in the variable in the netCDF file) isn't copied to the xarray.DataArray that contains the data.
I couldn't find a way to copy this attribute from the netCDF file variable with xarray.
Here is an example of what I'm trying to do:
import xarray as xr
import numpy as np
DS1 = xr.open_dataset("u_250_850_2009012600-2900.nc")
DS2 = xr.open_dataset("v_250_850_2009012600-2900.nc")
u850 = DS1.uwnd.sel(time='2009-01-28 00:00', level=850, lat=slice(10,-60), lon=slice(260,340))
v850 = DS2.vwnd.sel(time='2009-01-28 00:00', level=850, lat=slice(10,-60), lon=slice(260,340))
vvel850 = np.sqrt(u850*u850 + v850*v850)
jet850 = vvel850.where(vvel850 >= 12)
#jet850 = vvel850.where(vvel850 >= 12, vvel850.missing_value)
The last commented line is what I want to do: use the missing_value attribute to fill where vvel850 < 12. The last uncommented line gives me NaNs, which is what I'm trying to avoid.
Is this the default behaviour of xarray when reading data from netCDF? Either way, how can I get this attribute from the file variable?
An additional information: I'm using PyNGL (http://www.pyngl.ucar.edu/) to make contour plots and it doesn't work with NaN's.
Thanks.
Mateus
The "missing_value" attribute is kept in the encoding dictionary. Other attributes like "units" or "standard_name" are kept in the attrs dictionary. For example:
v850.encoding['missing_value']
You may also be interested in a few other xarray features that may help your use case:
xr.open_dataset has a mask_and_scale keyword argument. Setting it to False turns off converting missing/fill values to NaNs.
DataArray.to_masked_array will convert a DataArray (filled with NaNs) to a numpy.MaskedArray for use in plotting programs like Matplotlib or PyNGL.
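A short sketch of both options, reusing the file name and the jet850 variable from the question:
import xarray as xr

# keep raw values: missing/fill values are not converted to NaN,
# so the missing_value attribute stays on the variable
DS1 = xr.open_dataset("u_250_850_2009012600-2900.nc", mask_and_scale=False)

# or convert the NaN-filled result to a masked array for plotting
masked = jet850.to_masked_array()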
I've written the function (tested and working) below:
import pandas as pd

def ConvertStrDateToWeekId(strDate):
    # expected input format, e.g. '2016-7-15 22:44:09'
    aDate = pd.to_datetime(strDate)
    wk = aDate.isocalendar()[1]
    yr = aDate.isocalendar()[0]
    Format_4_5_4_date = str(yr) + str(wk)
    return Format_4_5_4_date
and from what I have seen online I should be able to use it this way:
ml_poLines = result.value.select('PURCHASEORDERNUMBER', 'ITEMNUMBER', 'PRODUCTCOLORID', 'RECEIVINGWAREHOUSEID', ConvertStrDateToWeekId('CONFIRMEDDELIVERYDATE'))
However when I "show" my dataframe the "CONFIRMEDDELIVERYDATE" column is the original datetime string! NO errors are given.
I've also tried this:
ml_poLines['WeekId'] = (ConvertStrDateToWeekId(ml_poLines['CONFIRMEDDELIVERYDATE']))
and get the following error:
"ValueError: Cannot convert column into bool: please use '&' for 'and', '|' for 'or', '~' for 'not' when building DataFrame boolean expressions." which makes no sense to me.
I've also tried this with no success.
x = ml_poLines.toPandas();
x['testDates'] = ConvertStrDateToWeekId(x['CONFIRMEDDELIVERYDATE'])
ml_poLines2 = spark.createDataFrame(x)
ml_poLines2.show()
The above generates the following error:
AttributeError: 'Series' object has no attribute 'isocalendar'
What have I done wrong?
Your function ConvertStrDateToWeekId takes a string. But in the following line, the argument of the function call is a Series of strings:
x['testDates'] = ConvertStrDateToWeekId(x['CONFIRMEDDELIVERYDATE'])
A possible workaround for this error is to use pandas' apply function:
x['testDates'] = x['CONFIRMEDDELIVERYDATE'].apply(ConvertStrDateToWeekId)
But without more information about the kind of data you are processing it is hard to provide further help.
This was the work-around that I got to work:
# convert the CONFIRMEDDELIVERYDATE to a WeekId
x = ml_poLines.toPandas()
x['WeekId'] = x[['ITEMNUMBER', 'CONFIRMEDDELIVERYDATE']].apply(lambda y: ConvertStrDateToWeekId(y[1]), axis=1)
ml_poLines = spark.createDataFrame(x)
ml_poLines.show()
Not quite as clean as I would like.
Maybe someone else can propose a cleaner solution.
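For what it's worth, a possibly cleaner sketch (untested against the original data) is to register the function as a Spark UDF, which avoids the round-trip through pandas:
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

# wrap the existing function; Spark applies it row by row
convert_udf = udf(ConvertStrDateToWeekId, StringType())
ml_poLines = ml_poLines.withColumn('WeekId', convert_udf('CONFIRMEDDELIVERYDATE'))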