Is there any way to batch a large input file (111 MB, about 22 million cells: 222 rows by roughly 100k columns) in MLlib, similar to what is done in this Keras batching tutorial?
The file contains the features extracted from 222 images using the above tutorial, but instead of a Keras model I would like to replicate that code with PySpark and MLlib.
Unfortunately I don't have enough resources to handle such a big file in memory, and the computation fails with a Java heap space error.
The file structure is: each row represents an image, column "_c0" holds the 0/1 label, and columns "_c1" through "_c100353" hold the extracted features.
Here's my code. I don't care about precision or accuracy; I'm only interested in running the model to collect resource-usage metrics.
from pyspark.ml import Pipeline
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml.feature import StringIndexer, VectorAssembler, VectorIndexer

sql, sc = init_spark()

# file3 points to the 111 MB CSV described above;
# maxColumns must be raised above Spark's default to read 100k+ columns
df = sql.read.option("maxColumns", 100400).load(file3, format="csv", inferSchema="true", sep=",", header="false")

# Index the label column
labelIndexer = StringIndexer(inputCol="_c0", outputCol="indexedLabel").fit(df)

# Assemble every feature column into a single vector column
cols = df.columns
cols.remove("_c0")
assembler = VectorAssembler(inputCols=cols, outputCol="features")
data = assembler.transform(df)

featureIndexer = \
    VectorIndexer(inputCol="features", outputCol="indexedFeatures", maxCategories=100).fit(data)

(trainingData, testData) = data.randomSplit([0.7, 0.3])

dt = DecisionTreeClassifier(labelCol="indexedLabel", featuresCol="indexedFeatures")

# Chain indexers and tree in a Pipeline
pipeline = Pipeline(stages=[labelIndexer, featureIndexer, dt])

# Train model. This also runs the indexers.
model = pipeline.fit(trainingData)

# Make predictions.
predictions = model.transform(testData)

# Select example rows to display.
predictions.select("prediction", "indexedLabel", "features").show(100)
predictions.printSchema()

evaluator = MulticlassClassificationEvaluator(
    labelCol="indexedLabel", predictionCol="prediction", metricName="accuracy")
accuracy = evaluator.evaluate(predictions)
print("Accuracy = %g " % accuracy)
Please don't suggest using the sparkdl library's DeepImageFeaturizer for feature extraction, because it's completely broken.
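For reference, init_spark() is not shown in the question; this is a minimal sketch of what such a helper usually looks like (the app name and driver-memory value are illustrative assumptions, not taken from the original code):
from pyspark.sql import SparkSession

def init_spark():
    # The driver-memory setting only takes effect if it is applied before the JVM starts.
    spark = (SparkSession.builder
             .appName("mllib-decision-tree")       # hypothetical app name
             .config("spark.driver.memory", "8g")  # illustrative value
             .getOrCreate())
    return spark, spark.sparkContext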
Related
I'm using sklearn and have a model for network intrusion detection whose data has over 50 columns.
I want to make the model transferable and used elsewhere for other data that isn't just X_test. As far as I know, I calculate the mean and standard deviation of the training data, and then use that to transform the testing data.
If I were to use this model elsewhere, with JUST the prediction part of my code, how would I transfer it and make it usable? Am I saving the wrong part here (fit_new_input), or should I be saving the x = sc.fit_transform part, since that is ultimately what the new test data will be using later on?
import pandas as pd
from joblib import dump
from joblib import load
from sklearn.preprocessing import MinMaxScaler
from tensorflow import keras

trainingdata = pd.read_csv('trainingdata.csv', sep=r'\s*,\s*', engine='python')
testdata = pd.read_csv('testdata.csv', sep=r'\s*,\s*', engine='python')

saved_model = keras.models.load_model("Model.h5")

sc = MinMaxScaler()

x = pd.get_dummies(trainingdata.drop(['Label', ], axis=1))
x = sc.fit_transform(x)  # <<< SHOULD I BE SAVING THIS INSTEAD AND REUSING IT ELSEWHERE?

testdata = testdata.drop(['Label', ], axis=1)
fit_new_input = sc.transform(testdata)  # <<<< I'M SAVING THIS, IS THIS CORRECT?
dump(fit_new_input, 'scaler_transform.joblib')
scaler_transform = load('scaler_transform.joblib')

#pred = saved_model.predict(scaler.reshape(-1,77))
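For comparison, the usual scikit-learn pattern is to persist the fitted scaler object itself rather than a transformed array, so the same scaling can be re-applied to any future data. A minimal, self-contained sketch (the column names and values are made up for illustration):
import pandas as pd
from joblib import dump, load
from sklearn.preprocessing import MinMaxScaler

train = pd.DataFrame({'f1': [0.0, 5.0, 10.0], 'f2': [1.0, 2.0, 3.0]})

scaler = MinMaxScaler()
scaler.fit(train)              # learn min/max from the training data only
dump(scaler, 'scaler.joblib')  # persist the fitted transformer, not its output

# elsewhere / later:
scaler = load('scaler.joblib')
new_data = pd.DataFrame({'f1': [2.5], 'f2': [1.5]})
scaled = scaler.transform(new_data)  # re-use the training-time min/max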
I am still lost on the Spark and Deep Learning model.
If I have a (2D) time series that I want to use for e.g. an LSTM model. Then I first convert it to a 3D array and then pass it to the model. This is normally done in memory with numpy.
But what happens when I manage my BIG file with Spark?
The solutions I've seen so far all work with Spark and then convert the data to a 3D numpy array at the end. But that approach puts everything in memory... or am I thinking wrong?
A common Spark LSTM solution looks like this:
# create fake dataset
import random
import numpy as np
from keras import models
from keras import layers

# a SparkSession named `spark` is assumed to already exist
data = []
for node in range(0, 100):
    for day in range(0, 100):
        data.append([str(node),
                     day,
                     random.randrange(15, 25, 1),
                     random.randrange(50, 100, 1),
                     random.randrange(1000, 1045, 1)])

df = spark.createDataFrame(data, ['Node', 'day', 'Temp', 'hum', 'press'])

# transform the data: one row per day, one column per node/measurement
df_trans = df.groupBy('day').pivot('Node').sum()
df_trans = df_trans.orderBy(['day'], ascending=True)

# make train/test data
trainDF = df_trans[df_trans.day < 70]
testDF = df_trans[df_trans.day > 70]

################## we lost the SPARK #############################
# create train/test array (collect() pulls everything to the driver)
trainArray = np.array(trainDF.select(trainDF.columns).collect())
testArray = np.array(testDF.select(trainDF.columns).collect())

# drop the target columns
xtrain = trainArray[:, 0:-1]
xtest = testArray[:, 0:-1]

# take the target column
ytrain = trainArray[:, -1:]
ytest = testArray[:, -1:]

# reshape 2D to 3D
xtrain = xtrain.reshape((xtrain.shape[0], 1, xtrain.shape[1]))
xtest = xtest.reshape((xtest.shape[0], 1, xtest.shape[1]))

# build the model
model = models.Sequential()
model.add(layers.LSTM(1, input_shape=(1, 400)))
model.add(layers.Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')

# train the model
loss = model.fit(xtrain, ytrain, batch_size=10, epochs=100)
My problem with this is:
If my Spark data has millions of rows and thousands of columns, then the "create train/test array" step (the collect() calls) will cause a memory overflow when it tries to transform the data. Am I right?
My question is:
Can SPARK be used to train LSTM models on big data, or is it not possible?
Is there any Generator function that can solve this? Like the Keras Generator function?
Perhaps you have too many columns in your dataframe - why would you have hundreds of columns? Are you collecting that many data points for each timestamp? If so, then I would argue that you need to subset your data. In my experience, a time-series is driven largely by the timestamp - even a small number of data points stretched across a long collection of time provides enormous information. In other words, you have a dataset that is wide and tall, but it should perhaps be thin and tall instead.
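On the generator part of the question above: PySpark's DataFrame.toLocalIterator() streams rows to the driver instead of collect()-ing everything at once, so a Keras-style generator can be built on top of it. A rough, untested sketch (the batch size, dtype, and the assumption that the last column is the target are mine, not from the original code):
import numpy as np

def spark_batch_generator(sdf, batch_size=10):
    """Yield (x, y) batches from a Spark DataFrame without collecting it all at once."""
    while True:  # Keras expects the generator to loop forever
        batch = []
        for row in sdf.toLocalIterator():
            batch.append([float(v) for v in row])
            if len(batch) == batch_size:
                arr = np.array(batch, dtype='float32')
                x = arr[:, :-1].reshape((arr.shape[0], 1, arr.shape[1] - 1))
                y = arr[:, -1:]
                yield x, y
                batch = []
        # any incomplete final batch is simply dropped in this sketch

# usage with the fake data above (70 training rows / batch_size 10 = 7 steps per epoch):
# model.fit(spark_batch_generator(trainDF), steps_per_epoch=7, epochs=100)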
I'm trying to use Hugging Face zero-shot text classification with 12 labels on a large data set (57K sentences) read from a CSV file as follows:
import pandas as pd
import tensorflow as tf
from transformers import pipeline

csv_file = tf.keras.utils.get_file('batch.csv', filename)  # `filename` and `labels` are defined elsewhere
df = pd.read_csv(csv_file)
classifier = pipeline('zero-shot-classification')
results = classifier(df['description'].to_list(), labels, multi_class=True)
This keeps crashing because Python runs out of memory.
I tried to create a dataset instead as follows:
from datasets import load_dataset
dataset = load_dataset('csv', data_files=filename)
But I'm not sure how to use it with Hugging Face's classifier. What is the best way to batch-process the classification?
I eventually would like to feed it over 1M sentences for classification.
The problem isn't that your dataset is too big to fit into RAM, but that you're trying to pass the whole thing through a large transformer model at once. Hugging Face's pipelines don't do any mini-batching under the hood at the moment, so pass the sequences one by one or in small subgroups instead:
results = [classifier(desc, labels, multi_class=True) for desc in df['description']]
If you're using a GPU, you'll get the best speed by using as many sequences at each pass as will fit into the GPU's memory, so you could try the following:
batch_size = 4  # see how big you can make this number before OOM
classifier = pipeline('zero-shot-classification', device=0)  # to utilize GPU
sequences = df['description'].to_list()
results = []
for i in range(0, len(sequences), batch_size):
    results += classifier(sequences[i:i+batch_size], labels, multi_class=True)
and see how large you can make batch_size before you get OOM errors.
The zero-shot classification model takes one input at a time, and it's a very heavy model to run, so as recommended run it on a GPU only.
A very simple approach is to classify the descriptions one at a time and collect the results into a list:
import pandas as pd
from transformers import pipeline

df = pd.read_csv(csv_file)
classifier = pipeline('zero-shot-classification')

filter_keys = ['labels']
output = []
for index, row in df.iterrows():
    d = {}
    sequence = row['description']
    result = classifier(sequence, labels, multi_class=True)  # `labels` as defined in the question
    temp = list(map(result.get, filter_keys))
    d['description'] = row['description']
    d['label'] = temp[0][0]  # labels come back sorted by score, so this is the top label
    output.append(d)

# convert the list of dictionaries into a pandas DataFrame
new = pd.DataFrame(output)
new.head()
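Regarding the load_dataset attempt in the question: the resulting dataset can also be pushed through the pipeline in batches with Dataset.map. A rough sketch (it assumes the CSV has a 'description' column and that filename and labels are defined as in the question; the batch size is arbitrary):
from datasets import load_dataset
from transformers import pipeline

classifier = pipeline('zero-shot-classification')
dataset = load_dataset('csv', data_files=filename)['train']  # 'train' is the default split name

def classify(batch):
    # the pipeline accepts a list of texts and returns one result dict per text
    results = classifier(batch['description'], labels, multi_class=True)
    return {'label': [r['labels'][0] for r in results]}  # keep the top-scoring label

dataset = dataset.map(classify, batched=True, batch_size=8)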
There is a deep-learning-based model using transfer learning and LSTM in this article; the author used 10-fold cross validation (as explained in Table 3) and took the average of the results.
I am familiar with 10-fold cross validation, where we need to divide the data and pass it to the model, but in this code (here) I can't figure out how to partition the data and pass it.
There are two train/test/dev datasets (one for emotion analysis and one for sentiment analysis; we use both for transfer learning, but my focus is on emotion analysis). The raw data is in a couple of txt files, and after running the model it produces two new txt files, one with the predicted labels and one with the true labels.
This is the relevant code in the main file:
model = BiLstm(args, data, ckpt_path='./' + args.data_name + '_output/')
if args.mode == 'train':
    model.train(data)
    sess = model.restore_last_session()
    model.predict(data, sess)
if args.mode == 'test':
    sess = model.restore_last_session()
    model.predict(data, sess)
in which 'data' is an instance of the Data class (code) that includes the test/train/dev datasets:
I think this is where I need to pass the partitioned data. If I am right, how can I do the partitioning and perform 10-fold cross validation?
data = Data('./data/'+args.data_name+'data_sample.bin', './data/'+args.data_name+'vocab_sample.bin',
            './data/'+args.data_name+'word_embed_weight_sample.bin', args.batch_size)

class Data(object):

    def __init__(self, data_path, vocab_path, pretrained, batch_size):

        self.batch_size = batch_size

        data, vocab, pretrained = self.load_vocab_data(data_path, vocab_path, pretrained)
        self.train = data['train']
        self.valid = data['valid']
        self.test = data['test']
        self.train2 = data['train2']
        self.valid2 = data['valid2']
        self.test2 = data['test2']
        self.word_size = len(vocab['word2id']) + 1
        self.max_sent_len = vocab['max_sent_len']
        self.max_topic_len = vocab['max_topic_len']
        self.word2id = vocab['word2id']
        word2id = vocab['word2id']
        #self.id2word = dict((v, k) for k, v in word2id.iteritems())
        self.id2word = {}
        for k, v in six.iteritems(word2id):
            self.id2word[v] = k
        self.pretrained = pretrained
By the look of it, it seems the train method can take a session and continue training from an existing model: def train(self, data, sess=None).
So with very minimal changes to the existing code and libraries you can do something like this.
First, load all the data and build the model:
data = Data('./data/'+args.data_name+'data_sample.bin','./data/'+args.data_name+'vocab_sample.bin',
'./data/'+args.data_name+'word_embed_weight_sample.bin',args.batch_size)
model = BiLstm(args, data, ckpt_path='./' + args.data_name + '_output/')
Then create the cross-validation data sets, something like:
def get_new_data_object():
    return Data('./data/'+args.data_name+'data_sample.bin', './data/'+args.data_name+'vocab_sample.bin',
                './data/'+args.data_name+'word_embed_weight_sample.bin', args.batch_size)
cross_validation = []
for i in range(10):
    tmp_data = get_new_data_object()
    tmp_data.train = ...   # get 90% of tmp_data.train (leave one fold out)
    tmp_data.valid = ...   # get 90% of tmp_data.valid
    tmp_data.test = ...    # get 90% of tmp_data.test
    tmp_data.train2 = ...  # get 90% of tmp_data.train2
    tmp_data.valid2 = ...  # get 90% of tmp_data.valid2
    tmp_data.test2 = ...   # get 90% of tmp_data.test2
    cross_validation.append(tmp_data)
Then run the model n times (10 for 10-fold cross-validation):
sess = None
for data in cross_validation:
    model.train(data, sess)
    sess = model.restore_last_session()
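One concrete way to produce the 90%/10% splits left as placeholders above is scikit-learn's KFold; a rough sketch (it assumes data.train and friends are indexable sequences, which may not match the actual binary format):
import numpy as np
from sklearn.model_selection import KFold

def kfold_parts(items, n_splits=10):
    """Split an indexable sequence into (90% kept, 10% held-out) pairs, one pair per fold."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    folds = []
    for keep_idx, hold_idx in kf.split(np.arange(len(items))):
        folds.append(([items[i] for i in keep_idx], [items[i] for i in hold_idx]))
    return folds

# e.g. train_folds = kfold_parts(data.train)
# then, inside the loop above: tmp_data.train = train_folds[i][0]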
Keep in mind to pay attention to some key ideas:
I don't know exactly how your data is structured, but that affects the way you split it into test, train and (in your case) valid sets.
The split has to be exactly the same for each triple of test, train and valid; it can be done randomly or by taking a different part every time, as long as it is consistent.
You can train the model n times with cross-validation, or create n models and pick the best one, to avoid overfitting.
This code is just a draft; you can implement it however you like. There are some great libraries that already implement such functionality, and of course it can be optimized (for example, by not reading the whole data files each time).
One more consideration is to separate the model creation from the data, especially the data argument of the model constructor; from a quick look it seems it only uses the dimensions of the data, so it's good practice not to pass the whole object.
Moreover, if the model integrates other properties of the data object into its state when it is created (like the data itself), my code might not work and a more surgical approach would be needed.
Hope this helps and points you in the right direction.
I'm very new to Apache Spark and big data in general. I'm using the ALS method to create rating predictions based on a matrix of users, items, and ratings. The confusing part is that when I run the script to calculate the predictions, the results are different every time, without the input or the requested predictions changing. Is this expected behavior, or should the results be identical? Below is the Python code for reference.
from pyspark import SparkContext
from pyspark.mllib.recommendation import ALS
sc = SparkContext("local", "CF")
# get ratings from text
def parseRating(line):
    fields = line.split(',')
    return (int(fields[0]), int(fields[1]), float(fields[2]))
# define input and output files
ratingsFile = 's3n://weburito/data/weburito_ratings.dat'
unratedFile = 's3n://weburito/data/weburito_unrated.dat'
predictionsFile = '/root/weburito/data/weburito_predictions.dat'
# read training set
training = sc.textFile(ratingsFile).map(parseRating).cache()
# get unknown ratings set
predictions = sc.textFile(unratedFile).map(parseRating)
# define model
model = ALS.train(training, rank = 5, iterations = 20)
# generate predictions
predictions = model.predictAll(predictions.map(lambda x: (x[0], x[1]))).collect()
This is expected behaviour. The factor matrices in ALS are initialized randomly (well actually one of them is, and the other is solved based on that initialization in the first step).
So different runs will give slightly different results.
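If reproducible results are needed, ALS.train in pyspark.mllib also accepts a seed argument (depending on your Spark version), which fixes the random initialization; a minimal sketch using the variables from the question, with an arbitrary seed value:
# Same model as in the question, but with a fixed seed so repeated runs match
model = ALS.train(training, rank=5, iterations=20, seed=42)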