Keras custom fit_generator for numeric dataframe - python

I have several CSV files placed in a directory. What I want to do is create a flow from this directory where each file is taken, preprocessed (null value filling, outlier treatment, etc.), and then each data point is passed to a Keras model, and this process should repeat itself for every file placed in the directory. Any suggestions on how to create a data flow like the one Keras provides for image data? Also, this should happen in Python :)
Thanks in advance!

I don't think that Keras natively supplies such functionality.
You should make your own converter, using something like glob to go over each file, send it to preprocessing functions, and finally save it as a format readily usable by Keras, such as a numpy array.
You might want to have a look here for an example of inputting multiple files (although in this case they are already numpy arrays, not csv files) to use in the training of a model.
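For illustration, a minimal sketch of such a generator feeding model.fit_generator(); the column name "target", the simple mean-based null fill, and the batch size are assumptions you would replace with your own preprocessing:

import glob
import numpy as np
import pandas as pd

def csv_directory_generator(directory, batch_size=32):
    # Keras expects generators passed to fit_generator() to loop forever.
    while True:
        for path in sorted(glob.glob(directory + "/*.csv")):
            df = pd.read_csv(path)
            df = df.fillna(df.mean())                        # simple null-value fill
            features = df.drop(columns=["target"]).values    # assumes a "target" label column
            labels = df["target"].values
            for start in range(0, len(df), batch_size):
                yield (features[start:start + batch_size],
                       labels[start:start + batch_size])

# model.fit_generator(csv_directory_generator("data"), steps_per_epoch=100, epochs=10)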

Related

How to export a fasttext model created by gensim, to a binary file?

I'm trying to export the fasttext model created by gensim to a binary file. But the docs are unclear about how to achieve this.
What I've done so far:
model.wv.save_word2vec_format('model.bin')
But this does not seem like the best solution, since later, when I want to load the model using:
fasttext.load_facebook_model('model.bin')
I get into an infinite loop, whereas loading the fasttext.model created by model.save('fasttext.model') completes in around 30 seconds.
Using .save_word2vec_format() saves just the full-word vectors, to a simple format that was used by Google's original word2vec.c release. It doesn't save unique things about a full FastText model. Such files would be reloaded with the matched .load_word2vec_format().
The .load_facebook_format() method loads files in the format saved by Facebook's original (non-Python) FastText code release. (The name of this method is pretty misguided, since 'facebook' could mean so many different things other than a specific data format.) Gensim doesn't have a matched method for saving to this same format – though it probably wouldn't be very hard to implement, and would make symmetric sense to support this export option.
Gensim's models typically implement gensim-native .save() and .load() options, which make use of a mix of Python 'pickle' serialization and raw large-array files. These are your best options if you want to save the full model state, for later reloading back into Gensim.
(Such files can't be loaded by other FastText implementations.)
Be sure to keep the multiple related files written by this .save() (all with the same user-supplied prefix) together when moving the saved model to a new location.
Update (May 2020): Recent versions of gensim such as 3.8.3 and later include a new contributed FastText.save_facebook_model() method which saves to the original Facebook FastText binary format.
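For reference, a hedged sketch of the two save/load pairings described above, assuming gensim 3.8.3 or later, where the Facebook-format export is available in gensim.models.fasttext:

from gensim.models import FastText
from gensim.models.fasttext import load_facebook_model, save_facebook_model

# `model` is assumed to be an already-trained gensim FastText model.

# Gensim-native format: saves full model state, reloadable only by gensim.
model.save('fasttext.model')
model = FastText.load('fasttext.model')

# Facebook binary format: also usable by Facebook's original fastText tool.
save_facebook_model(model, 'model.bin')
model = load_facebook_model('model.bin')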

Dask Read Data from Binary File

I am trying to implement an out of core processing version of k-means clustering algorithm in python. I learned about dask from this git project K-Mean Parallel... Dask...
I use the same git project but am trying to load my data which is in the form of a binary file. The binary file contains data points with 1024 floating point features each.
My problem is how to load the data if it is very huge, i.e. larger than the available memory itself. I tried to use numpy's fromfile function but my kernel somehow dies. Some of my questions are:
Q. Is it possible to load data into numpy from a file created by some other source (the file was not created by numpy but by a C program)?
Q. Is there a module for dask that can load data directly from a binary file? I have seen csv files used but nothing related to binary files.
I only dabble in Dask, but calling np.fromfile in the code below via Dask delayed should allow you to work with it lazily. That said, I'm working on this myself so this is currently a partial answer.
For your first question: I currently load .bin files created by a LabVIEW program with no issues using code similar to this:
import os
import numpy as np

myfile = "data.bin"            # path to your binary file
chunkSize = 1_000_000          # bytes per chunk; adjust as needed for your purposes
itemSize = np.dtype(np.float32).itemsize
fileSize = os.path.getsize(myfile)

data = []
with open(myfile, "rb") as f:  # binary read mode
    for _ in range(0, fileSize, chunkSize):
        # read one chunk's worth of float32 values at a time
        data.append(np.fromfile(f, dtype=np.float32, count=chunkSize // itemSize))
For the second question: I have not been able to find anything in Dask for dealing with binary files directly. I find that converting to something Dask can use is worth it.
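Building on the dask.delayed idea above, here is a hedged sketch that wraps per-chunk np.fromfile calls into a lazy dask array. The file name, total record count, and the 1024-feature width from the question are assumptions to replace with your own values:

import numpy as np
import dask
import dask.array as da

n_features = 1024           # floats per data point (from the question)
n_records = 1_000_000       # total points; must be known or derived from the file size
records_per_chunk = 10_000

def load_chunk(path, start_record, n_rec):
    # Seek to the chunk's byte offset, then read only that chunk.
    with open(path, "rb") as f:
        f.seek(start_record * n_features * 4)   # 4 bytes per float32
        arr = np.fromfile(f, dtype=np.float32, count=n_rec * n_features)
    return arr.reshape(-1, n_features)

chunks = [
    da.from_delayed(
        dask.delayed(load_chunk)("data.bin", start, records_per_chunk),
        shape=(records_per_chunk, n_features),
        dtype=np.float32,
    )
    for start in range(0, n_records, records_per_chunk)
]
X = da.concatenate(chunks, axis=0)   # one lazy (n_records, 1024) dask array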

Import TensorFlow data from pyspark

I want to create a predictive model on several hundred GBs of data. The data needs some non-intensive preprocessing that I can do in pyspark but not in tensorflow. In my situation, it would be much more convenient to pass the result of the preprocessing directly to TF, ideally treating the pyspark data frame as a virtual input file to TF, instead of saving the preprocessed data to disk. However, I haven't the faintest idea how to do that and I couldn't find anything about it anywhere on the internet.
After some thought, it seems to me that I actually need an iterator (like the one defined by tf.data.Iterator) over Spark's data. However, I found comments online hinting that the distributed structure of Spark makes this very hard, if not impossible. Why so? Imagine that I don't care about the order of the lines; why should it be impossible to iterate over the Spark data?
It sounds like you simply want to use tf.data.Dataset.from_generator(): you define a Python generator which reads samples out of Spark. Although I don't know Spark very well, I'm certain you can do a reduce to the server that will be running the tensorflow model. Better yet, if you're distributing your training, you can reduce to the set of servers that need some shard of your final dataset.
The Importing Data programmer's guide covers the Dataset input pipeline in more detail. The tensorflow Dataset will provide you with an iterator that's accessed directly by the graph, so there's no need for tf.placeholders or for marshaling data outside of the tf.data.Dataset.from_generator() code you write.
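As a hedged sketch of that pattern (here streaming rows to the driver with toLocalIterator() rather than reducing to a training server), assuming a pyspark DataFrame spark_df with a float feature-vector column "features" and a scalar "label" column; all names are placeholders:

import numpy as np
import tensorflow as tf

def spark_row_generator():
    # toLocalIterator() streams partitions to the driver one at a time,
    # so the whole DataFrame never has to fit in driver memory.
    for row in spark_df.toLocalIterator():
        yield (np.asarray(row["features"], dtype=np.float32),
               np.float32(row["label"]))

dataset = tf.data.Dataset.from_generator(
    spark_row_generator,
    output_types=(tf.float32, tf.float32),
    output_shapes=(tf.TensorShape([None]), tf.TensorShape([])),
)
dataset = dataset.batch(32).prefetch(1)   # batches are pulled lazily by the graph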

Representing time sequence input/output in tensorflow

I've been working through the TensorFlow documentation (still learning), and I can't figure out how to represent input/output sequence data. My inputs are sequences of 20 8-entry vectors, making an 8x20xN matrix, where N is the number of instances. I'd like to eventually pass these through an LSTM for sequence-to-sequence learning. I know I need a 3D tensor, but I'm unsure which dimensions are which.
RTFMs with pointers to the correct documentation greatly appreciated. I feel like this is obvious and I'm just missing it.
As described in the excellent blog post by WildML, the proper way is to save your examples in a TFRecord using the tf.SequenceExample() format. Using TFRecords for this provides the following advantages:
You can split your data into many files and load each of them on a different GPU.
You can use TensorFlow utilities for loading the data (for example, using Queues to load your data on demand).
Your model code will be separate from your dataset processing (this is a good habit to have).
You can bring new data to your model just by putting it into this format.
TFRecords use protobuf (protocol buffers) to format your data; the protobuf documentation can be found here. The basic idea is that you define a format for your data (in this case tf.SequenceExample), save it to a TFRecord, and load it back using the same data definition. Code for this pattern can be found in this ipython notebook.
As my answer is mostly summarizing the WildML blog post on this topic, I suggest you check that out, again found here.
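For illustration, a minimal sketch of writing one such sequence to a TFRecord as a tf.train.SequenceExample; the feature names and the per-step labels are assumptions, and the 20x8 shape comes from the question above (the reading side would use a matching parse_single_sequence_example definition):

import numpy as np
import tensorflow as tf

sequence = np.random.rand(20, 8).astype(np.float32)   # 20 timesteps, 8 features each
labels = np.random.rand(20).astype(np.float32)        # one target per timestep (assumed)

ex = tf.train.SequenceExample()
# Per-sequence (context) features, e.g. the sequence length.
ex.context.feature["length"].int64_list.value.append(sequence.shape[0])
# Per-timestep features go into feature_lists.
inputs = ex.feature_lists.feature_list["inputs"]
targets = ex.feature_lists.feature_list["targets"]
for step_vec, step_label in zip(sequence, labels):
    inputs.feature.add().float_list.value.extend(step_vec)
    targets.feature.add().float_list.value.append(step_label)

with tf.io.TFRecordWriter("sequences.tfrecord") as writer:
    writer.write(ex.SerializeToString())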

save images in hdf5 files

I'm using Python 3.4.2 and I was wondering whether there is any way to save images to HDF5 files without changing the attributes of the dataset.
I mean, I want to use HDFView to see these images later.
What I'm doing...
I'm using the h5py package to save images as numpy arrays in different datasets. Then I need to change the datasets' attributes.
The final result is already what I had imagined, but I'm not really happy with it.
If there is another way, please share here...
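For reference, a minimal h5py sketch of this kind of setup, assuming that the attributes defined by the HDF5 Image specification are what makes HDFView render a dataset as a picture (file name, dataset name, and image contents are placeholders):

import h5py
import numpy as np

img = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)   # placeholder RGB image

with h5py.File("images.h5", "w") as f:
    dset = f.create_dataset("image_0", data=img)
    # Attributes from the HDF5 Image specification; viewers such as HDFView
    # use them to display the dataset as an image instead of a plain array.
    dset.attrs.create("CLASS", np.string_("IMAGE"))
    dset.attrs.create("IMAGE_VERSION", np.string_("1.2"))
    dset.attrs.create("IMAGE_SUBCLASS", np.string_("IMAGE_TRUECOLOR"))
    dset.attrs.create("INTERLACE_MODE", np.string_("INTERLACE_PIXEL"))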
