Implementing a CNN Deep Learning Model in C++

I apologize in advance if anything about the structure of my question or the way I’m asking it isn’t exactly correct; this is my first time asking a question on here.
The Problem
I’ve built a GUI application for my research position that interfaces with a radar sensor’s API in order to perform real-time imaging in a variety of formats (this uses C++/Qt). My next step will be architecting and implementing a CNN that will essentially take in image data retrieved by the sensor and perform binary classification on it. For the past week or so, I’ve had an absolutely horrible time attempting to include any kind of mainstream deep learning framework in my pre-existing app.
What I’ve Tried
TensorFlow
My first thought was to employ TensorFlow (due to its popularity) to construct my network in Python and then load the model into my C++ app. Unfortunately, I’ve been completely unable to include TensorFlow in my app properly due to a lack of any kind of clear documentation or instruction on how to do so.
PyTorch
After beating my head against a wall for a few days with TensorFlow, I figured I'd try a similar approach with PyTorch, because I've heard plenty of people commend it for being more user-friendly and intuitive. Once again, trying to include the PyTorch C++ API in my app has been a total nightmare. All I can find in the docs are a couple of tutorials in which CMake is used to generate a brand-new project, which won't work here since my app already has its own build setup.
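One route I'm now considering is linking libtorch directly into my existing qmake project instead of generating a fresh CMake one, with something like this in the .pro file (the path is a placeholder for wherever the libtorch distribution is unpacked, and the exact library names can vary by libtorch version):

```
# Hypothetical path: point LIBTORCH at the unpacked libtorch distribution.
LIBTORCH = /path/to/libtorch

INCLUDEPATH += $$LIBTORCH/include \
               $$LIBTORCH/include/torch/csrc/api/include

LIBS += -L$$LIBTORCH/lib -ltorch -ltorch_cpu -lc10
```

I have no idea yet whether this handles all of libtorch's link-time dependencies, but it avoids CMake entirely.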
I feel like I’m chasing my tail at this point; my next thought is to try again with another kind of deep learning framework, but I feel like I’ll fall right back into this same issue of being unable to include the library in my pre-existing app. Any recommendations/guidance would be greatly appreciated!

Related

Python and Bridge for JS

I'm starting a new project for personal purposes!
I work in finance, and I decided to create personal chart-viewing software to suit my needs. I thought I'd build a good part of the backend in Python, as it's a language I know quite well, use JavaScript for the graphics side, and embed everything in a webview inside a Windows form so it behaves like standalone software.
For the graphics I thought of using the Lightweight library for JS; I find it the most modern option today compared to the classics like Plotly and Matplotlib.
I'd use pywebview as a bridge between Python and JS and to render everything in a Windows form.
However, I find pywebview a bit difficult to use; the documentation around it is sparse and not very clear to me (my level of JS is really basic). Still, I believe it is one of the most convenient solutions.
I had also thought of using PyScript, but it still seems too early to use that library in production.
Questions:
Do you think such a job is feasible?
Do you know other libraries/better solutions to do such jobs?
What kind of approach would you use if you were to do such work?
What I am trying to achieve is to write most of the functions in Python and use JS only to make calls through buttons or to read data from various inputs.
While browsing the web I found a partially completed project by someone else (if you are reading this, thanks Filipe, you have been very helpful! You can see his project hosted on GitHub), but unfortunately I find it difficult to work with code I didn't produce myself.
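To be concrete, the bridge I have in mind looks roughly like this (the class and method names here are just placeholders for my own backend logic):

```python
class Api:
    """Methods on this object become callable from JS via pywebview's js_api."""

    def get_prices(self, symbol):
        # Placeholder data; the real backend would fetch or compute prices here.
        return {"symbol": symbol, "prices": [101.2, 102.5, 99.8]}


if __name__ == "__main__":
    import webview  # pip install pywebview

    # The window loads the JS front end and exposes Api() to it.
    window = webview.create_window("Charts", "index.html", js_api=Api())
    webview.start()
```

On the JS side the call would then be something like `window.pywebview.api.get_prices('AAPL').then(render)`, so the JS really only wires up buttons and inputs.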

How to implement OCL (Object Constraint Language) in Python

I am working on Google Colaboratory, and I have to implement OCL (Object Constraint Language), I searched a lot, but I didn't find how to implement it. Can someone give me an idea please?
AgileUML has a Python code generator from UML/OCL (itself written in OCL): this is the uml2py plugin for AgileUML (https://github.com/eclipse/agileuml). There are some videos on using the tool at: https://www.youtube.com/watch?v=NXq2L0fo7N4
It is certainly possible for you to implement OCL yourself, duplicating the efforts of one of the existing open-source implementations such as Eclipse OCL or USE. There is an official OMG specification that defines what you would need to do; however, it has many deficiencies that will require research to solve and design around. I would be surprised if you could produce a plausibly accurate "full" implementation of OCL from scratch in less than a person-year.
I suspect that you have mis-stated what you want to do or have misunderstood what someone has instructed you to do.
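To illustrate the scale difference: hand-translating a single OCL invariant into Python is trivial, while a full implementation means doing this systematically for the entire language. A minimal sketch (the class and constraint are invented for illustration):

```python
# OCL:  context Account inv NonNegative: self.balance >= 0
class Account:
    def __init__(self, balance):
        self.balance = balance


# Invariants expressed as named predicates over a context object.
invariants = {
    "NonNegative": lambda self: self.balance >= 0,
}


def check(obj):
    """Return the names of all invariants the object violates."""
    return [name for name, inv in invariants.items() if not inv(obj)]
```

If all you actually need is to check a handful of constraints on Python objects, something this simple may suffice; if you truly need OCL itself, use an existing implementation.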

Deploying and maintaining states of machine learning models [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 3 years ago.
Alright, so I am pretty new to the machine/deep learning world and am getting my hands dirty implementing algorithms, but there are a few gaps in my knowledge, and I want to understand how the industry handles this.
I am an experienced developer, with five and a half years of application (web and mobile) development involving complex use cases. I have detailed knowledge of how to take an application from development to production, starting right from scratch.
The tutorials and documentation available online explain how to develop a model locally, which basically takes the form of a Python script: it takes in some data, structured or unstructured, preprocesses it, and trains the model on it. That part is well covered, but no one explains how to take the model to production and how to maintain it there.
I have yet to figure out how to push a model to production. Also, once the model is deployed, which data should it be trained on on the server? The same data we used locally, or some other data? And when we make changes to the model and redeploy it, do we have to train it again (the model seems to be stateless and must be storing all of its training results and findings in memory)? If so, redeploying the model would mean downtime for the application while the model is retrained on the server.
I bet the industry handles all of this in one way or another. I want to understand the whole process, and references for it would be really great.
I am using TensorFlow, scikit-learn, and other popular libraries for development.
This is just my take, and it may not be standard.
A model is just a model: you can save it to a file, it's just a file, you can make copies of it; there's nothing esoteric about it.
One way of deploying is by exposing the .predict() method via some form of API. You can do this with your favorite web framework, but since you're using TensorFlow it's probably easiest with Flask/Django.
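A minimal sketch of that idea with Flask (the model here is a stand-in; you would load your saved TensorFlow/scikit-learn model in its place):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)


class DummyModel:
    """Stand-in for a real trained model loaded from disk."""

    def predict(self, rows):
        return [sum(row) for row in rows]


model = DummyModel()


@app.route("/predict", methods=["POST"])
def predict():
    # Expect JSON like {"rows": [[...], [...]]} and return predictions.
    rows = request.get_json()["rows"]
    return jsonify({"predictions": model.predict(rows)})


if __name__ == "__main__":
    app.run(port=5000)
```

Your application then just POSTs feature rows to /predict; the serving layer knows nothing about how the model was trained.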
Another thing: if you're not actually messing with the internals of layers, just use Keras, which is much more straightforward for the layman than raw TensorFlow, and since it sits on top of TensorFlow you're using the same tech.
So how do you handle retraining? ML is compute-intensive only during training, so you do that separately, then take your trained model and plug it back into your API. Ideally this can be done without actually interrupting production; at the end of the day it's just a file that can be overwritten. There are subtleties, of course, but it should be pretty close to seamless.
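Concretely, "just a file that can be overwritten" can be done atomically, so in-flight requests never see a half-written model. A stdlib-only sketch using pickle (a real setup would use the framework's own save format instead):

```python
import os
import pickle
import tempfile


def deploy_model(model, path):
    """Write the retrained model next to the live one, then swap atomically."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    with os.fdopen(fd, "wb") as f:
        pickle.dump(model, f)
    os.replace(tmp_path, path)  # atomic rename on POSIX and Windows


def load_model(path):
    with open(path, "rb") as f:
        return pickle.load(f)
```

The serving process either sees the old file or the new one, never a partial write, so redeploying a retrained model needs no downtime.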
As for continuing training after the fact, I would start with this answer.

Adding support for stateful RNN models within the TF Serving API?

I have an interesting, non-novel, but seemingly generally unsolved problem, at least in the sources I could find online. The issue is basically this:
I want to persist the states of my RNN between calls/invocations to it via the TF Serving API.
I have found quite a few people online talking about this problem, such as in the following links:
https://github.com/tensorflow/serving/issues/724
https://groups.google.com/a/tensorflow.org/forum/#!topic/discuss/00tipdqxRZk
Tensorflow Serving - Stateful LSTM
Unfortunately, while there is some discussion of the problem, and of a built-in fix being added to TF that isn't there yet, there are no actual guides/examples for how to work around it.
The closest thing to a guide I have been able to find is the following, from the groups.google link:
"As background for future readers, there are two broad cases to consider. The first is where you want to run an entire sequence computation, with multiple steps, each with state to preserve for the next step. This case is addressed with dynamic unrolling."
I am not really sure how to go about implementing this at all, though. I don't really want to share my code here, just because it is quite long for an SO post, and I don't expect anyone to read through it all and tell me what to do.
Just any general tips would be awesome.
I have two files right now that I am using to deploy and use my RNN with the Serving API, called rnn_save_model.py and rnn_client.py. rnn_save_model.py just creates the actual model in code and then uses SavedModelBuilder to save it to a file. rnn_client.py then passes some parameters to the model once I have it loaded via the following command:
tensorflow_model_server --port=9000 --model_name=rnn --model_base_path=/tmp/rnn_model/
I am just not sure where to add code to get the model to load a state stored in a file, since the model itself is "created" once in rnn_save_model.py, and I don't see how to pass states in cleanly via rnn_client.py so that they get added to the graph. In its current state, that file just interfaces with a model loaded in memory, versus actually editing the model, which is what would be needed to load previous state from a file.
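From what I can piece together, the usual workaround is to make the RNN state an explicit input and output of the served signature, and have the client round-trip it between otherwise-stateless calls; roughly this pattern (the server call here is a stub standing in for the real TF Serving request, not actual TF code):

```python
def serve_step(inputs, state):
    """Stub for a TF Serving request whose signature takes and returns state."""
    # Stand-in arithmetic for whatever the served RNN cell computes.
    new_state = [s + x for s, x in zip(state, inputs)]
    outputs = [2 * s for s in new_state]
    return outputs, new_state


def run_sequence(sequence, initial_state):
    """Client-side loop: feed each call the state the previous call returned."""
    state = list(initial_state)
    outputs = []
    for step_inputs in sequence:
        step_out, state = serve_step(step_inputs, state)
        outputs.append(step_out)
    return outputs, state
```

The server stays stateless; the client (or a session store keyed by client ID) owns the state between invocations. Am I on the right track, and if so, how do I express the state tensors in the SavedModel signature?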
I really appreciate any help!

Up-to-date Django-Facebook-Heroku Tutorial

I am trying to learn 3 tools at once, and I am in deep misery. Is there any up-to-date tutorial to build a simple Django app that uses Facebook Graph API and runs on Heroku?
I believe that your approach is too broad. I would recommend learning each piece individually and then piecing them together.
The official Django tutorial is a good place to start for that piece. I'm not experienced with either of the others, so I have no links to provide.
Remember, sometimes shortcuts take longer. Learn how to use each tool properly and you'll come out ahead in the end, and it will probably take less time to develop your final application.
