I have a React Native app where I want to send an image to my Flask backend, do some image processing (annotations) on it, then return the new image to React Native to display.
I spent a whole day trying to figure this out but was unsuccessful. Does anyone have any ideas on how to achieve this?
I plan on using Firebase's storage system to store these images so I wouldn't mind using that either if that makes things easier.
What I've tried so far is sending the image URI to Flask and reading the image file; I was able to do the image processing, but I couldn't figure out how to send the new image back to React Native...
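For the "send the new image back" part, one option is to have Flask return the processed bytes directly in the response body. A minimal sketch, assuming the client POSTs the file as multipart/form-data under a field named `photo` (the `/process` route and the field name are made-up names, and `annotate()` is a placeholder for the real processing):

```python
import io
from flask import Flask, request, send_file

app = Flask(__name__)

def annotate(image_bytes):
    # placeholder: replace with your real OpenCV/Pillow annotation code
    return image_bytes

@app.route('/process', methods=['POST'])
def process():
    uploaded = request.files['photo']        # multipart field name (assumed)
    result = annotate(uploaded.read())       # run your processing
    # stream the processed bytes straight back to the client
    return send_file(io.BytesIO(result), mimetype='image/png')
```

On the React Native side the response can then be consumed as a blob or data URI; the alternative is to return a URL instead, as the answer below suggests.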
How are you sending the image to Flask right now?
Typically you could implement an async function on RN that awaits a response.
In plain English:
A function that uploads an image to the back end and waits for the image to be processed.
It expects in return a URL of the image on the back end (or Firestore).
I want to create a web application (a flashcard AI built with Flask), part of which is a bot that needs to interact directly with the user through speech recognition and text-to-speech. I have pyttsx3 and speech_recognition installed for that. Where I'm confused is how I'm supposed to get the user's audio as input and then send it to the backend. I have tried looking up YouTube tutorials and asking other people about this; the only success I've had is learning about navigator.mediaDevices.getUserMedia. I want to make the communication fluent, and I will have to send the data to the back end as well. I am not sure how to send it to the back end and capture the user's media fluently. I could use navigator.mediaDevices.getUserMedia and convert the result into an audio file (not sure how to do that yet, but I think I'll figure it out eventually, and having the user upload an audio recording won't be nice at all), but then that'll take up a lot of space in the database.
If you just want to trigger some action based on voice, you can use the Web Speech API.
https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API
This API can give you text-based captions, which you can easily store in the database.
If you need to store audio on the server side, you would convert it to some lossy format like MP3 or AAC to save space.
I'm trying to send an image from my Django server to an app. There are two approaches to this problem:
1) send the URL of the image
2) send the encoded image
I'm not sure which one to use, for the following reasons:
If I choose the first way, the JSON response will be quicker, but if a user returns to a page twice, it will make two requests to the server. Moreover, it would be impossible to cache the image.
In the case of the second approach, I can cache the image in the client app, but does encoding increase the overall response time from the server?
Which is the recommended approach for sending an image from an API?
Currently I'm sending the image as follows:
return JsonResponse({'image': model.image.url})
The answer is approach 1. Encoding images will destroy your server response time unless they're very tiny, like thumbnails or avatars, and even then I wouldn't make a habit of it. I've seen apps rendered unusable by this practice. Most browsers cache images automatically during sessions. If server performance is a big concern and images are dragging it down, I'll generally store images in some kind of static file host like S3 and use an edge cache in real-world environments.
You probably want to go with #1 and send over the URL. Usually you would not deliver the image files directly from your server but from cloud storage like S3 or GCS. More advanced setups even include CDNs (Content Delivery Networks) like Fastly or CloudFront to enable easy caching and serve traffic at a global scale.
If you should choose to send over the encoded image (in base64, for example), be aware that the increased body size will drive response times up sharply and might even lead to complete timeouts at around 30s. Users will ultimately see longer response times and be more likely to churn, and you will pay considerably more for your servers.
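To put a number on the overhead both answers warn about: base64 expands binary data by roughly a third, before JSON escaping and decode time are even counted. A quick check:

```python
import base64

# stand-in for a ~300 KB image file
image_bytes = b"\x89PNG" + b"\x00" * 300_000
encoded = base64.b64encode(image_bytes)

overhead = len(encoded) / len(image_bytes)
print(f"raw: {len(image_bytes)} bytes, base64: {len(encoded)} bytes "
      f"({overhead:.2f}x)")
# → raw: 300004 bytes, base64: 400008 bytes (1.33x)
```

That extra third is paid on every response, which is why caching a URL-addressed image tends to win even though the JSON payload itself is a second request.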
I'm a beginner web developer and I've spent so many hours trying to do the following simple thing and have gotten nowhere... :( Can someone help?
Every 5-10 seconds, a file called latest_event.png on my computer is updated with new contents. Using Python's Flask server, I just want my client application to periodically poll the server and render the latest image in a web browser. I also want the user to be able to check and uncheck a box; their selection should be sent back to the server, and their choice will affect how I render the latest_event.png image.
Alternatively, I've also explored the Flask-SocketIO library, but can't seem to get it working with image passing.
Would someone save the day and share with me the barebones server and client code to do this?
Thanks!
from flask import Flask, send_from_directory
import os

app = Flask(__name__)

@app.route('/latest_event.png')
def latest_event():
    return send_from_directory(os.path.join(app.root_path, 'static'),
                               'latest_event.png', mimetype='image/png')

You can put your image into a folder named 'static' and serve it through Flask this way.
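For the checkbox half of the question, a small JSON endpoint alongside the image route can hold the user's choice between polls. A barebones sketch; the `/settings` route and the `annotate` field are my own names, not from the answer above:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
state = {"annotate": True}       # the checkbox choice, kept server-side

@app.route('/settings', methods=['POST'])
def settings():
    # the client POSTs {"annotate": true/false} whenever the box is toggled;
    # the code that renders latest_event.png can then consult state["annotate"]
    state["annotate"] = bool(request.get_json(force=True).get("annotate"))
    return jsonify(state)
```

The browser side then just needs a `setInterval` that re-fetches the image URL (with a cache-busting query string) and a change handler on the checkbox that POSTs to this endpoint.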
I want to upload an image to the blobstore, because I want to support files larger than 1 MB. Now, the only way I can find is for the client to issue a POST with the metadata (geo-location, tags, and what-not), which the server puts in an entity. In this entity the server also puts the key of the blob where the actual image data is going to be stored, and the server concludes the request by returning the URL from create_upload_url() to the client. This works fine; however, I can end up with inconsistency, such as when the second request is never issued and the blob is never filled. The entity is then pointing to an empty blob.
The only solution to this problem I can see is to trigger a deferred task which checks whether the blob was ever filled with an upload. I'm not a big fan of this solution, so I'm asking whether anybody has a better one in mind.
I went through exactly the same thought process, but in Java, and ended up using Apache Commons FileUpload. I'm not familiar with Python, but you'll just need a way of handling a multipart/form-data upload.
I upload the image and my additional fields together, using jQuery to assemble the multipart form data, which I then POST to my server.
On the server side I then take the file and write it to Google Cloud Storage using the Google Cloud Storage client library (Python link). This can be done in one chunk, or 'streamed' if it's a large file. Once it's in GCS, your App Engine app can read it using the same library, or you can serve it directly with a public URL, depending on the ACL you set.
I have written a program using Python and OpenCV where I perform operations on a video stream in run time. It works fine. Now if I want to publish it on a website where someone can see this using their browser and webcam, how do I proceed?
I'm not really sure what you want to happen, but if you're going to implement this kind of feature in a website, I think you should use a Flash application instead of Python (or, if possible, HTML5). Even though you're using Python to develop your web app, it only runs on the server side, and the feature you want is on the client side. So for me it's more feasible to use Flash to capture the video; after capturing it, you upload it to your server, and your Python code does the rest of the processing on the server side.