I am trying to share an app with Streamlit sharing. I have all the documents in this repository.
When I create a New App in share.streamlit I get this message and it doesn't continue. Does anyone know how to solve it?
I'd recommend rebooting the app if it doesn't finish loading. Otherwise, deleting it and re-deploying might help. One possibility is that your dependencies exceed the 1 GB memory limit, so your app is getting stuck because it hits that limit.
Related
I'm working on a Baseball Simulator app with Dash. It uses an SGD model to simulate gameplay between a lineup and a pitcher. The app (under construction) can be found here: https://capstone-baseball-simulator.herokuapp.com/ and the repo here: https://github.com/c-fried/capstone_heroku
To summarize the question: I want to be able to run the lineup optimizer on the heroku server.
There are potentially two parts to this: 1. running the actual function while avoiding the timeout, and 2. displaying the progress of the function as it runs.
There are several issues I'm having with solving this:
The function is expensive and cannot complete before the 30-second timeout (it takes several minutes to finish).
For this, I attempted to follow these instructions (https://devcenter.heroku.com/articles/python-rq) by creating a worker.py (still in the repo), moving the function to an external .py file, etc. (see the sketch after these points). The problem, I believe, was that the process was still taking too long and was therefore terminated.
I'm (knowingly) using a global variable in the function, which works when I run locally but does not work when deployed (for reasons I somewhat understand: workers don't share memory, see https://dash.plotly.com/sharing-data-between-callbacks).
I was using the global to see live updates of what the function was doing as it ran. Again, that worked as a hack locally but doesn't work on the server. I don't know how else I can watch the progress of the function without some kind of global state. I'd love a clever solution to this, but I can't think of one.
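For reference, the basic python-rq setup from that Heroku article looks roughly like the sketch below; the Redis URL fallback is an assumption and the worker listens on the default queue.

# worker.py -- background worker process, run as a separate Heroku dyno
import os
import redis
from rq import Worker, Queue, Connection

conn = redis.from_url(os.environ.get('REDIS_URL', 'redis://localhost:6379'))

if __name__ == '__main__':
    with Connection(conn):
        Worker([Queue('default')]).work()

# The web dyno then enqueues the expensive call instead of running it inline,
# e.g. q = Queue(connection=conn); q.enqueue(run_optimizer, args), where
# run_optimizer is a placeholder name for the real lineup optimizer.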
I'm not experienced with web apps, so thanks for the advice in advance.
A common approach to solving this problem is to:
Run the long calculation asynchronously, e.g. using a background service
On completion, put the result in a shared storage space, e.g. a redis cache or an S3 bucket
Check for updates using an Interval component or a Websocket component
I can recommend Celery for keeping track of the tasks.
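A rough sketch of that pattern with Celery and a dcc.Interval poller (a Redis broker/backend and made-up task and component names are assumed here, not taken from the repo):

# tasks.py -- the expensive optimization runs in a Celery worker, not in the web process
from celery import Celery

celery_app = Celery(__name__, broker='redis://localhost:6379/0',
                    backend='redis://localhost:6379/0')

@celery_app.task
def optimize_lineup(lineup, pitcher):
    # placeholder for the several-minutes optimization loop
    return {'lineup': lineup, 'pitcher': pitcher, 'score': 0.0}

# app.py -- the Dash app starts the task, stores its id, and polls for the result
from dash import Dash, dcc, html, Input, Output, State
from celery.result import AsyncResult

app = Dash(__name__)
app.layout = html.Div([
    html.Button('Optimize', id='run'),
    dcc.Store(id='task-id'),
    dcc.Interval(id='poll', interval=2000),   # check every 2 seconds
    html.Div(id='status'),
])

@app.callback(Output('task-id', 'data'), Input('run', 'n_clicks'),
              prevent_initial_call=True)
def start(_):
    # enqueue the job and keep only its id in the browser-side store
    return optimize_lineup.delay(['batter 1', 'batter 2'], 'pitcher 1').id

@app.callback(Output('status', 'children'), Input('poll', 'n_intervals'),
              State('task-id', 'data'))
def check(_, task_id):
    if not task_id:
        return 'Not started'
    result = AsyncResult(task_id, app=celery_app)
    return str(result.result) if result.ready() else 'Still running...'

Intermediate progress can be handled the same way: the task writes a progress value to the shared store (Redis, the result backend, a database row) and the Interval callback reads it, which replaces the global variable.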
I have a Django web application and I have to build a machine learning model inside a view.
It takes a long time, so PythonAnywhere does not allow it and kills the process when it reaches 300 seconds. Given that, I want to ask two questions.
Without Celery, django-background-tasks or something similar, my view containing the long-running process does not run in order. But when I use the debugger, it works correctly. Apparently some lines of code run without waiting for each other when the debugger is not attached. How can I fix this?
PythonAnywhere does not support Celery or other long-running task packages. They suggest django-background-tasks, but its documentation does not explain the usage clearly, so I could not integrate it. How can I integrate django-background-tasks?
Thank you.
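For what it's worth, the basic django-background-tasks pattern usually looks roughly like the sketch below (the task and view names are placeholders): decorate the long-running function, call it from the view to queue it, and run the process_tasks management command as a separate always-on process.

# tasks.py -- requires 'background_task' in INSTALLED_APPS and a migrate run
from background_task import background

@background(schedule=0)              # eligible to run as soon as a worker picks it up
def train_model(dataset_id):         # placeholder task; arguments must be JSON-serializable
    ...                              # load data, fit the model, save the result here

# views.py -- the view only queues the work and returns immediately
from django.http import JsonResponse
from .tasks import train_model

def start_training(request):
    train_model(42)                  # this call just writes a Task row to the database
    return JsonResponse({'status': 'queued'})

# The worker is started separately (e.g. an always-on task on PythonAnywhere):
#     python manage.py process_tasks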
I have a Google App Engine app (a Flask app) that seems to have a memory leak. See the plot of memory usage below: it continually creeps up until it hits the limit, the instance is shut down, and a new one is started.
It's a simple API with about 8 endpoints. None of them handle large amounts of data.
I added an endpoint that takes a memory snapshot with the tracemalloc package, compares it to the previous snapshot, and writes the output to Google Cloud Storage.
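For context, that endpoint is roughly the following shape (a sketch; in the real app the report string is uploaded to GCS rather than returned):

# rough shape of the tracemalloc comparison endpoint
import tracemalloc
from flask import Flask

tracemalloc.start()
app = Flask(__name__)
_last_snapshot = None

@app.route('/debug/memory')
def memory_report():
    global _last_snapshot
    snapshot = tracemalloc.take_snapshot()
    lines = []
    if _last_snapshot is not None:
        for stat in snapshot.compare_to(_last_snapshot, 'lineno')[:25]:
            lines.append(str(stat))          # largest per-line growth since the last call
    _last_snapshot = snapshot
    return '\n'.join(lines) or 'first snapshot taken'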
I don't see anything in the reports that indicates a memory leak. The peak memory usage is recorded as about 0.12 GiB.
I am also calling gc.collect() at the end of every function that is called by each endpoint.
Any ideas on how to diagnose this, or what might be causing it?
There could be many reasons for this. Is your app creating temporary files? Temporary files can be a cause of a memory leak, and they can also be created by errors or warnings. First of all, I would check the Stackdriver logs for errors and warnings and try to fix them.
Is your application interacting with databases or storage buckets? Some memory-related issues can come from a bad interaction between your app and a data storage service. A similar issue was encountered here and was mitigated by handling the Google Cloud Storage errors.
Another thing you can do is investigate how memory is actually used in your functions. For this there are some nice tools like Pympler and Heapy; playing with them may give you valuable clues about what the issue is.
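A Pympler summary, for example, can be dumped from inside the app roughly like this (a sketch, assuming the pympler package is installed):

# sketch: log a summary of live objects with Pympler
from pympler import muppy, summary

def log_live_objects():
    all_objects = muppy.get_objects()        # every object the interpreter currently holds
    rows = summary.summarize(all_objects)    # grouped by type, with counts and sizes
    summary.print_(rows, limit=15)           # print the 15 largest types

Calling this before and after a suspect endpoint and comparing the two summaries usually shows which object type keeps growing.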
I am running an application on GAE flexible with Python and Flask. I periodically dispatch Cloud Tasks with a cron job. These basically loop through all users and perform some cluster analysis. The tasks terminate without throwing any kind of error but don't complete all the work (meaning not all users were looped through). It doesn't seem to happen at a consistent time (276.5 s to 323.3 s), nor does it ever stop at the same user. Has anybody experienced anything similar?
My guess is that I am hitting some kind of resource limit or timeout somewhere. Things I have thought about or tried:
Cloud tasks should be allowed to run for up to an hour (as per this: https://cloud.google.com/tasks/docs/creating-appengine-handlers)
I increased the gunicorn worker timeout to 3600 seconds to reflect this (see the sketch after this list).
I have several workers running.
I tried to find memory spikes or CPU overload but didn't see anything suspicious.
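A minimal sketch of that gunicorn setting, assuming it lives in a gunicorn.conf.py referenced from the app.yaml entrypoint (the file name and worker count are assumptions):

# gunicorn.conf.py -- picked up with: gunicorn -c gunicorn.conf.py main:app
bind = ':8080'      # GAE flexible sends traffic to port 8080
workers = 3         # "several workers running"
timeout = 3600      # allow a worker up to an hour before it is killed and restarted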
Sorry if I am being too vague or am completely missing the point; I am quite confused by this problem. Thank you for any pointers.
Thank you for all the suggestions. I played around with them and found the root cause, although by accident, while reading the Firestore documentation. I had no indication that this had anything to do with Firestore.
From here: https://googleapis.dev/python/firestore/latest/collection.html
I found out that Query.stream() (or Query.get()) has a timeout on the individual documents like so:
Note: The underlying stream of responses will time out after the max_rpc_timeout_millis value set in the GAPIC client configuration for the RunQuery API. Snapshots not consumed from the iterator before that point will be lost.
So what eventually timed out was the query over all users. I came across this by chance; none of the errors I caught pointed me back towards the query. Hope this helps someone in the future!
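For illustration, one way to avoid losing snapshots is to drain the stream up front and do the slow per-user work afterwards; this is only a sketch, and the collection name and analysis function are placeholders.

# sketch: consume the RunQuery stream before the slow processing starts
from google.cloud import firestore

def analyze_user(data):
    ...                                               # placeholder for the cluster analysis

def run_clustering():
    db = firestore.Client()
    users = list(db.collection('users').stream())     # pull all snapshots out immediately
    for snapshot in users:
        analyze_user(snapshot.to_dict())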
Other than using Cloud Scheduler, you can inspect the logs to make sure the tasks ran properly and that there are no deadline issues. Application logs are grouped and sent to Stackdriver only after the task itself has executed, so when a task is forcibly terminated, no log may be output at all. Try catching the Deadline exception so that some log is output; you may see some helpful info to start troubleshooting.
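A sketch of what catching that exception in the task handler could look like (the route, the logging call, and the run_clustering stub are illustrative):

# sketch: make sure something is logged before the task dies silently
import logging
from flask import Flask
from google.api_core.exceptions import DeadlineExceeded

app = Flask(__name__)

def run_clustering():
    ...                                  # placeholder for the loop over all users

@app.route('/tasks/cluster-users', methods=['POST'])
def cluster_users_task():
    try:
        run_clustering()
    except DeadlineExceeded:
        logging.exception('RPC deadline exceeded part-way through the task')
        raise                            # fail visibly after logging
    return 'done', 200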
I am using moviepy to insert text into different parts of a video in my Django project. Here is my code.
from moviepy.editor import VideoFileClip, TextClip, CompositeVideoClip

# build the text overlay and take the first 31 seconds of the source video
txt = TextClip('Hello', font="Tox-Typewriter")
video = VideoFileClip("videofile.mp4").subclip(0, 31)

# composite the text on top of the video for the clip's full duration
final_clip = CompositeVideoClip([video, txt]).set_duration(video.duration)

# encode the result; threads=4 parallelises the ffmpeg encode, logger=None silences the progress bar
final_clip.write_videofile("media/{}.mp4".format('hello'),
                           fps=24, threads=4, logger=None)
final_clip.close()
The video is written to a file in about 10 seconds and shown in the browser. The issue is with simultaneous requests to the server: if 5 requests arrive at the same time, each response takes about 50 seconds instead of 10. It seems that some resource is shared by all of these requests, and each one waits for another to release it, but I could not find out where that happens. I tried using 5 separate files, one per request, thinking that all the requests opening the same file was the problem, but that did not help. Please help me find a solution.
Without knowing more about your application setup, any answer to this question will really be a shot in the dark.
As you know, editing video, or making any changes to video, is resource intensive. In this instance you are much better off handing the processing to a dedicated task runner (Celery, django-q). Not only does this avoid holding server resources open until the task is complete, it also means you can offload the work to machines better suited for the job (optimized for IO-bound or CPU-bound work, depending on the use case).
In development, if you are running the local development server, you will only be using one process. One process, when sent multiple intensive requests, will get blocked. You could look at using something like gunicorn or waitress and set the number of processes to more than one.
But still, at some point you are going to have to offload this work to a task runner; doing it in the web process in a production environment could end up consuming all of the web server's resources. A rough sketch follows.
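As an illustration only (a minimal Celery sketch, assuming a configured broker; the task and view names are made up, not from your project):

# tasks.py -- move the expensive encode out of the request/response cycle
from celery import shared_task
from moviepy.editor import VideoFileClip, TextClip, CompositeVideoClip

@shared_task
def render_text_overlay(source_path, text, output_path):
    txt = TextClip(text, font="Tox-Typewriter")
    video = VideoFileClip(source_path).subclip(0, 31)
    final_clip = CompositeVideoClip([video, txt]).set_duration(video.duration)
    final_clip.write_videofile(output_path, fps=24, threads=4, logger=None)
    final_clip.close()
    return output_path

# views.py -- the view only queues the job and responds immediately
from django.http import JsonResponse
from .tasks import render_text_overlay

def create_video(request):
    job = render_text_overlay.delay("videofile.mp4", "Hello", "media/hello.mp4")
    return JsonResponse({'task_id': job.id})   # the client polls this id for completion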
On a more technical note, have you looked at this issue on GitHub:
https://github.com/Zulko/moviepy/issues/645
They talk about passing in the parameter `progress_bar=False`. If in your use case several files are being written at once and they are all writing to a progress bar, you might be getting IO swamped.
Also, consider running a profiler while replicating the issue; it might give you better insight into where the bottleneck is occurring (IO or CPU).