Tracking the progress of a moviepy process on the front end - python

I'm building a web application that uses moviepy functions, but I can only track progress at a coarse level (video uploading, video processing underway, etc.), not as a percentage. The moviepy processes report their progress on the server. How can I capture that progress information and pass it on to the client?

The ability to pass a function for write_videofile() to call on each iteration has been an often-requested feature. It is currently not possible using the main branch of moviepy. It is a feature that we are looking into implementing properly, and you can see all the thoughts about it on our project page here.
There are a few different pull requests, submitted by different people, that implement the same feature in different ways. They may be exactly what you are looking for, so feel free to clone them and try them out here, here, here. The first one is much newer, so you're probably better off starting with that one.
I'll update this answer as soon as we decide on something to add.
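To give a feel for what those branches propose, here is a hypothetical sketch. Note that progress_callback is not a real moviepy parameter; it only illustrates the shape of the API those pull requests are aiming for, with the percentage stashed somewhere a web endpoint could poll:

```python
# Hypothetical sketch -- progress_callback does NOT exist in mainline
# moviepy; it mimics the API shape proposed in the pull requests above.
from moviepy.editor import VideoFileClip

progress = {"percent": 0}  # shared state a /progress endpoint could return

def on_progress(current_frame, total_frames):
    # Would be invoked once per iteration of the write loop.
    progress["percent"] = round(100 * current_frame / total_frames, 1)

clip = VideoFileClip("input.mp4")
clip.write_videofile("output.mp4", progress_callback=on_progress)
```

The client would then poll (or hold a websocket to) a view that returns progress["percent"].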


Create a Program in Python with a Node Connection GUI

I want to create a program with a GUI using Python. It should show a list of nodes somewhere and allow me to place them on a working diagram, and I also need these nodes connected in some sequence. The following image is similar to what I need; it's from Orange3.
I come from a web development background and I've used Python for some data science, but all from the terminal, so right now I feel a little lost about where to get started.
I would much appreciate some pointers on where to look. If possible, I would also like to use existing tools instead of having to develop everything from scratch. Maybe there is even a project that does what I need which I could fork on GitHub.
Thanks a lot for the help.
Check out Tkinter. It's great for GUIs. It's hard to add images, though; you could embed them as Base64.
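If you want a feel for what a node diagram takes in plain Tkinter, here is a minimal sketch using only the stock Canvas widget: two draggable nodes joined by a line (the drag logic is deliberately naive):

```python
# Minimal node-diagram sketch with Tkinter's Canvas: two draggable
# nodes ("A" and "B") connected by an edge that follows them.
import tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=300, bg="white")
canvas.pack()

def make_node(x, y, tag):
    # Each node is an oval plus a label sharing one tag so they move together.
    canvas.create_oval(x - 25, y - 15, x + 25, y + 15, fill="lightblue", tags=tag)
    canvas.create_text(x, y, text=tag, tags=tag)

make_node(100, 100, "A")
make_node(300, 200, "B")
edge = canvas.create_line(100, 100, 300, 200)
canvas.tag_lower(edge)  # keep the connecting line under the nodes

def center(tag):
    x1, y1, x2, y2 = canvas.bbox(tag)
    return (x1 + x2) / 2, (y1 + y2) / 2

def on_drag(event):
    # Move whichever node the pointer is over and redraw the edge.
    for tag in ("A", "B"):
        x, y = center(tag)
        if abs(event.x - x) < 25 and abs(event.y - y) < 15:
            canvas.move(tag, event.x - x, event.y - y)
    canvas.coords(edge, *center("A"), *center("B"))

canvas.bind("<B1-Motion>", on_drag)
root.mainloop()
```

A real Orange3-style tool needs a lot more (node palettes, ports, serialization), but these are the core Canvas primitives it would be built on.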
There are plenty, but it's best to create it yourself; there are countless tutorials. Besides, it's going to be full of bugs if you try to alter code that isn't yours.

Batch processing in Python

I am just wondering: is there any way to process multiple videos in one go, perhaps using a cluster, and specifically using Python? For example, say I have 50+ videos in a folder that I have to analyze for movement-related activity, and I have one particular piece of Python code that must run on each video. What I want is, instead of analyzing the videos one by one (i.e., in a loop), to analyze them in parallel. Is there any way I can implement that?
You can do this with multiprocessing or threading. The details can be a bit involved, and since you didn't give any in your question, pointing you at those modules is about all the help I can offer.
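For the 50-videos-in-a-folder case on a single machine, the multiprocessing route can be as short as this sketch (analyze_video is a stand-in for your own analysis code):

```python
# Run one analysis per video across all CPU cores.
import multiprocessing
from pathlib import Path

def analyze_video(path):
    # ... your movement-analysis code for a single video goes here ...
    return path, "done"

if __name__ == "__main__":
    videos = sorted(Path("videos").glob("*.mp4"))
    # Pool() defaults to one worker process per core; imap_unordered
    # yields results as soon as each video finishes.
    with multiprocessing.Pool() as pool:
        for path, result in pool.imap_unordered(analyze_video, videos):
            print(path, result)
```

That parallelizes across the cores of one machine; spreading the work over an actual cluster means layering a job queue or a framework such as Dask on top of the same idea.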

How can I retrieve Proxmox node notes?

I am using proxmoxer to manipulate machines on Proxmox (create, delete, etc.).
Every time I create a machine, I provide a description, which appears in the Proxmox UI under "Notes".
I am wondering how I can retrieve that information.
Best would be if it can be done through that Python module, but if there is no way to do it there, I will also be satisfied doing it with a plain Proxmox API call.
The description parameter is only a note shown in the Proxmox UI; it is not tied to any function.
You could use https://github.com/baseblack/Proxmoxia to get started. I asked this very same question on the forum, as I need to generate some reports from a legacy system with dozens of VMs (and descriptions).
Let me know if you still need this, perhaps we can collaborate on it.
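For what it's worth, the description does come back from the VM config endpoint (GET /nodes/{node}/qemu/{vmid}/config), so with proxmoxer a sketch might look like this (host, credentials, node name, and VM id below are placeholders; for LXC containers the path is .lxc instead of .qemu):

```python
# Sketch: read the "Notes" text back via the VM config.
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI("proxmox.example.com", user="root@pam",
                     password="secret", verify_ssl=False)

config = proxmox.nodes("node1").qemu(100).config.get()
print(config.get("description"))  # the text shown under "Notes"
```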

Get external website's meta-description after returning success page in Django

I need to get the meta description of an external website entered by the user. My code currently does this, but it has to wait for the external website to respond. This is a problem for obvious reasons.
So what I want is: add the link to the database (which it already does), then return the success page to the user, and only then fetch the meta description, title, and so on from the external site and add them to the database.
From what I've been reading, Celery would be my best bet. However, I'm a novice and it still seems a little too complicated for me.
One possibility I've come up with is a script that runs every second or so and checks the database for flagged links, or something like that. The script would check whether a txt file is present before starting the job; this txt file would be created when the link is added to the database. Is this a viable option, or would it create too many problems?
Celery is excellent, and the way to go for most background tasks. It has a learning curve, but it is well worth it if you intend to use this pattern frequently.
However, RQ (Redis Queue for Python) can be a much simpler solution; note, though, that unlike Celery it won't run on Windows.
Take a look at "Getting Started" at http://python-rq.org/ to see how easy this could be for you.
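As a taste of how little code the RQ pattern needs, here's a sketch (fetch_meta and the Link model are hypothetical names; requests and BeautifulSoup are just one common way to pull the tag):

```python
# The function an RQ worker will execute, outside the request cycle.
import requests
from bs4 import BeautifulSoup

def fetch_meta(link_id, url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    tag = soup.find("meta", attrs={"name": "description"})
    description = tag["content"] if tag else ""
    # ... update the Link row here, e.g. with Django's ORM:
    # Link.objects.filter(pk=link_id).update(description=description)

# In the Django view, right after saving the link, enqueue the job
# and return the success page immediately:
from redis import Redis
from rq import Queue

def enqueue_fetch(link_id, url):
    Queue(connection=Redis()).enqueue(fetch_meta, link_id, url)
```

A separate `rq worker` process picks the job up, so the user gets their success page without ever waiting on the external site.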

Persistent database state strategies

Due to several edits, this question might have become a bit incoherent. I apologize.
I'm currently writing a Python server. It will never see more than 4 active users, but I'm a computer science student, so I'm planning for it anyway.
Currently, I'm about to implement a function that saves a backup of the current state of all relevant variables into CSV files. I have 10 of those at the moment, and they will never be really big, but... well, computer science student and so on.
So, I am currently thinking about two things:
When to run a backup?
What kind of backup?
When to run:
I can either run a backup every time a variable changes, which has the advantage that the backup always holds the current state, or run one on a schedule (say, once a minute), which avoids rewriting the file hundreds of times per minute when the server gets busy, but wastes a lot of rewrites of unchanged data unless I detect which variables have changed since the last backup.
Directly related to that is the question of what kind of backup I should do. I can either:
- do a full backup of all variables (pointless if I back up on every change, but sensible if I back up every X minutes),
- do a full backup of a single variable (better if I back up on every change, but it would need either multiple backup functions or smart detection of which variable is currently being backed up), or
- try some sort of delta backup on the files (which would probably mean reading the current file and rewriting it with the changes, so it's probably pretty stupid, unless there is a Python trick for this I don't know about).
I cannot use shelve, because I want the data to be portable between different programming languages (Java, for example, probably cannot open Python shelves), and I cannot use MySQL for various reasons, mainly that the machine that will run the server has no MySQL support and I don't want to rely on an external MySQL server, since the server should keep running when the internet connection drops.
I am also aware that there are several ways to do this with preimplemented Python functionality and/or other software (SQLite, for example). I am just a big fan of building this stuff myself; not because I like to reinvent the wheel, but because I like to know how the things I use actually work. I'm building this server partly just for learning Python, and while knowing how to use SQLite is certainly useful, I also enjoy doing the "dirty work" myself.
In my usage scenario of possibly a few requests per day, I am leaning towards the backup-on-change idea, but that would quickly fall apart if, for some reason, the server gets really busy.
So, my question basically boils down to this: Which backup method would be the most useful in this scenario, and have I possibly missed another backup strategy? How do you decide on which strategy to use in your applications?
Please note that I raise this question mostly out of a general curiosity for backup strategies and the thoughts behind them, and not because of problems in this special case.
Use SQLite. You're asking about building persistent storage using CSV files and about how to update those files as things change. What you're asking for is a lightweight, portable, relational (as in, table-based) database, and SQLite is perfect for this situation.
Python has had SQLite support in the standard library since version 2.5, via the sqlite3 module. Since an SQLite database is implemented as a single file, it's simple to move between machines, and Java has a number of different ways to interact with SQLite.
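For the state-saving use case in the question, the whole thing can be a sketch like this (a key/value flavour; the table layout is up to you):

```python
# Minimal state persistence with the standard-library sqlite3 module.
import sqlite3

conn = sqlite3.connect("state.db")
conn.execute("CREATE TABLE IF NOT EXISTS state (key TEXT PRIMARY KEY, value TEXT)")

def save(key, value):
    with conn:  # the connection as context manager commits or rolls back
        conn.execute("INSERT OR REPLACE INTO state (key, value) VALUES (?, ?)",
                     (key, value))

def load(key):
    row = conn.execute("SELECT value FROM state WHERE key = ?", (key,)).fetchone()
    return row[0] if row else None

save("players_online", "3")
print(load("players_online"))  # -> '3'
```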
I'm all for doing things for the sake of learning, but if you really want to learn about data persistence, I wouldn't marry yourself to the idea of a "CSV database". I would start by looking at the Wikipedia page for Persistence. What you're thinking about is basically a "system image" of your data, and the article describes some of the same shortcomings of this approach that you've mentioned:
"State changes made to a system after its last image was saved are lost in the case of a system failure or shutdown. Saving an image for every single change would be too time-consuming for most systems."
Rather than trying to rewrite your state wholesale at every change, I think you'd be better off looking at some other form of persistence. For example, some sort of journal could work well: you simply append each change to the end of a log file or some similar construct.
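A journal for the scenario in the question could be as simple as this sketch: one JSON line per change, with the current state rebuilt by replaying the log at startup:

```python
# Append-only journal: each change costs one small write, and the
# current state is reconstructed by replaying the log in order.
import json

LOG = "journal.log"

def record(key, value):
    with open(LOG, "a") as f:
        f.write(json.dumps({"key": key, "value": value}) + "\n")

def replay():
    state = {}
    try:
        with open(LOG) as f:
            for line in f:
                entry = json.loads(line)
                state[entry["key"]] = entry["value"]
    except FileNotFoundError:
        pass  # no journal yet -> empty state
    return state

record("players_online", 3)
print(replay())  # -> {'players_online': 3}
```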
However, if you end up with many concurrent users, with processes running on multiple threads, you'll run into the question of whether your changes are atomic and whether they conflict with one another. Operating systems generally have ways of locking files for edits, but you're opening up a can of worms trying to learn how that works and how it interacts with your system. At that point you're back to needing a database.
So sure, play around with a couple of different approaches. But as soon as you want it working in a clear and consistent manner, go with SQLite.
If your data is in CSV files, why not use a revision control system on those files? Git, for example, would be pretty fast and give you excellent history. The repository is wholly contained in the directory where the files reside, so it's easy to handle, and you can also replicate it to other machines or directories easily.
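If you go that way, the backup routine shrinks to shelling out to git; a sketch (assuming `git init` has been run once in the data directory):

```python
# Version the CSV directory with git; skip the commit when nothing changed.
import subprocess

def commit_backup(data_dir, message="state backup"):
    subprocess.run(["git", "add", "-A"], cwd=data_dir, check=True)
    # "git diff --cached --quiet" exits non-zero iff something is staged
    staged = subprocess.run(["git", "diff", "--cached", "--quiet"], cwd=data_dir)
    if staged.returncode != 0:
        subprocess.run(["git", "commit", "-m", message], cwd=data_dir, check=True)
```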
