Django - Auto-run scripts at a certain time on a live server [duplicate] - python

This question already has answers here:
Set up a scheduled job?
(26 answers)
Closed 5 years ago.
Newbie here. I have a lot of different functions in a Python program that download a bunch of data from the internet, manipulate it, and display it to the public. I have a bunch of links to different tables of data, and I figure it would be very inconvenient for people to have to wait for the data to download when they click a link on my website. How can I configure Django to run the scripts that download the data at, say, 6am, and save some kind of cached template of the data so people can quickly view it for the entire day, then refresh and update the data for the next day? Your insight and guidance would be very much appreciated. Thank you, and happy holidays!

I'd suggest Celery for any recurring tasks in Django. Its docs are great and already include a "First steps with Django" tutorial.
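As a rough illustration, a daily 6am job with Celery beat could look something like the sketch below. It assumes your project is already wired up with Celery per that tutorial; the app, module, and helper names here are placeholders, not part of your project.

    # tasks.py -- a minimal sketch; download_and_build_tables() is a
    # hypothetical stand-in for your own download/manipulation functions.
    from celery import shared_task
    from django.core.cache import cache

    @shared_task
    def refresh_data():
        data = download_and_build_tables()
        cache.set("daily_table_data", data, 60 * 60 * 24)  # cache for 24 hours

    # in your Celery app module (where `app = Celery(...)` lives):
    from celery.schedules import crontab

    app.conf.beat_schedule = {
        "refresh-data-daily": {
            "task": "myapp.tasks.refresh_data",
            "schedule": crontab(hour=6, minute=0),  # every day at 6am
        },
    }

Your views can then read "daily_table_data" from the cache instead of re-downloading anything on each click.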

Related

How to upload music to websites like Spotify, iTunes [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 2 years ago.
I would like to write a Python application that automates the process of uploading music or podcasts to iTunes, Spotify, and other streaming platforms. It is supposed to pick up the music in my directory and then upload it to these platforms (and ultimately monetize the media).
I have checked the official APIs of iTunes and Spotify, but it seems they don't have an upload feature. However, I have seen websites, like this one, which claim to upload (to multiple platforms) and monetize music.
I would appreciate it if someone could help with this problem, or tell me how such websites accomplish this task.
Well, this problem could have multiple solutions. One of them would be to follow these steps:
Gather all the data necessary for uploading to every music distributor:
- song name, artist, album, etc.
Store the data in Excel, CSV, JSON, or whatever format you prefer.
Read the data using Python; you could use the pandas library for this.
Create a Selenium bot (Selenium is a Python library for browser automation) that accesses every website, and program it to fill in all the fields for each one; there is a sketch of this step after this answer.
In the end, you would have a bot that reads the data you wrote and automatically uploads music to all the websites.
NOTE: Only follow these steps if the websites' APIs are not useful for this task.
PS: It is going to take a lot of time to build this functionality, because you have to program against every music distributor's website (7 to 15 days of hard work), but then you will be able to upload tons of music to all the platforms in just a few seconds.
Last note: Be aware of each website's web-scraping policy; they may not permit these kinds of operations and could ban your IP.
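If you do go the Selenium route, the loop could look roughly like the following. This is only a sketch: the URL, field names, and CSS selector are made-up placeholders, since every distributor's upload form is different.

    # upload_bot.py -- rough sketch; adapt the selectors per distributor
    import pandas as pd
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # expected columns: song_name, artist, album, file_path
    tracks = pd.read_csv("tracks.csv")

    driver = webdriver.Chrome()
    for _, track in tracks.iterrows():
        driver.get("https://example-distributor.com/upload")  # placeholder URL
        driver.find_element(By.NAME, "title").send_keys(track["song_name"])
        driver.find_element(By.NAME, "artist").send_keys(track["artist"])
        driver.find_element(By.NAME, "album").send_keys(track["album"])
        # file inputs accept a local path passed via send_keys
        driver.find_element(By.NAME, "audio_file").send_keys(track["file_path"])
        driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    driver.quit()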

Is it possible to override request payload in python? [duplicate]

This question already exists:
How to add/edit data in request-payload available in google chrome dev tools [duplicate]
Closed 3 years ago.
I've been looking for an answer to this for quite a while, but still with no results. I'm working with Selenium and I need to override one request that is generated after the submit button is clicked. It contains data in JSON format under "Request Payload" in Chrome dev tools. I found something like selenium-wire, which provides request-override functionality, but I'm not sure it works the way I want. Can anyone give me a hint about where to start, or which tools are appropriate for this?
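For what it's worth, selenium-wire does let you rewrite a request body in flight via a request interceptor. A minimal sketch, where the URL fragment and the replacement payload are placeholders:

    # requires: pip install selenium-wire
    from seleniumwire import webdriver

    def interceptor(request):
        if "/submit" in request.url:  # placeholder: match your target request
            request.body = b'{"field": "overridden value"}'
            # keep Content-Length consistent with the new body
            del request.headers["Content-Length"]
            request.headers["Content-Length"] = str(len(request.body))

    driver = webdriver.Chrome()
    driver.request_interceptor = interceptor
    # ...navigate and click submit as usual; the payload is rewritten in flight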

Need guidance on choosing the best approach for dynamic web browsing with Python [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 3 years ago.
I am working at a company, and one of my tasks is to scan certain tender portals for relevant opportunities and share them with distribution lists I keep in Excel. It is not a difficult task, but it is an exhausting one, especially with the other 100 things they put on me. So I decided to apply Python to solve my pain and provide opportunities for gains. I started with simple scraping with BeautifulSoup, but I realized that I need something better, like a bot or smart Selenium-based code.
Problem: manual search and collection of info from websites (search, click, download files, send them).
Sub-problem for automated site scraping: credentials.
Coding background: occasional learning from different platforms based on the problem at hand (mostly boring); mostly Python and data-science-related courses.
Desired help: suggest a way, framework, or examples for automated web browsing using Python so I can collect all the info in a matter of clicks. (Data collection using Excel is basic, and I do not have access to databases; however, more sophisticated ideas are appreciated.)
PS: I'm working two jobs and trying to support my family while searching for other career options, but my dedication and care for the business eat up my time, as I do not want to be a troublemaker; so, while I try to push management (which is old school) for support, time goes by.
Thank you in advance for your mega-smart advice! Many thanks.
BeautifulSoup is not going to be up to the job, simply because it is a parser, not a web browser.
MechanicalSoup might be an option for you if the sites are not too complex and do not require JavaScript execution to function.
Selenium is essentially a robotic version of your favourite web browser.
Whether I would choose Selenium or MechanicalSoup depends on whether the target data requires JavaScript execution, either during login or to get the data itself.
Let's go over your requirements:
Search: Can the search be conducted through a GET request? I.e., is the search done based on variables in the URL? Google something and then look at the URL of that Google search. Is there something similar on your target websites? If yes, MechanicalSoup. If not, Selenium.
Click: As far as I know, MechanicalSoup cannot explicitly click. It can follow URLs if it is told what to look for (and usually this is good enough), but it cannot click a button. Selenium is needed for that.
Download: Either of them can do this, as long as no button clicking is required. Again, can it just follow the path the button leads to?
Send: Outside the scope of both. You will need to look at something else for this, although plenty of mail libraries exist.
Credentials: Both can handle this, so the key question is whether the login depends on JavaScript.
This really hinges on the specific details of what you seek to do.
EDIT: Here is an example of what I have done with MechanicalSoup:
https://github.com/MattGaiser/mindsumo-scraper
It is a program that logs into a website, is pointed at a specific page, scrapes that page as well as the other relevant pages it links to, and from those scrapings generates a CSV of the challenges I have won, the score I earned, and the link to the image of each challenge (which often has insights).
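For the login-plus-scrape pattern specifically, the core of such a script is short. A minimal sketch, where the URLs, form selector, credentials, and link selector are placeholders:

    import mechanicalsoup

    browser = mechanicalsoup.StatefulBrowser()
    browser.open("https://example.com/login")          # placeholder URL
    browser.select_form('form[action="/login"]')       # adjust to the real form
    browser["username"] = "me@example.com"
    browser["password"] = "secret"
    browser.submit_selected()

    # after login, browser.page is a BeautifulSoup object of the current page
    browser.open("https://example.com/tenders")        # placeholder URL
    for link in browser.page.select("a.result-link"):  # placeholder selector
        print(link.get("href"), link.text.strip())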

Executing a script in the background while website browsing continues in Django

I am using Django and would like to execute a long-running script in the background at a certain stage while users browse the website's pages. For example, calculating the shortest distance for a user and saving it in the database by comparing against the available locations (around 5,000 of them). At present, when the script starts executing, the page freezes and doesn't allow clicking on links or buttons.
I have tried Django Background Tasks, but that needs to be executed manually through a command. I also looked at Celery with Redis, but Celery seems like overkill for this, so I'm looking for an easier way.
Could you please share your advice on how to achieve my goal?
Thank you!
extra may be what you need, and there is a sample similar to your question in its docs.
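If Celery really is overkill here, one lightweight option is Python's built-in threading, so the view returns immediately while the calculation continues. A minimal sketch, with the caveat that threads die with the web process, so a real task queue is safer in production; compute_shortest_distance is a hypothetical stand-in for your calculation:

    # views.py
    import threading
    from django.http import JsonResponse

    def compute_shortest_distance(user_id):
        # long-running work: compare against the ~5000 stored locations
        # and save the result to the database
        ...

    def start_calculation(request):
        threading.Thread(
            target=compute_shortest_distance,
            args=(request.user.id,),
            daemon=True,
        ).start()
        return JsonResponse({"status": "started"})  # page stays responsive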

How to back up a whole webpage, including pictures, with Python? [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
How to download a file in python
I'm playing with Python to do some crawling. I know there is urllib.urlopen("http://XXXX"), which can help me get the HTML for a target website. However, the links to the original images in that webpage will usually make the images in the backed-up page unavailable. I am wondering whether there is a way to also save the images locally, so the full content of the website can be read without an internet connection. It's like backing up the whole webpage, but I'm not sure there is any way to do that in Python. Also, if it could get rid of the advertisements, that would be even more awesome. Thanks.
If you're looking to back up a single webpage, you're well on your way.
Since you mention crawling: if you want to back up an entire website, you'll need to do some real crawling, and you'll want scrapy for that.
There are several ways of downloading files off the interwebs; just see these questions:
Python File Download
How to download a file in python
Automate file download from http using python
Hope this helps
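For the single-page case, here is a minimal modern sketch (urllib.request plus BeautifulSoup; the URL is a placeholder) that saves the HTML and its images locally and rewrites the img tags to point at the local copies:

    import os
    from urllib.parse import urljoin
    from urllib.request import urlopen, urlretrieve
    from bs4 import BeautifulSoup

    url = "https://example.com/page.html"  # placeholder URL
    soup = BeautifulSoup(urlopen(url).read(), "html.parser")

    os.makedirs("backup", exist_ok=True)
    for i, img in enumerate(soup.find_all("img")):
        src = img.get("src")
        if not src:
            continue
        local_name = f"backup/img_{i}{os.path.splitext(src)[1] or '.img'}"
        urlretrieve(urljoin(url, src), local_name)  # download the image
        img["src"] = os.path.basename(local_name)   # point the tag at the copy

    with open("backup/page.html", "w", encoding="utf-8") as f:
        f.write(str(soup))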
