Using Python to automate web-based processing?

I'm trying to use Python to automatically upload, submit, and retrieve files on websites that do sequence processing.
Example: https://www.ncbi.nlm.nih.gov/Structure/bwrpsb/bwrpsb.cgi
Does anyone know the best way to do this, whether it be specific modules or tutorials? Would this work with the requests module? Thanks a bunch in advance.

The example looks like an older system, but if at all possible I would suggest automating via an API, given your "retrieval" requirement, before considering Selenium.
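If the service does expose an HTTP interface, the whole submit-poll-retrieve cycle can be scripted with requests. A rough sketch of that pattern, assuming a batch interface; the form field names ("queries", "tdata"), the "#cdsid" job-id line, and the status codes are assumptions to verify against the service's documentation before relying on them:

import time
import requests

CGI = "https://www.ncbi.nlm.nih.gov/Structure/bwrpsb/bwrpsb.cgi"

# Submit the job; the field names here are assumptions, check the docs.
with open("input.fasta") as f:
    resp = requests.post(CGI, data={"queries": f.read(), "tdata": "hits"})
resp.raise_for_status()

# Assume the response contains a line like "#cdsid <job id>"; extract the id.
cdsid = next(line.split()[-1] for line in resp.text.splitlines()
             if line.startswith("#cdsid"))

# Poll with the job id until it stops reporting "still running" (assumed status 3).
while True:
    poll = requests.post(CGI, data={"cdsid": cdsid, "tdata": "hits"})
    if "#status\t3" not in poll.text:
        break
    time.sleep(5)

with open("results.txt", "w") as out:
    out.write(poll.text)  # the finished report, ready to parse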
However, if you find yourself using Python with Selenium Webdriver, save yourself some setup effort and check out SeleniumBase.
Also, if there is a budget associated with this project and you want vendor support, UiPath RPA is worth checking out.

I suggest you use Selenium.
It can drive several different web browsers.
Since the task is submitting a form and retrieving the processed results, it should be straightforward.
Regards!
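If you do go with Selenium, a minimal sketch for a form like the one linked might look like this; the element locators ("queryFile", the submit selector) are placeholders, so inspect the page's HTML for the real ones:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://www.ncbi.nlm.nih.gov/Structure/bwrpsb/bwrpsb.cgi")

# A file <input> accepts a local path via send_keys; "queryFile" is a placeholder name.
driver.find_element(By.NAME, "queryFile").send_keys("/path/to/input.fasta")
driver.find_element(By.CSS_SELECTOR, "input[type=submit]").click()

# ...wait for the results page to load, then scrape driver.page_source
driver.quit()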

Related

Selenium botting advice

So I'm using Chrome with Selenium on Python 3.9.5. I want to make an automated bot, and the way I thought best is to create a function around a library (one I'm still looking for) that detects a graphical element in the Chrome browser. Any suggestions on PyPI? The point of this is that it would make it easier to integrate pyHM to automate basically anything that has a static kind of element to perform an action on, even if the URL is different.
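I don't know of a single PyPI package that does exactly this, but the screenshot-plus-image-detection idea can be sketched with Selenium and OpenCV template matching; the URL, the template image, and the threshold below are all placeholders:

import cv2
import numpy as np
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

# Grab the rendered page as a PNG and decode it for OpenCV.
png = driver.get_screenshot_as_png()
screen = cv2.imdecode(np.frombuffer(png, np.uint8), cv2.IMREAD_COLOR)

# "button.png" is a saved crop of the static element you want to find.
template = cv2.imread("button.png", cv2.IMREAD_COLOR)
result = cv2.matchTemplate(screen, template, cv2.TM_CCOEFF_NORMED)
_, confidence, _, (x, y) = cv2.minMaxLoc(result)

if confidence > 0.8:  # tune the threshold for your element
    print(f"element found at ({x}, {y}), confidence {confidence:.2f}")

The (x, y) match position could then be handed to a mouse-automation library such as pyHM to perform the action.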

Why am I not seeing the "full" html case? [duplicate]

This question already has answers here:
Web-scraping JavaScript page with Python
(18 answers)
What is the best method to scrape a dynamic website where most of the content is generated by what appear to be AJAX requests? I have previous experience with a Mechanize, BeautifulSoup, and Python combo, but I am up for something new.
--Edit--
For more detail: I'm trying to scrape the CNN primary database. There is a wealth of information there, but there doesn't appear to be an API.
The best solution that I found was to use Firebug to monitor XmlHttpRequests, and then to use a script to resend them.
This is a difficult problem because you either have to reverse engineer the JavaScript on a per-site basis, or implement a JavaScript engine and run the scripts (which has its own difficulties and pitfalls).
It's a heavyweight solution, but I've seen people do this with GreaseMonkey scripts: allow Firefox to render everything and run the JavaScript, then scrape the elements. You can even initiate user actions on the page if needed.
Selenium IDE, a tool for testing, is something I've used for a lot of screen-scraping. There are a few things it doesn't handle well (Javascript window.alert() and popup windows in general), but it does its work on a page by actually triggering the click events and typing into the text boxes. Because the IDE portion runs in Firefox, you don't have to do all of the management of sessions, etc. as Firefox takes care of it. The IDE records and plays tests back.
It also exports C#, PHP, Java, etc. code to build compiled tests/scrapers that are executed on the Selenium server. I've done that for more than a few of my Selenium scripts, which makes things like storing the scraped data in a database much easier.
Scripts are fairly simple to write and alter, being made up of things like ("clickAndWait","submitButton"). Worth a look given what you're describing.
Adam Davis's advice is solid.
I would additionally suggest that you try to "reverse-engineer" what the JavaScript is doing, and instead of trying to scrape the page, you issue the HTTP requests that the JavaScript is issuing and interpret the results yourself (most likely in JSON format, nice and easy to parse). This strategy could be anything from trivial to a total nightmare, depending on the complexity of the JavaScript.
The best possibility, of course, would be to convince the website's maintainers to implement a developer-friendly API. All the cool kids are doing it these days 8-) Of course, they might not want their data scraped in an automated fashion... in which case you can expect a cat-and-mouse game of making their page increasingly difficult to scrape :-(
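To make the request-replay idea concrete, here is a hedged sketch with requests; the endpoint, parameters, and JSON keys are hypothetical, so copy the real ones from the XmlHttpRequests you observe in the browser's network tools:

import requests

# Hypothetical endpoint and parameters - substitute the ones the page's JavaScript uses.
resp = requests.get("https://example.com/api/results",
                    params={"query": "primaries", "page": 1},
                    headers={"X-Requested-With": "XMLHttpRequest"})
resp.raise_for_status()

data = resp.json()  # such endpoints usually return JSON, nice and easy to parse
for row in data.get("results", []):  # "results" is an assumed key
    print(row)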
There is a bit of a learning curve, but tools like Pamie (Python) or Watir (Ruby) will let you latch onto the IE web browser and get at the elements. This turns out to be easier than Mechanize and other HTTP-level tools, since you don't have to emulate the browser; you just ask the browser for the HTML elements. And it's going to be far easier than reverse-engineering the JavaScript/AJAX calls. If needed, you can also use tools like Beautiful Soup in conjunction with Pamie.
Probably the easiest way is to use the IE WebBrowser control in C# (or any other language). You get access to everything inside the browser out of the box, and you don't need to care about cookies, SSL, and so on.
I found the IE WebBrowser control has all kinds of quirks and needed workarounds, enough to justify some high-quality software that takes care of all those inconsistencies, layered around the shdocvw.dll API and MSHTML, and provides a framework.
This seems like a pretty common problem. I wonder why no one has developed a programmatic browser. I'm envisioning a Firefox you can call from the command line with a URL as an argument that will load the page, run all of the initial page-load JS events, and save the resulting file.
I mean, Firefox and other browsers already do this; why can't we simply strip off the UI?

Is it possible to create a python program that controls my web pages?

I have been wondering about this since I could really benefit from a program that performs actions on websites I use for my job, actions that require the same command over and over again.
I know some python and I love to learn new things.
I tried looking for it on Google, but I guess I'm not sure how to find it.
I would love it if you could direct me to a guide or something like that.
Thank you very much!
Selenium interacts with a web browser directly, although you can hide the browser window in the code (look up Selenium's headless mode). This is a good choice for filling out a lot of forms or interacting with graphical user interface elements.
However, if you need to request information from websites, you don't always need to interact with the web browser directly. You can use the package called Requests. This doesn't depend on any web browsers and can run silently in the background.
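A minimal sketch of both approaches; the URL is a placeholder, and the exact headless flag depends on your Selenium and browser versions:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
import requests

# Headless Selenium: a real browser, just without a visible window.
opts = Options()
opts.add_argument("--headless=new")  # "--headless" on older Chrome versions
driver = webdriver.Chrome(options=opts)
driver.get("https://example.com")
print(driver.title)
driver.quit()

# Requests: no browser at all, fine when you only need the raw response.
print(requests.get("https://example.com").text[:200])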
I think you can do it with Python and packages like Selenium. You'll also need some HTML knowledge to search the HTML source code of the specific webpage.
I found an interesting use case, maybe that helps you:
https://towardsdatascience.com/controlling-the-web-with-python-6fceb22c5f08

Finding Python webpages crawler complete solution

First of all - many thanks in advance. I really appreciate it all.
So I need to crawl a small number of URLs rather constantly (around every hour) and fetch specific data.
A PHP site will be updated with the crawled data; I cannot change that.
I've read this solution: Best solution to host a crawler? which seems to be fine and has the upside of using cloud services if you want something to be scaled up.
I'm also aware of the existence of Scrapy
Now, I wonder if there is a more complete solution to this without me having to set all these things up myself. The problem I'm trying to solve doesn't seem very unusual, and I'd like to save time by finding a more complete solution or instructions.
I would contact the person in this thread to get more specific help, but I can't. (https://stackoverflow.com/users/2335675/marcus-lind)
I'm currently running Windows on my personal machine, and messing with Scrapy is not the easiest thing, with installation problems and the like.
Do you think there is no way avoiding this specific work?
In case there isn't, how do I know if I should go with Python/Scrapy or Ruby On Rails, for example?
If the data you're trying to get are reasonably well structured, you could use a third party service like Kimono or import.io.
I find setting up a basic crawler in Python to be incredibly easy. After looking at a lot of options, including Scrapy (it didn't play well with my Windows machine either, due to the nightmare dependencies), I settled on using Selenium's Python package driven by PhantomJS for headless browsing.
Defining your crawling function would probably take only a handful of lines of code. This is a little rudimentary, but if you wanted to do it super simply as a straight Python script, you could do something like the following and just let it run while some condition is true or until you kill the script.
from selenium import webdriver
import time

crawler = webdriver.PhantomJS()  # PhantomJS browses headlessly, no visible window
crawler.set_window_size(1024, 768)

def crawl():
    crawler.get('http://www.url.com/')
    # Find your elements, get the contents, parse them using Selenium or BeautifulSoup

while True:
    crawl()
    time.sleep(3600)  # wait an hour between crawls

Upload to website from python

I need my program (Python) to upload files (large reports) to services like Rapidshare, Megaupload, or Easy-share and grab the URL the site gives me (to then forward it to the user).
What's the easiest way? (I think Selenium, but maybe it's overkill.)
What's the fastest? (Can I do it with Mechanize?)
How would you do it?
Thanks in advance.
I would attack this with Selenium; even though it's really heavyweight, I think the ease of it is worth it.
I would do what you need to do (upload a file to the service) by hand while the Firefox plugin Selenium IDE records it. Then just export as Python and you have your code.
Selenium is a bit too slow, but the simplicity I showed you is well worth it (IMHO).
You might check first whether the sites in question have an API meant for this sort of thing. Easy-share, for example, does (the others are blocked for me at the moment, so I haven't checked those): http://www.easy-share.com/be/developers.html (and they even have a ready-made Python module available)
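Where a service does offer an HTTP upload API, the upload itself is usually just a multipart POST, which requests handles directly; the endpoint and form field name below are placeholders for whatever the service documents:

import requests

# Generic multipart upload sketch - substitute the documented endpoint and field name.
with open("report.zip", "rb") as f:
    resp = requests.post("https://example.com/api/upload", files={"file": f})
resp.raise_for_status()
print(resp.text)  # many upload APIs return the download URL in the response body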
