I tried to link my HTML file with Python code.
I tried this:
import webbrowser
webbrowser.open_new_tab("data.HTML")
It opened my HTML page in Firefox.
But I need control to return to my Python program so that the remaining lines are executed.
When I close the browser, it closes my Python script too.
I also tried linking from my HTML back to my Python program with a "go to Python" link, but it opens my text editor rather than the terminal, and I need it to return to the terminal.
How can I solve this?
As someone described, you need to use a web framework (Flask, Django, or another) to run Python code from a web page; a second option is serving the script through CGI/WSGI (see mod_wsgi: http://modwsgi.readthedocs.io/en/develop/).
For the second problem (wanting to keep the Python code running after the browser is closed), I would advise using Selenium, which lets your script control the browser's lifetime instead of just launching it.
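As a sketch of that CGI/WSGI option: a WSGI application is just a Python callable, and mod_wsgi hosts such a callable under Apache. This is a minimal example (the HTML body is made up for illustration):

```python
from wsgiref.util import setup_testing_defaults

def application(environ, start_response):
    # mod_wsgi looks for a module-level callable named "application"
    body = b"<h1>Hello from Python</h1>"
    start_response("200 OK", [("Content-Type", "text/html"),
                              ("Content-Length", str(len(body)))])
    return [body]

# quick sanity check without a real server, using wsgiref's test helper
environ = {}
setup_testing_defaults(environ)
captured = {}
def fake_start_response(status, headers):
    captured["status"] = status
result = b"".join(application(environ, fake_start_response))
```

You can develop and test the callable locally like this, then point mod_wsgi's `WSGIScriptAlias` at the file to serve it from Apache.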
Cheers, John
Related
I have created a button in HTML, and I want to invoke an external Python script when that button is clicked, ideally also passing some variables into that script.
Which method would be the simplest and fastest?
TIA
I tried Flask, but that didn't help me: my HTML is hosted on Apache, so I cannot (or perhaps shouldn't) use Flask.
If you are not using Flask, you can use Django. If you don't want to use Django either, you can use ajax to call the script as Fastnlight has shown above.
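Since Flask is out, here is a rough sketch of what the server side of that AJAX call could look like using only the Python standard library; `run_script()` and the `/run` path are made-up placeholders for your actual script and endpoint:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_script():
    # placeholder for the external script the button should trigger
    return {"status": "ok"}

class ScriptHandler(BaseHTTPRequestHandler):
    # the browser-side AJAX call would GET this endpoint
    def do_GET(self):
        payload = json.dumps(run_script()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep the demo quiet
        pass

# serve on a free local port and hit the endpoint once, as the AJAX call would
server = HTTPServer(("127.0.0.1", 0), ScriptHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
with urllib.request.urlopen("http://127.0.0.1:%d/run" % server.server_port) as resp:
    response_text = resp.read().decode()
server.shutdown()
```

From the HTML side, the button's click handler would `fetch()` that URL and use the JSON response; variables can be passed as query-string parameters.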
In case that does not work, you can try PyScript. It is like JavaScript: it is still in an experimental phase, but it can handle simple tasks in your browser window.
Link to PyScript: https://pyscript.net/
I'm new to Python and programming, and I want to make a simple program that opens a webpage when it is executed. How can that be done?
Yes, the most common way to do this is with Selenium and a webdriver manager. If you don't need to open the whole webpage and just need the HTML, use beautifulsoup4 and requests.
It depends on what you mean by making Python open a webpage.
You can either call your default browser to open the URL with something like the following:
firefox.exe <url>
Or you can create an application using Qt to show the webpage in "plain" Python: https://pythonspot.com/pyqt5-webkit-browser/
If you need to interact with the page through your program, see the links in the answer mentioning selenium.
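If "open a webpage" just means launching the user's default browser, the standard-library webbrowser module does it in pure Python (the python.org URL is only an example):

```python
import webbrowser

def open_page(url):
    # open_new_tab() returns immediately, so the rest of the script keeps
    # running; the browser's lifetime is independent of the Python process
    return webbrowser.open_new_tab(url)

if __name__ == "__main__":
    open_page("https://www.python.org/")
    print("browser launched, script continues")
```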
I'm practicing parsing web pages with Python. What I do is:
ans = requests.get(link)
Then I use re to extract some information from the HTML, which is stored in
ans.content
What I've found is that some sites rely on scripts that are executed automatically in a browser, but not when I download the page using requests. For example, instead of a page with the information, I get something like
scripts_to_get_info.run()
in the HTML.
A browser is installed on my computer, and so is the program I wrote, which means that, in theory, I should have a way to run this script, get the information, and then parse it from my Python code.
Is it possible? Any suggestion?
(The idea that this is doable came from the fact that when I inspected the page in Google Chrome, I saw the real HTML without any of those scripts.)
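To illustrate, my extraction step looks roughly like this; sample_html is a made-up stand-in for ans.content, and on a JavaScript-rendered site this static markup is all requests will ever receive:

```python
import re

# stands in for ans.content; on a JS-heavy site the data div below
# would be missing and only the script call would appear
sample_html = '<div class="price">42</div><script>scripts_to_get_info.run()</script>'

prices = re.findall(r'<div class="price">(\d+)</div>', sample_html)
print(prices)  # ['42']
```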
Alright, I've created a script that enters this local site and searches for a specific "Numero de Origem", a simple integer. However, that required clicking my way through a drop-down menu, typing the number and getting the results on that same page (no redirects or search-specific urls).
With all that in mind, I used Selenium and the script works fine.
Now I'm trying to run that script on a server. Unfortunately, the server supports neither Selenium, nor BeautifulSoup, nor even requests.
Is there a way to make this work using only urllib or urllib2? Here is a link to the supported Python modules, just in case.
Thanks, guys! Really appreciate any help!
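For reference, submitting the form with only urllib could be sketched like this; the field name "numero_de_origem" and the action URL are guesses, so inspect the page's form element with your browser's developer tools to find the real ones:

```python
import urllib.parse
import urllib.request

def build_request(action_url, numero):
    # "numero_de_origem" is a guessed field name -- check the real
    # <form> on the page for the actual names and action URL
    data = urllib.parse.urlencode({"numero_de_origem": numero}).encode()
    # passing data makes urllib send a POST request
    return urllib.request.Request(action_url, data=data)

def fetch_results(action_url, numero):
    with urllib.request.urlopen(build_request(action_url, numero)) as resp:
        return resp.read().decode()
```

This only works if the site submits a plain form; if the drop-down and search are driven by JavaScript, urllib alone cannot execute that script.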
I am a python beginner and I need some help to create a web service that calls a python web scraping script (a task for a course).
I can use Bottle to create the web service. I wanted to use static_file to call the script, but I'm not sure about that, because the documentation says static_file is for serving static assets such as CSS files.
The idea is first to create the web service and later used the web scraping script from a server.
Thanks for your help and greetings from Colombia!
P.S. My English isn't excellent, but I hope someone can understand me and help.
Unless it's already in a function, edit your scraping script so that your code is contained in a function that returns whatever information you want. It should be as easy as indenting everything that is unindented and adding a def main(): above it.
Let's say your script is called scraper.py and it is located alongside your Bottle controllers. At the top of your controller file, add import scraper.
In your callback you can then call scraper.main().
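As a sketch, the refactored scraper.py could look like this; scrape_page() is a made-up placeholder for your real parsing logic:

```python
# scraper.py -- hypothetical layout after the refactor described above
def scrape_page(html):
    # placeholder for the real scraping/parsing work
    return html.upper()

def main():
    # everything that used to run at module level now lives here,
    # and returns its result instead of printing it
    return scrape_page("<title>hello</title>")

if __name__ == "__main__":
    print(main())
```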
Having said that, it's usually bad practice to have something long-running like a scraping script inside a request handler. You'd usually want a queue of scraping jobs: your controller posts work to the queue, and your scraper subscribes to it, notifies it when done, and caches the results somewhere.
from bottle import route, run
import scraper

@route('/scrape')
def scrape():
    return scraper.main()

run(host='localhost', port=8080)
You could try this guide I found:
http://docs.python-guide.org/en/latest/scenarios/scrape/
For the XPath stuff, I would suggest using Mozilla Firefox with the "Firebug" plugin. It can generate XPaths for you, which will help you write your script faster.