I'm trying to learn SimpleCV, and its tutorial says that to display an image in the browser, you use
img.show(type="browser")
Whenever I do this, my browser goes to localhost:8080, but the page will not load. How can I start a simple server in Python so that the page will load?
Try running import webbrowser from the Python shell. If it fails, you need to install the library (with pip install or easy_install).
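If the browser opens but nothing is actually serving on localhost:8080, you can check whether that port responds by briefly serving it yourself. A minimal sketch using only the standard library (SimpleCV normally starts its own server when img.show(type="browser") is called; this just verifies the port works):

```python
import threading
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve the current directory on localhost:8080, the port SimpleCV points the browser at.
server = HTTPServer(("localhost", 8080), SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Fetch the page ourselves to confirm the server is reachable.
status = urllib.request.urlopen("http://localhost:8080").status
server.shutdown()
```

If this fetch succeeds but SimpleCV's page still fails, the problem is in SimpleCV's own server rather than your machine's networking.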
I am trying to create a Python application using Eel to build a user interface in HTML. My operating system is Ubuntu Linux, and I'm using Firefox to display the web interface.
The problem I'm having is that every time I run the Python code, Firefox opens a blank page saying "Unable to connect", followed by "Firefox can't establish a connection to the server at localhost:8000". However, if I click the "Try Again" button once, twice, or three times, my interface is displayed.
Once it's open, I can navigate to different pages, but I also noticed that once I navigate to a different page, some of my JavaScript stops working (specifically a window.close() function). I don't know if this is related, but I thought I would mention it just in case.
Any advice on the matter would be greatly appreciated.
Thank you.
I changed my browser from Firefox to Chromium, and now my interface loads the first time on startup. I know some documentation says Eel can be used with Firefox, and it can, but it seems to be somewhat buggy there and works better with other browsers.
However, I'm still having trouble with my JavaScript not running, but that will be another question.
I'm using pytrends to extract Google Trends data, like:
from pytrends.request import TrendReq
pytrend = TrendReq()
pytrend.build_payload(kw_list=['bitcoin'], cat=0, timeframe=from_date+' '+today_date)
And it returns an error:
ResponseError: The request failed: Google returned a response with code 429.
This worked yesterday, but for some reason it doesn't work now! The sample code from GitHub failed too:
pytrends = TrendReq(hl='en-US', tz=360, proxies = {'https': 'https://34.203.233.13:80'})
How can I fix this? Thanks a lot!
TL;DR: I solved the problem with a custom patch.
Explanation
The problem comes from Google's bot-recognition system. Like other such systems, it stops serving overly frequent requests coming from suspicious clients. One of the features used to recognize trustworthy clients is the presence of specific headers generated by the JavaScript code on the web pages. Unfortunately, the Python requests library provides no such camouflage against these bot-recognition systems, since JavaScript code is never even executed.
So the idea behind my patch is to reuse the headers generated by my browser while it interacts with Google Trends. Those headers are generated while I am logged in with my Google account; in other words, they are linked to my account, so to Google I am trustworthy.
Solution
I solved it in the following way:
First of all, use Google Trends from your web browser while you are logged in with your Google account;
To track the actual HTTP GET being made (I am using Chromium), go to "More Tools" -> "Developer Tools" -> "Network" tab;
Visit the Google Trends page and search for a trend; this will trigger a lot of HTTP requests, listed in the left sidebar of the "Network" tab;
Identify the GET request (in my case it was /trends/explore?q=topic&geo=US), right-click on it, and select Copy -> Copy as cURL;
Then go to this page, paste the cURL script on the left side, and copy the "headers" dictionary from the Python script generated on the right side of the page;
Then, in your code, subclass the TrendReq class so you can pass the custom headers you just copied:
from pytrends.request import TrendReq as UTrendReq

GET_METHOD = 'get'

headers = {
    # paste the headers dictionary copied from your browser here
    ...
}

class TrendReq(UTrendReq):
    def _get_data(self, url, method=GET_METHOD, trim_chars=0, **kwargs):
        # Inject the browser-copied headers into every request
        return super()._get_data(url, method=GET_METHOD, trim_chars=trim_chars, headers=headers, **kwargs)
Remove any "import TrendReq" from your code since now it will use this you just created;
Retry again;
If in any future the error message comes back: repeat the procedure. You need to update the header dictionary with fresh values and it may trigger the captcha mechanism.
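You can sanity-check offline that the override really injects your headers before going back to Google. In this sketch, DummyBase is a hypothetical stand-in for pytrends' TrendReq, and the header values are placeholders; no network is involved:

```python
GET_METHOD = 'get'

# Placeholder values; in practice, paste the dictionary copied from your browser.
headers = {'user-agent': 'Mozilla/5.0', 'cookie': 'NID=example'}

class DummyBase:
    # Stand-in for pytrends' TrendReq: it just echoes back the headers it received.
    def _get_data(self, url, method=GET_METHOD, trim_chars=0, **kwargs):
        return kwargs.get('headers', {})

class TrendReq(DummyBase):
    def _get_data(self, url, method=GET_METHOD, trim_chars=0, **kwargs):
        # Same override as the patch above: inject the browser-copied headers.
        return super()._get_data(url, method=GET_METHOD, trim_chars=trim_chars,
                                 headers=headers, **kwargs)

sent = TrendReq()._get_data('https://trends.google.com/trends/api/explore')
```

If sent equals your headers dictionary, the subclass is wired up correctly and every real request will carry those headers.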
This one took a while, but it turned out the library just needed an update. You can check out a few of the approaches I posted here, both of which resulted in status 429 responses:
https://github.com/GeneralMills/pytrends/issues/243
Ultimately, I was able to get it working again by running the following command from my bash prompt:
Run:
pip install --upgrade --user git+https://github.com/GeneralMills/pytrends
to get the latest version.
Hope that works for you too.
EDIT:
If you can't upgrade from source, you may have some luck with:
pip install pytrends --upgrade
Also, make sure you're running git as an administrator if on Windows.
I had the same problem even after updating the module with pip install --upgrade --user git+https://github.com/GeneralMills/pytrends and restarting Python.
But the issue was solved by the method below:
Instead of
pytrends = TrendReq(hl='en-US', tz=360, timeout=(10,25), proxies=['https://34.203.233.13:80',], retries=2, backoff_factor=0.1, requests_args={'verify':False})
I just ran:
pytrend = TrendReq()
Hope this can be helpful!
After running the upgrade command via pip install, you should restart the Python kernel and reload the pytrends library.
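In IPython/Jupyter you can reload a module without restarting the whole kernel, though a full restart is the safer option. A sketch using a stdlib module as a stand-in (substitute pytrends.request for json in practice):

```python
import importlib
import json  # stands in for pytrends.request in this sketch

# importlib.reload re-executes the module so the upgraded code is picked up
json = importlib.reload(json)
```

Note that reload only refreshes that one module; objects created from the old version keep their old behavior, which is why a kernel restart is usually recommended after an upgrade.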
How do I change the browser used by the view(response) command in the Scrapy shell? It defaults to Safari on my machine, but I'd like it to use Chrome, as the development tools in Chrome are better.
As eLRuLL already mentioned, view(response) uses webbrowser to open the web page you downloaded. To change its behavior, you need to set the BROWSER environment variable.
You could do this by adding the following line at the end of your ~/.bashrc file:
export BROWSER=/usr/bin/firefox (if you would like Firefox to be used).
I don't have Chrome installed, but a quick Google search suggests its path is /usr/bin/google-chrome-stable; therefore, you could try export BROWSER=/usr/bin/google-chrome-stable instead. I didn't test it for Chrome, though.
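You can see the BROWSER mechanism work without any real browser: webbrowser reads the variable the first time it is used, so a stand-in executable makes a quick offline check. A sketch for Linux/macOS, where the fake "browser" is just a shell script that records the URL it was asked to open:

```python
import os
import stat
import tempfile
import time

# Create a stand-in "browser": a shell script that records the URL it is given.
workdir = tempfile.mkdtemp()
log_path = os.path.join(workdir, 'opened.txt')
script_path = os.path.join(workdir, 'fakebrowser')
with open(script_path, 'w') as f:
    f.write('#!/bin/sh\necho "$1" > {}\n'.format(log_path))
os.chmod(script_path, os.stat(script_path).st_mode | stat.S_IXUSR)

# BROWSER must be set before webbrowser is first used, which is why
# exporting it in ~/.bashrc works for the Scrapy shell.
os.environ['BROWSER'] = script_path
import webbrowser

webbrowser.open('http://example.com')
for _ in range(50):  # give the stand-in browser a moment in case it runs in the background
    if os.path.exists(log_path):
        break
    time.sleep(0.1)
with open(log_path) as f:
    opened = f.read().strip()
```

The same resolution happens when the Scrapy shell calls view(response): webbrowser picks up whatever BROWSER points at and invokes it with the URL of the saved response.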
There is currently no way to specify which browser opens the response, because view(response) internally uses the webbrowser package, which opens the response in your default configured browser.
You could always change the default browser to Chrome on your system; that should make webbrowser use it.
This fixed it for me:
If you're on Windows 10, find or create any HTML file on your system.
Right-click the HTML file
Open with
Choose another app
Select your browser (e.g. Google Chrome) and check the box "Always use this app to open .html"
Now attempt to use view(response) in the Scrapy shell again and it should work.
Try this:
import webbrowser
from scrapy.utils.response import open_in_browser

# Point open_in_browser at a specific browser binary instead of the default
open_in_browser(response, _openfunc=webbrowser.get("/usr/bin/google-chrome").open)
I tried the following code in VS2015, Eclipse, and Spyder:
import urllib.request
with urllib.request.urlopen('https://www.python.org/') as response:
    html = response.read()
In all cases it won't open the web page in the browser. I am not sure what the problem is; debugging doesn't help. In VS2015 the program exits with code 0, which I suppose means success.
You are using the wrong library for the job. The urllib module provides functions to send HTTP requests and capture the result in your program; it has nothing to do with a web browser. What you are looking for is the webbrowser module. Here is an example:
import webbrowser
webbrowser.open('http://google.com')
This will show the web page in your browser.
urllib is a module used to send requests to web pages and read their contents, whereas webbrowser is used to open the desired URL in a browser.
It can be used as follows:
import webbrowser
webbrowser.open('http://docs.python.org/lib/module-webbrowser.html')
which usually re-uses an existing browser window.
To open in new window:
webbrowser.open_new('http://docs.python.org/lib/module-webbrowser.html')
To open in new tab:
webbrowser.open_new_tab('http://docs.python.org/lib/module-webbrowser.html')
To use it from the command-line interface:
$ python -m webbrowser -t "http://www.python.org"
-n: open new window
-t: open new tab
Here is the Python documentation for webbrowser:
Python 3.6
Python 2.7
I am running Selenium WebDriver (Firefox) using Python on a headless server. I am using pyvirtualdisplay to start and stop the Xvnc display to grab images of the sites I am visiting. This is working great, except that Flash content is not loading on the pages (I can tell because I am taking screenshots of the pages, and I just see empty space where the Flash content should be).
When I run the same program on my local Unix machine, the Flash content loads just fine. I have installed Flash on my server and have libflashplayer.so in /usr/lib/mozilla/plugins. The only difference seems to be that I am using the Xvnc display on the server (unless Flash wasn't installed properly? But I believe it was, since I used to get a message asking me to install Flash when I viewed a site with Flash content, and since installing it I don't get that message anymore).
Does anyone have any ideas or experience with this? Is there a trick to getting Flash to load using a Firefox webdriver on a headless server? Thanks
It turns out I needed to use Selenium to scroll down the page so that all the content would load.
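The usual pattern is to scroll and re-check the page height until it stops growing. A sketch of that loop; FakeDriver here is a hypothetical stand-in for the Selenium WebDriver so the logic can run offline, and with a real driver the same execute_script calls run the JavaScript in the actual page:

```python
class FakeDriver:
    # Simulates a page whose height grows as lazy content loads on scroll.
    def __init__(self):
        self.height = 1000
        self.loads_left = 3  # how many more batches of content will load

    def execute_script(self, script):
        if "scrollTo" in script:
            if self.loads_left:
                self.loads_left -= 1
                self.height += 500  # scrolling triggered more content
            return None
        return self.height  # answers "return document.body.scrollHeight"

def scroll_to_bottom(driver):
    # Keep scrolling until the page height stops growing.
    last = driver.execute_script("return document.body.scrollHeight")
    while True:
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        new = driver.execute_script("return document.body.scrollHeight")
        if new == last:
            break
        last = new

driver = FakeDriver()
scroll_to_bottom(driver)
```

With a real Selenium driver you would also add a short pause (or an explicit wait) between scrolls so the new content has time to load before the height check.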