I want to try Flash-Selenium with the Python driver; however, I have some concerns about the available Python extension: it seems aged and there is no example of how to use it. Is anybody using it? Is there any example of how to use it?
Example taken from the FlashSelenium page:
from com.thoughtworks.selenium.FlashSelenium import FlashSelenium
from com.thoughtworks.selenium.selenium import selenium
url = "http://flashselenium.t35.com/colors.html"
browserType = "*firefox"
selenium = selenium("localhost", 4444, browserType, url)
selenium.start()
selenium.open(url)
flashApp = FlashSelenium(selenium, "coloredSquare")
flashApp.percent_loaded()
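Continuing from that snippet, here is a minimal usage sketch (my own addition, not from the FlashSelenium page) that waits until the movie reports it is fully loaded before shutting Selenium down:
import time
# poll the movie until it reports 100% loaded (sketch only; not from the FlashSelenium docs)
while int(flashApp.percent_loaded()) < 100:
    time.sleep(1)
print("Flash movie fully loaded")
selenium.stop()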
I want to scrape Google's 'People also ask' questions and answers. I am doing it successfully with the following module:
pip install people_also_ask
The problem is that the library is configured so that no one can send many requests to Google. I want to send 1,000 requests per day, and to achieve that I have to add fake_useragent to the module. I have tried a lot, but when I try to add a fake user agent to the header it gives an error. I am not a pro, so I must have done something wrong myself. Can anyone help me add fake_useragent to the people_also_ask module? Here is working code to get the questions and answers:
from encodings import utf_8
import people_also_ask as paa
from fake_useragent import UserAgent

ua = UserAgent()

while True:
    input("Please make sure the queries are in \\query.txt file.\npress Enter to continue...")
    try:
        query_file = open("query.txt","r")
        queries = query_file.readlines()
        query_file.close()
        break
    except:
        print("Error with the query.txt file...")

for query in queries:
    res_file = open("result.csv","a",encoding="utf_8")
    try:
        query = query.replace("\n","")
    except:
        pass
    print(f'Searching for "{query}"')
    questions = paa.get_related_questions(query, 14)
    questions.insert(0,query)
    print("\n________________________\n")
    main_q = True
    for i in questions:
        i = i.split('?')[0]
        try:
            answer = str(paa.get_answer(i)['response'])
            if answer[-1].isdigit():
                answer = answer[:-11]
            print(f"Question:{i}?")
        except Exception as e:
            print(e)
        print(f"Answer:{answer}")
        if main_q:
            a = ""
            b = ""
            main_q = False
        else:
            a = "<h2>"
            b = "</h2>"
        res_file.writelines(str(f'{a}{i}?{b},"<p>{answer}</p>",'))
        print("______________________")
    print("______________________")
    res_file.writelines("\n")
    res_file.close()

print("\nSearch Complete.")
input("Press any key to Exit!")
This is against Google's terms of service and against the wishes of the people_also_ask package. This answer is for educational purposes only.
You asked why fake_useragent is prevented from working. It isn't prevented from working; the people_also_ask package simply doesn't make any calls to fake_useragent. You can't just import a package and expect another package to start using it. You have to make the packages work together manually.
To do that, you need some idea of how the two packages work. Have a look at the source code and you will see that you can make them work together very easily: just replace the constant header in people_also_ask with one generated by fake_useragent before you request any data.
paa.google.HEADERS = {'User-Agent': ua.random} # replace the HEADER with a randomised HEADER from fake_useragent
questions = paa.get_related_questions(query, 14)
and
paa.google.HEADERS = {'User-Agent': ua.random} # replace the HEADER with a randomised HEADER from fake_useragent
answer = str(paa.get_answer(i)['response'])
NOTE:
Not all user agents will work; Google doesn't return related questions for some user agents. That is not the fault of either the fake_useragent or the people_also_ask package.
To alleviate this somewhat, make sure you call ua.update(), and you can also use PR #122 of fake_useragent to select only a subset of the newest user agents, which are more likely to work, though you will still get a few missed queries. There is a reason the people_also_ask package didn't bypass or work around this limitation from Google.
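Putting the two snippets into context, here is a minimal sketch of how the patched header can be combined with a simple retry when Google returns nothing (the helper name and the retry count are my own choices, not part of either package):
import people_also_ask as paa
from fake_useragent import UserAgent

ua = UserAgent()

def get_related_with_retries(query, max_results=14, attempts=3):
    """Retry with a fresh random user agent when Google returns nothing."""
    for _ in range(attempts):
        # swap the module's constant header for a randomised one before the request
        paa.google.HEADERS = {'User-Agent': ua.random}
        questions = paa.get_related_questions(query, max_results)
        if questions:
            return questions
    return []

print(get_related_with_retries("web scraping"))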
I just started learning Selenium with Python.
from selenium import webdriver
MY_PROFILE = "D:\\FIREFOX_PROFILE"
FFP = webdriver.FirefoxProfile(MY_PROFILE)
print(FFP.profile_dir)
# OUTPUT: C:\Users\ABC\AppData\Local\Temp\****\***
# But it should be OUTPUT: D:\FIREFOX_PROFILE
DRIVER = webdriver.Firefox(firefox_profile = FFP)
print(FFP.profile_dir)
# OUTPUT: C:\Users\ABC\AppData\Local\Temp\****\***
# But it should be OUTPUT: D:\FIREFOX_PROFILE
I want to save my profile somewhere so that I can use it later on.
I also tried Run -> firefox.exe -p and creating a new profile, but I can't use the created profile. Nothing works.
I am using:
Selenium Version: 2.53.6
Python Version: 3.4.4
Firefox Version: Various (49.0.2, 45, 38, etc.)
I searched Google but couldn't solve it. Is there any way to save the profile?
You need the help of the os module in Python:
import os
There you get functions (like os.getcwd()) as described in Files and Directories.
Then use:
p = webdriver.FirefoxProfile()
p.set_preference('browser.download.folderList', 2 )
p.set_preference('browser.download.manager.showWhenStarting', False)
p.set_preference('browser.download.dir', os.getcwd())
p.set_preference('browser.helperApps.neverAsk.saveToDisk', 'text/csv/xls')
driver = webdriver.Firefox(p)
In short, you can do:
profile.set_preference("browser.helperApps.neverAsk.openFile","text/csv,application/x-msexcel,application/excel,application/x-excel,application/vnd.ms-excel,image/png,image/jpeg,text/html,text/plain,application/msword,application/xml")
possible duplicate of Setting selenium to use custom profile, but it keeps opening with default
I have checked the Google Search APIs and it seems that they have not released any API for searching images. So I was wondering if there is a Python script/library through which I can automate the "search by image" feature.
This was annoying enough to figure out that I thought I'd throw a comment on the first Python-related Stack Overflow result for "script google image search". The most annoying part of all this is setting up your application and custom search engine (CSE) in Google's web UI, but once you have your API key and CSE, define them in your environment and do something like:
#!/usr/bin/env python
# save top 10 google image search results to current directory
# https://developers.google.com/custom-search/json-api/v1/using_rest
import requests
import os
import sys
import re
import shutil
url = 'https://www.googleapis.com/customsearch/v1?key={}&cx={}&searchType=image&q={}'
apiKey = os.environ['GOOGLE_IMAGE_APIKEY']
cx = os.environ['GOOGLE_CSE_ID']
q = sys.argv[1]
i = 1
for result in requests.get(url.format(apiKey, cx, q)).json()['items']:
    link = result['link']
    image = requests.get(link, stream=True)
    if image.status_code == 200:
        m = re.search(r'[^\.]+$', link)
        filename = './{}-{}.{}'.format(q, i, m.group())
        with open(filename, 'wb') as f:
            image.raw.decode_content = True
            shutil.copyfileobj(image.raw, f)
        i += 1
There is no API available, but you can parse the page and imitate the browser. I don't know how much data you need to parse, because Google may limit or block access.
You can imitate the browser simply by using urllib and setting the correct headers. If you think parsing complex web pages from Python may be difficult, you can instead drive a headless browser like PhantomJS; inside a browser it is trivial to get the right elements using JavaScript/DOM.
Note: before trying any of this, check Google's TOS.
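As a rough illustration of the "urllib plus headers" idea (my own sketch, not from the answer above; the query URL and User-Agent string are arbitrary examples, and parsing, rate limits, and Google's TOS are left aside):
# fetch a Google Images results page while sending a browser-like User-Agent
import urllib.request

url = "https://www.google.com/search?q=red+pandas&tbm=isch"
req = urllib.request.Request(
    url,
    headers={"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) "
                           "AppleWebKit/537.36 (KHTML, like Gecko) "
                           "Chrome/120.0 Safari/537.36"},
)
with urllib.request.urlopen(req) as resp:
    html = resp.read().decode("utf-8", errors="replace")

print(len(html), "bytes of HTML to parse")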
You can try this:
https://developers.google.com/image-search/v1/jsondevguide#json_snippets_python
It's deprecated, but seems to work.
I have written a short Python script that opens Google Music in a WebView window. However, I can't seem to find anything about getting WebKit to use cookies so that I don't have to log in every time I start it up.
Here's what I have:
#!/usr/bin/env python
import gtk, webkit
import ctypes
libgobject = ctypes.CDLL('/usr/lib/i386-linux-gnu/libgobject-2.0.so.0')
libwebkit = ctypes.CDLL('/usr/lib/libsoup-2.4.so.1')
libsoup = ctypes.CDLL('/usr/lib/libsoup-2.4.so.1')
libwebkit = ctypes.CDLL('/usr/lib/libwebkitgtk-1.0.so.0')
proxy_uri = libsoup.soup_uri_new('http://tcdproxy.tcd.ie:8080') #proxy uri
session = libwebkit.webkit_get_default_session()
libgobject.g_object_set(session, "proxy-uri", proxy_uri, None)
w = gtk.Window()
w.connect("destroy",w.destroy)
w.set_size_request(1000,600)
w.connect('delete-event', lambda w, event: gtk.main_quit())
s = gtk.ScrolledWindow()
v = webkit.WebView()
s.add(v)
w.add(s)
w.show_all()
v.open('http://music.google.com')
gtk.main()
Any help on this would be greatly appreciated,
thanks,
Richard
Worked it out, but it required learning more ctypes than I wanted -_-. Try this; I required different library paths, etc., than you, so I'll just paste what's relevant.
#remove all cookiejars
generic_cookiejar_type = libgobject.g_type_from_name('SoupCookieJar')
libsoup.soup_session_remove_feature_by_type(session, generic_cookiejar_type)
#and replace with a new persistent jar
cookiejar = libsoup.soup_cookie_jar_text_new('/path/to/your/cookies.txt',False)
libsoup.soup_session_add_feature(session, cookiejar)
The code's pretty self-explanatory. There's also a SoupCookieJarSqlite that you might prefer, though I'm sure the text file would be easier for development.
EDIT: actually, the cookie jar removal doesn't seem to be doing anything, so the appropriate snippet is:
#add a new persistent cookie jar
cookiejar = libsoup.soup_cookie_jar_text_new('/path/to/your/cookies.txt',False)
libsoup.soup_session_add_feature(session, cookiejar)
I know it's an old question and I had been looking for the answer all over the place. I finally came up with it on my own after some trial and error. Hope this helps others.
This is basically the same answer as Matt's, just using GObject introspection (GIR), and it feels more Pythonic.
from gi.repository import Soup, WebKit
cookiejar = Soup.CookieJarText.new("<Your cookie path>", False)
cookiejar.set_accept_policy(Soup.CookieJarAcceptPolicy.ALWAYS)
session = WebKit.get_default_session()
session.add_feature(cookiejar)
In the latest version, i.e. GTK WebKit2 4.0, this has to be done in the following way:
import gi
gi.require_version('Soup', '2.4')
gi.require_version('WebKit2', '4.0')
from gi.repository import Soup
from gi.repository import WebKit2
browser = WebKit2.WebView()
website_data_manager = browser.get_website_data_manager()
cookie_manager = website_data_manager.get_cookie_manager()
cookie_manager.set_persistent_storage('PATH_TO_YOUR/cookie.txt', WebKit2.CookiePersistentStorage.TEXT)
cookie_manager.set_accept_policy(Soup.CookieJarAcceptPolicy.ALWAYS)
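For context, here is a minimal sketch (my own, not part of the answer above) of wiring that cookie manager into a small GTK 3 window that loads Google Music; the cookie file path is the same placeholder as above:
import gi
gi.require_version('Gtk', '3.0')
gi.require_version('WebKit2', '4.0')
from gi.repository import Gtk, WebKit2

browser = WebKit2.WebView()
cookie_manager = browser.get_website_data_manager().get_cookie_manager()
# persist cookies to a text file so logins survive restarts
cookie_manager.set_persistent_storage('PATH_TO_YOUR/cookie.txt',
                                      WebKit2.CookiePersistentStorage.TEXT)

window = Gtk.Window(title="Google Music")
window.set_default_size(1000, 600)
window.connect("destroy", Gtk.main_quit)
window.add(browser)
browser.load_uri("https://music.google.com")
window.show_all()
Gtk.main()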
I have some JavaScript from a 3rd party vendor that is initiating an image request. I would like to figure out the URI of this image request.
I can load the page in my browser, and then monitor "Live HTTP Headers" or "Tamper Data" in order to figure out the image request URI, but I would prefer to create a command line process to do this.
My intuition is that it might be possible using python + qtwebkit, but perhaps there is a better way.
To clarify: I might have this (overly simplified code).
<script>
suffix = magicNumberFunctionIDontHaveAccessTo();
url = "http://foobar.com/function?parameter=" + suffix
img = document.createElement('img'); img.src=url; document.all.body.appendChild(img);
</script>
Then once the page is loaded, I can go figure out the url by sniffing the packets. But I can't just figure it out from the source, because I can't predict the outcome of magicNumberFunction...().
Any help would be muchly appreciated!
Thank you.
The simplest thing to do might be to use something like HtmlUnit and skip a real browser entirely. By using Rhino, it can evaluate JavaScript and likely be used to extract that URL out.
That said, if you can't get that working, try out Selenium RC and use the captureNetworkTraffic command (which requires the Selenium instance to be started with the option captureNetworkTraffic=true). This will launch Firefox with a proxy configured and then let you pull the request info back out as JSON/XML/plain text. Then you can parse that content and get what you want.
Try out the instant test tool that my company offers. If the data you're looking for is in our results (after you click View Details), you'll be able to get it from Selenium. I know, since I wrote the captureNetworkTraffic API for Selenium for my company, BrowserMob.
I would pick any one of the many http proxy servers written in Python -- probably one of the simplest ones at the very top of the list -- and tweak it to record all URLs requested (as well as proxy-serve them) e.g. appending them to a text file -- without loss of generality, call that text file 'XXX.txt'.
Now all you need is a script that: starts the proxy server in question; starts Firefox (or whatever) on your main desired URL with that proxy configured (see e.g. this SO question for how), though I'm sure other browsers would work just as well; waits a bit (e.g. until the proxy's XXX.txt file has not been altered for more than N seconds); reads XXX.txt to extract only the URLs you care about and records them wherever you wish; and shuts down the proxy and Firefox processes.
I think this will be much faster to put in place and make work correctly, for your specific requirements, than any more general solution based on qtwebkit, selenium, or other "automation kits".
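For what it's worth, here is a bare-bones sketch of the kind of recording proxy described above (my own illustration; it handles plain HTTP GET only, no CONNECT/HTTPS, and appends every requested URL to XXX.txt):
# minimal logging forward proxy: log each requested URL, then serve the response
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

LOG_FILE = "XXX.txt"

class LoggingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # when the client is proxy-configured, self.path holds the absolute URL
        with open(LOG_FILE, "a") as log:
            log.write(self.path + "\n")
        try:
            with urllib.request.urlopen(self.path) as upstream:
                body = upstream.read()
                self.send_response(upstream.status)
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
        except Exception as exc:
            self.send_error(502, str(exc))

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), LoggingProxy).serve_forever()
Point the browser's HTTP proxy at 127.0.0.1:8080 while it runs, then read XXX.txt afterwards.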
Use Firebug Firefox plugin. It will show you all requests in real time and you can even debug the JS in your Browser or run it step-by-step.
Ultimately, I did it in Python, using Selenium RC. This solution requires the Python files for Selenium RC, and you need to start the Java server ("java -jar selenium-server.jar").
from selenium import selenium
import unittest
import lxml.html
class TestMyDomain(unittest.TestCase):
    def setUp(self):
        self.selenium = selenium("localhost", \
            4444, "*firefox", "http://www.MyDomain.com")
        self.selenium.start()

    def test_mydomain(self):
        htmldoc = open('site-list.html').read()
        url_list = [link for (element, attribute, link, pos) in lxml.html.iterlinks(htmldoc)]
        for url in url_list:
            try:
                sel = self.selenium
                sel.open(url)
                sel.select_window("null")
                js_code = '''
                myDomainWindow = this.browserbot.getUserWindow();
                for(obj in myDomainWindow) {
                    /* This code grabs the OMNITURE tracking pixel img */
                    if ((obj.substring(0,4) == 's_i_') && (myDomainWindow[obj].src)) {
                        var ret = myDomainWindow[obj].src;
                    }
                }
                ret;
                '''
                omniture_url = sel.get_eval(js_code) #parse&process this however you want
            except Exception, e:
                print 'We ran into an error: %s' % (e,)

        self.assertEqual("expectedValue", observedValue)

    def tearDown(self):
        self.selenium.stop()

if __name__ == "__main__":
    unittest.main()
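With the Selenium server running (java -jar selenium-server.jar) and the RC Python client installed, this runs like any ordinary unittest script, e.g. python test_mydomain.py (the filename here is just an example).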
Why can't you just read suffix, or url for that matter? Is the image loaded in an iframe or in your page?
If it is loaded in your page, then this may be a dirty hack (replace document.body with whichever element is actually used):
var ac = document.body.appendChild;
var sources = [];
document.body.appendChild = function(child) {
    if (/^img$/i.test(child.tagName)) {
        sources.push(child.getAttribute('src'));
    }
    // call the original appendChild with document.body as `this`,
    // otherwise the browser throws an illegal-invocation error
    ac.call(document.body, child);
};