Splinter iframe form filling - Python

I am using Splinter for some automated testing, and I am stuck on filling a file input inside an iframe using Splinter with Python.
This is the actual HTML of the iframe:
<iframe>
  <html lang="en-US">
    <head>
      <title>Website</title>
      <meta charset="utf-8">
    </head>
    <body>
      <form>
        <input type="file" name="artwork" id="file-upload" accept=".png,.jpg">
      </form>
    </body>
  </html>
</iframe>
This is the actual Python code I have:
from splinter import Browser
browser = Browser()
browser.driver.maximize_window()
browser.driver.implicitly_wait(10)
browser.visit('https://website.com')
with browser.get_iframe(1) as iframe:
    iframe.attach_file('artwork', 'C:\\Users\\design\\upload.png')
After execution I get this error:
splinter.exceptions.ElementDoesNotExist: no elements could be found with name "artwork"
Can you help me? I really don't know why it is not working.
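In case it helps narrow things down, here is a minimal fallback sketch that bypasses Splinter's iframe helper and drives the file input directly through the underlying Selenium driver. The frame index and the switch_to calls are assumptions about the page, not something tested against it, so they are left commented:

```python
import os

# send_keys on a file input generally needs an absolute path
upload_path = os.path.abspath(r"C:\Users\design\upload.png")

# Hypothetical fallback via the raw Selenium driver:
# browser.driver.switch_to.frame(0)  # first iframe in the DOM
# browser.driver.find_element_by_id("file-upload").send_keys(upload_path)
# browser.driver.switch_to.default_content()
```

If the raw-Selenium route works, the original failure likely comes from the frame reference passed to get_iframe rather than from attach_file itself.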

Related

Different html content with robot browser using selenium webdriver instead of human browser

I'm trying to parse a webpage with the Python Selenium webdriver.
I've found something strange in the HTML content: it is different when I use a robot browser than when I load the same page with a human browser.
For example, here is part of the webpage that I get:
<p>
<label>
<span>
Some text 1
<br>
<i>header 1</i>
Some text 2
<br>
<i>header 2</i>
Some text 3
<br>
<i>header 3</i>
Some text 4
</span>
</label>
</p>
In the human browser I get it as is, but in the robot browser I get it without one section: header 2 and Some text 3 are missing.
I tried analyzing the request headers in the human browser and the robot browser to find a difference, and I found one. In the human request headers there is no cookie, but in the robot browser's request headers I can see this:
cookie: _ga=GA1.2.153230535.1622710383; _gid=GA1.2.1454651548.1622710383; __gads=ID=fb2caae82787b530-2265cda036c80043:T=1622710436:RT=1622710436:S=ALNI_MZ0bzRzYOmpiZrGnBzbdMQl7UHCRw
I don't understand why that is. Can anyone explain? How can the server distinguish my robot browser and send it different content than it sends a human browser?
I solved my problem by imitating mouse movement. Now, before clicking an element, I use Selenium's webdriver.ActionChains to imitate moving the mouse:
from time import sleep
from selenium.webdriver import ActionChains

search_input = browser.driver.find_element_by_xpath('//input[@class="search_input"]')
sleep(0.5)
ActionChains(browser.driver).move_to_element(search_input).perform()
sleep(0.5)
ActionChains(browser.driver).click(search_input).perform()
sleep(0.5)
search_input.clear()
sleep(0.5)
Now I get all the content just like in the human browser.

How to click a button in Tor Browser with Selenium and python

I use Tor Browser with Selenium to automate a click on a button.
File script.py
from tbselenium.tbdriver import TorBrowserDriver
with TorBrowserDriver("/home/user/Selenium/tor-browser_en-US/") as driver:
    driver.get('https://www.example.com/form.html')
How do I manage to perform a click on this button (excerpt from the HTML file)?
<form method="post" id="IdA" action="https://example.com/action.php">
  <input id='valid' name='valid' value='012.23945765955' type="hidden">
  <button class="g-recaptcha" data-sitekey="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" data-callback="onSubmit" id="IdA" style="background:url(https://www.example.com/button.gif);width:190px;height:58px;border:none;cursor:pointer;display:none;" type="submit"></button>
</form>
I tried this, but it did not work:
driver.findElement(By.Id("IdA")).click()
I'm assuming you are trying to bypass a CAPTCHA.
You can do this one of two ways. You can click the button by using a selector, for example an XPath selector for a button with class "g-recaptcha". You can also just execute JavaScript code on the page to call the onSubmit() function.
So two options are:
driver.find_element_by_xpath("//button[@class='g-recaptcha']").click()
driver.execute_script("onSubmit('" + captchaToken + "')")
See the reCAPTCHA callback section of the 2captcha API documentation on solving captchas.
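One detail worth noting on the second option: concatenating the token straight into the JavaScript string is fragile. A sketch of a safer way to quote it, assuming captcha_token holds the solved token, is to let json.dumps produce the JS string literal:

```python
import json

def build_callback_js(token):
    # json.dumps emits a double-quoted, escaped JS string literal,
    # so quotes or backslashes in the token cannot break the script
    return "onSubmit(%s)" % json.dumps(token)

# driver.execute_script(build_callback_js(captcha_token))
print(build_callback_js("abc123"))  # onSubmit("abc123")
```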

Scraping a website that requires authentication

I know this question might seem quite straightforward, but I have tried every suggestion and none has worked.
I want to build a Python script that checks my school website to see if new grades have been put up. However I cannot for the life of me figure out how to scrape it.
The website redirects to a different page to login. I have tried all the scripts and answers I could find but I am lost.
I use Python 3, and the website has the format https://blah.schooldomate.state.edu.country/website/grades/summary.aspx
The username section contains the following:
<input class="txt" id="username" name="username" type="text" autocomplete="off" style="cursor: auto;">
The password field is the same, except it contains an onfocus HTML attribute.
Once successfully authenticated, I am automatically redirected to the correct page.
I have tried:
- using Python 2's cookielib and Mechanize
- using HTTPBasicAuth
- passing the information as a dict to requests.get()
- trying out many different people's code, including answers I found on this site
You can try with requests:
http://docs.python-requests.org/en/master/
From the website:
import requests
r = requests.get('https://api.github.com/user', auth=('user', 'pass'))
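Note that HTTPBasicAuth only helps when the server actually uses HTTP Basic authentication; a redirecting .aspx login page usually expects a form POST instead. A hedged sketch using requests.Session, which keeps the auth cookie across requests (the URLs and the password field name are placeholders, not taken from the real site):

```python
import requests

session = requests.Session()  # keeps cookies across requests
login_payload = {
    "username": "YOUR_USERNAME",  # matches <input name="username">
    "password": "YOUR_PASSWORD",  # assumed field name
}
# resp = session.post("https://school.example/login.aspx", data=login_payload)
# grades = session.get("https://school.example/website/grades/summary.aspx")
```

ASP.NET login forms often also require hidden fields such as __VIEWSTATE to be echoed back; those would need to be scraped from the login page first.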
Maybe you can use the Selenium library.
Here is my code example:
from time import sleep

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

def login():
    browser = webdriver.Firefox()
    browser.get("https://www.your_url.com")
    # Edit the XPath of the login INPUT for the username
    xpath_username = "//input[@class='username']"
    # Edit the XPath of the login INPUT for the password
    xpath_password = "//input[@class='password']"
    # This writes YOUR_USERNAME/YOUR_PASSWORD into the matched inputs
    # (custom function below)
    click_xpath(browser, xpath_username, "YOUR_USERNAME")
    click_xpath(browser, xpath_password, "YOUR_PASSWORD")
    # THEN SCRAPE WHAT YOU NEED

# Here is the custom function.
# With no text given, it only clicks the element (a button, for example).
def click_xpath(browser, xpath, text="", time_wait=10):
    try:
        wait = WebDriverWait(browser, time_wait)
        search = wait.until(EC.element_to_be_clickable((By.XPATH, xpath)))
        search.click()
        sleep(1)
        # Write into the element
        if text:
            search.send_keys(str(text) + Keys.RETURN)
        return search
    except Exception:
        #print("ERROR-click_xpath: " + xpath)
        return False

Cannot click button with Python with Selenium

I am trying to click a button that brings up a dialog box to select a file. Inspecting the element, it looks like it is an input rather than a button. Either way, I cannot click it with:
element = browser.find_element_by_id("fileupload")
element.click()
and
browser.find_element_by_id("fileupload").send_keys("\n")
Neither of these seems to work.
Here is what I see when I inspect that element on the page:
<span class="btn btn-success fileinput-button">
  <span class="glyphicon glyphicon-upload"></span>
  Select and Upload...
  <input id="fileupload" name="upfile" accept=".xml" type="file">
</span>
Any assistance help or pointers would be appreciated!
Clicking a file input usually triggers the native file-upload dialog. Since you cannot control that dialog with Selenium, you need to avoid opening it at all by sending the file path to the input directly instead:
browser.find_element_by_id("fileupload").send_keys("path_to_the_file")
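One caveat: send_keys on a file input generally wants an absolute path, not a relative one. A quick sketch (the find_element call is commented out because it needs the live page):

```python
import os

# Resolve the file relative to the current working directory
path = os.path.abspath("upload.xml")
# browser.find_element_by_id("fileupload").send_keys(path)
print(os.path.isabs(path))  # True
```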
See also:
How to deal with file uploading in test automation using selenium or webdriver
How to upload file using Selenium WebDriver in Java
How to upload file ( picture ) with selenium, python

Python Selenium Run All Page Javascripts

I'm scraping my site, which uses a Google Custom Search iframe. I am using Selenium to switch into the iframe and output the data, and BeautifulSoup to parse that data.
from bs4 import BeautifulSoup
from selenium import webdriver
import time
import html5lib

driver = webdriver.Firefox()
driver.get('http://myurl.com')
time.sleep(4)
iframe = driver.find_elements_by_tag_name('iframe')[0]
driver.switch_to_default_content()
driver.switch_to_frame(iframe)
output = driver.page_source
soup = BeautifulSoup(output, "html5lib")
print(soup)
I am successfully getting into the iframe and getting some of the data. At the very top of the output there is text about JavaScript needing to be enabled and the page being reloaded. The part of the page I'm looking for isn't there, although I can see it in the source via developer tools, so obviously some of it isn't loading.
So, my question: how do you get Selenium to load ALL of the page's JavaScript? Is it done automatically?
I see a lot of posts on SO about running an individual function, but nothing about running all of the JS on the page.
Any help is appreciated.
Ah, so it was in the tag that featured the "JavaScript must be enabled" text.
I just posted a question on how to switch into the nested iframe here:
Python Selenium Switch into an iframe within an iframe
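For completeness, Selenium only enters nested iframes one level at a time, from the outside in. A sketch with hypothetical frame names (the driver calls are commented because they need the live page):

```python
# Hypothetical frame identifiers; switch from the outside in.
frame_path = ["outer-frame", "inner-frame"]
# for frame in frame_path:
#     driver.switch_to.frame(frame)
# inner_html = driver.page_source
# driver.switch_to.default_content()  # jumps straight back to the top document
```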
