Log in to a website using Python and Selenium

I'm trying to log in to http://sports.williamhill.com/bet/en-gb using python and selenium.
Here is what I've tried so far:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
session = webdriver.Chrome()
session.get('https://sports.williamhill.com/bet/en-gb')
# REMOVE POP-UP
timezone_popup_ok_button = session.find_element_by_xpath('//a[@id="yesBtn"]')
timezone_popup_ok_button.click()
# FILL OUT FORMS
usr_field = session.find_element_by_xpath('//input[@value="Username"]')
usr_field.clear()
WebDriverWait(session, 10).until(EC.visibility_of(usr_field))
usr_field.send_keys('myUsername')
pwd_field = session.find_element_by_xpath('//input[@value="Password"]')
pwd_field.clear()
pwd_field.send_keys('myPassword')
login_button = session.find_element_by_xpath('//input[@id="signInBtn"]')
login_button.click()
I'm getting the following error.
selenium.common.exceptions.ElementNotVisibleException: Message: element not visible
when trying to execute
usr_field.send_keys('myUsername')
The usr_field element appears to be visible when I view it with the inspector tool, though I'm not 100% sure about that.
I'm using this script (with some modifications) successfully on other sites, but this one is giving me a real headache and I can't seem to find the answer anywhere on the net.
Would appreciate if someone could help me out here!

The following code will resolve the issue.
from selenium import webdriver
session = webdriver.Chrome()
session.get('https://sports.williamhill.com/bet/en-gb')
# REMOVE POP-UP
timezone_popup_ok_button = session.find_element_by_xpath('//a[@id="yesBtn"]')
timezone_popup_ok_button.click()
# FILL OUT FORMS
# The visible "Username" box is a placeholder; clicking it swaps in the real input
user_element = session.find_element_by_name("tmp_username")
user_element.click()
actual_user_elm = session.find_element_by_name("username")
actual_user_elm.send_keys("myUsername")
# The password field uses the same placeholder pattern
password_element = session.find_element_by_id("tmp_password")
password_element.click()
actual_pass_element = session.find_element_by_name("password")
actual_pass_element.send_keys("myPassword")
login_button = session.find_element_by_xpath('//input[@id="signInBtn"]')
login_button.click()
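Why this works (an inference from the element names, worth verifying in the inspector): the visible "Username" and "Password" boxes are placeholder inputs named tmp_username and tmp_password, while the real inputs stay hidden until the placeholder is clicked. Calling send_keys on the hidden element is what raised ElementNotVisibleException. A slightly more defensive sketch of the same flow, using explicit waits with the locators above:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

session = webdriver.Chrome()
session.get('https://sports.williamhill.com/bet/en-gb')
wait = WebDriverWait(session, 10)
wait.until(EC.element_to_be_clickable((By.ID, "yesBtn"))).click()          # timezone pop-up
wait.until(EC.element_to_be_clickable((By.NAME, "tmp_username"))).click()  # reveal the real field
wait.until(EC.visibility_of_element_located((By.NAME, "username"))).send_keys("myUsername")
wait.until(EC.element_to_be_clickable((By.ID, "tmp_password"))).click()
wait.until(EC.visibility_of_element_located((By.NAME, "password"))).send_keys("myPassword")
wait.until(EC.element_to_be_clickable((By.ID, "signInBtn"))).click()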

Related

How do I make the driver navigate to a new page in Selenium Python

I am trying to write a script to automate job applications on Linkedin using selenium and python.
The steps are simple:
open the LinkedIn page, enter the id and password, and log in
open https://linkedin.com/jobs, enter the search keyword and location, and click Search (directly opening links like https://www.linkedin.com/jobs/search/?geoId=101452733&keywords=python&location=Australia gets stuck loading, probably due to some POST information missing from the previous page)
The click opens the job search page, but this doesn't seem to update the driver, as it still searches on the previous page.
import selenium
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from bs4 import BeautifulSoup
import pandas as pd
import yaml
driver = webdriver.Chrome("/usr/lib/chromium-browser/chromedriver")
url = "https://linkedin.com/"
driver.get(url)
content = driver.page_source
stream = open("details.yaml", 'r')
details = yaml.safe_load(stream)
def login():
    username = driver.find_element_by_id("session_key")
    password = driver.find_element_by_id("session_password")
    username.send_keys(details["login_details"]["id"])
    password.send_keys(details["login_details"]["password"])
    driver.find_element_by_class_name("sign-in-form__submit-button").click()

def get_experience():
    return "1%C22"

login()
jobs_url = f'https://www.linkedin.com/jobs/'
driver.get(jobs_url)
keyword = driver.find_element_by_xpath("//input[starts-with(@id, 'jobs-search-box-keyword-id-ember')]")
location = driver.find_element_by_xpath("//input[starts-with(@id, 'jobs-search-box-location-id-ember')]")
keyword.send_keys("python")
location.send_keys("Australia")
driver.find_element_by_xpath("//button[normalize-space()='Search']").click()
WebDriverWait(driver, 10)
# content = driver.page_source
# soup = BeautifulSoup(content)
# with open("a.html", 'w') as a:
#     a.write(str(soup))
print(driver.current_url)
driver.current_url returns https://linkedin.com/jobs/ instead of https://www.linkedin.com/jobs/search/?geoId=101452733&keywords=python&location=Australia as it should. I have tried printing the page content to a file, and it is indeed from the previous jobs page, not from the search page. I have also tried to find elements from the search page, such as the experience filter and the Easy Apply button, but those lookups end in a not-found error.
I am not sure why this isn't working.
Any ideas? Thanks in advance.
UPDATE
It works if I directly open something like https://www.linkedin.com/jobs/search/?f_AL=True&f_E=2&keywords=python&location=Australia but not https://www.linkedin.com/jobs/search/?f_AL=True&f_E=1%2C2&keywords=python&location=Australia
The difference between these links is that one takes only one value for experience level while the other takes two values. This means it's probably not a POST-values issue.
You are getting and printing the current URL immediately after clicking the search button, before the page has changed with the response received from the server.
This is why it outputs https://linkedin.com/jobs/ instead of something like https://www.linkedin.com/jobs/search/?geoId=101452733&keywords=python&location=Australia.
WebDriverWait(driver, 10) or wait = WebDriverWait(driver, 20) will not cause any kind of delay the way time.sleep(10) does.
wait = WebDriverWait(driver, 20) only instantiates a wait object, an instance of the WebDriverWait class; nothing is actually waited for until you call its until() method with an expected condition.
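For example (a sketch; EC.url_contains has been available since Selenium 3):
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 20)
# Polls until the URL contains the substring, or raises TimeoutException after 20 s
wait.until(EC.url_contains("/jobs/search/"))
print(driver.current_url)  # now reflects the search results page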

How to fix NoSuchElementException on webdriver

I have tried to log in to a Wi-Fi portal automatically using Python. However, find_element_by_* gives errors.
I am using Chrome as a browser.
from selenium import webdriver
import time
driver = webdriver.Chrome()
driver.get('https://hinet.hiroshima-u.ac.jp/loginweb.html')
time.sleep(2)
#driver.find_element_by_css_selector('a.button').click()
username = driver.find_element_by_css_selector("input")
username.clear()
# Enter HiroshimaU ID
username.send_keys('input_username')
password = driver.find_element_by_name('pwd')
password.clear()
password.send_keys('input_userpassword')
This code should work, but it just gives me errors:
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"input"}
I have tried other methods, such as by_name or by_id. But none of them is working.
I am a very beginner so my question might not be clear, but I appreciate your help.
Edit(Oct 23, 2019):
I am sorry you cannot access the portal site.
I hope this screenshot may help.
Portal site
A bare input is not a specific enough cssSelector here.
You need to do something like the below.
Example:
For the https://www.google.com/ website, the cssSelector of the search text field would be
input[name='q']
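For instance (a sketch against google.com, just to illustrate using that selector):
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://www.google.com/")
# Matches the <input name="q"> search box rather than whatever <input> happens to come first
search_box = driver.find_element_by_css_selector("input[name='q']")
search_box.send_keys("selenium")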
First, I couldn't get access to the site you mentioned, and it helps to see how the HTML is formed.
Second, based on that, I suggest you have a look at python selenium find_element_by_name.
It shows how to access an element by name:
elem = browser.find_element_by_name("Email")
where you can see the name attribute "Email" in the HTML.

Selenium unable to locate element: cannot find elements by id, css_selector, xpath, or link text

I am trying to scrape the data in this DB. I asked a similar question about this previously, but my current question is more specific, as I am starting to understand the issue better.
So far, with Selenium, I can type 22663 into the 'search by plant-based food' field, then click 'food-disease associations' underneath, and then click Submit, as shown here:
It's the next page that I have the issue with: I cannot click 'Plant-Disease Associations'.
I have tried numerous ideas from other SO posts:
import sys
import pandas as pd
from bs4 import BeautifulSoup
import selenium
from selenium import webdriver
from selenium.webdriver.support.ui import Select
import csv
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
from selenium.webdriver.common.by import By
#binary = FirefoxBinary('/Users/kela/Desktop/scripts/scraping/geckodriver')
url = 'http://147.8.185.62/services/NutriChem-2.0/'
driver = webdriver.Firefox(executable_path='/Users/kela/Desktop/scripts/scraping/geckodriver')
driver.get(url)
#input the tax ID
element = driver.find_element_by_id("input_food_name")
element.send_keys("22663")
#click food-disease association
element = Select(driver.find_element_by_css_selector('[name=food_search_section]'))
element.select_by_value('food_disease')
#click submit
submit_xpath = '/html/body/form/p[2]/input[1]'
destination_page_link = driver.find_element_by_xpath(submit_xpath)
destination_page_link.click()
# this is where it goes wrong
#click plant-disease associations
#table_data = driver.find_elements_by_xpath('//td[@class="likeabutton"]')
#driver.find_element_by_link_text("plant-disease").click()
#driver.find_element_by_link_text("nutrichem12587_disease.tsv").click()
#driver.find_element_by_xpath("//div[contains(#onclick'nutrichem12587_disease.tsv']").click()
#values = []
#for i in table_data.find_element_by_tag_name('Plant-Disease associations'):
#    values.append(i.text)
#print(value)
#span = table_data.find_element_by_tag_name('Plant-Disease associations')
#print(span)
#select = Select(driver.find_element_by_xpath("/html/body/table/tbody/tr/td[3]"))
#select.click()
#submit_xpath = '/html/body/table/tbody/tr/td[3]/div/span'
#submit_xpath = '/html/body/table/tbody/tr/td[3]'
#destination_page_link = driver.find_element_by_xpath(submit_xpath)
#destination_page_link.click()
#element = driver.find_element_by_xpath("//select[@name='plant-disease']")
#element.select_by_value('Plant-Disease associations')
#xpath2 = '/html/body/table/tbody/tr/td[3]/div'
#destination_page_link = driver.find_element_by_xpath(xpath2)
#destination_page_link.click()
#xpath2 = '/html/body/table/tbody/tr/td[3]/div/span'
#destination_page_link = driver.find_element_by_xpath(xpath2)
#destination_page_link.click()
I've commented out all the lines that I've tried and don't work. You can see I've tried multiple options as suggested on different SO posts, I'm aware that there are a lot of similar questions out there, but none of the solutions seem to work for me; all the errors are basically the same, 'cannot find element' (e.g. selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: nutrichem12587_disease.tsv)
Can someone please help me click the 'Plant-Disease associations' button? I'm wondering: is it because the page I'm trying to click through to is .php?
It is inside a frame. You need to switch to that frame first:
driver.find_element_by_css_selector('[value="Submit"]').click()
driver.switch_to.frame(driver.find_element_by_css_selector('frame'))
driver.find_element_by_css_selector('[onclick*="plant-disease"]').click()
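Once you are done inside the frame, switch back to the top-level document before locating anything outside it:
driver.switch_to.default_content()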

Selenium: difference between Chrome and PhantomJS?

I want to scrape Bing's search results. Basically, I am using Selenium; the idea is to use Selenium to click 'Next' automatically and scrape the URLs of the search results on each page. I made it run with the Chrome browser on my Ubuntu machine:
from selenium import webdriver
import os
import time

class bingURL(object):
    def __init__(self):
        self.driver = webdriver.Chrome(os.path.expanduser('./chromedriver'))

    def get_urls(self, url):
        driver = self.driver
        driver.get(url)
        elems = driver.find_elements_by_xpath("//a[@href]")
        href = []
        for elem in elems:
            link = elem.get_attribute("href")
            try:
                if 'bing.com' not in link and 'http' in link and 'microsoft.com' not in link and 'smashboards.com' not in link:
                    href.append(link)
            except:
                pass
        return list(set(href))

    def search_urls(self, keyword, pagenum):
        driver = self.driver
        searchurl = self.lookup(keyword)  # url of first page of Bing search
        driver.get(searchurl)
        results = self.get_urls(searchurl)
        for i in range(pagenum):
            driver.find_elements_by_class_name("sb_pagN")[0].click()  # click 'Next' of Bing search result
            time.sleep(5)  # wait to load page
            current_url = driver.current_url
            #print(current_url)
            #print(self.get_urls(current_url))
            results[0:0] = self.get_urls(current_url)
        driver.quit()
        return results

    def lookup(self, query):
        return "https://www.bing.com/search?q=" + query

if __name__ == "__main__":
    g = bingURL()
    result = g.search_urls('Stackoverflow is good', 10)
It works perfectly: when I run the code, it launches a Chrome browser, and I can see it go to the next page automatically and get URLs for 10 pages of search results.
However, my goal is to run this code on AWS. The original code failed with the error 'Chrome failed to start'. After googling, it seems I need to use a headless browser such as PhantomJS on AWS. Thus I installed PhantomJS and changed def __init__(self): to:
def __init__(self):
    self.driver = webdriver.PhantomJS()
However, it cannot click 'Next' anymore and cannot scrape URLs using the old code. The error message is:
File ".../SEARCH_BING_MODULE.py", line 70, in search_urls
driver.find_elements_by_class_name("sb_pagN")[0].click()
IndexError: list index out of range
It looks like changing the browser completely changes the rules. How should I modify the original code to make it work again, or how can I scrape Bing search result URLs using Selenium + PhantomJS?
Thanks for your help!
Yes, you can perform all the operations from your 3 points using a headless browser. Don't use HtmlUnit, as it has many configuration issues.
PhantomJS was another approach to a headless browser, but PhantomJS is buggy these days because it is poorly maintained.
You can use chromedriver itself for headless jobs.
You just need to pass one option to chromedriver, as below:
chromeOptions.addArguments("--headless");
The full code (in Java) will look like this:
System.setProperty("webdriver.chrome.driver","D:\\Workspace\\JmeterWebdriverProject\\src\\lib\\chromedriver.exe");
ChromeOptions chromeOptions = new ChromeOptions();
chromeOptions.addArguments("--headless");
chromeOptions.addArguments("--start-maximized");
WebDriver driver = new ChromeDriver(chromeOptions);
driver.get("https://www.google.co.in/");
Hope it will help you :)

Find table elements to fill forms selenium python

My code so far is:
from selenium import webdriver
driver = webdriver.Chrome()
driver.get('http://moodle.tau.ac.il/')
driver.find_element_by_xpath("id('page-content')//form[#id='login']// \
input[#type='submit']").click()
Now I'm trying to fill in the login form. I managed to find the division that follows id='content', as is easy to see in the image:
I used the following code line:
elem = driver.find_element_by_xpath("id('content')")
but it doesn't recognize anything in it and I can't get any further. What should I do to locate the input element?
It doesn't recognize anything because it is in an iframe. Therefore, you first have to switch to the iframe and then search for the login form.
Switch to the iframe:
frame = driver.find_element_by_id('credentials')
driver.switch_to.frame(frame)
Or:
driver.switch_to.frame('credentials')
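Putting it together for the Moodle page (a sketch; the field locators inside the frame are hypothetical and should be verified in the inspector):
from selenium import webdriver

driver = webdriver.Chrome()
driver.get('http://moodle.tau.ac.il/')
driver.switch_to.frame('credentials')  # enter the iframe holding the login form
# Hypothetical locators -- check the real name/id attributes in the inspector:
driver.find_element_by_name('username').send_keys('myUsername')
driver.find_element_by_name('password').send_keys('myPassword')
driver.find_element_by_xpath("//input[@type='submit']").click()
driver.switch_to.default_content()     # leave the iframe when done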
