For some reason, all I want is to read back from the input field what I have just typed into it, just to check.
from selenium import webdriver
import os
xpath_user = '//*[@id="login-username"]'
user = 'user@yahoo.com'
dir_path = os.path.dirname(os.path.realpath(__file__))
chromedriver = dir_path + "/chromedriver.exe"
driver = webdriver.Chrome(chromedriver)
driver.implicitly_wait(3)
driver.get('https://www.yahoo.com')
driver.find_element_by_xpath(xpath_user).send_keys(user)
element = driver.find_element_by_xpath(xpath_user).text
print(element)
if element == 'user@yahoo.com':
    print("Good")
In this example the output is '', but I want the actual 'user@yahoo.com'. I don't know if it is even possible, because 'user@yahoo.com' doesn't appear in the HTML of the page. Maybe I am missing something, or there is a workaround. I'd be glad if someone could help me.
Note that my experience with python is limited.
Try driver.find_element_by_xpath(xpath_user).get_attribute("value")
The text property is for text within the tags of an element.
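To make the distinction concrete, here is a minimal sketch; the helper name `typed_value` is mine, not part of the question's code, and the fake-driver usage below is only for illustration:

```python
def typed_value(driver, xpath):
    """Read back what was typed into an <input>.

    Typed text lives in the element's 'value' attribute; .text only
    returns text nodes between the opening and closing tags, which is
    why the question's print showed an empty string.
    """
    element = driver.find_element_by_xpath(xpath)
    return element.get_attribute("value")
```

With the question's locator this would be called as `typed_value(driver, '//*[@id="login-username"]')`.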
Related
I'm trying to make searching for temporary apartments a bit easier on myself, but a website with listings for these apartments requires me to select a suggestion from their drop-down list before I can click on submit, no matter how complete the entry in the search box might be.
The ultimate hope here is that I can get through to the search results and then extract contact information from each listing. I was able to extract the data I need from a listing using Beautiful Soup and Requests, but I had to paste the URL for that specific listing into my code; I didn't get any further than that. If anyone has a suggestion on how to circumvent the landing page to get to the relevant listings, please let me know.
I tried just splicing the town name and the state name into the address bar by looking at how it's written after a successful search but that didn't work.
The site is Mein Monteurzimmer.
Here is my code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.select import Select
driver = webdriver.Firefox()
webpage = r"https://mein-monteurzimmer.de"
print('Prosim vnesi zeljeno mesto') #Please enter the town to search
searchterm = input()
driver.get(webpage)
sbox = driver.find_element_by_xpath("/html/body/main/cpagearea/section/div[2]/div/section[1]/div/div[1]/section/form/div/input")
sbox.send_keys(searchterm)
ddown = driver.find_element_by_xpath("/html/body/main/cpagearea/section/div[2]/div/section[1]/div/div[1]/section/form/div")
ddown.select_by_value(1)
webdriver.wait(2)
#select = driver.find_element_by_xpath("/html/body/main/cpagearea/section/div[2]/div/section[1]/div/div[1]/section/form/div")
submit = driver.find_element_by_xpath("/html/body/main/cpagearea/section/div[2]/div/section[1]/div/div[1]/section/form/button")
submit.click
When I inspect the search box I can't find anything related to the suggestions until I enter a text. Then I can't click on the HTML code because that dismisses the suggestions. It's quite frustrating.
Here's a screenshot:
So I'm blindly trying to select something.
The error here is:
AttributeError: 'FirefoxWebElement' object has no attribute 'select_by_value'
I tried something with select, but that doesn't work with the way I tried this.
I am stumped; the solutions I could find were specific to other sites like Google or Amazon, and I couldn't make sense of them.
Does anyone know how I could make this work?
Here's the code for getting information out of a listing, which I'll have to expand on to get the other data:
import bs4, requests
def getMonteurAddress(MonteurUrl):
    res = requests.get(MonteurUrl)
    res.raise_for_status()
    soup = bs4.BeautifulSoup(res.text, 'html.parser')
    elems = soup.select('section.c:nth-child(4) > div:nth-child(2) > div:nth-child(2) > dl:nth-child(1) > dd:nth-child(2)')
    return elems[0].text.strip()
address = getMonteurAddress('https://mein-monteurzimmer.de/105742/monteurzimmer/deggendorf-monteurzimmer-deggendorf-pensionfelix%40googlemailcom')
print('Naslov je ' + address) #print call to see if it gets the right data
As you can see, once you start typing, a list of divs is created. Now you need to get a valid locator for these divs. To find the locator for these created divs, inspect the elements in debug pause mode (F12 → Sources tab → F8).
Try below code to select first matching address as you typed.
sbox = driver.find_element_by_xpath("//input[@placeholder='Adresse, PLZ oder Ort eingeben']")
sbox.send_keys(searchterm)
addressXpath = "//div[contains(text(),'" + searchterm + "')]"
driver.find_element_by_xpath(addressXpath).click()
Note: if more than one address matches, the first one will be selected.
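The locator above is plain string concatenation, which can be sketched and checked on its own; the function name `suggestion_xpath` is mine, and note this naive version would break if the search term itself contained a single quote:

```python
def suggestion_xpath(searchterm):
    """Build an XPath matching any suggestion <div> whose text contains
    the typed term; find_element returns the first match in document order."""
    return "//div[contains(text(),'" + searchterm + "')]"
```

For example, `suggestion_xpath("Deggendorf")` yields `//div[contains(text(),'Deggendorf')]`.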
I'm trying to make a temporary email generator using 20-minute mail, but I can't seem to print the text from my XPath. I started Python 2 months ago and have been getting really good answers to my other questions; any response is appreciated.
code:
from selenium import webdriver
from time import sleep
PATH = "C:\Program Files (x86)\chromedriver.exe"
driver = webdriver.Chrome(PATH)
driver.get("http://www.20minutemail.com/")
sleep(1)
createMail = driver.find_element_by_xpath("//*[#id=\"headerwrap\"]/header/div[2]/div/div/input[2]")
createMail.click()
sleep(3)
email = driver.find_element_by_xpath("//*[#id=\"userTempMail\"]/text()")
print(email)
I've had similar problems when trying to get some kind of attribute using XPath; I'm still not sure why. I worked around it using the WebElement's text attribute. Try this:
email = driver.find_element_by_xpath("//*[@id=\"userTempMail\"]").text
Also, if you want to optimize your code, you can replace sleep(time) with WebDriverWait(driver, time).until(some_condition). This stops halting your code as soon as some_condition is met. More on this here: https://selenium-python.readthedocs.io/waits.html#explicit-waits
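Under the hood, an explicit wait is just polling until a condition holds or a timeout expires. A rough pure-Python sketch of the idea (WebDriverWait does this for you, plus ignoring certain WebDriver exceptions between polls):

```python
import time

def until(condition, timeout=10, poll=0.5):
    """Poll `condition` until it returns a truthy value, or raise after
    `timeout` seconds -- roughly what WebDriverWait(driver, timeout).until(...)
    does with an expected_conditions callable."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result  # returned as soon as the condition is met
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %s seconds" % timeout)
        time.sleep(poll)
```

This is why the explicit wait beats a fixed sleep(3): it returns the moment the element shows up instead of always paying the full delay.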
I changed it to
email = driver.find_element_by_xpath("//*[@id=\"userTempMail\"]")
(taking the /text() out so it selects the element itself), then did
print(email.text)
to get the inner text out.
I am new to Python and Selenium coding, but I think I figured it out; I tried to build some examples for myself to learn from. I have two questions. First, for some reason my code stops after my input; it never reaches the yalla() function:
yallaurl = str(input('Your URL + ' + ""))
browser = webdriver.Chrome()
browser.get(yallaurl)
browser.maximize_window()
yalla()
Other than this, my other question is about browser.find_element_by_xpath. After I inspect an HTML element and click Copy XPath, I get something like this:
/html/body/table[2]/tbody/tr/td/form/table[4]/tbody/tr[2]/td/table/tbody/tr[2]/td[2]
So how does this line of code work? Is this legit?
def yalla():
    sleep(2)
    count = len(browser.find_elements_by_class_name('flyingCart'))
    email = browser.find_element_by_xpath('/html/body/table[2]/tbody/tr/td/form/table[4]/tbody/tr[2]/td/table/tbody/tr[2]/td[2]')
    for x in range(2, count):
        itemdesc[x] = browser.find_element_by_xpath(
            "/html/body/table[2]/tbody/tr/td/form/table[1]/tbody/tr[2]/td[2]/table/tbody/tr[x]/td[2]/a[1]/text()")
        priceper[x] = browser.find_element_by_xpath(
            "/html/body/table[2]/tbody/tr/td/form/table[1]/tbody/tr[2]/td[2]/table/tbody/tr[x]/td[5]/text()")
        amount[x] = browser.find_element_by_xpath(
            "/html/body/table[2]/tbody/tr/td/form/table[1]/tbody/tr[2]/td[2]/table/tbody/tr[x]/td[6]")
    browser.navigate().to('https://www.greeninvoice.co.il/app/documents/new#type=100')
    checklogininvoice()
Yes, your code will run just fine and is legit, but it is not recommended. As described in the docs, an absolute path works, but it will break if the HTML changes even slightly.
Reference: https://selenium-python.readthedocs.io/locating-elements.html
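To illustrate the fragility point, compare the copied absolute path with a relative locator; the relative XPath below is a hypothetical alternative I have not verified against the real page, so treat it as a pattern, not a drop-in fix:

```python
# Absolute: every ancestor is hard-coded, so inserting one extra table
# or row anywhere above the target breaks the whole locator.
absolute = '/html/body/table[2]/tbody/tr/td/form/table[4]/tbody/tr[2]/td/table/tbody/tr[2]/td[2]'

# Relative: anchor on the <form> and skip the brittle outer layers.
# Still index-based at the tail; a stable id or class would be better
# if the page offers one.
relative = '//form//table[4]//table//tr[2]/td[2]'
```

The shorter the prefix a locator depends on, the fewer page changes can invalidate it.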
Firstly, this code is confusing:
yallaurl = str(input('Your URL + ' + ""))
This is essentially equivalent to:
yallaurl = input('Your URL: ')
Yes, this code is correct:
browser.find_element_by_xpath('/html/body/table[2]/tbody/tr/td/form/table[4]/tbody/tr[2]/td/table/tbody/tr[2]/td[2]')
Please refer to the docs for proper usage.
Here is the suggested use of this method:
from selenium.webdriver.common.by import By
driver.find_element(By.XPATH, '/html/body/table[2]/tbody/tr/td/form/table[4]/tbody/tr[2]/td/table/tbody/tr[2]/td[2]')
This code will return an object of the element you have selected. To print the HTML of the element itself, this should work:
print(element.get_attribute('outerHTML'))
For further information on page objects, please refer to this page of the docs.
Since you have not provided the code for your 'yalla' function, it is hard to diagnose the problem there.
I am new to Selenium/Firefox. My goal is to go to my URL, fill in basic input, select a few items, let the browser change the content, and download a PDF from there. Ideally, I would like to repeat this later by looping over a number of new items. As a first step, I managed to get the browser working and to change the content once. But I am stuck getting the content out, as find_elements_by_tag_name() seems to return something funny rather than a usual HTML tag like what BeautifulSoup's .find_all() would give me. I'd appreciate any help here.
Here is my code:
from selenium import webdriver
from selenium.webdriver.support.ui import Select
url ='http://www.hkexnews.hk/listedco/listconews/advancedsearch/search_active_main.aspx'
browser = webdriver.Firefox(executable_path = 'C:\Program Files\Mozilla Firefox\geckodriver.exe')
browser.get(url)
StockElem = browser.find_element_by_id('ctl00_txt_stock_code')
StockElem.send_keys('00772')
StockElem.click()
select = Select(browser.find_element_by_id('ctl00_sel_tier_1'))
select.select_by_value('3')
select = Select(browser.find_element_by_id('ctl00_sel_tier_2'))
select.select_by_value('153')
select = Select(browser.find_element_by_id('ctl00_sel_DateOfReleaseFrom_d'))
select.select_by_value('01')
select = Select(browser.find_element_by_id('ctl00_sel_DateOfReleaseFrom_m'))
select.select_by_value('01')
select = Select(browser.find_element_by_id('ctl00_sel_DateOfReleaseFrom_y'))
select.select_by_value('2000')
# select the search button
browser.execute_script("document.forms[0].submit()")
element = browser.find_elements_by_tag_name("a")
print(element)
After clicking on the Search button, you have 5 links to download PDF files. You should find those links by the CSS selector .news. Then go through the list of links by index and click on each one to download it: elements[0].click() clicks the first link.
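Put together, the download loop the answer describes might look like this sketch; the .news selector is taken from the answer but not verified against the live page, and note that if a click navigates away rather than just triggering a download, the remaining elements go stale and would need re-finding:

```python
def click_all_downloads(driver):
    """Find every result link via the '.news' CSS selector and click
    each in turn to trigger the PDF downloads; returns how many were clicked."""
    links = driver.find_elements_by_css_selector(".news")
    for link in links:
        link.click()
    return len(links)
```

If downloads open in new tabs instead, you would switch window handles between clicks rather than looping straight through.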
I am very new to web scraping with Python. On the web page I am trying to scrape, I can enter the string 'ABC' in the text box and click search. This gives me the details for 'ABC', but under the same URL; there is no change in the URL. I am trying to scrape the resulting details.
I have worked up to the "search" click, but I do not know how to capture the results of the search (the details for search string 'ABC'). Please suggest how I could achieve this.
from selenium import webdriver
import webbrowser
new = 2 # open in a new tab, if possible
path_to_chromedriver = 'C:/Tech-stuffs/chromedriver/chromedriver.exe' # change path as needed
browser = webdriver.Chrome(executable_path = path_to_chromedriver)
url = 'https://www.federalreserve.gov/apps/mdrm/data-dictionary'
browser.get(url)
browser.find_element_by_xpath('//*[#id="form0"]/table/tbody/tr[2]/td/label[2]').click()
browser.find_element_by_xpath("//select[#id='SelectedReportForm']/option[#value='1']").click()
browser.find_element_by_xpath('//*[#id="Search"]').click()
Use find_elements_by_xpath() to locate an XPath that matches all of the search results, then iterate through them with a for loop and print each result's text. That should, at the bare minimum, get what you want.
results = browser.find_elements_by_xpath('//table//tr')
for result in results:
    print(result.text)
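If you want the rows as data rather than printed output, the same iteration collects cleanly into a list; the helper name `result_rows` is mine, and the fake browser below is only for demonstration:

```python
def result_rows(browser):
    """Return the visible text of every row matched by //table//tr.

    Each WebElement's .text is the rendered text of that row, so this
    yields one string per search-result row."""
    return [row.text for row in browser.find_elements_by_xpath('//table//tr')]
```

From here you could filter rows, split them into columns on whitespace, or write them to a CSV instead of printing.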