Gather Instagram usernames using selenium python - python

I'm trying to save the usernames of everyone who liked my post on Instagram.
Here is my code:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.action_chains import ActionChains

chrome_options = Options()
chrome_options.add_argument("--user-data-dir=chrome-data")
browser = webdriver.Chrome(options=chrome_options)
browser.get('https://www.instagram.com/p/CMAVjR5CmOx/')
# open the "liked by ... and others" dialog
continue_link = browser.find_element_by_partial_link_text('others')
continue_link.click()
# grab the username elements currently loaded in the dialog
elems = browser.find_elements_by_css_selector(".FPmhX.notranslate.MBL3Z")
links = [elem.get_attribute('title') for elem in elems]
WebElement = browser.find_element_by_xpath('//*[contains(text(), "#")]')
WebElement.click()
browser.execute_script("arguments[0].scrollIntoView();", WebElement)
The problem is that it only saves the first 11 usernames; it can't scroll down to load all of them.
I tried some code to scroll down, but it scrolls the main page, while I want the "liked by" list to be scrolled.
Can anyone help me with this scrolling?
I tried send_keys and browser.execute_script("arguments[0].scrollIntoView();", element), but nothing worked.

Try this solution. I'm using it right now, so I'm pretty sure it works.
import time

# try to find the locator of the div element which contains the scrollable list
element = browser.find_element_by_xpath('//div/parent::ul/parent::div')
# increase the distance from the top of the element by 100 pixels each pass
vertical_ordinate = 100
for i in range(0, 50):
    print(vertical_ordinate)
    browser.execute_script("arguments[0].scrollTop = arguments[1]", element,
                           vertical_ordinate)
    vertical_ordinate += 100
    time.sleep(1)
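Building on the loop above, the scrolling and the username collection can be combined so the scrape stops once no new names load. A sketch, assuming the locators from the question (Instagram's class names change often, so verify them in DevTools first):

```python
import time

def collect_liker_usernames(browser, max_scrolls=50, pause=1.0):
    # The XPath and CSS selectors below come from the question above and
    # may be stale; treat them as placeholders to verify in DevTools.
    dialog = browser.find_element_by_xpath('//div/parent::ul/parent::div')
    usernames = set()
    for _ in range(max_scrolls):
        before = len(usernames)
        for elem in browser.find_elements_by_css_selector(".FPmhX.notranslate.MBL3Z"):
            title = elem.get_attribute('title')
            if title:
                usernames.add(title)
        # scroll the dialog itself (not the main page) to its current bottom
        browser.execute_script(
            "arguments[0].scrollTop = arguments[0].scrollHeight", dialog)
        time.sleep(pause)
        if usernames and len(usernames) == before:
            break  # the last scroll loaded no new names
    return usernames
```

The before/after count comparison is what lets the loop stop early instead of always running all 50 passes.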

For scrolling, I usually use this code and it always works.
driver.execute_script("arguments[0].scrollIntoView();",driver.find_element_by_xpath("your_xpath_selector"))
First, rename WebElement to a custom name, for example web_element, as the current name is confusing (WebElement is also the name of Selenium's own element class).
Second, make sure the locator is correct. Unfortunately, I do not have an Instagram account and cannot verify this locator.

Related

Cannot click on an xpath selected object Selenium (Python)

I am trying to click an element that I select with XPath, but there seems to be a problem: I cannot locate the element. I am trying to click "Accept" on the page's "Terms of Use" button. The code I have written is as follows:
driver.get(link)
accept_button = driver.find_element_by_xpath('//*[@id="terms-ok"]')
accept_button.click()
prov = driver.find_element_by_id("province-region")
prov.click()
Here is the HTML code I have:
And I am getting a "NoSuchElementException". My goal is to click the "Kabul Ediyorum" ("I Accept") button at the bottom of the HTML code. I started to think that there are some restrictions on what we can do on that page. Any ideas?
Not really sure what the issue might be, but you could try the following:
Try to locate the element by its visible text
accept_button = driver.find_element_by_xpath("//*[text()='Kabul Ediyorum']").click()
Try with ActionChains
For that you need to import ActionChains
from selenium.webdriver.common.action_chains import ActionChains
accept_button = driver.find_element_by_xpath("//*[text()='Kabul Ediyorum']")
actions = ActionChains(driver)
actions.click(on_element=accept_button).perform()
Also make sure you have an implicit wait
# implicit wait
driver.implicitly_wait(10)
Or an explicit wait (this requires importing WebDriverWait, expected_conditions as EC, and By)
element = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.XPATH, "//*[text()='Kabul Ediyorum']"))).click()
Hope this helped!

Use selenium to choose dropdown value from multiple selects that have the same xpath

I'm trying to scrape data with Python from this e-commerce site.
Because it requires selecting the shipping location first to access the data, and the three selects have the same XPath, I use the code below:
city = browser.find_element(By.XPATH, "(//select[not(@id) and not(@class)])[1]")
citydd = Select(city)
citydd.select_by_value('01')  # Hanoi
time.sleep(1)
district = browser.find_element(By.XPATH, "(//select[not(@id) and not(@class)])[2]")
districtdd = Select(district)
districtdd.select_by_value('0101')  # Ba Dinh
time.sleep(1)
ward = browser.find_element(By.XPATH, "(//select[not(@id) and not(@class)])[3]")
warddd = Select(ward)
warddd.select_by_value('010104')  # Cong Vi
browser.find_element(By.XPATH, "//div[text()='Xác nhận']").click()  # "Confirm" button
It returns this error:
NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"(//select[not(@id) and not(@class)])[1]"}
May I know how to get around this?
You can pick better XPaths here: use a relative XPath based on the label associated with each select
//label[contains(text(),'Tỉnh/Thành phố')]/following-sibling::div/select
//label[contains(text(),'Quận/Huyện')]/following-sibling::div/select
//label[contains(text(),'Phường/Xã')]/following-sibling::div/select
The middle one, for example, identifies its select uniquely on the page.
If you're still getting NoSuchElementException with these XPaths, please ensure you include explicit or implicit waits.
Selenium's default wait strategy is only "the page has loaded". Most often in modern pages, the page loads, THEN scripts run which fetch more data or display a modal (like the location popup). Those async calls show up as NoSuchElementException failures in Selenium.
Let me know if you need more information on synchronisation.
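The three label-relative XPaths above follow one pattern, so they can be generated from a single helper. A sketch (the helper name is invented here; the wait-based usage in the comment follows the imports used elsewhere in this thread):

```python
def select_xpath(label):
    # Build the label-relative XPath for the <select> next to `label`,
    # following the pattern shown above.
    return ("//label[contains(text(),'{}')]"
            "/following-sibling::div/select").format(label)

# Usage with an explicit wait (selenium imports as in the other answers):
#   elem = WebDriverWait(browser, 10).until(
#       EC.presence_of_element_located((By.XPATH, select_xpath('Quận/Huyện'))))
#   Select(elem).select_by_value('0101')  # Ba Dinh
```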
This is what I have tried -
from selenium.webdriver.support.ui import Select
from selenium.webdriver.support.wait import WebDriverWait
from time import sleep
from selenium import webdriver
driver = webdriver.Chrome()
wait = WebDriverWait(driver, 10)
driver.get('https://vinmart.com/')
FirstDropDown = Select(driver.find_element_by_xpath("(//select)[1]"))
FirstDropDown.select_by_index(1)
sleep(2)
SecondDropDown = Select(driver.find_element_by_xpath("(//select)[2]"))
SecondDropDown.select_by_index(1)
sleep(2)
ThirdDropDown = Select(driver.find_element_by_xpath("(//select)[3]"))
ThirdDropDown.select_by_index(1)
I have used sleep() because it takes time for each dropdown to be populated based on the previous dropdown's selection.
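As an alternative to the fixed sleep(2) calls, the wait can be tied to the moment the dependent dropdown actually fills in. A sketch using a custom wait condition (the two-option threshold is an assumption about the placeholder entry):

```python
def options_populated(locator, minimum=2):
    # Custom wait condition: true once the <select> found by `locator`
    # has at least `minimum` <option> children, i.e. real data has
    # loaded in beyond the placeholder entry.
    def _condition(driver):
        select_elem = driver.find_element_by_xpath(locator)
        return len(select_elem.find_elements_by_tag_name('option')) >= minimum
    return _condition

# Usage, replacing sleep(2):
#   wait.until(options_populated("(//select)[2]"))
#   SecondDropDown = Select(driver.find_element_by_xpath("(//select)[2]"))
```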
Please mark it as the answer if it resolves your problem.

Selenium / Python: Selecting a link from a long list after finding the correct location

The company has a list of 100+ sites, and I am trying to use Selenium WebDriver to automatically take a user into a given site. I am fairly new to programming, so please forgive me if my question is worded poorly. I am trying to take the name of a site from the user, such as "Alpharetta - Cemex" in the example below, find it in this long list, and then select that link. Through testing I am pretty sure the element I need to click is the h3 element that also holds the name of the site in its data-hmi-name attribute.
Website Code Example:
I have tried the code below and it never seems to work:
driver.find_element_by_css_selector("h3.tru-card-head-text uk-text-center[data-hmi-name='Alpharetta - Cemex']").click()
# For this one I tried to select the h3 class by searching for all elements with the name Alpharetta - Cemex
or
theCards = main.find_elements_by_tag_name("h3")  # I tried both of these declarations for theCards
# theCards = main.find_elements_by_class_name("tru-card-wrapper")
# then used the loop below. This obviously didn't work; it just returns an error that card.text doesn't exist
for card in theCards:
    # title = card.find_elements_by_tag_name("h3")
    print(card.text)
    if card.text == theSite:
        card.click()
Any help or guidance would be so appreciated! I am new to programming in Python and if you can explain what I am doing wrong I'd be forever thankful!
If you want to click a single link (e.g. Alpharetta - Cemex), you can try the following:
theSite = "Alpharetta - Cemex"  # you can store the user-supplied site name here
linkXpath = "//a[h3[contains(text(),'" + theSite + "')]]"
WebDriverWait(driver, 30).until(EC.element_to_be_clickable((By.XPATH, linkXpath))).click()  # waits for the element to be clickable before clicking
In case the above is not working: if your link is not on screen / not visible, you can use JavaScript to first scroll to the element and then click, like below:
ele = WebDriverWait(driver, 30).until(EC.presence_of_element_located((By.XPATH, linkXpath)))
driver.execute_script("arguments[0].scrollIntoView();", ele )
driver.execute_script("arguments[0].click();", ele )
You need to import:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait

How to scroll down a web page to verify a specific element?

I'm learning how to use Selenium and I'm stuck on figuring out how to scroll down in a website to verify an element exists.
I tried the methods from this question:
Scrolling to element using webdriver?
but Selenium won't scroll down the page. Instead it gives me an error:
"selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: element"
Here are the code snippets I am using:
moveToElement:
element = driver.find_element_by_xpath('xpath')
actions = ActionChains(driver)
actions.move_to_element(element).perform()
Scrolling into View
element = driver.find_element_by_xpath('xpath')
driver.execute_script("arguments[0].scrollIntoView();", element)
The whole code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
driver = webdriver.Firefox()
driver.get("https://www.linkedin.com/")
element = driver.find_element_by_xpath('/html/body/div/main/div/div[1]/div/h1/img')
element = driver.find_element_by_xpath('//*[@id="login-email"]')
element.send_keys('')
element = driver.find_element_by_xpath('//*[@id="login-password"]')
element.send_keys('')
element = driver.find_element_by_xpath('//*[@id="login-submit"]')
element.click()
element = driver.find_element_by_xpath('')
actions = ActionChains(driver)
actions.move_to_element(element).perform()
There are two aspects to your problem:
Whether the element exists?
Whether the element is displayed on the page?
Whether the element exists?
It may happen that the element exists on the page [i.e. it is part of the DOM] but it is not available right away for further Selenium actions because it is not visible [hidden]; it only becomes visible after some actions, or it may get displayed on scrolling down the page.
Your code is throwing an exception here -
element = driver.find_element_by_xpath('xpath')
as WebDriver is not able to find the element using the mentioned xpath. Once you fix this, you can move forward to the next part.
Whether the element is displayed on the page?
Once you fix the above issue, you should check whether the element is being displayed or not. If it's not displayed but is available on scroll, then you can use code like
if not element.is_displayed():
    driver.execute_script("arguments[0].scrollIntoView();", element)
Prefer using the ActionChains class for very specific mouse actions.
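The existence-then-visibility flow can be wrapped in one small helper; a sketch (ensure_visible is a name invented here, not part of Selenium):

```python
def ensure_visible(driver, element):
    # Scroll the element into view only when it exists but is not
    # currently displayed; returns the element for chaining.
    if not element.is_displayed():
        driver.execute_script("arguments[0].scrollIntoView();", element)
    return element
```

After this, element.click() (or an ActionChains move) stands a much better chance of succeeding.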
Update:-
If your application is using lazy loading and the element you are trying to find becomes available on scroll, then you can try something like this -
You have to import the exception -
from selenium.common.exceptions import NoSuchElementException
and then create a new recursive function which scrolls if the element is not found, something like this -
def search_element():
    try:
        elem = driver.find_element_by_xpath("your_xpath")
        return elem
    except NoSuchElementException:
        driver.execute_script("window.scrollTo(0, Math.max(document.documentElement.scrollHeight, document.body.scrollHeight, document.documentElement.clientHeight));")
        return search_element()
I am not sure that using recursion to find the element here is a good idea; also, I have never worked in Python, so you need to watch out for the syntax.
Hmm, maybe this might help. Just send the page-down key; if you are sure the element definitely exists, then this will work:
from selenium.webdriver.common.keys import Keys
from selenium.webdriver import ActionChains
from selenium.common.exceptions import NoSuchElementException
import time

while True:
    ActionChains(driver).send_keys(Keys.PAGE_DOWN).perform()
    time.sleep(2)  # wait 2 seconds for new content to load
    try:
        element = driver.find_element_by_xpath("your_element_xpath")
        break
    except NoSuchElementException:
        continue

Selenium Python: How to scroll a div

I need to flip through a number of pages of a list on the left side of the page here. To do that, I first need to scroll down a specific div section and then click the next button. The scrolling works perfectly in one case (url_ok) but doesn't work in the other (url_trouble), and I have no idea why.
The code I'm testing:
from selenium import webdriver
import time
driverPath = "C:/Program Files (x86)/Google/Chrome/Application/chromedriver.exe"
driver = webdriver.Chrome(driverPath)
url_trouble = 'https://2gis.ru/moscow/search/%D0%BF%D0%B0%D1%80%D0%BA%D0%BE%D0%B2%D0%BA%D0%B0%20/tab/geo?queryState=center%2F37.644653%2C55.827709%2Fzoom%2F12'
url_ok = 'https://2gis.ru/moscow/search/%D0%BF%D0%B0%D1%80%D0%BA%D0%BE%D0%B2%D0%BA%D0%B0%20/tab/firms?queryState=center%2F37.644653%2C55.827805%2Fzoom%2F12'
def click(url, driver):
    driver.get(url)
    driver.maximize_window()
    time.sleep(5)
    next_link_data = driver.find_element_by_css_selector("div.pagination__arrow._right")
    next_link_data.location_once_scrolled_into_view
    next_link_data.click()
click(url_trouble, driver) # it doesn't work
click(url_ok, driver) # it works
So the question is: how do I scroll url_trouble to the bottom?
Thanks a lot in advance!
I guess two "div.pagination__arrow._right" elements are on that page, so use a more specific selector.
try this one --> "#module-1-13-1-1-2-2 > div.pagination__arrow._right"
in
driver.find_element_by_css_selector("#module-1-13-1-1-2-2 > div.pagination__arrow._right")
