Scrape Links/Href CSS - python

The following code scrapes the names, companies, and locations of users on LinkedIn. I want the link/href per user.
The code requires LinkedIn login credentials; you can use a fake account if you're skeptical.
Or you can just look at the code/screenshot, anything helps.
import time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from webdriver_manager.chrome import ChromeDriverManager
productlinks=[]
test1=[]
options = Options()
driver = webdriver.Chrome(ChromeDriverManager().install())
url = "https://www.linkedin.com/uas/login?session_redirect=https%3A%2F%2Fwww%2Elinkedin%2Ecom%2Fsearch%2Fresults%2Fpeople%2F%3FcurrentCompany%3D%255B%25221252860%2522%255D%26geoUrn%3D%255B%2522103644278%2522%255D%26keywords%3Dsales%26origin%3DFACETED_SEARCH%26page%3D2&fromSignIn=true&trk=cold_join_sign_in"
driver.get(url)
time.sleep(2)
username = driver.find_element_by_id('username')
username.send_keys('jazizi#lifesciencedynamics.com')
password = driver.find_element_by_id('password')
password.send_keys('Theboss3!')
password.submit()
element1 = driver.find_elements_by_class_name("name actor-name")
title=[t.text for t in element1]
print(title)

First, the worst thing you can do in web scraping is locate elements by class name, because in web development, classes are used for almost any styling. Try XPath or an ID instead.
The second thing I noticed in your code: you find elements by class name and pass a multi-class string, name actor-name. I haven't read the full code or tried running it, so I don't understand how it works at the moment. But you should be aware of it, because in web development class="name actor-name" and class="actor-name name" are almost the same (I did say almost, and this is the second time I'm mentioning it), while in web scraping they are treated entirely differently.
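For example, to collect the profile link for each search result, here is a minimal sketch using a CSS selector on the anchor tags. The entity-result and app-aware-link class names are assumptions about LinkedIn's current markup, which changes often; inspect the page and adjust them before relying on this:
# Hypothetical selector for the result anchors; verify it against the live page
anchors = driver.find_elements_by_css_selector("span.entity-result__title-text a.app-aware-link")
profile_urls = [a.get_attribute("href") for a in anchors]
print(profile_urls)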

I think this would be better done with BeautifulSoup, but if you post more of the page source it will be easier to help you.
With bs4 you can get the whole HTML structure of an element and then, maybe with regex, get the href attribute.
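For instance, a minimal sketch with bs4 over Selenium's rendered HTML; you don't even need regex here, since BeautifulSoup exposes attributes directly:
from bs4 import BeautifulSoup

# Parse the page Selenium has already rendered
soup = BeautifulSoup(driver.page_source, "html.parser")
# Every <a> tag that actually carries an href attribute
hrefs = [a["href"] for a in soup.find_all("a", href=True)]
print(hrefs)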

Related

How can I access the same website twice without losing the settings, using Selenium?

I access a website, log in, and then, instead of going through the process of finding and typing into the website's search field, I thought I'd simply re-access the website through a URL containing the search query I want.
The problem is that when I access the website with the second driver.get (last line of the code below), it's as though it forgets that I logged in previously, as though it were a totally new session.
I have this code structure:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.service import Service
path = Service("C://chromedriver.exe")
driver = webdriver.Chrome(service=path)
driver.get('https://testwebsite.com/')
login_email_button = driver.find_element(By.XPATH,'XXXXX')
login_email_button.click()
username = driver.find_element(By.ID, 'email')
password = driver.find_element(By.ID, 'password')
username.send_keys('myuser')
password.send_keys('mypassword')
driver.get('https://testwebsite.com/search?=televisions')
when you do
driver.get('https://testwebsite.com/search?=televisions')
you're opening a new session with no cookies or data from the previous session. You can try duplicating the tab instead, to stay logged in. You can do that with driver.execute_script:
url = driver.current_url
driver.execute_script(f"window.open('{url}');")
driver.switch_to.window(driver.window_handles[1])
# if you want to give the tab a name, pass it as the second param, like:
driver.execute_script(f"window.open('{url}', 'second_tab_name');")
driver.switch_to.window('second_tab_name')
Remember to use switch_to.window if you want to go back to the main tab.
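Putting it together for the search URL in the question (a sketch; testwebsite.com is the question's placeholder):
url = 'https://testwebsite.com/search?=televisions'
driver.execute_script(f"window.open('{url}');")     # open the search in a new tab, keeping the session
driver.switch_to.window(driver.window_handles[-1])  # focus the newly opened tab
# ... scrape the results here ...
driver.switch_to.window(driver.window_handles[0])   # back to the main tab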

Delete dynamic elements from HTML with Selenium and Python

I've used BeautifulSoup to find a specific div class in the page's HTML. I want to check whether this div has a span class inside it. If the div has the span class, I want to keep it in the page's code, but if it doesn't, I want to delete it, maybe using Selenium.
For that I have two lists selecting the elements (div and span). I tried to check whether one list is inside the other, and that kind of worked. But how can one delete the found element from the page's source code?
Edit
I've edited the code after a few conversations in the comments section. With help, I was able to implement code that removes elements by executing JavaScript.
The code is running with no errors, but nothing is being deleted from the page.
# Import required module
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from webdriver_manager.chrome import ChromeDriverManager
import time
# Option to launch browser in incognito
options = Options()
options.add_argument("--incognito")
#options.add_argument("--headless")
# Using chrome driver
driver = webdriver.Chrome(ChromeDriverManager().install(), options=options)
# Web page url request
driver.get('https://www.facebook.com/ads/library/?active_status=all&ad_type=all&country=BR&q=frete%20gr%C3%A1tis%20aproveite&sort_data[direction]=desc&sort_data[mode]=relevancy_monthly_grouped&search_type=keyword_unordered&media_type=all')
driver.maximize_window()
time.sleep(10)
driver.execute_script("""
    for (let div of document.querySelectorAll('div._99s5')) {
        let match = div.innerText.match(/(\d+) ads? use this creative and text/)
        let numAds = match ? parseInt(match[1]) : 0
        if (numAds < 10) {
            div.querySelector(".tp-logo")?.remove()
        }
    }
""")
Since you're deleting them in JavaScript anyway:
driver.execute_script("""
    for (let div of document.querySelectorAll('div._99s5')) {
        let match = div.innerText.match(/(\d+) ads? use this creative and text/)
        let numAds = match ? parseInt(match[1]) : 0
        if (numAds < 10) {
            div.querySelector(".tp-logo")?.remove()
        }
    }
""")
Note: the question and comments read a bit confusingly, so it would be great to improve them. Assuming you'd like to decompose() some elements, the reason why, or what to do after this action, is not clear. So this answer will only point out an approach.
To decompose() the elements that do not contain ads use this creative and text, just negate your selection and iterate the ResultSet:
for e in soup.select('div._99s5:has(:not(:-soup-contains("ads use this creative and text")))'):
    e.decompose()
Now these elements will no longer be included in your soup, and you can process it for your needs.
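A minimal end-to-end sketch of that flow, assuming the HTML comes from Selenium's page_source as in the question:
from bs4 import BeautifulSoup

# Parse the rendered page that Selenium produced
soup = BeautifulSoup(driver.page_source, "html.parser")

# decompose() every matching div that lacks the marker text
for e in soup.select('div._99s5:has(:not(:-soup-contains("ads use this creative and text")))'):
    e.decompose()

# The decomposed elements are gone from the tree from here on
print(soup.prettify())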

How do I make the driver navigate to new page in selenium python

I am trying to write a script to automate job applications on LinkedIn using Selenium and Python.
The steps are simple:
1. Open the LinkedIn page, enter the id and password, and log in.
2. Open https://linkedin.com/jobs, enter the search keyword and location, and click search (directly opening links like https://www.linkedin.com/jobs/search/?geoId=101452733&keywords=python&location=Australia gets stuck loading, probably due to some POST information missing from the previous page).
3. The click opens the job search page, but this doesn't seem to update the driver, as it still searches on the previous page.
import selenium
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from bs4 import BeautifulSoup
import pandas as pd
import yaml

driver = webdriver.Chrome("/usr/lib/chromium-browser/chromedriver")
url = "https://linkedin.com/"
driver.get(url)
content = driver.page_source
stream = open("details.yaml", 'r')
details = yaml.safe_load(stream)

def login():
    username = driver.find_element_by_id("session_key")
    password = driver.find_element_by_id("session_password")
    username.send_keys(details["login_details"]["id"])
    password.send_keys(details["login_details"]["password"])
    driver.find_element_by_class_name("sign-in-form__submit-button").click()

def get_experience():
    return "1%C22"

login()
jobs_url = 'https://www.linkedin.com/jobs/'
driver.get(jobs_url)
keyword = driver.find_element_by_xpath("//input[starts-with(@id, 'jobs-search-box-keyword-id-ember')]")
location = driver.find_element_by_xpath("//input[starts-with(@id, 'jobs-search-box-location-id-ember')]")
keyword.send_keys("python")
location.send_keys("Australia")
driver.find_element_by_xpath("//button[normalize-space()='Search']").click()
WebDriverWait(driver, 10)
# content = driver.page_source
# soup = BeautifulSoup(content)
# with open("a.html", 'w') as a:
#     a.write(str(soup))
print(driver.current_url)
driver.current_url returns https://linkedin.com/jobs/ instead of https://www.linkedin.com/jobs/search/?geoId=101452733&keywords=python&location=Australia as it should. I have tried printing the content to a file; it is indeed from the previous jobs page and not from the search page. I have also tried to find elements from the search page, like the experience filter and the Easy Apply button, but the search results in a not-found error.
I am not sure why this isn't working.
Any ideas? Thanks in advance.
UPDATE
It works if I directly open something like https://www.linkedin.com/jobs/search/?f_AL=True&f_E=2&keywords=python&location=Australia but not https://www.linkedin.com/jobs/search/?f_AL=True&f_E=1%2C2&keywords=python&location=Australia.
The difference between these links is that one takes only one value for experience level while the other takes two. This means it's probably not a POST-values issue.
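As a sanity check on those query strings (standard library only), %2C is just a URL-encoded comma, so the second URL really does pass two experience-level values:
from urllib.parse import unquote

print(unquote("f_E=1%2C2"))  # -> f_E=1,2 (two values)
print(unquote("f_E=2"))      # -> f_E=2 (one value)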
You are getting and printing the current URL immediately after clicking the search button, before the page has changed with the response received from the server.
This is why it outputs https://linkedin.com/jobs/ instead of something like https://www.linkedin.com/jobs/search/?geoId=101452733&keywords=python&location=Australia.
WebDriverWait(driver, 10) or wait = WebDriverWait(driver, 20) will not cause any kind of delay the way time.sleep(10) does.
wait = WebDriverWait(driver, 20) only instantiates a wait object, an instance of the WebDriverWait class.
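A WebDriverWait only blocks when you call until() on it with an expected condition. A minimal sketch of waiting for the navigation to finish before reading the URL (url_contains comes from selenium.webdriver.support.expected_conditions):
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver.find_element_by_xpath("//button[normalize-space()='Search']").click()
# Block for up to 10 seconds until the browser has actually navigated
WebDriverWait(driver, 10).until(EC.url_contains("/jobs/search/"))
print(driver.current_url)  # now reflects the search results page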

Web Scraping: how to extract this kind of div tag?

I am looking at a tag:
[screenshot of a div tag with class "text-msg-container"]
When I write code like
message = soup.find("div", {"class": "text-msg-container"})
it gives me None. What are _ngcontent-vex-c62 and data-e2e-text-message-content? Do I need to include them too? How should I write them to get the div tag?
You can't, because the div isn't there when you send a GET request for the page's code.
That page is built with the Angular framework, which produces an SPA (Single-Page Application), which means you can't scrape its data with a plain GET request, because the data isn't in the initial response.
The data is generated by JavaScript code, which needs to run first to add the required data to the webpage.
You need another approach that lets the JavaScript code run first; then you try to get the data you want.
If you want to find the class text-msg-container, try Selenium. It will find any locator easily.
import unittest
from selenium import webdriver

class PythonSearch(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Firefox()

    def test_search(self):
        driver = self.driver
        driver.get("http://www.yoursite.com")
        elem = driver.find_element_by_css_selector(".text-msg-container")

    def tearDown(self):
        self.driver.close()

if __name__ == "__main__":
    unittest.main()
Use driver = webdriver.Chrome('/path/to/chromedriver') if you are testing with Chrome. See https://chromedriver.chromium.org/getting-started for more info.
Getting started with Selenium: https://selenium-python.readthedocs.io/getting-started.html#simple-usage
Try this, please:
message = soup.find("div", class_="text-msg-container")
I hope that works.
from selenium import webdriver

# Path to the downloaded chromedriver on your PC; change this directory
# or put the driver in the same location, C:
path = "C:/chromedriver.exe"
driver = webdriver.Chrome(path)  # your browser; change it if you are not using Chrome
driver.get("website link")
out = driver.find_element_by_class_name("text-msg-container")
print(out.text)

Get html of inspect element source with selenium

I'm working in Selenium with Chrome.
The webpage I'm accessing updates dynamically.
I need the HTML that shows the results; I can access it when I do 'inspect element'.
I don't get how I need to access that HTML from my code. I always get the original HTML.
I tried this: Get HTML Source of WebElement in Selenium WebDriver using Python
browser.get('http://bijsluiters.fagg-afmps.be/?localeValue=nl')
searchform = browser.find_element_by_class_name('iceInpTxt')
searchform.send_keys('cefuroxim')
button = browser.find_element_by_class_name('iceCmdBtn').click()
element = browser.find_element_by_class_name('contentContainer')
html = element.get_attribute('innerHTML')
browser.close()
print(html)
It seems that it works after some delay. If I were you, I would try experimenting with the delay time.
from selenium import webdriver
import time
browser = webdriver.Chrome()
browser.get('http://bijsluiters.fagg-afmps.be/?localeValue=nl')
searchform = browser.find_element_by_class_name('iceInpTxt')
searchform.send_keys('cefuroxim')
button = browser.find_element_by_class_name('iceCmdBtn').click()
time.sleep(10)
element = browser.find_element_by_class_name('contentContainer')
html = element.get_attribute('innerHTML')
browser.close()
print(html)
Addition: a nicer way is to let the script proceed when an element is available (because of the time it takes, with JS for example, before a specific element has been added to the DOM). The element to look for in your example is the table with id iceDatTbl (from what I could find after a quick look).
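A hedged sketch of that approach with an explicit wait (the iceDatTbl id comes from the quick look mentioned above and should be verified against the live page):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

browser = webdriver.Chrome()
browser.get('http://bijsluiters.fagg-afmps.be/?localeValue=nl')
browser.find_element_by_class_name('iceInpTxt').send_keys('cefuroxim')
browser.find_element_by_class_name('iceCmdBtn').click()

# Proceed as soon as the results table is in the DOM, instead of a fixed sleep
WebDriverWait(browser, 20).until(EC.presence_of_element_located((By.ID, 'iceDatTbl')))

html = browser.find_element_by_class_name('contentContainer').get_attribute('innerHTML')
browser.close()
print(html)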
