Scraping GIS coordinates from a non-traditional map using selenium? - python

I'm trying to scrape a website with real estate publications. Each publication looks like this:
https://www.portalinmobiliario.com/venta/casa/providencia-metropolitana/5427357-francisco-bilbao-amapolas-uda#position=5&type=item&tracking_id=cedfbb41-ce47-455d-af9f-825614199c5e
I have been able to extract all the information I need except the coordinates (GIS) of each publication. The maps appear to be embedded in the page rather than linked. Does anyone know how to do this?
I'm using Selenium and Chrome with Python 3.

This is the list of publications:
https://www.portalinmobiliario.com/venta/casa/propiedades-usadas/las-condes-metropolitana
If you click any property in that list, it will take you to the page where the map is displayed. I'm using a loop to go through all of them (one at a time).
The code is a bit long, but so far I have mostly been using find_element_by_class_name and find_element_by_xpath to find and extract the information. I tried using them for the map, but I don't know where to find the coordinates.
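One thing worth trying before hunting through elements: on many listing sites the coordinates are already embedded in the page source, for example inside a static-map image URL. This is only a sketch under that assumption; the `center=` URL pattern and the helper name are guesses, not confirmed for this site:

```python
import re

def extract_coords(html):
    """Look for lat/lon embedded in a static-map URL like ...center=-33.43,-70.61...

    Returns a (lat, lon) tuple of floats, or None if no match is found.
    """
    m = re.search(r'center=(-?\d+\.\d+)(?:,|%2C)(-?\d+\.\d+)', html)
    return (float(m.group(1)), float(m.group(2))) if m else None

# With Selenium, you would feed it the rendered page:
#   coords = extract_coords(driver.page_source)
```

If nothing like this appears in the source, check the browser's network tab for an XHR request that returns the listing data as JSON; coordinates often travel there.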

Related

Python Financial Chart Scraping

Right now I'm trying to scrape the dividend yield from a chart using the following code.
df = pd.read_html('https://www.macrotrends.net/stocks/charts/BMO/Bank-of-Montreal/dividend-yield-history')
df = df[0].dropna()
But the code won't pick up the chart's data.
Any suggestions on pulling it from the website?
Here is the specific link I'm trying to use: https://www.macrotrends.net/stocks/charts/BMO/Bank-of-Montreal/dividend-yield-history
I've used the code for picking up the book values but the objects they're using for the dividends and book values must be different.
Maybe I could use Beautiful Soup?
Sadly, that website is rendered dynamically, so there's nothing in the HTML that pandas receives for it to scrape: the chart's data is fetched and inserted after the page loads, so parsing the raw HTML won't help here.
You can either find an API which provides the data (best, and quite possible given the content), work out where the page is fetching its data from and see if you can request it directly (better, if possible), or use something like Selenium to control a real browser, render the page, get the resulting HTML, and then scrape that.
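A minimal sketch of the Selenium route described above: render the page in a real browser, then hand the resulting HTML to pandas. The helper name is mine, and the table index (0) is an assumption you may need to adjust:

```python
import pandas as pd
from io import StringIO

def tables_from_html(html):
    """Parse every <table> element out of already-rendered HTML."""
    return pd.read_html(StringIO(html))

# With Selenium (sketch; assumes chromedriver is on PATH):
#   from selenium import webdriver
#   driver = webdriver.Chrome()
#   driver.get('https://www.macrotrends.net/stocks/charts/BMO/'
#              'Bank-of-Montreal/dividend-yield-history')
#   df = tables_from_html(driver.page_source)[0].dropna()
#   driver.quit()
```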

Web Page To Image With Python?

Any idea how to convert a webpage, as it would be shown in a browser, into an RGBA image with something in Python?
I am not looking for the other solutions I have seen that either use scikit or other to pull a .png from a webpage. Nor am I looking for a beautiful soup like solution where I can access specific data from a webpage.
I am seeking a solution that renders the webpage into a pixel buffer that I can then manipulate with something like numpy / cv2. Is this possible?
One simple solution is to take a screenshot using the Selenium package.
See this example: https://pythonbasics.org/selenium-screenshot/#Take-screenshot-of-full-page-with-Python-Selenium

Clicking multiple <span> elements with Selenium Python

I'm new to using Selenium, and I am having trouble figuring out how to click through all iterations of a specific element. To clarify, I can't even get it to click through one: it behaves like a dropdown but is defined as a span element.
I am trying to scrape FanDuel; when you click on a specific game you are presented with a bunch of main title bets, and to get the information I need I have to click the dropdowns to reach it. There is also another dropdown labeled "See More", which is a similar problem, but if this one gets fixed I assume I will be able to figure that out too.
So far, I have tried to use:
find_element_by_class_name()
find_element_by_css_selector()
I have also used the plural find_elements_* variants and tried to loop through and click each element in the resulting list, but that did not work.
If there are any ideas, they would be much appreciated.
FYI: I am using Beautiful Soup to scrape the website for the information; I figured Selenium would be helpful for making the information that isn't currently accessible, accessible.
This image shows the dropdowns that I am trying to access, in this case the dropdown 'Win Margin'. The HTML code is shown to the left of it.
This also shows that there are multiple dropdowns, varying in amount based off the game.
You can also try using action chains from Selenium:
from selenium.webdriver.common.action_chains import ActionChains
menu = driver.find_element_by_css_selector(".nav")
hidden_submenu = driver.find_element_by_css_selector(".nav #submenu1")
ActionChains(driver).move_to_element(menu).click(hidden_submenu).perform()
Source: here
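For clicking every dropdown on the page, a loop with a JavaScript click often sidesteps "element not clickable" errors caused by overlays. This is a sketch; the helper name is mine and the CSS selector in the usage comment is a placeholder you would replace with FanDuel's actual class:

```python
def click_all(driver, selector):
    """Click every element matching `selector`, scrolling each into view first.

    Uses a JavaScript click, which is not intercepted by overlapping elements
    the way a normal WebDriver click can be.
    """
    for el in driver.find_elements_by_css_selector(selector):
        driver.execute_script("arguments[0].scrollIntoView(true);", el)
        driver.execute_script("arguments[0].click();", el)

# Usage (selector is hypothetical):
#   click_all(driver, "span.accordion__toggle")
```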

Selenium Webscraping for some reason data only brings back partial instead of everything. Not sure if any dynamic data is in background

Python and Selenium beginner here. I'm trying to scrape the titles of the sections of a Udemy class. I've tried using find_elements_by_class_name and others, but for some reason it only brings back partial data.
page I'm scraping: https://www.udemy.com/selenium-webdriver-with-python3/
1) I want to get the title of the sections. They are the bold titles.
2) I want to get the title of the subsections.
from selenium import webdriver
driver = webdriver.Chrome()
url = 'https://www.udemy.com/selenium-webdriver-with-python3/'
driver.get(url)
main_titles = driver.find_elements_by_class_name("lecture-title-text")
sub_titles = driver.find_elements_by_class_name("title")
Problem
1) Using main_titles, I got a length of only 10. It only covers Introduction through Modules; Working With Files and everything after it doesn't come out, even though the class names are exactly the same. Modules / Working With Files is basically the cutoff point, and the elements also look different in the inspector from that point on. They all have the same span class tag, but only part of the list is being returned:
<span class="lecture-title-text">
Element Inspection between Modules title and WorkingWithFiles title
At this point the webscrape breaks down. Not sure why.
2) Using sub_titles, I got a length of 58 items, but when I print them out, I only get the top two:
Introduction
How to reach me anytime and ask questions? *** MUST WATCH ***
After this, it's all blank lines. Not sure why it's only pulling the top two and not the rest, when all the tags have
<div class='title'>
Maybe I could try using BeautifulSoup, but currently I'm trying to get better at Selenium. Is there dynamic content throwing off the Selenium scrape, or am I not scraping it in the proper way?
Thank you guys for the input. Sorry for the long post. I wanted to make sure I describe the problem correctly.
The reason you're only getting the first 10 sections is that only the first ten are shown. You might be logged in on your browser, so when you check it yourself it shows every section, but for me and your scraper it's only showing the first 10. You'll need to click that .section-container--more-sections button before looking for the titles.
As for the weird case of the titles not being scraped properly: when an element is hidden, its text attribute will always be empty, which is why it only works for the first (expanded) section. Try WebElement.get_attribute('textContent') to scrape the text instead.
OK, I've gone through the suggestions in the comments and solved it. I'm writing it up here in case anyone wants to see the solution in the future.
1) Using the suggestions, I added a command to click on the '24 more sections' button to expand the tab before scraping, which worked perfectly!
driver.find_element_by_class_name("js-load-more").click()
titles = driver.find_elements_by_class_name("lecture-title-text")
for each in titles:
    print(each.text)
This pulled all 34 section titles.
2) Using Matt's suggestion, I found the WebElements and used get_attribute('textContent') to pull out the text data. There was a bunch of surrounding whitespace, so I used strip() to get just the strings.
sub_titles = driver.find_elements_by_class_name("title")
for each in sub_titles:
    print(each.get_attribute('textContent').strip())
This pulled all 210 subsection titles!

Extracting info from dynamic page element in Python without "clicking" to make visible?

For the life of me I can't think of a better title...
I have a Python WebDriver-based scraper that goes to Google, enters a local search such as chiropractors+new york+ny, which, after clicking on More chiropractors+New York+NY, ends up on a page like this
The goal of the scraper is to grab the phone number and full address (including suite # etc.) of each of the 20 results on such a results page. To do so, I need to have WebDriver click each of the 20 entries to bring up an overlay over the Google Map:
This is mighty slow. Were it not for having to trigger each of these overlays, I could do everything up to that point with the much faster lxml, by going straight to the final URL of the results page and then extracting via XPath. But I appear to be stuck: I can't get data from the overlay without first clicking the link that brings it up.
Is there a way to get the data out of this page element without having to click the associated links?
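One thing worth checking before resigning yourself to 20 clicks: the overlay markup is sometimes already present in the DOM, just hidden, and textContent can read hidden nodes without any click. A sketch, with a hypothetical helper name and selector:

```python
def hidden_texts(driver, selector):
    """Read textContent from every element matching `selector` without clicking.

    Unlike the .text property, the textContent attribute also returns the
    text of elements that are currently hidden.
    """
    return [el.get_attribute('textContent').strip()
            for el in driver.find_elements_by_css_selector(selector)]

# Usage (selector is hypothetical):
#   details = hidden_texts(driver, "div.overlay-details")
```

If the hidden nodes turn out to be empty, the data is fetched on click, and the fallback is to find the XHR endpoint those clicks hit and request it directly.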
