Python Selenium - Find all Table Values with XPath

So I have a table:
<table>
<tr>
<th>Firstname</th>
<th>Lastname</th>
<th>Age</th>
</tr>
<tr>
<td>Jill</td>
<td>Smith</td>
<td>50</td>
</tr>
<tr>
<td>Eve</td>
<td>Jackson</td>
<td>94</td>
</tr>
</table>
Here's my code to find a random element inside it:
table = driver.find_element_by_xpath("//*[text()='Smith']")
I think I can get the parent, the <tr> that the Smith <td> is in, by doing this:
parent = table.parent
But my goal is to find the <table> and print out all the values in it, such as...
Firstname, Lastname, Age, Jill, Smith......
I'm not really sure how to go about doing this, since the table has no class or id.

You can try the snippet below. The //table//tr/* XPath matches every element child (th or td) of every table row:
nodes = driver.find_elements_by_xpath("//table//tr/*")
for node in nodes:
    print(node.text)
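If you want to see what that XPath gathers without launching a browser, here is a stdlib-only sketch against the sample table. This uses html.parser purely as a stand-in for illustration; it is not what the Selenium answer itself uses:

```python
from html.parser import HTMLParser

class CellTextCollector(HTMLParser):
    """Gathers the text of every <th>/<td> cell, mirroring what the
    //table//tr/* XPath plus .text yields in the answer above."""
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag in ('th', 'td'):
            self.in_cell = True
            self.cells.append('')

    def handle_endtag(self, tag):
        if tag in ('th', 'td'):
            self.in_cell = False

    def handle_data(self, data):
        if self.in_cell:
            self.cells[-1] += data.strip()

html = """<table>
<tr><th>Firstname</th><th>Lastname</th><th>Age</th></tr>
<tr><td>Jill</td><td>Smith</td><td>50</td></tr>
<tr><td>Eve</td><td>Jackson</td><td>94</td></tr>
</table>"""
collector = CellTextCollector()
collector.feed(html)
print(collector.cells)
```

This prints all nine cell values, headers first, in document order, which is exactly the flat order the Selenium loop prints them in.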

Related

Is it possible to add a new <td> instance to a <tr> row with bs4?

I want to edit a table of an .htm file, which roughly looks like this:
<table>
<tr>
<td>
parameter A
</td>
<td>
value A
</td>
</tr>
<tr>
<td>
parameter B
</td>
<td>
value B
</td>
</tr>
...
</table>
I made a preformatted template in Word, which has nicely formatted style="" attributes. I insert parameter values into the appropriate tds from a poorly formatted .html file (the output of a scientific program). My job is basically to automate the creation of HTML tables so that they can be used in a paper.
This works fine while the template has empty td instances in a tr. But when I try to create additional tds inside a tr (over which I iterate), I get stuck. The .append and .append_after methods for the rows just overwrite existing td instances. I need to create new tds, since I want to create the number of columns dynamically, and I need to iterate over up to 5 unformatted input .html files.
from bs4 import BeautifulSoup
with open('template.htm') as template:
    template = BeautifulSoup(template)
template = template.find('table')
lines_template = template.findAll('tr')
for line in lines_template:
    newtd = line.findAll('td')[-1]
    newtd['control_string'] = 'this_is_new'
    line.append(newtd)
=> No new tds. The last one is just overwritten. No new column was created.
I want to copy and paste the last td in a row, because it will have the correct style="" for that row. Is it possible to just copy a bs4.element with all the formatting and add it as the last td in a tr? If not, what module/approach should I use?
Thanks in advance.
You can copy the attributes by assigning to the attrs:
data = '''<table>
<tr>
<td style="color:red;">
parameter A
</td>
<td style="color:blue;">
value A
</td>
</tr>
<tr>
<td style="color:red;">
parameter B
</td>
<td style="color:blue;">
value B
</td>
</tr>
</table>'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(data, 'lxml')
for i, tr in enumerate(soup.select('tr'), 1):
    tds = tr.select('td')
    new_td = soup.new_tag('td', attrs=tds[-1].attrs)
    new_td.append('This is data for row {}'.format(i))
    tr.append(new_td)
print(soup.table.prettify())
Prints:
<table>
<tr>
<td style="color:red;">
parameter A
</td>
<td style="color:blue;">
value A
</td>
<td style="color:blue;">
This is data for row 1
</td>
</tr>
<tr>
<td style="color:red;">
parameter B
</td>
<td style="color:blue;">
value B
</td>
<td style="color:blue;">
This is data for row 2
</td>
</tr>
</table>
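A related detail when reusing tds[-1].attrs like this: if you later mutate the new cell's attributes, make sure they are a copy rather than the same dict object, or the edit leaks back into the template cell. A minimal pure-dict sketch of that pitfall (not bs4-specific; the attribute values are made up):

```python
template_attrs = {'style': 'color:blue;'}   # stand-in for tds[-1].attrs
shared = template_attrs                     # same dict object, not a copy
shared['class'] = 'generated'               # this also changes template_attrs!

template_attrs = {'style': 'color:blue;'}   # reset for the safe version
copied = dict(template_attrs)               # shallow copy: safe to edit
copied['class'] = 'generated'
print(template_attrs)                       # the template cell is unchanged
```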

How to extract items line by line

I need some help here using beautifulsoup4 to extract data from my inventory webpage.
The webpage is written in the following format: the name of the item, followed by a table listing multiple rows of details for that inventory item.
I am interested in getting the item name, actual qty and expiry date.
How do I go about doing it given such HTML structure (see appended)?
<div style="font-weight: bold">Item X</div>
<table cellspacing="0" cellpadding="0" class="table table-striped report-table" style="width: 800px">
<thead>
<tr>
<th> </th>
<th>Supplier</th>
<th>Packing</th>
<th>Default Qty</th>
<th>Expensive</th>
<th>Reorder Point</th>
<th>Actual Qty</th>
<th>Expiry Date</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Company 1</td>
<td>3.8 L</td>
<td>
4
</td>
<td>
No
</td>
<td>2130.00</td>
<td>350.00</td>
<td>31-05-2019</td>
</tr>
<tr>
<td>2</td>
<td>Company 1</td>
<td>3.8 L</td>
<td>
4
</td>
<td>
No
</td>
<td>2130.00</td>
<td>15200.00</td>
<td>31-05-2019</td>
</tr>
<tr>
<td>3</td>
<td>Company 1</td>
<td>3.8 L</td>
<td>
4
</td>
<td>
No
</td>
<td>2130.00</td>
<td>210.00</td>
<td>31-05-2019</td>
</tr>
<tr>
<td colspan="5"> </td>
<td>Total Qty 15760.00</td>
<td> </td>
</tr>
</tbody>
</table>
<div style="font-weight: bold">Item Y</div>
<table cellspacing="0" cellpadding="0" class="table table-striped report-table" style="width: 800px">
<thead>
<tr>
<th> </th>
<th>Supplier</th>
<th>Packing</th>
<th>Default Qty</th>
<th>Expensive</th>
<th>Reorder Point</th>
<th>Actual Qty</th>
<th>Expiry Date</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Company 2</td>
<td>50X10's</td>
<td>
10
</td>
<td>
Yes
</td>
<td>1090.00</td>
<td>271.00</td>
<td>31-01-2020</td>
</tr>
<tr>
<td>2</td>
<td>Company 2</td>
<td>50X10's</td>
<td>
10
</td>
<td>
Yes
</td>
<td>1090.00</td>
<td>500.00</td>
<td>31-01-2020</td>
</tr>
<tr>
<td>3</td>
<td>Company 2</td>
<td>50X10's</td>
<td>
10
</td>
<td>
Yes
</td>
<td>1090.00</td>
<td>69.00</td>
<td>31-01-2020</td>
</tr>
<tr>
<td>4</td>
<td>Company 2</td>
<td>50X10's</td>
<td>
10
</td>
<td>
Yes
</td>
<td>1090.00</td>
<td>475.00</td>
<td>01-01-2020</td>
</tr>
<tr>
<td colspan="5"> </td>
<td>Total Qty 1315.00</td>
<td> </td>
</tr>
</tbody>
</table>
Here is one way to do it. The idea is to iterate over the items, the div elements whose style attribute contains "bold". Then, for every item, get the next table sibling using find_next_sibling() and parse the row data into a dictionary for convenient access by header name:
from bs4 import BeautifulSoup

data = """your HTML here"""
soup = BeautifulSoup(data, "lxml")
for item in soup.select("div[style*=bold]"):
    item_name = item.get_text()
    table = item.find_next_sibling("table")
    headers = [th.get_text(strip=True) for th in table('th')]
    # skip the header row and the trailing "Total Qty" row
    for row in table('tr')[1:-1]:
        row_data = dict(zip(headers, [td.get_text(strip=True) for td in row('td')]))
        print(item_name, row_data['Actual Qty'], row_data['Expiry Date'])
    print("-----")
Prints:
Item X 350.00 31-05-2019
Item X 15200.00 31-05-2019
Item X 210.00 31-05-2019
-----
Item Y 271.00 31-01-2020
Item Y 500.00 31-01-2020
Item Y 69.00 31-01-2020
Item Y 475.00 01-01-2020
-----
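The dict(zip(headers, cells)) step is the core of that answer. With the Item X header row and its first data row hard-coded (taken from the HTML above), it behaves like this:

```python
# Header texts and cell texts, as get_text(strip=True) would return them
headers = ['', 'Supplier', 'Packing', 'Default Qty', 'Expensive',
           'Reorder Point', 'Actual Qty', 'Expiry Date']
cells = ['1', 'Company 1', '3.8 L', '4', 'No', '2130.00', '350.00', '31-05-2019']

row_data = dict(zip(headers, cells))  # header name -> cell text
print(row_data['Actual Qty'], row_data['Expiry Date'])
```

Looking cells up by header name instead of by index keeps the code readable and survives column reordering, as long as the header text stays the same.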
One solution is to iterate through each row tag i.e. <tr> and then just figure out what the column cell at each index represents and access columns that way. To do so, you can use the find_all method in BeautifulSoup, which will return a list of all elements with the tag given.
Example:
from bs4 import BeautifulSoup

html_doc = "YOUR HTML HERE"
soup = BeautifulSoup(html_doc, 'html.parser')
for row in soup.find_all("tr"):
    cells = row.find_all("td")
    if len(cells) == 0:
        # this is the header row
        continue
    # to access the text of the Default Qty column, for example:
    default_qty = cells[3].text
Note that when the tr tag is the header row, it contains only th tags and no td tags, so in that case len(cells) == 0.
You can select all divs and walk through to find the next table.
If you go over the rows of the table with the exception of the last row, you can extract text from specific cells and build your inventory list.
soup = BeautifulSoup(markup, "html5lib")
inventory = []
for itemdiv in soup.select('div'):
    table = itemdiv.find_next('table')
    # skip the trailing "Total Qty" row
    for supply_row in table.tbody.select('tr')[:-1]:
        # columns: #, Supplier, Packing, Default Qty, Expensive,
        # Reorder Point, Actual Qty, Expiry Date
        sn, supplier, _, _, _, _, actual_qty, exp = supply_row.select('td')
        item = [node.text.strip() for node in (sn, supplier, actual_qty, exp)]
        item[1:1] = [itemdiv.text]  # insert the item name after the serial number
        inventory.append(item)
print(inventory)
You can use the csv library to write the inventory like so:
import csv
with open('some.csv', 'w', newline='') as f:
    writer = csv.writer(f, delimiter="|")
    writer.writerow(('S/N', 'Item', 'Supplier', 'Actual Qty', 'Expiry Date'))
    writer.writerows(inventory)
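Written to an in-memory buffer instead of a file (with one made-up inventory row for illustration), the pipe-delimited output looks like this:

```python
import csv
import io

# one sample row in the shape the scraping loop builds: S/N, item name,
# supplier, actual qty, expiry date (values taken from the Item X table)
inventory = [['1', 'Item X', 'Company 1', '350.00', '31-05-2019']]

buf = io.StringIO()
writer = csv.writer(buf, delimiter='|')
writer.writerow(('S/N', 'Item', 'Supplier', 'Actual Qty', 'Expiry Date'))
writer.writerows(inventory)
print(buf.getvalue())
```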

Get all table elements in Python using Selenium

I have a webpage which looks like this:
<table class="data" width="100%" cellpadding="0" cellspacing="0">
<tbody>
<tr>
<th>1</th>
<th>2</th>
<th>3 by</th>
</tr>
<tr>
<td width="10%">5120432</td>
<td width="70%">INTERESTED_SITE1/</td>
<td width="20%">foo2</td>
</tr>
<tr class="alt">
<td width="10%">5120431</td>
<td width="70%">INTERESTED_SITE2</td>
<td width="20%">foo2</td>
</tr>
</tbody>
</table>
I want to put those two sites somewhere (interested_site1 and interested_site2). I tried doing something like this:
chrome = webdriver.Chrome(chrome_path)
chrome.get("fooSite")
time.sleep(.5)
alert = chrome.find_element_by_xpath("/div/table/tbody/tr[2]/td[2]").text
print (alert)
But I can't find the first site. If I can't do this in a for loop, I don't mind getting every link separately. How can I get to that link?
It would be easier to use a CSS query:
driver.find_element_by_css_selector("td:nth-child(2)")
You can use an XPath expression to deal with this by looping over each row.
XPath expression: html/body/table/tbody/tr[i]/td[2]
Get the number of rows:
totals_rows = chrome.find_elements_by_xpath("html/body/table/tbody/tr")
total_rows_length = len(totals_rows)
# XPath indices are 1-based, and tr[1] is the header row, so start at 2
for counter in range(2, total_rows_length + 1):
    site = "html/body/table/tbody/tr[" + str(counter) + "]/td[2]"
    print("site name is: " + chrome.find_element_by_xpath(site).text)
Basically, loop through each row and get the value in the second column (td[2]).
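The row-indexing detail is easy to get wrong: XPath positions start at 1, not 0, and in the sample table the first row holds <th> cells, so the data rows begin at tr[2]. A small sketch of the path strings such a loop generates (plain string building, no browser required):

```python
def td2_xpath(row_number):
    # XPath indices are 1-based: tr[1] is the first row in the table
    return "html/body/table/tbody/tr[{}]/td[2]".format(row_number)

# the sample table's first row holds <th> cells, so data rows start at tr[2]
paths = [td2_xpath(i) for i in range(2, 4)]
print(paths)
```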

Python 3 BeautifulSoup4 Selecting specific <td> tags from each <tr>

I am scraping from an HTML table in this format:
<table>
<tr>
<th>Name</th>
<th>Date</th>
<th>Number</th>
<th>Address</th>
</tr>
<tr> 1
<td> Name-1 </td>
<td> Date-1 </td>
<td> Number-1 </td>
<td> Address-1 </td>
</tr>
<tr> 2
<td> Name-2 </td>
<td> Date-2 </td>
<td> Number-2 </td>
<td> Address-2 </td>
</tr>
</table>
It is the only table on that page. I want to store each TD tag with its corresponding TH tag info to make a list, then eventually have it saved to a CSV. The actual info isn't saved with a -number; that's just to illustrate. The data has hundreds of table rows, all with the same set of fields, formatted this way in the table.
Basically, I'd want to make the 'name' be the 1st TD cell in each TR row, the date be the 2nd, and so on.
I can't seem to find a way to do this with Python3 and BeautifulSoup4, I know there's a way, I'm just too new.
Thank you all for your help, I am learning a lot as I go.
Assuming the data is uniform, the following basic example should work:
table_rows = soup.find_all("tr")  # list of all <tr> tags
for row in table_rows:
    cells = row.find_all("td")  # list of all <td> tags within a row
    if not cells:  # skip rows without td elements
        continue
    name, date, number, address = cells  # unpack list of <td> tags into separate variables
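To get from the unpacked cells to the CSV the question mentions, one option is csv.DictWriter. This sketch uses the question's placeholder row values and an in-memory buffer; in real use the values would come from each tag's .text:

```python
import csv
import io

headers = ['Name', 'Date', 'Number', 'Address']
rows = [['Name-1', 'Date-1', 'Number-1', 'Address-1'],
        ['Name-2', 'Date-2', 'Number-2', 'Address-2']]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=headers)
writer.writeheader()
for row in rows:
    writer.writerow(dict(zip(headers, row)))  # pair each cell with its header
print(buf.getvalue())
```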
I have a similar issue. The script from sytech is working. However, with a table of 100 rows, for instance, my code will first show row 15 instead of the first row that appears in the HTML, then display row 16, row 17, ... row 100, row 1, row 2. Using Clive's code above, I would get the following:
[<td> Name-15 </td>, <td> Date-15 </td>,<td> Number-15 </td>, <td> Address-15 </td>] [<td> Name-16 </td>, <td> Date-16 </td>,<td> Number-16 </td>, <td> Address-16 </td>] [<td> Name-16 </td>, <td> Date-16 </td>,<td> Number-16 </td>, <td> Address-16 </td>] etc... [<td> Name-100 </td>, <td> Date-100 </td>,<td> Number-100 </td>, <td> Address-100 </td>] [<td> Name-1 </td>, <td> Date-1 </td>,<td> Number-1 </td>, <td> Address-1 </td>]
Any idea why it wouldn't start with the first row?
Apologies if this is formatted badly, and thank you for the help!

Get table with maximum number of rows in a page using BeautifulSoup

Can anyone tell me how I can get the table in an HTML page which has the most rows? I'm using BeautifulSoup.
There is one little problem though. Sometimes, there seems to be one table nested inside another.
<table>
<tr>
<td>
<table>
<tr>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
</tr>
</table>
</td>
</tr>
</table>
When the table.findAll('tr') code executes, it counts all the rows of the table plus the rows of any table nested inside it. The parent table has just one row, but the nested table has three, and I would consider the nested one to be the largest table. Below is the code I'm currently using to dig out the largest table, but it doesn't take the aforementioned scenario into consideration.
soup = BeautifulSoup(html)
# Get the largest table
largest_table = None
max_rows = 0
for table in soup.findAll('table'):
    number_of_rows = len(table.findAll('tr'))
    if number_of_rows > max_rows:
        largest_table = table
        max_rows = number_of_rows
I'm really lost with this. Any help guys?
Thanks in advance
Calculate number_of_rows like this:
number_of_rows = len(table.findAll(lambda tag: tag.name == 'tr' and tag.findParent('table') == table))
The lambda matches only the <tr> tags whose nearest enclosing <table> is the current table, so rows of nested tables are no longer counted towards their outer tables.
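The same "credit each row only to its innermost table" idea can be sketched with the standard library's html.parser, so it can be run without BeautifulSoup (the one-line HTML input is a stripped-down version of the question's example):

```python
from html.parser import HTMLParser

class RowCounter(HTMLParser):
    """Counts <tr> rows per table, crediting each row only to its
    innermost open <table> (the findParent('table') == table idea)."""
    def __init__(self):
        super().__init__()
        self.open_tables = []  # stack of indices into self.counts
        self.counts = []       # row count per table, in document order

    def handle_starttag(self, tag, attrs):
        if tag == 'table':
            self.open_tables.append(len(self.counts))
            self.counts.append(0)
        elif tag == 'tr' and self.open_tables:
            self.counts[self.open_tables[-1]] += 1

    def handle_endtag(self, tag):
        if tag == 'table' and self.open_tables:
            self.open_tables.pop()

html = ("<table><tr><td>"
        "<table><tr></tr><tr></tr><tr></tr></table>"
        "</td></tr></table>")
parser = RowCounter()
parser.feed(html)
print(parser.counts)  # one row in the outer table, three in the nested one
```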
