BeautifulSoup scraping table id with Python

I'm new to scraping and am learning to use BeautifulSoup, but I'm having trouble scraping a table. Here is the HTML I'm trying to parse:
<table id="ctl00_mainContent_DataList1" cellspacing="0" style="width:80%;border-collapse:collapse;">
<tbody>
<tr><td><table width="90%" cellpadding="5" cellspacing="0">...</table></td></tr>
<tr><td><table width="90%" cellpadding="5" cellspacing="0">...</table></td></tr>
<tr><td><table width="90%" cellpadding="5" cellspacing="0">...</table></td></tr>
<tr><td><table width="90%" cellpadding="5" cellspacing="0">...</table></td></tr>
...
My code:
from urllib.request import urlopen
from bs4 import BeautifulSoup
quote_page = 'https://www.bcdental.org/yourdentalhealth/findadentist.aspx'
page = urlopen(quote_page)
soup = BeautifulSoup(page, 'html.parser')
table = soup.find('table', id="ctl00_mainContent_DataList1")
rows = table.findAll('tr')
I get AttributeError: 'NoneType' object has no attribute 'findAll'. I'm using Python 3.6 and a Jupyter notebook, in case that matters.
EDIT:
The table data that I'm trying to parse only shows up on the page after requesting a search (In the city field, select Burnaby, and hit search). The table ctl00_mainContent_DataList1 is the list of dentists that shows up after the search is submitted.

First: I use requests because it makes working with cookies, headers, etc. easier.
The page is generated by ASP.NET, and it sends the values __VIEWSTATE, __VIEWSTATEGENERATOR and __EVENTVALIDATION, which you have to send in the POST request too.
You have to load the page with a GET request first, and then you can read those values.
You can also use requests.Session() to keep the cookies, which can be needed too.
Next you have to copy those values, add the parameters from the form, and send everything using POST.
In the code I put only the parameters that are always sent.
'526' is the code for Vancouver; you can find the other codes in the <select> tag.
If you want other options, you may have to add other parameters, e.g. ctl00$mainContent$chkUndr4Ref: on, which stands for Children: 3 & Under - Diagnose & Refer.
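As a quick sketch (not part of the original answer), you could list the available city codes right after the GET request in the code below; the id ctl00_mainContent_drpCity is an assumption based on ASP.NET's usual naming of the drpCity control:
# a sketch: list value/label pairs from the city dropdown
# (assumes the <select> is rendered with id "ctl00_mainContent_drpCity")
city_select = soup.find('select', id='ctl00_mainContent_drpCity')
if city_select:
    for option in city_select.find_all('option'):
        print(option.get('value'), option.text.strip())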
EDIT: because there is a <table> inside each <tr>, find_all('tr') returned too many elements (the outer tr and the inner tr), and the later find_all('td') gave the same td several times. I changed find_all('tr') to find_all('table') and it should stop duplicating data.
import requests
from bs4 import BeautifulSoup
url = 'https://www.bcdental.org/yourdentalhealth/findadentist.aspx'
# --- session ---
s = requests.Session() # to automatically copy cookies
#s.headers.update({'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:57.0) Gecko/20100101 Firefox/57.0'})
# --- GET request ---
# get page to get cookies and params
response = s.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
# --- set params ---
params = {
    # session - copy from GET request
    #'EktronClientManager': '',
    #'__VIEWSTATE': '',
    #'__VIEWSTATEGENERATOR': '',
    #'__EVENTVALIDATION': '',
    # main options
    'ctl00$terms': '',
    'ctl00$mainContent$drpCity': '526',
    'ctl00$mainContent$txtPostalCode': '',
    'ctl00$mainContent$drpSpecialty': 'GP',
    'ctl00$mainContent$drpLanguage': '0',
    'ctl00$mainContent$drpSedation': '0',
    'ctl00$mainContent$btnSearch': '+Search+',
    # other options
    #'ctl00$mainContent$chkUndr4Ref': 'on',
}
# copy from GET request
for key in ['EktronClientManager', '__VIEWSTATE', '__VIEWSTATEGENERATOR', '__EVENTVALIDATION']:
    value = soup.find('input', id=key)['value']
    params[key] = value
    #print(key, ':', value)
# --- POST request ---
# get page with table - using params
response = s.post(url, data=params)#, headers={'Referer': url})
soup = BeautifulSoup(response.text, 'html.parser')
# --- data ---
table = soup.find('table', id='ctl00_mainContent_DataList1')
if not table:
    print('no table')
    #table = soup.find_all('table')
    #print('count:', len(table))
    #print(response.text)
else:
    for row in table.find_all('table'):
        for column in row.find_all('td'):
            text = ', '.join(x.strip() for x in column.text.split('\n') if x.strip()).strip()
            print(text)
        print('-----')
Part of the result:
Map
Dr. Kashyap Vora, 6145 Fraser Street, Vancouver V5W 2Z9
604 321 1869, www.voradental.ca
-----
Map
Dr. Niloufar Shirzad, Harbour Centre Dental, L19 - 555 Hastings Street West, Vancouver V6B 4N6
604 669 1195, www.harbourcentredental.com
-----
Map
Dr. Janice Brennan, 902 - 805 Broadway West, Vancouver V5Z 1K1
604 872 2525
-----
Map
Dr. Rosemary Chang, 1240 Kingsway, Vancouver V5V 3E1
604 873 1211
-----
Map
Dr. Mersedeh Shahabaldine, 3641 Broadway West, Vancouver V6R 2B8
604 734 2114, www.westkitsdental.com
-----

Related

'NoneType' object has no attribute 'text' can't get it working [duplicate]

I have some code to get the new products from a supplier. Well, I just started the code, but now I get a NoneType error on the following line: voorraad = result.find('span', {'class': 'po_blok_stoc'}).text.
It is the same as the other classes, so it should be working, right?
This is the full code:
# import libraries
from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup
# specify the url
url = "https://www.erotischegroothandel.nl/nieuweproducten/"
# Connect to the website and return the html to the variable ‘page’
uClient = uReq(url)
page_html = uClient.read()
uClient.close()
# parse the html using beautiful soup and store in variable `soup`
soup = BeautifulSoup(page_html, 'html.parser')
results = soup.find_all('div', {'class': 'po_blok'})
records = []
for result in results:
    titel = result.find('span', {'class': 'po_blok_titl'}).text
    staat = result.find('span', {'class': 'po_blok_nieu'}).text
    voorraad = result.find('span', {'class': 'po_blok_stoc'}).text
    records.append((titel, staat, voorraad))
print(records)
This is the html were I get the info from:
<div class="po_blok">
  Klant worden
  Al klant? Klik hier om in te loggen
  <a href="/massage/massageolie-siliconen/nuru_play_body2body_massage_gel__4l_40589.html">
    <img src="https://cdn.edc.nl/250/NGR04000R.jpg" alt="productnaam">
    <span class="po_blok_nieu">Nieuw</span>
    <span class="po_blok_titl">Nuru Play Body2Body Massage Gel – 4L</span>
    <span class="po_blok_stoc">Voorradig</span>
  </a>
</div>
The reason is that many of those elements are None. The error comes from those elements, so we handle that case:
# import libraries
from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup
# specify the url
url = "https://www.erotischegroothandel.nl/nieuweproducten/"
# Connect to the website and return the html to the variable ‘page’
uClient = uReq(url)
page_html = uClient.read()
uClient.close()
# parse the html using beautiful soup and store in variable `soup`
soup = BeautifulSoup(page_html, 'html.parser')
results = soup.find_all('div', {'class': 'po_blok'})
records = []
for result in results:
    titel = result.find('span', {'class': 'po_blok_titl'}).text
    staat = result.find('span', {'class': 'po_blok_nieu'}).text
    voorraad = result.find('span', {'class': 'po_blok_stoc'})
    if voorraad:
        records.append((titel, staat, voorraad.text))
for record in records:
    print(record)
Output:
('Nuru Play Body2Body Massage Gel – 4L', 'Nieuw', 'Voorradig')
('Nuru Play Body2Body Massage Gel – 335 ml', 'Nieuw', 'Voorradig')
('Nuru Glow Body2Body Massage Gel – 335 ml', 'Nieuw', 'Voorradig')
('P-Trigasm Prostaat Vibrator met Roterende Kralen', 'Nieuw', 'Voorradig')
('Gaia Eco Vibrator - Roze', 'Nieuw', 'Voorradig')
('Gaia Eco Vibrator - Blauw', 'Nieuw', 'Voorradig')
('Zachte Gladde Anaal Dildo', 'Nieuw', 'Voorradig') etc.
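If you prefer not to special-case a single field, a more defensive variant (my own sketch, not part of the answer above) guards every lookup the same way, so any missing span is skipped instead of raising:
def span_text(result, cls):
    # return the span's text, or None when the span is missing
    span = result.find('span', {'class': cls})
    return span.text if span else None

records = []
for result in results:
    titel = span_text(result, 'po_blok_titl')
    staat = span_text(result, 'po_blok_nieu')
    voorraad = span_text(result, 'po_blok_stoc')
    if titel and voorraad:  # skip blocks that are not full product tiles
        records.append((titel, staat, voorraad))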

Extract data from a specific cell in a table using BeautifulSoup?

I'm trying to extract the triage waiting times for a specific hospital to feed into other applications. Data from ALL local hospitals is available from: https://www.health.wa.gov.au/emergencyactivity/EDdata/edsv/
Here is the progress I have made so far:
import requests
from bs4 import BeautifulSoup
URL = 'https://www.health.wa.gov.au/emergencyactivity/EDdata/edsv/'
headers = {
    "User-Agent": 'Mozilla/5.0 (X11; Linux x86_64; rv:76.0) Gecko/20100101 Firefox/76.0'
}
page = requests.get(URL, headers=headers)
soup = BeautifulSoup(page.content, 'html.parser')
table_rows = soup.find_all('tr')
for tr in table_rows:
    td = tr.find_all('td')
    row = [i.text for i in td]
    print(row)
I want to extract only the triage time for Sir Charles Gairdner Hospital but have no clue how to do that. Any help would be much appreciated!
You are almost there. Try something like this:
from bs4 import Tag
table_rows = soup.select('tr td')
for tr in table_rows:
    if tr.text == 'Sir Charles Gairdner Hospital':
        for ns in tr.next_siblings:
            if isinstance(ns, Tag):
                print(ns.text)
Another alternative:
table = soup.select('table')[0]
for row in table:
    if isinstance(row, Tag):
        tds = row.select('td')
        if len(tds) > 0 and tds[0].text == 'Sir Charles Gairdner Hospital':
            for td in tds:
                print(td.text)
Output:
73
5
36
Edit:
To print just the triage waiting time for that location, use:
for tr in table_rows:
    if tr.text == 'Sir Charles Gairdner Hospital':
        print(tr.next_sibling.text)  # note: it's "next_sibling", not "siblings" this time
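One caveat, not from the original answer: next_sibling can return a whitespace NavigableString rather than the next <td>, depending on how the HTML is formatted, so find_next_sibling('td') is a slightly safer variant:
for tr in table_rows:
    if tr.text == 'Sir Charles Gairdner Hospital':
        nxt = tr.find_next_sibling('td')  # skips any whitespace text nodes
        if nxt:
            print(nxt.text)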

How to select value from dropdown item using requests in Python 3?

I want to scrape data from the website https://xlnindia.gov.in/frm_G_Cold_S_Query.aspx. I have to select the State as Delhi and the District as Adarsh Nagar (4), click on the Search button, and scrape all the information.
So far I have tried the code given below.
import requests
from bs4 import BeautifulSoup
An 'HTTPS 443 SSL' error was coming up, which I resolved using verify=False:
resp = requests.get('https://xlnindia.gov.in/frm_G_Cold_S_Query.aspx',verify=False)
soup = BeautifulSoup(resp.text,"lxml")
dictinfo = {i['name']:i.get('value','') for i in soup.select('input[name]')}
dictinfo['ddlState']='Delhi'
dictinfo['ddldistrict']='Adarsh Nagar (4)'
dictinfo['__EVENTTARGET']='btnSearch'
dictinfo = {k:(None,str(v)) for k,v in dictinfo.items()}
r=requests.post('https://xlnindia.gov.in/frm_G_Cold_S_Query.aspx',verify=False,files=dictinfo)
r
Error: Response [500]
soup2
Error:
Invalid postback or callback
argument. Event validation is enabled using <pages
enableEventValidation="true"/> in configuration or <%# Page
EnableEventValidation="true" %> in a page. For security purposes,
this feature verifies that arguments to postback or callback events
originate from the server control that originally rendered them. If
the data is valid and expected, use the
ClientScriptManager.RegisterForEventValidation method in order to
register the postback or callback data for validation.
Can someone please help me scrape it or get it done?
(I can only use the requests & BeautifulSoup libraries; no Selenium, mechanize, etc.)
Try the script below to get the tabular results that are populated after choosing the two dropdown items you stated above. It turns out that you have to make two subsequent POST requests to populate the results.
import requests
from bs4 import BeautifulSoup
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
url = 'https://xlnindia.gov.in/frm_G_Cold_S_Query.aspx'
with requests.Session() as s:
    s.headers['User-Agent'] = 'Mozilla/5.0'
    resp = s.get(url, verify=False)
    soup = BeautifulSoup(resp.text, "lxml")
    dictinfo = {i['name']: i.get('value', '') for i in soup.select('input[name]')}
    dictinfo['ddlState'] = 'DL'
    res = s.post(url, data=dictinfo)
    soup_obj = BeautifulSoup(res.text, "lxml")
    payload = {i['name']: i.get('value', '') for i in soup_obj.select('input[name]')}
    payload['ddldistrict'] = 'ADN'
    r = s.post(url, data=payload)
    sauce = BeautifulSoup(r.text, "lxml")
    for items in sauce.select("#dgDisplay tr"):
        data = [item.get_text(strip=True) for item in items.select("td")]
        print(data)
The output you may see in the console looks like this:
['Firm Name', 'City', 'Licences', 'Reg. Pharmacists / Comp. Person']
['A ONE MEDICOS', 'DELHI-251/1, GALI NO.1, KH, NO, 739/251/1, NEAR HIMACHAL BHAWAN,SARAI PIPAL THALA, VILLAGE AZAD PUR,', 'R - 2', 'virender kumar, DPH, [22295-17/10/2013]']
['AAROGYAM', 'DELHI-PVT. SHOP NO. 1, GF, 121,VILLAGE BHAROLA', 'R - 2', 'avinesh bhadoriya, DPH, [27033-]']
['ABCO INDIA', 'DELHI-SHOP NO-452/22,BHUSHAN BHAWAN RING ROAD,FLYOVER AZAD PUR', 'W - 2', 'sanjay dubey , SSC, [C-P-03/01/1997]']
['ADARSH MEDICOS', 'DELHI-NORTHERN SIDE B-107, GALI NO. 1,,MAJLIS PARK, VILLAGE BHAROLA,', 'R - 2', 'dilip kumar, BPH, [28036-11/01/2018]']
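If you need the codes for other states or districts (the 'DL' and 'ADN' values above), a small sketch like this can list them from the two dropdowns; it assumes the <select> elements use the same ddlState/ddldistrict names as the payload keys:
# a sketch: print value/label pairs for the two dropdowns used above
for name in ('ddlState', 'ddldistrict'):
    select = soup_obj.find('select', {'name': name})
    if select:
        for option in select.find_all('option'):
            print(name, option.get('value'), option.text.strip())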

python mechanize check dates/time for an exam from a website

I am trying to check the date/time availability for an exam using Python mechanize, and to send someone an email if a particular date/time becomes available in the result (result page screenshot attached).
import mechanize
from BeautifulSoup import BeautifulSoup
URL = "http://secure.dre.ca.gov/PublicASP/CurrentExams.asp"
br = mechanize.Browser()
response = br.open(URL)
# there are some errors in doctype and hence filtering the page content a bit
response.set_data(response.get_data()[200:])
br.set_response(response)
br.select_form(name="entry_form")
# select Oakland for the 1st set of checkboxes
for i in range(0, len(br.find_control(type="checkbox", name="cb_examSites").items)):
    if i == 2:
        br.find_control(type="checkbox", name="cb_examSites").items[i].selected = True
# select salesperson for the 2nd set of checkboxes
for i in range(0, len(br.find_control(type="checkbox", name="cb_examTypes").items)):
    if i == 1:
        br.find_control(type="checkbox", name="cb_examTypes").items[i].selected = True
response = br.submit()
print response.read()
I am able to get the response, but for some reason the data within my table is missing.
Here are the buttons from the initial HTML page:
<input type="submit" value="Get Exam List" name="B1">
<input type="button" value="Clear" name="B2" onclick="clear_entries()">
<input type="hidden" name="action" value="GO">
Here is one part of the output (the submit response) where the actual data should be:
<table summary="California Exams Scheduling" class="General_list" width="100%" cellspacing="0"> <EVERYTHING IN BETWEEN IS MISSING HERE>
</table>
All the data within the table is missing; I have provided a screenshot of the table element from the Chrome browser.
Can someone please tell me what could be wrong?
Can someone please tell me how to get the date/time out of the response (assuming I have to use BeautifulSoup)? It has to be something along these lines: I am trying to find out if a particular date I have in mind (say March 8th) shows up in the response with a Begin Time of 1:30 pm (screenshot attached).
soup = BeautifulSoup(response.read())
print soup.find(name="table")
Update: it looks like my issue might be related to this question, and I am trying my options. I tried the code below as per one of the answers, but cannot see any tr elements in the data (though I can see them in the page source when I check it manually):
soup.findAll('table')[0].findAll('tr')
Update: modified this to use Selenium; will try and take it further at some point soon.
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.keys import Keys
from bs4 import BeautifulSoup
import urllib3
myURL = "http://secure.dre.ca.gov/PublicASP/CurrentExams.asp"
browser = webdriver.Firefox() # Get local session of firefox
browser.get(myURL) # Load page
element = browser.find_element_by_id("Checkbox5")
element.click()
element = browser.find_element_by_id("Checkbox13")
element.click()
element = browser.find_element_by_name("B1")
element.click()
5 years later, maybe this can help someone. I took your problem as a training exercise and completed it using the Requests package. (I use Python 3.9.)
The code below is in two parts:
1. the request to retrieve the data injected into the table after the POST request:
## the request part
import requests as rq
from bs4 import BeautifulSoup as bs

url = "https://secure.dre.ca.gov/PublicASP/CurrentExams.asp"
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:88.0) Gecko/20100101 Firefox/88.0"}
params = {
    "cb_examSites": [
        "'Fresno'",
        "'Los+Angeles'",
        "'SF/Oakland'",
        "'Sacramento'",
        "'San+Diego'"
    ],
    "cb_examTypes": [
        "'Broker'",
        "'Salesperson'"
    ],
    "B1": "Get+Exam+List",
    "action": "GO"
}
s = rq.Session()
r = s.get(url, headers=headers)
s.headers.update({"Cookie": "%s=%s" % (r.cookies.keys()[0], r.cookies.values()[0])})
r2 = s.post(url=url, data=params)
soup = bs(r2.content, "lxml")  # contains the data you want
2. parsing the response (there are a lot of ways to do it; mine is maybe a bit stuffy):
from bs4 import NavigableString, Tag

table = soup.find_all("table", class_="General_list")[0]
titles = [el.text for el in table.find_all("strong")]

def beetweenBr(soupx):
    final_str = []
    for br in soupx.findAll('br'):
        next_s = br.nextSibling
        if not (next_s and isinstance(next_s, NavigableString)):
            continue
        next2_s = next_s.nextSibling
        if next2_s and isinstance(next2_s, Tag) and next2_s.name == 'br':
            text = str(next_s).strip()
            if text:
                final_str.append(next_s.strip())
    return "\n".join(final_str)

d = {}
trs = table.find_all("tr")
for tr in trs:
    tr_text = tr.text
    if tr_text in titles:
        curr_title = tr_text
        splitx = curr_title.split(" - ")
        area, job = splitx[0].split(" ")[0], splitx[1].split(" ")[0]
        if not job in d.keys():
            d[job] = {}
        if not area in d[job].keys():
            d[job][area] = []
        continue
    if (not tr_text in titles) & (tr_text != "DateBegin TimeLocationScheduledCapacity"):
        tds = tr.find_all("td")
        sub = []
        for itd, td in enumerate(tds):
            if itd == 2:
                sub.append(beetweenBr(td))
            else:
                sub.append(td.text)
        d[job][area].append(sub)
"d" contain data u want. I didn't go as far as sending an email yet.

Scraping content from AJAX onclick pop-up

I'm attempting to scrape information from this page using Python: https://j2c-com.com/Euronaval14/catalogueWeb/catalogue.php?lang=gb. I'm specifically interested in the pop-up that occurs when you click on an individual exhibitor's name. The challenging part is that it uses a lot of JavaScript to make AJAX calls to load the data.
I've examined the network calls when clicking on an exhibitor and it appears that the AJAX call goes to this URL (for the first exhibitor in the list, "A.I.A.D. and MOD ITALY"): https://j2c-com.com/Euronaval14/catalogueWeb/ajaxSociete.php?cle=D000365D000365&rnd=0.005115277832373977
I understand where the cle parameter comes from (the id of the <span> tag); however, what I don't quite get is where the rnd parameter is derived from. Is it simply a random number? I tried supplying a random number with each request, but the HTML returned is missing the actual content of the pop-up.
This leads me to believe that either the rnd attribute isn't a random number, or I need some type of cookie present in order for the actual data to come through in the request.
Here's my code so far, I'm using Requests and BeautifulSoup to parse the html:
import random
import decimal
import requests
from bs4 import BeautifulSoup
#base_url = 'https://j2c-com.com/Euronaval14/catalogueWeb/catalogue.php?lang=gb'
base_url = 'https://j2c-com.com/Euronaval14/catalogueWeb/cataloguerecherche.php?listeFavoris=&typeRecherche=1&typeRechSociete=&typeSociete=&typeMarque=&typeDescriptif=&typeActivite=&choixSociete=&choixPays=&choixActivite=&choixAgent=&choixPavillon=&choixZoneExpo=&langue=gb&rnd=0.1410133063327521'
def generate_random_number(i, d):
    "Produce a random number between 0 and 1, with 16 decimal digits"
    return str(decimal.Decimal('%d.%d' % (random.randint(0, i), random.randint(0, d))))

r = requests.get(base_url)
soup = BeautifulSoup(r.text)
table = soup.find('table', {'id': 'tableResultat'})
trs = table.findAll('tr')
for tr in trs:
    span = tr.find('span')
    cle = span.get('id')
    url = 'https://j2c-com.com/Euronaval14/catalogueWeb/ajaxSociete.php?cle=' + cle + '&rnd=' + generate_random_number(0, 9999999999999999)
    pop = requests.post(url)
    print url
    print pop.text
    break
Can you help me understand how I can successfully capture the pop-up data, or what I'm doing wrong? Thanks in advance!
It is not about the rnd parameter; it is completely random, filled in by the Math.random() JS function.
As you've suspected, it is about cookies. The PHPSESSID cookie is critical and has to be sent with every following request. Just start a requests.Session() and use it for every request you make:
The Session object allows you to persist certain parameters across
requests. It also persists cookies across all requests made from the
Session instance.
...
# start session
session = requests.Session()
r = session.get(base_url)
soup = BeautifulSoup(r.text)
table = soup.find('table', {'id':'tableResultat'})
trs = table.findAll('tr')
for tr in trs:
    span = tr.find('span')
    cle = span.get('id')
    url = 'https://j2c-com.com/Euronaval14/catalogueWeb/ajaxSociete.php?cle=' + cle + '&rnd=' + generate_random_number(0, 9999999999999999)
    pop = session.post(url)  # <-- the POST request here contains cookies returned by the first GET call
    print url
    print pop.text
    break
It prints (note that the HTML is now filled with the required data):
https://j2c-com.com/Euronaval14/catalogueWeb/ajaxSociete.php?cle=D000365D000365&rnd=0.1625497943120751
<table class='divAdresse'>
<tr>
<td class='ficheAdresse' valign='top'>Via Nazionale 54<br>IT-00184 - Roma<br><img
src='../../intranetJ2C/images/flags/IT.gif' style='margin-right:5px;'>ITALY<br><br>Phone: +39 06 488
0247 | Fax: +39 06 482 74 76<br><br>Website: <a href='http://www.aiad.it' target='_new'>www.aiad.it</a></td>
</tr>
</table>
<br>
<b class="divMarque">Contact:</b><br>
<font class="ficheAdresse"> Carlo Festucci - Secretary General<br>
c.festucci#aiad.it</font>
<br><br>
<div id='divTexte' class='ficheTexte'></div>
UPD.
The reason you were not getting the results for the other exhibitors in the table is difficult to explain, but the main point is to simulate all the consequent AJAX requests that are fired under the hood when you click on a row in the browser:
import random
import decimal
import requests
from bs4 import BeautifulSoup
base_url = 'https://j2c-com.com/Euronaval14/catalogueWeb/cataloguerecherche.php?listeFavoris=&typeRecherche=1&typeRechSociete=&typeSociete=&typeMarque=&typeDescriptif=&typeActivite=&choixSociete=&choixPays=&choixActivite=&choixAgent=&choixPavillon=&choixZoneExpo=&langue=gb&rnd=0.1410133063327521'
fiche_url = 'https://j2c-com.com/Euronaval14/catalogueWeb/fiche.php'
reload_url = 'https://j2c-com.com/Euronaval14/catalogueWeb/reload.php'
data_url = 'https://j2c-com.com/Euronaval14/catalogueWeb/ajaxSociete.php'
def generate_random_number(i, d):
    "Produce a random number between 0 and 1, with 16 decimal digits"
    return str(decimal.Decimal('%d.%d' % (random.randint(0, i), random.randint(0, d))))

# start session
session = requests.Session()
r = session.get(base_url)
soup = BeautifulSoup(r.content)
for span in soup.select('table#tableResultat tr span'):
    cle = span.get('id')
    session.post(reload_url)
    session.post(fiche_url, data={'page': 'page:catalogue',
                                  'pasFavori': '1',
                                  'listeFavoris': '',
                                  'cle': cle,
                                  'stand': '',
                                  'rnd': generate_random_number(0, 9999999999999999)})
    session.post(reload_url)
    pop = session.post(data_url, data={'cle': cle,
                                       'rnd': generate_random_number(0, 9999999999999999)})
    print pop.text
Prints:
<table class='divAdresse'><tr><td class='ficheAdresse' valign='top'>Via Nazionale 54<br>IT-00184 - Roma<br><img src='../../intranetJ2C/images/flags/IT.gif' style='margin-right:5px;'>ITALY<br><br>Phone: +39 06 488 0247 | Fax: +39 06 482 74 76<br><br>Website: Contact:</b><br><font class="ficheAdresse"> Carlo Festucci - Secretary General<br><a href="mailto:c.festucci#aiad.it">c.festucci#aiad.it</font><br><br><div id='divTexte' class='ficheTexte'></div>
<table class='divAdresse'><tr><td class='ficheAdresse' valign='top'>An der Faehre 2<br>27809 - Lemwerder<br><img src='../../intranetJ2C/images/flags/DE.gif' style='margin-right:5px;'>GERMANY<br><br>Phone: +49 421 673 30 | Fax: +49 421 673 3115<br><br>Website: <a href='http://www.abeking.com' target='_new'>www.abeking.com</a></td></tr></table><br><b class="divMarque">Contact:</b><br><font class="ficheAdresse"> Thomas Haake - Sales Director Navy</font><br><br><div id='divTexte' class='ficheTexte'></div>
<table class='divAdresse'><tr><td class='ficheAdresse' valign='top'>Mohamed Bin Khalifa Street (street 15)<br>PO Box 107241<br>107241 - Abu Dhabi<br><img src='../../intranetJ2C/images/flags/AE.gif' style='margin-right:5px;'>UNITED ARAB EMIRATES<br><br>Phone: +971 2 445 5551 | Fax: +971 2 445 0644</td></tr></table><br><b class="divMarque">Contact:</b><br><font class="ficheAdresse"> Pierre Baz - Business Development<br>pierre.baz#abudhabimar.com</font><br><br><div id='divTexte' class='ficheTexte'></div>
...
