Hello, I'm trying to learn how to web scrape, so I started by trying to scrape my school's menu.
I've run into a problem where I can't get the menu items under a span class; instead I get the word within the same line as the span class, "show".
Here is a short excerpt of what I am trying to work with:
from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Chrome(executable_path='chromedriver.exe')  # changed this
driver.get('https://housing.ucdavis.edu/dining/menus/dining-commons/tercero/')

results = []
content = driver.page_source
soups = BeautifulSoup(content, 'html.parser')

element = soups.findAll('span', class_='collapsible-heading-status')
for span in element:
    print(span.text)
I have tried span.span.text, but that doesn't return anything, so can someone give me some pointers on how to extract the info under the collapsible-heading-status class?
Yummy waffles! As mentioned, they are gone, but to get what you're after, one approach is to select the names via CSS selectors, using the adjacent sibling combinator:
for e in soup.select('.collapsible-heading-status + span'):
    print(e.text)
or with find_next_sibling():
for e in soup.find_all('span', class_='collapsible-heading-status'):
    print(e.find_next_sibling('span').text)
Example
To get all the information for each item in a structured way, you could use:
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
from bs4 import BeautifulSoup

driver = webdriver.Chrome(ChromeDriverManager().install())
driver.get("https://housing.ucdavis.edu/dining/menus/dining-commons/tercero/")
soup = BeautifulSoup(driver.page_source, 'html.parser')

data = []
for e in soup.select('.nutrition'):
    d = {
        'meal': e.find_previous('h4').text,
        'title': e.find_previous('h5').text,
        'name': e.find_previous('span').text,
        'description': e.p.text
    }
    d.update({n.text: n.find_next().text.strip(': ') for n in e.select('h6')})
    data.append(d)
data
Output:
[{'meal': 'Breakfast',
'title': 'Fresh Inspirations',
'name': 'Vanilla Chia Seed Pudding with Blueberrries',
'description': 'Vanilla chia seed pudding with blueberries, shredded coconut, and toasted almonds',
'Serving Size': '1 serving',
'Calories': '392.93',
'Fat (g)': '36.34',
'Carbohydrates (g)': '17.91',
'Protein (g)': '4.59',
'Allergens': 'Tree Nuts/Coconut',
'Ingredients': 'Coconut milk, chia seeds, beet sugar, imitation vanilla (water, vanillin, caramel color, propylene glycol, ethyl vanillin, potassium sorbate), blueberries, shredded sweetened coconut (desiccated coconut processed with sugar, water, propylene glycol, salt, sodium metabisulfite), blanched slivered almonds'},
{'meal': 'Breakfast',
'title': 'Fresh Inspirations',
'name': 'Housemade Granola',
'description': 'Crunchy and sweet granola made with mixed nuts and old fashioned rolled oats',
'Serving Size': '1/2 cup',
'Calories': '360.18',
'Fat (g)': '17.33',
'Carbohydrates (g)': '47.13',
'Protein (g)': '8.03',
'Allergens': 'Gluten/Wheat/Dairy/Peanuts/Tree Nuts',
'Ingredients': 'Old fashioned rolled oats (per manufacturer, may contain wheat/gluten), sunflower seeds, seedless raisins, unsalted butter, pure clover honey, peanut-free mixed nuts (cashews, almonds, sunflower oil and/or cottonseed oil, pecans, hazelnuts, dried Brazil nuts, salt), light brown beet sugar, molasses'},...]
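Once you have data as a list of dicts like the output above, one way to persist it is the stdlib csv module's DictWriter, keyed on the union of all fields (a sketch with two illustrative records; pandas' DataFrame.to_csv would work just as well):

```python
import csv
import io

# Two illustrative records shaped like the scraper's output above.
rows = [
    {'meal': 'Breakfast', 'name': 'Vanilla Chia Seed Pudding with Blueberrries', 'Calories': '392.93'},
    {'meal': 'Breakfast', 'name': 'Housemade Granola', 'Calories': '360.18', 'Serving Size': '1/2 cup'},
]

# Collect the union of keys, since not every item carries every nutrition field.
fieldnames = []
for row in rows:
    for key in row:
        if key not in fieldnames:
            fieldnames.append(key)

buf = io.StringIO()  # swap in open('menu.csv', 'w', newline='') to write a real file
writer = csv.DictWriter(buf, fieldnames=fieldnames, restval='')
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

restval='' fills in a blank for records that are missing one of the collected fields.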
Related
I'm trying to get the chip names from this Target market link, fetching all 28 chips on the first page automatically. I wrote this code; it opens the link, scrolls down (to fetch the names and pictures), and tries to get the names:
import time
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys
from webdriver_manager.chrome import ChromeDriverManager as CM

options = webdriver.ChromeOptions()
options.add_argument("--log-level=3")
mobile_emulation = {
    "userAgent": 'Mozilla/5.0 (Linux; Android 4.0.3; HTC One X Build/IML74K) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/83.0.1025.133 Mobile Safari/535.19'
}
options.add_experimental_option("mobileEmulation", mobile_emulation)
bot = webdriver.Chrome(executable_path=CM().install(), options=options)
bot.get('https://www.target.com/c/chips-snacks-grocery/-/N-5xsy7')
bot.set_window_size(500, 950)
time.sleep(5)
for i in range(0, 3):
    ActionChains(bot).send_keys(Keys.END).perform()
    time.sleep(1)
product_names = bot.find_elements_by_class_name('Link-sc-1khjl8b-0 styles__StyledTitleLink-mkgs8k-5 kdCHb inccCG h-display-block h-text-bold h-text-bs flex-grow-one')
hrefList = []
for e in product_names:
    hrefList.append(e.get_attribute('href'))
for href in hrefList:
    print(href)
When I inspect the names in the browser, the common part of all the chips is the class name Link-sc-1khjl8b-0 styles__StyledTitleLink-mkgs8k-5 kdCHb inccCG h-display-block h-text-bold h-text-bs flex-grow-one. So, as you can see, I added the find_elements_by_class_name('Link-sc-1khjl8b-0 styles__StyledTitleLink-mkgs8k-5 kdCHb inccCG h-display-block h-text-bold h-text-bs flex-grow-one') line. But it gives an empty result. What is wrong? Can you help me? A solution with Selenium or bs4, it doesn't matter.
You can get all that data from the API, as long as you feed in the correct key.
import requests

url = 'https://redsky.target.com/redsky_aggregations/v1/web/plp_search_v1'
payload = {
    'key': 'ff457966e64d5e877fdbad070f276d18ecec4a01',
    'category': '5xsy7',
    'channel': 'WEB',
    'count': '28',
    'default_purchasability_filter': 'true',
    'include_sponsored': 'true',
    'offset': '0',
    'page': '/c/5xsy7',
    'platform': 'desktop',
    'pricing_store_id': '1771',
    'scheduled_delivery_store_id': '1771',
    'store_ids': '1771,1768,1113,3374,1792',
    'useragent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36',
    'visitor_id': '0179C80AE1090201B5D5C1D895ADEA6C'}

jsonData = requests.get(url, params=payload).json()
for each in jsonData['data']['search']['products']:
    title = each['item']['product_description']['title']
    buy_url = each['item']['enrichment']['buy_url']
    image_url = each['item']['enrichment']['images']['primary_image_url']
    print(title)
Output:
Ruffles Cheddar & Sour Cream Potato Chips - 2.5oz
Doritos 3D Crunch Chili Cheese Nacho - 6oz
Hippeas Vegan White Cheddar Organic Chickpea Puffs - 5oz
PopCorners Spicy Queso - 7oz
Doritos 3D Crunch Spicy Ranch - 6oz
Pringles Snack Stacks Variety Pack Potato Crisps Chips - 12.9oz/18ct
Frito-Lay Variety Pack Flavor Mix - 18ct
Doritos Nacho Cheese Chips - 9.75oz
Hippeas Nacho Vibes Organic Chickpea Puffs - 5oz
Tostitos Scoops Tortilla Chips -10oz
Ripple Potato Chips Party Size - 13.5oz - Market Pantry™
Ritz Crisp & Thins Cream Cheese & Onion Potato And Wheat Chips - 7.1oz
Pringles Sour Cream & Onion Potato Crisps Chips - 5.5oz
Original Potato Chips Party Size - 15.25oz - Market Pantry™
Organic White Corn Tortilla Chips - 12oz - Good & Gather™
Sensible Portions Sea Salt Garden Veggie Straws - 7oz
Traditional Kettle Chips - 8oz - Good & Gather™
Lay's Classic Potato Chips - 8oz
Cheetos Crunchy Flamin Hot - 8.5oz
Sweet Potato Kettle Chips - 7oz - Good & Gather™
SunChips Harvest Cheddar Flavored Wholegrain Snacks - 7oz
Frito-Lay Variety Pack Classic Mix - 18ct
Doritos Cool Ranch Chips - 10.5oz
Lay's Wavy Original Potato Chips - 7.75oz
Frito-Lay Variety Pack Family Fun Mix - 18ct
Cheetos Jumbo Puffs - 8.5oz
Frito-Lay Fun Times Mix Variety Pack - 28ct
Doritos Nacho Cheese Flavored Tortilla Chips - 15.5oz
Lay's Barbecue Flavored Potato Chips - 7.75oz
SunChips Garden Salsa Flavored Wholegrain Snacks - 7oz
Pringles Snack Stacks Variety Pack Potato Crisps Chips - 12.9oz/18ct
Frito-Lay Variety Pack Doritos & Cheetos Mix - 18ct
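The payload's count and offset fields suggest the usual pagination scheme, so getting beyond the first page should just be a matter of repeating the same requests.get call with stepped offsets. A sketch of the offset arithmetic only (no request is made here; the total-results figure is an assumed example):

```python
def page_offsets(total_results, page_size=28):
    """Offsets to plug into the 'offset' parameter to cover all results."""
    return list(range(0, total_results, page_size))

# e.g. for a category reporting 85 products:
print(page_offsets(85))  # [0, 28, 56, 84]
```

Each offset would go into payload['offset'] for one request, with the results concatenated.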
This also works:
product_names = bot.find_elements_by_xpath("//li[@data-test='list-entry-product-card']")
for e in product_names:
    print(e.find_element_by_css_selector("a").get_attribute("href"))
Try instead
product_names = bot.find_elements_by_css_selector('.Link-sc-1khjl8b-0.styles__StyledTitleLink-mkgs8k-5.kdCHb.inccCG.h-display-block.h-text-bold.h-text-bs.flex-grow-one')
find_elements_by_class_name() expects a single class name, so spaces in the class name are not handled properly.
Except that selector doesn't work for me, I need to use '.Link-sc-1khjl8b-0.ItemLink-sc-1eyz3ng-0.kdCHb.dtKueh'
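The pattern behind these selectors: a compound class attribute like "a b c" matches the CSS selector .a.b.c, so a space-separated class string can be converted mechanically. A small helper sketch (the class names below are just a truncated example from the question):

```python
def class_attr_to_css(class_attr):
    """Turn a space-separated class attribute into a compound CSS class selector."""
    return '.' + '.'.join(class_attr.split())

selector = class_attr_to_css('Link-sc-1khjl8b-0 kdCHb h-display-block')
print(selector)  # .Link-sc-1khjl8b-0.kdCHb.h-display-block
```

The result can be passed straight to find_elements_by_css_selector() or soup.select().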
I am writing a program to iterate through a recipe website, the Woks of Life, and extract each recipe and store it in a CSV file. I have managed to extract the links for storage purpose, but I am having trouble extracting the elements on the page. The website link is https://thewoksoflife.com/baked-white-pepper-chicken-wings/. The elements that I am trying to reach are the name, cook time, ingredients, calories, instructions, etc.
import requests
from bs4 import BeautifulSoup

def parse_recipe(link):
    # hardcoded link for now until I get it working
    page = requests.get("https://thewoksoflife.com/baked-white-pepper-chicken-wings/")
    soup = BeautifulSoup(page.content, 'html.parser')
    for i in soup.findAll("script", {"class": "yoast-schema-graph yoast-schema-graph--main"}):
        print(i.get("name"))  # should print "Baked White Pepper Chicken Wings" but prints "None"
For reference, when I print(i), I get:
<script class="yoast-schema-graph yoast-schema-graph--main" type="application/ld+json">
{"@context":"https://schema.org","@graph":[
{"@type":"Organization","@id":"https://thewoksoflife.com/#organization","name":"The Woks of Life","url":"https://thewoksoflife.com/","sameAs":["https://www.facebook.com/thewoksoflife","https://twitter.com/thewoksoflife"],"logo":{"@type":"ImageObject","@id":"https://thewoksoflife.com/#logo","url":"https://thewoksoflife.com/wp-content/uploads/2019/05/Temporary-Logo-e1556728319201.png","width":365,"height":364,"caption":"The Woks of Life"},"image":{"@id":"https://thewoksoflife.com/#logo"}},
{"@type":"WebSite","@id":"https://thewoksoflife.com/#website","url":"https://thewoksoflife.com/","name":"The Woks of Life","description":"a culinary genealogy","publisher":{"@id":"https://thewoksoflife.com/#organization"},"potentialAction":{"@type":"SearchAction","target":"https://thewoksoflife.com/?s={search_term_string}","query-input":"required name=search_term_string"}},
{"@type":"ImageObject","@id":"https://thewoksoflife.com/baked-white-pepper-chicken-wings/#primaryimage","url":"https://thewoksoflife.com/wp-content/uploads/2019/11/white-pepper-chicken-wings-9.jpg","width":600,"height":836,"caption":"Crispy Baked White Pepper Chicken Wings, thewoksoflife.com"},
{"@type":"WebPage","@id":"https://thewoksoflife.com/baked-white-pepper-chicken-wings/#webpage","url":"https://thewoksoflife.com/baked-white-pepper-chicken-wings/","inLanguage":"en-US","name":"Baked White Pepper Chicken Wings | The Woks of Life", .................. # continues onwards
I am trying to access the "name" (as well as other similarly inaccessible elements) located near the end of the code snippet above, but am unable to do so.
Any help would be appreciated!
The data is in JSON format, so after locating the <script> tag, you can parse it with the json module. For example:
import json
import requests
from bs4 import BeautifulSoup

url = 'https://thewoksoflife.com/baked-white-pepper-chicken-wings/'
soup = BeautifulSoup(requests.get(url).text, 'html.parser')

data = json.loads(soup.select_one('script.yoast-schema-graph.yoast-schema-graph--main').text)
# print(json.dumps(data, indent=4))  # <-- uncomment this to print all data

recipe = next((g for g in data['@graph'] if g.get('@type', '') == 'Recipe'), None)
if recipe:
    print('Name =', recipe['name'])
    print('Cook Time =', recipe['cookTime'])
    print('Ingredients =', recipe['recipeIngredient'])
    # ... etc.
Prints:
Name = Baked White Pepper Chicken Wings
Cook Time = PT40M
Ingredients = ['3 pounds whole chicken wings ((about 14 wings))', '1-2 tablespoons white pepper powder ((divided))', '2 teaspoons salt ((divided))', '1 teaspoon Sichuan peppercorn powder ((optional))', '2 teaspoons vegetable oil ((plus more for brushing))', '1/2 cup all purpose flour', '1/4 cup cornstarch']
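The instructions can be pulled out the same way: schema.org recipes typically store them under recipeInstructions as a list of HowToStep objects, each with a text field. A stdlib-only sketch on a synthetic fragment (the sample JSON below is illustrative, not copied from the site):

```python
import json

# Illustrative JSON-LD fragment in the shape schema.org Recipe pages use.
sample = '''{"@graph": [
  {"@type": "Recipe",
   "name": "Baked White Pepper Chicken Wings",
   "recipeInstructions": [
     {"@type": "HowToStep", "text": "Pat the wings dry."},
     {"@type": "HowToStep", "text": "Season, then bake until crispy."}
   ]}
]}'''

data = json.loads(sample)
recipe = next(g for g in data['@graph'] if g.get('@type') == 'Recipe')
steps = [step['text'] for step in recipe['recipeInstructions']]
for n, text in enumerate(steps, 1):
    print(n, text)
```

With the real page, replace sample with the <script> tag's text as in the answer's code.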
I'm trying to print all the titles on nytimes.com. I used the requests and BeautifulSoup modules, but I got empty brackets in the end; the returned result is []. How can I fix this problem?
import requests
from bs4 import BeautifulSoup
url = "https://www.nytimes.com/"
r = requests.get(url)
text = r.text
soup = BeautifulSoup(text, "html.parser")
title = soup.find_all("span", "balanceHeadline")
print(title)
I am assuming that you are trying to retrieve the headlines of nytimes. Doing title = soup.find_all("span", {'class': 'balanceHeadline'}) will not get you your results. The <span> tag found using the element selector is often misleading; what you have to do is look into the source code of the page and find the tags wrapped around the title.
For nytimes it's a little tricky, because the headlines are wrapped in a <script> tag with a lot of junk inside. Hence what you can do is "clean" it first and deserialize the string by converting it into a Python dictionary object.
import requests
from bs4 import BeautifulSoup
import json

url = "https://www.nytimes.com/"
r = requests.get(url)
r_html = r.text
soup = BeautifulSoup(r_html, "html.parser")
scripts = soup.find_all('script')
for script in scripts:
    if 'preloadedData' in script.text:
        jsonStr = script.text
        jsonStr = jsonStr.split('=', 1)[1].strip()  # remove "window.__preloadedData = "
        jsonStr = jsonStr.rsplit(';', 1)[0]         # remove trailing ;
        jsonStr = json.loads(jsonStr)

for key, value in jsonStr['initialState'].items():
    try:
        if value['promotionalHeadline'] != "":
            print(value['promotionalHeadline'])
    except:
        continue
Output:
Jeffrey Epstein Autopsy Results Conclude He Hanged Himself
Trump and Netanyahu Put Bipartisan Support for Israel at Risk
Congresswoman Rejects Israel’s Offer of a West Bank Visit
In Tlaib’s Ancestral Village, a Grandmother Weathers a Global Political Storm
Cathay Chief’s Resignation Shows China’s Power Over Hong Kong Unrest
Trump Administration Approves Fighter Jet Sales to Taiwan
Peace Road Map for Afghanistan Will Let Taliban Negotiate Women’s Rights
Debate Flares Over Afghanistan as Trump Considers Troop Withdrawal
In El Paso, Hundreds Show Up to Mourn a Woman They Didn’t Know
Is Slavery’s Legacy in the Power Dynamics of Sports?
Listen: ‘Modern Love’ Podcast
‘The Interpreter’
If You Think Trump Is Helping Israel, You’re a Fool
First They Came for the Black Feminists
How Women Can Escape the Likability Trap
With Trump as President, the World Is Spiraling Into Chaos
To Understand Hong Kong, Don’t Think About Tiananmen
The Abrupt End of My Big-Girl Summer
From Trump Boom to Trump Gloom
What Are Trump and Netanyahu Afraid Of?
King Bibi Bows Before a Tweet
Ebola Could Be Eradicated — But Only if the World Works Together
The Online Mob Came for Me. What Happened to the Reckoning?
A German TV Star Takes On Bullies
Why Is Hollywood So Scared of Climate Change?
Solving Medical Mysteries With Your Help: Now on Netflix
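The two-step string cleanup in the answer above can be seen in isolation on a tiny synthetic payload (the embedded JSON here is made up for illustration; the real page embeds far more):

```python
import json

# Synthetic stand-in for the <script> tag's contents.
script_text = 'window.__preloadedData = {"initialState": {"story-1": {"promotionalHeadline": "Example Headline"}}};'

json_str = script_text.split('=', 1)[1].strip()  # drop "window.__preloadedData ="
json_str = json_str.rsplit(';', 1)[0]            # drop the trailing semicolon
payload = json.loads(json_str)

for key, value in payload['initialState'].items():
    print(value['promotionalHeadline'])
```

Splitting on the first '=' and the last ';' keeps any '=' or ';' inside the JSON itself intact.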
title = soup.find_all("span", "balanceHeadline")
replace it with
title = soup.find_all("span", {'class':'balanceHeadline'})
I'm trying to parse news articles from a sports website's HTML feed. I tried the following code, but I am getting a KeyError.
Code that I tried:
def get_cric_info_articles():
    cricinfo_article_link = "http://www.espncricinfo.com/ci/content/story/news.html"
    r = requests.get(cricinfo_article_link)
    cricinfo_article_html = r.text
    soup = BeautifulSoup(cricinfo_article_html, "html.parser")
    # print(soup.prettify())
    cric_info_items = soup.find_all("h2",
                                    {"class": "story-title"})
    cricinfo_article_dict = {}
    for div in cric_info_items:
        cricinfo_article_dict[div.find('a')['story-title']] = div.find('a')['href']
    return cricinfo_article_dict
error message:
KeyError: 'story-title'
The value you are looking for is inside the <a> tag:
import requests
from bs4 import BeautifulSoup

def get_cric_info_articles():
    cricinfo_article_link = "http://www.espncricinfo.com/ci/content/story/news.html"
    r = requests.get(cricinfo_article_link)
    cricinfo_article_html = r.text
    soup = BeautifulSoup(cricinfo_article_html, "html.parser")
    # print(soup.prettify())
    cric_info_items = soup.find_all("h2",
                                    {"class": "story-title"})
    cricinfo_article_dict = {}
    for div in cric_info_items:
        cricinfo_article_dict[div.find('a').string] = div.find('a')['href']
    return cricinfo_article_dict

print(get_cric_info_articles())
Output:
{'Bell-Drummond leads MCC in curtain-raiser': '/ci/content/story/1135157.html', 'Scotland pick Brad Wheal, Chris Sole for World Cup qualifiers': '/scotland/content/story/1135152.html', 'Newlands working to be water independent': '/southafrica/content/story/1135120.html', 'Scorchers bow out after Hurricanes pile up 210': '/australia/content/story/1135117.html', "'Strong evidence' of corruption in Ajman All Stars League - ICC ": '/ci/content/story/1135108.html', 'Du Plessis 120 powers South Africa to 269': '/south-africa-v-india-2018/content/story/1135099.html', "Plan is to expose India's middle, lower order - Harris": '/australia/content/story/1135091.html', 'Top order, King fire Scorchers into WBBL final': '/australia/content/story/1135084.html', 'Technical change brings prolific run for Mominul': '/bangladesh/content/story/1135077.html', 'Dhananjaya, Mendis lead strong Sri Lanka reply': '/bangladesh/content/story/1135075.html'}
div.find('a')['story-title'] won't give you the story title, since the <a> tag doesn't have that attribute; that's the reason you're getting the KeyError.
Use .text on the <a> tag, as the title is located here: <a> ... </a>.
for h2 in cric_info_items:
    cricinfo_article_dict[h2.find('a').text] = h2.find('a')['href']

for item in cricinfo_article_dict.items():
    print(item)
Output:
('Bell-Drummond leads MCC in curtain-raiser', '/ci/content/story/1135157.html')
('Scotland pick Brad Wheal, Chris Sole for World Cup qualifiers', '/scotland/content/story/1135152.html')
('Newlands working to be water independent', '/southafrica/content/story/1135120.html')
('Scorchers bow out after Hurricanes pile up 210', '/australia/content/story/1135117.html')
("'Strong evidence' of corruption in Ajman All Stars League - ICC ", '/ci/content/story/1135108.html')
('Du Plessis 120 powers South Africa to 269', '/south-africa-v-india-2018/content/story/1135099.html')
("Plan is to expose India's middle, lower order - Harris", '/australia/content/story/1135091.html')
('Top order, King fire Scorchers into WBBL final', '/australia/content/story/1135084.html')
('Technical change brings prolific run for Mominul', '/bangladesh/content/story/1135077.html')
('Dhananjaya, Mendis lead strong Sri Lanka reply', '/bangladesh/content/story/1135075.html')
Also, calling the same method (here, h2.find('a')) multiple times is not a good idea, as it takes more time. In this case it won't show any difference in run time, since the <h2> tag has only one child (<a>), but in cases where the parent has many children, it is better to save the tag you find in a variable and then reuse it. Something like this:
a = h2.find('a')
cricinfo_article_dict[a.text] = a['href']
EDIT:
To get the title, link, and image link, you can create a list of dictionaries, one per news item, with title, link, and image as the keys.
Try this:
cricinfo_article_list = []
for item in cric_info_items:
    item_dict = {}
    item_title = item.find('h2', {'class': 'story-title'}).find('a')
    item_dict['title'] = item_title.text
    item_dict['link'] = item_title['href']
    item_dict['image'] = item.find('img', {'class': 'img-full'})['src']
    cricinfo_article_list.append(item_dict)

for item in cricinfo_article_list:
    print(item)
Output:
{'title': 'Bell-Drummond leads MCC in curtain-raiser', 'link': '/ci/content/story/1135157.html', 'image': 'http://p.imgci.com/db/PICTURES/CMS/265700/265759.4.jpg'}
{'title': 'Scotland pick Brad Wheal, Chris Sole for World Cup qualifiers', 'link': '/scotland/content/story/1135152.html', 'image': 'http://p.imgci.com/db/PICTURES/CMS/251500/251571.4.jpg'}
{'title': 'Newlands working to be water independent', 'link': '/southafrica/content/story/1135120.html', 'image': 'http://p.imgci.com/db/PICTURES/CMS/271700/271734.5.jpg'}
{'title': 'Scorchers bow out after Hurricanes pile up 210', 'link': '/australia/content/story/1135117.html', 'image': 'http://p.imgci.com/db/PICTURES/CMS/271700/271769.5.jpg'}
{'title': "'Strong evidence' of corruption in Ajman All Stars League - ICC ", 'link': '/ci/content/story/1135108.html', 'image': 'http://p.imgci.com/db/PICTURES/CMS/263200/263221.4.jpg'}
{'title': 'Du Plessis 120 powers South Africa to 269', 'link': '/south-africa-v-india-2018/content/story/1135099.html', 'image': 'http://p.imgci.com/db/PICTURES/CMS/272700/272763.5.jpg'}
{'title': "Plan is to expose India's middle, lower order - Harris", 'link': '/australia/content/story/1135091.html', 'image': 'http://p.imgci.com/db/PICTURES/CMS/272300/272388.5.jpg'}
{'title': 'Top order, King fire Scorchers into WBBL final', 'link': '/australia/content/story/1135084.html', 'image': 'http://p.imgci.com/db/PICTURES/CMS/272700/272734.5.jpg'}
{'title': 'Technical change brings prolific run for Mominul', 'link': '/bangladesh/content/story/1135077.html', 'image': 'http://p.imgci.com/db/PICTURES/CMS/272700/272718.5.jpg'}
{'title': 'Dhananjaya, Mendis lead strong Sri Lanka reply', 'link': '/bangladesh/content/story/1135075.html', 'image': 'http://p.imgci.com/db/PICTURES/CMS/270600/270602.2.jpg'}
For some reason I am unable to extract the data from this simple HTML table.
from bs4 import BeautifulSoup
import requests

def main():
    html_doc = requests.get(
        'http://www.wolfson.cam.ac.uk/old-site/cgi/catering-menu?week=0;style=/0,vertical')
    soup = BeautifulSoup(html_doc.text, 'html.parser')
    table = soup.find('table')
    print(table)

if __name__ == '__main__':
    main()
I can get the table, but I cannot understand the BeautifulSoup documentation well enough to know how to extract the data. The data are in tr tags.
The website shows a simple HTML food menu.
I would like to output the day of the week and the menu for that day:
Monday:
Lunch: some_lunch, Supper: some_supper
Tuesday:
Lunch: some_lunch, Supper: some_supper
and so on for all the days of the week. 'Formal Hall' can be ignored.
How can I iterate over the tr tags so that I can create this output?
I normally don't provide direct solutions; you should try some code first and post here if you face an issue. But anyway, this is what I've written, and it should give you a head start.
soup = BeautifulSoup(r.content, "html.parser")
rows = soup.findAll("tr")
for i in range(1, 8):
    row = rows[i]
    print(row.find("th").text)
    for j in range(0, 2):
        print(rows[0].findAll("th")[j + 1].text.strip(), ": ", end="")
        td = row.findAll("td")[j]
        for p in td.findAll("p"):
            print(p.text, ",", end=" ")
        print()
    print()
Output will look something like this:
Monday
Lunch: Leek and Potato Soup, Spaghetti Bolognese with Garlic Bread, Red Pepper and Chickpea Stroganoff with Brown Rice, Chicken Goujons with Garlic Mayonnaise Dip, Vegetable Grills with Sweet Chilli Sauce, Coffee and Walnut Sponge with Custard,
Supper: Leek and Potato Soup, Breaded Haddock with Lemon and Tartare Sauce, Vegetable Samosa with Lentil Dahl, Chilli Beef Wraps, Steamed Strawberry Sponge with Custard,
Tuesday
Lunch: Tomato and Basil Soup, Pan-fried Harrisa Spiced Chicken with Roasted Vegetables, Vegetarian Spaghetti Bolognese with Garlic Bread, Jacket Potato with Various Fillings, Apple and Plum Pie with Custard,
Supper: Tomato and Basil Soup, Lamb Tagine with Fruit Couscous, Vegetable Biryani with Naan Bread, Pan-fried Turkey Escalope, Raspberry Shortbread,