How to convert an RSS file to XML in Python? - python
I need to convert the CNN RSS page (http://rss.cnn.com/rss/edition.rss) to an XML file, filter it down to the tags title, link and pubDate, and then export the result to a CSV file.
I tried some code, but it does not work: the result omits the pubDate.
I use this code:
# Python code to illustrate parsing of XML files
# importing the required modules
import csv
import requests
import xml.etree.ElementTree as ET

def loadRSS():
    # url of rss feed
    url = 'http://rss.cnn.com/rss/edition.rss'
    # creating HTTP response object from given url
    resp = requests.get(url)
    # saving the xml file
    with open('topnewsfeed.xml', 'wb') as f:
        f.write(resp.content)

def parseXML(xmlfile):
    # create element tree object
    tree = ET.parse(xmlfile)
    # get root element
    root = tree.getroot()
    # create empty list for news items
    newsitems = []
    # iterate news items
    for item in root.findall('./channel/item'):
        # empty news dictionary
        news = {}
        # copy each child tag (title, link, pubDate, ...) into the dictionary
        for child in item:
            # special handling for the namespaced media:content tag
            if child.tag == '{http://search.yahoo.com/mrss/}content':
                news['media'] = child.attrib['url']
            else:
                news[child.tag] = child.text
        # append news dictionary to news items list
        newsitems.append(news)
    # return news items list
    return newsitems

def savetoCSV(newsitems, filename):
    # specifying the fields for csv file
    fields = ['title', 'pubDate', 'description', 'link', 'media']
    # writing to csv file
    with open(filename, 'w') as csvfile:
        # creating a csv dict writer object; ignore tags not listed in fields (e.g. guid)
        writer = csv.DictWriter(csvfile, fieldnames=fields, extrasaction='ignore')
        # writing headers (field names)
        writer.writeheader()
        # writing data rows
        writer.writerows(newsitems)

def main():
    # load rss from web to update existing xml file
    loadRSS()
    # parse xml file
    newsitems = parseXML('topnewsfeed.xml')
    # store news items in a csv file
    savetoCSV(newsitems, 'topnews.csv')

if __name__ == "__main__":
    # calling main function
    main()
I tried to adjust the parameters, and the result is this: CNN shows the RSS as a web page, not as XML, unlike for example Reddit:
http://www.reddit.com/.rss
http://rss.cnn.com/rss/edition.rss
Any idea how to obtain this information?
The XML entry for the RSS feed you mentioned is pubdate, not pubDate with a capital D.
If the issue is that pubdate isn't being included, that might be part of the problem.
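ElementTree tag lookups are case-sensitive, so the quickest way to debug this is to list the child tags of the first item and check the exact casing the feed uses. A minimal sketch of the idea, using a small inline sample in place of the downloaded feed (the sample XML here is invented for illustration):

```python
import xml.etree.ElementTree as ET

# A small inline sample standing in for the downloaded topnewsfeed.xml.
sample = """<rss><channel><item>
<title>Example headline</title>
<link>http://example.com/story</link>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
</item></channel></rss>"""

root = ET.fromstring(sample)
item = root.find('./channel/item')

# Listing the child tags shows the exact spelling the feed uses.
print([child.tag for child in item])

# find() is case-sensitive: 'pubdate' misses, 'pubDate' matches here.
assert item.find('pubdate') is None
pub = item.find('pubDate')
print(pub.text)  # Mon, 01 Jan 2024 00:00:00 GMT
```

Running the same tag listing against the real saved feed shows which spelling to use in find() and in the CSV field names.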
Related
Iterate through an HTML file and extract data to a CSV file
I have searched high and low for a solution, but none have quite fit what I need to do. I have an HTML page that is saved as a file, let's call it sample.html, and I need to extract recurring JSON data from it. I need to get the info from these files regularly, so the number of objects changes every time. A single object would be considered as {"SpecificIdent":2588,"SpecificNum":29,"Meter":0,"Power":0.0,"WPower":null,"SNumber":"","isI":false}. I need to get each of the values into a CSV file, with the column headings SpecificIdent, SpecificNum, Meter, Power, WPower, SNumber, isI, and the associated data as the rows. I apologize if this is a basic question in Python, but I am pretty new to it and cannot fathom the best way to do this. Any assistance would be greatly appreciated. Kind regards, A. An example file is as follows:

<html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">[{"SpecificIdent":2588,"SpecificNum":29,"Meter":0,"Power":0.0,"WPower":null,"SNumber":"","isI":false},{"SpecificIdent":3716,"SpecificNum":39,"Meter":1835,"Power":11240.0,"WPower":null,"SNumber":"0703-403548","isI":false},{"SpecificIdent":6364,"SpecificNum":27,"Meter":7768,"Power":29969.0,"WPower":null,"SNumber":"467419","isI":false},{"SpecificIdent":6583,"SpecificNum":51,"Meter":7027,"Power":36968.0,"WPower":null,"SNumber":"JE1449-521248","isI":false},{"SpecificIdent":6612,"SpecificNum":57,"Meter":12828,"Power":53918.0,"WPower":null,"SNumber":"JE1509-534327","isI":false},{"SpecificIdent":7139,"SpecificNum":305,"Meter":6264,"Power":33101.0,"WPower":null,"SNumber":"JE1449-521204","isI":false},{"SpecificIdent":7551,"SpecificNum":116,"Meter":0,"Power":21569.0,"WPower":null,"SNumber":"JE1449-521252","isI":false},{"SpecificIdent":7643,"SpecificNum":56,"Meter":7752,"Power":40501.0,"WPower":null,"SNumber":"JE1449-521200","isI":false},{"SpecificIdent":8653,"SpecificNum":49,"Meter":0,"Power":0.0,"WPower":null,"SNumber":"","isI":false},{"SpecificIdent":9733,"SpecificNum":142,"Meter":0,"Power":0.0,"WPower":null,"SNumber":"","isI":false},{"SpecificIdent":10999,"SpecificNum":20,"Meter":7723,"Power":6987.0,"WPower":null,"SNumber":"JE1608-625534","isI":false},{"SpecificIdent":12086,"SpecificNum":24,"Meter":0,"Power":0.0,"WPower":null,"SNumber":"","isI":false},{"SpecificIdent":14590,"SpecificNum":35,"Meter":394,"Power":10941.0,"WPower":null,"SNumber":"BN1905-944799","isI":false},{"SpecificIdent":14954,"SpecificNum":100,"Meter":0,"Power":0.0,"WPower":null,"SNumber":"517163","isI":false},{"SpecificIdent":14995,"SpecificNum":58,"Meter":0,"Power":38789.0,"WPower":null,"SNumber":"JE1444-511511","isI":false},{"SpecificIdent":15245,"SpecificNum":26,"Meter":0,"Power":0.0,"WPower":null,"SNumber":"430149","isI":false},{"SpecificIdent":18824,"SpecificNum":55,"Meter":8236,"Power":31358.0,"WPower":null,"SNumber":"0703-310839","isI":false},{"SpecificIdent":20745,"SpecificNum":41,"Meter":0,"Power":60963.0,"WPower":null,"SNumber":"JE1447-517260","isI":false},{"SpecificIdent":31584,"SpecificNum":11,"Meter":0,"Power":3696.0,"WPower":null,"SNumber":"467154","isI":false},{"SpecificIdent":32051,"SpecificNum":40,"Meter":7870,"Power":13057.0,"WPower":null,"SNumber":"JE1608-625593","isI":false},{"SpecificIdent":32263,"SpecificNum":4,"Meter":0,"Power":0.0,"WPower":null,"SNumber":"","isI":false},{"SpecificIdent":33137,"SpecificNum":132,"Meter":5996,"Power":26650.0,"WPower":null,"SNumber":"459051","isI":false},{"SpecificIdent":33481,"SpecificNum":144,"Meter":4228,"Power":16136.0,"WPower":null,"SNumber":"JE1603-617807","isI":false},{"SpecificIdent":33915,"SpecificNum":145,"Meter":5647,"Power":3157.0,"WPower":null,"SNumber":"JE1518-549610","isI":false},{"SpecificIdent":36051,"SpecificNum":119,"Meter":2923,"Power":12249.0,"WPower":null,"SNumber":"135493","isI":false},{"SpecificIdent":37398,"SpecificNum":21,"Meter":58,"Power":5540.0,"WPower":null,"SNumber":"BN1925-982761","isI":false},{"SpecificIdent":39024,"SpecificNum":50,"Meter":7217,"Power":38987.0,"WPower":null,"SNumber":"JE1445-511599","isI":false},{"SpecificIdent":39072,"SpecificNum":59,"Meter":5965,"Power":32942.0,"WPower":null,"SNumber":"JE1449-521199","isI":false},{"SpecificIdent":40601,"SpecificNum":9,"Meter":0,"Power":59655.0,"WPower":null,"SNumber":"JE1447-517150","isI":false},{"SpecificIdent":40712,"SpecificNum":37,"Meter":0,"Power":5715.0,"WPower":null,"SNumber":"JE1502-525840","isI":false},{"SpecificIdent":41596,"SpecificNum":53,"Meter":8803,"Power":60669.0,"WPower":null,"SNumber":"JE1503-527155","isI":false},{"SpecificIdent":50276,"SpecificNum":30,"Meter":2573,"Power":4625.0,"WPower":null,"SNumber":"JE1545-606334","isI":false},{"SpecificIdent":51712,"SpecificNum":69,"Meter":0,"Power":0.0,"WPower":null,"SNumber":"","isI":false},{"SpecificIdent":56140,"SpecificNum":10,"Meter":5169,"Power":26659.0,"WPower":null,"SNumber":"JE1547-609024","isI":false},{"SpecificIdent":56362,"SpecificNum":6,"Meter":0,"Power":0.0,"WPower":null,"SNumber":"","isI":false},{"SpecificIdent":58892,"SpecificNum":113,"Meter":0,"Power":0.0,"WPower":null,"SNumber":"","isI":false},{"SpecificIdent":65168,"SpecificNum":5,"Meter":12739,"Power":55833.0,"WPower":null,"SNumber":"JE1449-521284","isI":false},{"SpecificIdent":65255,"SpecificNum":60,"Meter":5121,"Power":27784.0,"WPower":null,"SNumber":"JE1449-521196","isI":false},{"SpecificIdent":65665,"SpecificNum":47,"Meter":11793,"Power":47576.0,"WPower":null,"SNumber":"JE1509-534315","isI":false},{"SpecificIdent":65842,"SpecificNum":8,"Meter":10783,"Power":46428.0,"WPower":null,"SNumber":"JE1509-534401","isI":false},{"SpecificIdent":65901,"SpecificNum":22,"Meter":0,"Power":0.0,"WPower":null,"SNumber":"","isI":false},{"SpecificIdent":65920,"SpecificNum":17,"Meter":9316,"Power":38242.0,"WPower":null,"SNumber":"JE1509-534360","isI":false},{"SpecificIdent":66119,"SpecificNum":43,"Meter":12072,"Power":52157.0,"WPower":null,"SNumber":"JE1449-521259","isI":false},{"SpecificIdent":70018,"SpecificNum":34,"Meter":11172,"Power":49706.0,"WPower":null,"SNumber":"JE1449-521285","isI":false},{"SpecificIdent":71388,"SpecificNum":54,"Meter":6947,"Power":36000.0,"WPower":null,"SNumber":"JE1445-512406","isI":false},{"SpecificIdent":71892,"SpecificNum":36,"Meter":15398,"Power":63691.0,"WPower":null,"SNumber":"JE1447-517256","isI":false},{"SpecificIdent":72600,"SpecificNum":38,"Meter":14813,"Power":62641.0,"WPower":null,"SNumber":"JE1447-517189","isI":false},{"SpecificIdent":73645,"SpecificNum":2,"Meter":0,"Power":0.0,"WPower":null,"SNumber":"","isI":false},{"SpecificIdent":77208,"SpecificNum":28,"Meter":0,"Power":0.0,"WPower":null,"SNumber":"","isI":false},{"SpecificIdent":77892,"SpecificNum":15,"Meter":0,"Power":0.0,"WPower":null,"SNumber":"","isI":false},{"SpecificIdent":78513,"SpecificNum":31,"Meter":6711,"Power":36461.0,"WPower":null,"SNumber":"JE1445-511601","isI":false},{"SpecificIdent":79531,"SpecificNum":18,"Meter":0,"Power":0.0,"WPower":null,"SNumber":"","isI":false}]</pre></body></html>

I have tried examples from bs4, jsontoxml, and others, but I am sure there is a simple way to iterate and extract this?
I would harness Python's standard library in the following way:

import csv
import json
from html.parser import HTMLParser

class MyHTMLParser(HTMLParser):
    def handle_data(self, data):
        if data.strip():
            self.data = data

parser = MyHTMLParser()
with open("sample.html", "r") as f:
    parser.feed(f.read())

with open('sample.csv', 'w', newline='') as csvfile:
    fieldnames = ['SpecificIdent', 'SpecificNum', 'Meter', 'Power', 'WPower', 'SNumber', 'isI']
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(json.loads(parser.data))

which creates a file starting with the following lines:

SpecificIdent,SpecificNum,Meter,Power,WPower,SNumber,isI
2588,29,0,0.0,,,False
3716,39,1835,11240.0,,0703-403548,False
6364,27,7768,29969.0,,467419,False
6583,51,7027,36968.0,,JE1449-521248,False
6612,57,12828,53918.0,,JE1509-534327,False
7139,305,6264,33101.0,,JE1449-521204,False
7551,116,0,21569.0,,JE1449-521252,False
7643,56,7752,40501.0,,JE1449-521200,False
8653,49,0,0.0,,,False

Disclaimer: this assumes the JSON array you want is the last non-empty text element (i.e. it contains at least one non-whitespace character).
There is a Python library called BeautifulSoup that you could use to parse the whole HTML file:

# pip install bs4
from bs4 import BeautifulSoup

html = BeautifulSoup(your_html)

From here on, you can perform any actions upon the html. In your case, you just need to find the <pre> element and get its contents. This can be achieved easily:

pre = html.body.find('pre')
text = pre.text

Finally, you need to parse the text, which appears to be JSON. You can do this with Python's built-in json library:

import json

result = json.loads(text)

Now we need to convert this to a CSV file, which can be done with the csv library:

import csv

with open('GFG', 'w') as f:
    writer = csv.DictWriter(f, fieldnames=[
        "SpecificIdent", "SpecificNum", "Meter", "Power",
        "WPower", "SNumber", "isI"
    ])
    writer.writeheader()
    writer.writerows(result)

Putting it all together, your code should look something like this:

from bs4 import BeautifulSoup
import json
import csv

with open('raw.html', 'r') as f:
    raw = f.read()

html = BeautifulSoup(raw)
pre = html.body.find('pre')
text = pre.text
result = json.loads(text)

with open('result.csv', 'w') as f:
    writer = csv.DictWriter(f, fieldnames=[
        "SpecificIdent", "SpecificNum", "Meter", "Power",
        "WPower", "SNumber", "isI"
    ])
    writer.writeheader()
    writer.writerows(result)
How to append multiple values under a CSV file header with Python
This is my code, and I can't append values under 'Title, Ingredients, instructions, nutrients, Image, link':

from recipe_scrapers import scrape_me
import requests
from recipe_scrapers import scrape_html
from csv import writer

with open('recipe.csv', 'w', encoding='utf8', newline='') as file:
    # create new CSV file and write a header named Title, Ingredients, instructions, nutrients, Image, link
    thewriter = writer(file)
    header = ['Title', 'Ingredients', 'Instructions', 'Nutrition_Facts', 'image', 'links']
    thewriter.writerow(header)

url = "https://www.allrecipes.com/recipe/220751/quick-chicken-piccata/"
html = requests.get(url).content
scraper = scrape_html(html=html, org_url=url)

for scrap in scraper:
    # this loop adds the Title, Ingredients, instructions, nutrients, Image, link values
    info = ['title, Ingredients, instructions, nutrients,Image,link']
    thewriter.writerow(info)
    Title = scraper.title()
    Ingredients = scraper.ingredients()
    instructions = scraper.instructions()
    nutrients = scraper.nutrients()
    Image = scraper.image()
    link = scraper.links()
    print(scrap)

How can I solve this?
There are a number of problems with your code.

Firstly, your indentation is off. You are creating the thewriter variable in one code block and then trying to access it in a different code block. To fix this, indent all the code below your with open statement to the same level.

Secondly, according to the recipe-scrapers docs, scraper is an AllRecipesCurated object that cannot be iterated, so your line for scrap in scraper: makes no sense: you are trying to iterate over a non-iterable object, which will give you an error.

Finally, these two lines:

info = ['title, Ingredients, instructions, nutrients,Image,link']
thewriter.writerow(info)

mean that you will always have the heading written into your file, not the data you get from calling the URL. You should instead write the data you extract from the URL:

thewriter.writerow([scraper.title(), scraper.ingredients(), scraper.instructions(),
                    scraper.nutrients(), scraper.image(), scraper.links()])

Here is the full code, fixed. You should be able to get the correct results using it:

import requests
from recipe_scrapers import scrape_html
from csv import writer

with open('recipe.csv', 'w', encoding='utf8', newline='') as file:
    # create new CSV file and write the header
    thewriter = writer(file)
    header = ['Title', 'Ingredients', 'Instructions', 'Nutrition_Facts', 'image', 'links']
    thewriter.writerow(header)

    url = "https://www.allrecipes.com/recipe/220751/quick-chicken-piccata/"
    html = requests.get(url).content
    scraper = scrape_html(html=html, org_url=url)

    thewriter.writerow([scraper.title(), scraper.ingredients(), scraper.instructions(),
                        scraper.nutrients(), scraper.image(), scraper.links()])
Read JSON file correctly
I am trying to read a JSON file (BioRelEx dataset: https://github.com/YerevaNN/BioRelEx/releases/tag/1.0alpha7) in Python. The JSON file is a list of objects, one per sentence. This is how I try to do it:

def _read(self, file_path):
    with open(cached_path(file_path), "r") as data_file:
        for line in data_file.readlines():
            if not line:
                continue
            items = json.loads(lines)
            text = items["text"]
            label = items.get("label")

My code is failing on items = json.loads(line). It looks like the data is not formatted as the code expects it to be, but how can I change it?

Thanks in advance for your time!
Best, Julia
With json.load() you don't need to read each line; you can do either of these:

import json

def open_json(path):
    with open(path, 'r') as file:
        return json.load(file)

data = open_json('./1.0alpha7.dev.json')

Or, even cooler, you can GET-request the JSON from GitHub:

import json
import requests

url = 'https://github.com/YerevaNN/BioRelEx/releases/download/1.0alpha7/1.0alpha7.dev.json'
response = requests.get(url)
data = response.json()

These will both give the same output. The data variable will be a list of dictionaries that you can iterate over in a for loop and do your further processing.
Your code is reading one line at a time and parsing each line individually as JSON. Unless the creator of the file wrote it in this format (which, given it has a .json extension, is unlikely), that won't work, as JSON does not use line breaks to indicate the end of an object. Load the whole file content as JSON instead, then process the resulting items in the array:

def _read(self, file_path):
    with open(cached_path(file_path), "r") as data_file:
        data = json.load(data_file)
        for item in data:
            text = item["text"]

label appears to be buried in item["interaction"].
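The distinction both answers turn on, one JSON document versus one JSON object per line (JSON Lines), can be sketched like this (the two inline strings are invented stand-ins for file contents):

```python
import json
import io

# A regular .json file: one document, here a list of objects.
whole_doc = '[{"text": "a", "label": 1}, {"text": "b", "label": 0}]'

# A JSON Lines file: one complete object per line.
json_lines = '{"text": "a", "label": 1}\n{"text": "b", "label": 0}\n'

# Regular JSON: parse the whole content at once.
items = json.load(io.StringIO(whole_doc))

# JSON Lines: parse each non-empty line separately.
items_jl = [json.loads(line) for line in io.StringIO(json_lines) if line.strip()]

# Both approaches yield the same list of dictionaries,
# but only when each is applied to the matching format.
assert items == items_jl
```

Calling json.loads on one line of a regular JSON file fails precisely because that line is only a fragment of the document, which is the error the question describes.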
Saving JSON data to a text file and using it in Tkinter
I have used an API with Python to call out to NewsAPI, get all the latest news that I need, and save it into a text file called "NewsAPI.txt". My code is:

import json
import requests

def newsAPI():
    url = ('https://newsapi.org/v2/everything?'  # API URL
           'q=procurement AND tender&'           # keywords on procurement AND tender
           'sortBy=popularity&'                  # sort them by popularity
           'apiKey=***')                         # personal API key

    # GET
    response = requests.get(url)
    # storing the output into variable "results"
    results = response.json()
    # save the JSON output into a txt file for future usage
    with open("NewsAPI.txt", "w") as text_file:
        json.dump(results, text_file)

After calling json.dump, the output gets saved into the "NewsAPI.txt" file as mentioned. But I'm having trouble putting it into a treeview in Tkinter; or am I using the wrong widget to display it?

Output data:
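One way to bridge the saved file and a Tkinter ttk.Treeview is to first flatten the response into plain rows; the Treeview then only needs one insert call per row. A hedged sketch: the "articles", "source", "title" and "publishedAt" keys follow NewsAPI's documented response shape, the sample response below is invented, and the Treeview part is left as comments so the flattening can be run on its own:

```python
import json

def articles_to_rows(results):
    """Flatten a NewsAPI 'everything' response into (source, title, publishedAt) rows."""
    rows = []
    for article in results.get("articles", []):
        rows.append((
            article.get("source", {}).get("name", ""),
            article.get("title", ""),
            article.get("publishedAt", ""),
        ))
    return rows

# Invented example response in the shape NewsAPI returns.
# In the question's setup this would come from json.load(open("NewsAPI.txt")).
results = {"status": "ok", "articles": [
    {"source": {"name": "Example News"}, "title": "Tender awarded",
     "publishedAt": "2024-01-01T00:00:00Z"}]}
rows = articles_to_rows(results)
print(rows)

# Feeding the rows into a Treeview would then look like:
# import tkinter as tk
# from tkinter import ttk
# root = tk.Tk()
# tree = ttk.Treeview(root, columns=("source", "title", "published"), show="headings")
# for name in ("source", "title", "published"):
#     tree.heading(name, text=name)
# for row in rows:
#     tree.insert("", "end", values=row)
# tree.pack()
# root.mainloop()
```

A Treeview with show="headings" is a reasonable widget choice here, since the news items are flat tabular records rather than a hierarchy.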
Read data from api and populate .csv bug
I am trying to write a script (Python 2.7.11, Windows 10) to collect data from an API and append it to a CSV file. The API I want to use returns data in JSON. It limits the number of displayed records, though, and pages them: there is a maximum number of records you can get with a single query, and then you have to run another query, changing the page number. The API informs you about the number of pages a dataset is divided into. Let's assume that the max number of records per page is 100 and the number of pages is 2. My script:

import json
import urllib2
import csv

url = "https://some_api_address?page="
limit = "&limit=100"
myfile = open('C:\Python27\myscripts\somefile.csv', 'ab')

def api_iterate():
    for i in xrange(1, 2, 1):
        parse_url = url,(i),limit
        json_page = urllib2.urlopen(parse_url)
        data = json.load(json_page)
        for item in data['someobject']:
            print item['some_item1'], ['some_item2'], ['some_item3']
        f = csv.writer(myfile)
        for row in data:
            f.writerow([str(row)])

This does not seem to work: it creates a CSV file, but the file is not populated. There is obviously something wrong with the part of the script that builds the query address, the part that reads the JSON, or the part that writes the query results to CSV; or all of them. I have tried using other resources and tutorials, but at some point I got stuck, and I would appreciate your assistance.
The URL you have given provides a link to the next page as one of the objects. You can use this to iterate automatically over all of the pages. The script below gets each page, extracts two of the entries from the Dataobject array, and writes them to an output.csv file:

import json
import urllib2
import csv

def api_iterate(myfile):
    url = "https://api-v3.mojepanstwo.pl/dane/krs_osoby"
    csv_myfile = csv.writer(myfile)
    cols = ['id', 'url']
    csv_myfile.writerow(cols)    # Write a header

    while True:
        print url
        json_page = urllib2.urlopen(url)
        data = json.load(json_page)
        json_page.close()

        for data_object in data['Dataobject']:
            csv_myfile.writerow([data_object[col] for col in cols])

        try:
            url = data['Links']['next']    # Get the next url
        except KeyError as e:
            break

with open(r'e:\python temp\output.csv', 'wb') as myfile:
    api_iterate(myfile)

This will give you an output file looking something like:

id,url
1347854,https://api-v3.mojepanstwo.pl/dane/krs_osoby/1347854
1296239,https://api-v3.mojepanstwo.pl/dane/krs_osoby/1296239
705217,https://api-v3.mojepanstwo.pl/dane/krs_osoby/705217
802970,https://api-v3.mojepanstwo.pl/dane/krs_osoby/802970
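One detail in the original script worth calling out separately: parse_url = url,(i),limit does not concatenate strings. The commas build a 3-element tuple, which urlopen cannot open. A small sketch of the difference, using the question's placeholder URL:

```python
url = "https://some_api_address?page="
limit = "&limit=100"
i = 1

# The comma-separated form from the question builds a tuple, not a URL string.
parse_url = url, (i), limit
assert isinstance(parse_url, tuple)

# String formatting builds the intended query address instead.
fixed_url = "{}{}{}".format(url, i, limit)
assert fixed_url == "https://some_api_address?page=1&limit=100"
print(fixed_url)
```

str.format is used here rather than an f-string because the question targets Python 2.7, where f-strings do not exist but format works the same way.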