pandas/dask csv multiple line read - python

I have a CSV like this:
name,sku,description
Bryce Jones,lay-raise-best-end,"Art community floor adult your single type. Per back community former stock thing."
John Robinson,cup-return-guess,Produce successful hot tree past action young song. Himself then tax eye little last state vote. Country down list that speech economy leave.
Theresa Taylor,step-onto,"Choice should lead budget task. Author best mention.
Often stuff professional today allow after door instead. Model seat fear evidence. Now sing opportunity feeling no season show."
That whole multi-line block is the value of the description column in the 3rd row.
But when I read it with
df = ddf.read_csv(
    file_path, blocksize=2000, engine="python", encoding='utf-8-sig',
    quotechar='"', delimiter='[,]', quoting=csv.QUOTE_MINIMAL,
)
it comes out like this:
['Bryce Jones', 'lay-raise-best-end', '"Art community floor adult your single type. Per back community former stock thing."']
['John Robinson', 'cup-return-guess', 'Produce successful hot tree past action young song. Himself then tax eye little last state vote. Country down list that speech economy leave.']
['Theresa Taylor', 'step-onto', '"Choice should lead budget task. Author best mention.']
['Often stuff professional today allow after door instead. Model seat fear evidence. Now sing opportunity feeling no season show."', None, None]
How can I read this correctly?

1
You can use a double line break between rows and a single line break inside text fields, and pandas will understand. So the CSV will be:
name,sku,description
Bryce Jones,lay-raise-best-end,"Art community floor adult your single type. Per back community former stock thing."
John Robinson,cup-return-guess,Produce successful hot tree past action young song. Himself then tax eye little last state vote. Country down list that speech economy leave.
Theresa Taylor,step-onto,"Choice should lead budget task. Author best mention.
Often stuff professional today allow after door instead. Model seat fear evidence. Now sing opportunity feeling no season show."
And here is how you read it:
df = pd.read_csv(filepath)  # you can keep the other parameters if you want
The output is:
name sku \
0 Bryce Jones lay-raise-best-end
1 John Robinson cup-return-guess
2 Theresa Taylor step-onto
description
0 Art community floor adult your single type. Pe...
1 Produce successful hot tree past action young ...
2 Choice should lead budget task. Author best me...
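Note that standard CSV quoting already handles embedded newlines when the delimiter is a plain single character; the regex-style delimiter='[,]' in the question disables that. A minimal stdlib check (the two-row data here is made up for illustration):

```python
import csv
import io

# A quoted field containing a newline parses as ONE row when the
# delimiter is a plain comma and quoting is left at its default.
data = 'name,sku,description\nBryce,lay-raise,"line one\nline two"\n'
rows = list(csv.reader(io.StringIO(data)))

print(rows[1][2])  # the multi-line description, kept as a single value
```

With dask the same applies: pass sep=',' (or omit it) instead of delimiter='[,]'. Additionally, blocksize splitting can cut a quoted field in half, so blocksize=None (one partition per file) is the safe setting for multi-line fields.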
2
Use \n where you need linebreaks.
name,sku,description
Bryce Jones,lay-raise-best-end,"Art community floor adult your single type. Per back community former stock thing."
John Robinson,cup-return-guess,Produce successful hot tree past action young song. Himself then tax eye little last state vote. Country down list that speech economy leave.
Theresa Taylor,step-onto,"Choice should lead budget task. Author best mention.\nOften stuff professional today allow after door instead. Model seat fear evidence. Now sing opportunity feeling no season show."
While reading, use Python's codecs library.
import codecs
df = pd.read_csv('../../data/stack.csv')
print(codecs.decode(df.iloc[2,2], 'unicode_escape'))
Output:
Choice should lead budget task. Author best mention.
Often stuff professional today allow after door instead. Model seat fear evidence. Now sing opportunity feeling no season show.
We had to use codecs.decode() because the \n in the file is two literal characters (a backslash and an n), and pandas reads them as-is; decoding turns them back into a real line break. Without the print() function you will not see the line break, though.
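To see the decode step in isolation, here is a minimal sketch; the string stands in for df.iloc[2,2] after pandas has read the literal backslash-n from the file:

```python
import codecs

# What pandas hands back: a literal backslash followed by 'n' (two characters).
raw = "Choice should lead budget task.\\nAuthor best mention."

decoded = codecs.decode(raw, "unicode_escape")
print(decoded)  # the \n is now a real line break, printed on two lines
```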

Related

Write a program to replace each key phrase in the following list with an underscore in the given text:

list_of_keyphrases = ['Prince Charles', 'Prince William', 'Meghan Markle', 'United Kingdom', 'North America', 'Duke and Duchess of Sussex', 'Queen Elizabeth II']
text = 'On January 8, Prince Harry and Meghan Markle, the Duke and Duchess of Sussex, unveiled their controversial plan to walk away from royal roles. We intend to step back as ‘senior’ members of the royal family and work to become financially independent while continuing to fully support her majesty the queen, they said in a joint statement. We now plan to balance our time between the United Kingdom and North America, continuing to honor our duty to the Queen, the commonwealth and our patronages. This geographic balance will enable us to raise our son with an appreciation for the royal tradition into which he was born, while also providing our family with the space to focus on the next chapter, including the launch of our new charitable entity, the statement added. Apparently, the announcement on the Sussex Royal Instagram page blindsided the Queen and other family members who had no idea it was coming, it sent tabloids into overdrive. Meanwhile, the Queen summoned Senior Royals to an emergency summit to discuss the future of the Duke and Duchess of Sussex. Billed as the Sandringham summit, the meeting took place at the Queen's estate in Norfolk and involved Queen Elizabeth II, Harry his father, Prince Charles and his brother Prince William, with Meghan Markle reportedly joining the Discussions by phone from Canada. Soon after, the queen released a statement, that said, My family and I are entirely supportive of Harry and Meghan Markle desire to create a new life as a young family. Although we would have preferred them to remain full-time working members of the Royal family, we respect and understand their wish to live a more independent life as a family while remaining a valued part of my family.'
import re
for i in list_of_keyphrases:
    if i in text:
        text = text.replace(i, "_")
print(text)
Output:
On January 8, Prince Harry and _, the _, unveiled their controversial plan to walk away from royal roles. We intend to step back as ‘senior’ members of the royal family and work to become financially independent while continuing to fully support her majesty the queen, they said in a joint statement. We now plan to balance our time between the _ and _, continuing to honor our duty to the Queen, the commonwealth and our patronages. This geographic balance will enable us to raise our son with an appreciation for the royal tradition into which he was born, while also providing our family with the space to focus on the next chapter, including the launch of our new charitable entity, the statement added. Apparently, the announcement on the Sussex Royal Instagram page blindsided the Queen and other family members who had no idea it was coming, it sent tabloids into overdrive. Meanwhile, the Queen summoned Senior Royals to an emergency summit to discuss the future of the Duke and Duchess of Sussex. Billed as the Sandringham summit, the meeting took place at the Queen's estate in Norfolk and involved _, Harry his father, _ and his brother _, with _ reportedly joining the discussions by phone from Canada. Soon after, the queen released a statement, that said, My family and I are entirely supportive of Harry and _ desire to create a new life as a young family. Although we would have preferred them to remain full-time working members of the Royal family, we respect and understand their wish to live a more independent life as a family while remaining a valued part of my family.
I want the output to have the names from the list that are present in the string replaced with underscores...
You imported the regular expressions library but never used it.
The re.sub() function lets you change strings the way you want here:
re.sub(pattern_to_change, replacement, original_text)
The first parameter can be a regular expression, but in this case the plain phrases work:
for i in list_of_keyphrases:
    text = re.sub(i, "_", text)
print(text)
This should work. The output goes like this:
On January 8, Prince Harry and _, the _, unveiled their controversial
plan to walk away from royal roles. We...
However, if you don't want to use the re library, you can simply use the replace method. In that case, change it to:
for i in list_of_keyphrases:
    text = text.replace(i, "_")
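One caveat either way: these phrases contain no regex metacharacters, but if a phrase ever did (a '.', '(', etc.), re.sub would misinterpret it. re.escape guards against that, and joining the escaped phrases into one alternation replaces everything in a single pass (a sketch with a shortened phrase list):

```python
import re

list_of_keyphrases = ['Prince Charles', 'Meghan Markle']
text = 'Prince Charles spoke before Meghan Markle arrived.'

# Escape each phrase, then join them into one pattern: phrase1|phrase2|...
pattern = "|".join(re.escape(k) for k in list_of_keyphrases)
text = re.sub(pattern, "_", text)
print(text)  # -> _ spoke before _ arrived.
```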

How to look up information from a website with python

I want to know if it's possible to write code in Python that will let me look up information from an online source and add it to my code as a dictionary. (I want to use this so I have a dictionary consisting of all the spells listed on the Harry Potter wiki as the keys and their descriptions as the associated values (https://harrypotter.fandom.com/wiki/List_of_spells).)
I am a beginning Python student and really don't know how to start. I guess I could copy the information into a text file and manipulate it from there, but I really want it to change should the online source change, etc.
You can grab the wiki data here and parse it:
https://harrypotter.fandom.com/wiki/List_of_spells?action=edit
It looks like the spells follow the same format, making them easy to parse. They are separated by new lines, so you can split the data by \n and parse each line out. There seem to be two different types of spell entries, ones that start with '|' and others that start with ':', so you have to parse each type differently. Does that help you get started?
===''[[Water-Making Spell|Aguamenti]]'' (Water-Making Spell)===
[[File:Aguamenti.gif|235px|thumb]]
{{spell sum
|t=Charm, Conjuration
|p=AH-gwah-MEN-tee
|d=Produces a clean, drinkable jet of water from the wand tip.
|sm=Used by [[Fleur Delacour]] in [[1994]] to extinguish her skirt, which had caught flame during a fight against a [[dragon]]. [[Harry Potter|Harry]] used this spell twice in [[1997]], both on the same night; once to attempt to provide a drink for [[Albus Dumbledore|Dumbledore]], then again to help douse [[Rubeus Hagrid|Hagrid]]'s hut after it was set aflame by [[Thorfinn Rowle]], who used the [[Fire-Making Spell]].
|e=Possibly a hybrid of [[Latin]] words ''aqua'', which means "water", and ''mentis'', which means "mind".}}
===''[[Alarte Ascendare]]''===
[[File:Alarte Ascendare.gif|250px|thumb]]
{{spell sum
|t=Charm
|p=a-LAR-tay a-SEN-der-ay
|d=Shoots the target high into the air.
|sm=Used by [[Gilderoy Lockhart]] in [[Harry Potter and the Chamber of Secrets (film)|1992]]
|e=''Ascendere'' is a [[Latin]] infinitive meaning "to go up,""to climb," "to embark," "to rise(figuratively);" this is the origin of the English word "ascend".}}
===([[Albus Dumbledore's forceful spell|Albus Dumbledore's Forceful Spell]])===
:'''Type:''' Spell
:'''Description:''' This spell was, supposedly, quite powerful as when it was cast, [[Tom Riddle|the opponent]] was forced to conjure a [[Silver shield spell|silver shield]] to deflect it.
:'''Seen/Mentioned:''' This incantation was used only once throughout the series, and that was by Dumbledore in the [[British Ministry of Magic|Ministry of Magic]], immediately following the [[Battle of the Department of Mysteries]] on [[17 June]], [[1996]], while he duelled Voldemort.
===''[[Unlocking Charm|Alohomora]]'' (Unlocking Charm)===
[[File:Unlocking charm1.gif|235px|thumb]]
:'''Type:''' Charm
:'''Pronunciation:''' ah-LOH-ho-MOR-ah
:'''Description:''' Unlocks doors and other objects. It can also unlock doors that have been sealed with a [[Locking Spell]], although it is possible to bewitch doors to become unaffected by this spell.
:'''Seen/Mentioned:''' Used by [[Hermione Granger]] in [[1991]] to allow [[Trio|her and her friends]] to access the [[Third-floor corridor]]] at [[Hogwarts School of Witchcraft and Wizardry|her school]], which was at the time forbidden; she used it again two years later to free [[Sirius Black|Sirius]]'s cell in [[Filius Flitwick's office|her teacher's prison room]].
:'''Etymology:''' The incantation is derived from the West African Sidiki dialect used in geomancy; it means "friendly to thieves", as stated by [[J. K. Rowling|the author]] in testimony during a court case.<ref name=alomohoracourt>{{cite web |url=http://cyberlaw.stanford.edu/files/blogs/Trial%20Transcript%20Day%201.txt |title=Warner Bros Entertainment, Inc. and J.K. Rowling v. RDR Books (Transcript) |author=United States District Court, Southern District of New York |date=April 14, 2008 |publisher=Stanford Law School |access-date=October 1, 2015 |quote=Alohomora is a Sidiki word from West Africa, and it is a term used in geomancy. It is a figure -- the figure alohomora means in Sidiki "favorable to thieves." Which is obviously a very appropriate meaning for a spell that enables you to unlock a locked door by magic.}}</ref>
:'''Notes:''' Whilst in the first book, when the spell is cast the lock or door must be tapped once, in the fifth, [[Miriam Strout|a healer]] simply points her wand at the door to cast it, and on {{PM}} the wand motion is seen as a backward 'S'.
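A minimal parsing sketch along those lines, run on a two-entry slice of the wikitext above. It only handles the '|'-style "spell sum" entries and keys each spell on the link text in its === heading; the ':'-style entries would need a second branch:

```python
import re

sample = """===''[[Water-Making Spell|Aguamenti]]'' (Water-Making Spell)===
{{spell sum
|t=Charm, Conjuration
|d=Produces a clean, drinkable jet of water from the wand tip.}}
===''[[Alarte Ascendare]]''===
{{spell sum
|t=Charm
|d=Shoots the target high into the air.}}
"""

spells = {}
name = None
for line in sample.splitlines():
    # === headings: take the display name from the [[target|display]] link
    heading = re.match(r"===.*?\[\[(?:[^|\]]*\|)?([^\]]+)\]\].*===", line)
    if heading:
        name = heading.group(1)
    elif line.startswith("|d=") and name:
        spells[name] = line[3:].rstrip("}")

print(spells["Aguamenti"])
```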

NLTK - Python extract names from csv

I've got a CSV which contains articles' text in different rows.
Like we have column 1:
Hello i am John
Tom has got a Dog
... more text.
I'm trying to extract the first names and surnames from those texts, and I was able to do that if I copy and paste a single text into the code.
But I don't know how to read the CSV in the code so that it processes the different texts in the rows, extracting name and surname.
Here is my code working with the text in it:
import operator,collections,heapq
import csv
import pandas
import json
import nltk
from nameparser.parser import HumanName
def get_human_names(text):
    tokens = nltk.tokenize.word_tokenize(text)
    pos = nltk.pos_tag(tokens)
    sentt = nltk.ne_chunk(pos, binary=False)
    person_list = []
    person = []
    name = ""
    for subtree in sentt.subtrees(filter=lambda t: t.label() == 'PERSON'):
        for leaf in subtree.leaves():
            person.append(leaf[0])
        if len(person) > 1:  # avoid grabbing lone surnames
            for part in person:
                name += part + ' '
            if name[:-1] not in person_list:
                person_list.append(name[:-1])
            name = ''
        person = []
    return (person_list)
text = """
M.F. Husain, Untitled, 1973, oil on canvas, 182 x 122 cm. Courtesy the Pundole Family Collection
In her essay ‘Worlding Asia: A Conceptual Framework for the First Delhi Biennale’, Arshiya Lokhandwala explores Gayatri Spivak’s provocation of ‘worlding’, which has been defined as imperialism’s epistemic violence of inscribing meaning upon a colonized space to bring it into the world through a Eurocentric framework. Lokhandwala extends this concept of worlding to two anti-cartographical terms: ‘de-worlding’, rejecting or debunking categories that are no longer useful such as the binaries of East-West, North-South, Orient-Occidental, and ‘re-worlding’, re-inscribing new meanings into the spaces that have been de-worlded to create one’s own worlds. She offers de-worlding and re-worlding as strategies for active resistance against epistemic violence of all forms, including those that stem from ‘colonialist strategies of imperialism’ or from ‘globalization disguised within neo-imperialist practices’.
Lokhandwala writes: Fourth World. The presence of Arshiya is really the main thing here.
Re-worlding allows us to reach a space of unease performing the uncanny, thereby locating both the object of art and the postcolonial subject in the liminal space, which prevents these categorizations as such… It allows an introspected view of ourselves and makes us seek our own connections, and look at ourselves through our own eyes.
In a recent exhibition on the occasion of the seventieth anniversary of India’s Independence, Lokhandwala employed the term to seemingly interrogate this proposition: what does it mean to re-world a country through the agonistic intervention of art and activism? What does it mean for a country and its historiography to re-world? What does this re-worlded India, in active resistance and a state of introspection, look like to itself?
The exhibition ‘India Re-Worlded: Seventy Years of Investigating a Nation’ at Gallery Odyssey in Mumbai (11 September 2017–21 February 2018) invited artists to select a year from the seventy years since the country’s independence that had personal import or resonated with them because of the significance of the events that occurred at the time. The show featured works that responded to or engaged with these chosen years. It captured a unique history of post-independent India told through the perspective of seventy artists. The works came together to collectively reflect on the history and persistence of violence from pre-independence to the present day and made reference to the continued struggle for political agency through acts of resistance, artistic and otherwise. Through the inclusion of subaltern voices, imagined geographies, particular experiences, solidarities and critical dissent, the exhibition offered counter-narratives and multiple histories.
Anita Dube, Missing Since 1992, 2017, wood, electrical wire, holders, bulbs, voltage stabilizers, 223 x 223 cm. Courtesy the artist and Gallery Odyssey
Lokhandwala says she had been thinking hard about an appropriate response to the seventy years of independence. ‘I wanted to present a new curatorial paradigm, a postcolonial critique of the colonisation and an affirmation of India coming into her own’, she says. ‘I think the fact that I tried to include seventy artists to [each take up] one year in the lifetime of the nation was also a challenging task to take on curatorially.’
Her previous undertaking ‘After Midnight: Indian Modernism to Contemporary India: 1947/1997’ at the Queens Museum in New York in 2015 juxtaposed two historical periods in Indian art: Indian modern art that emerged in the post-independence period from 1947 through the 1970s, and contemporary art from 1997 onwards when the country experienced the effects of economic liberalization and globalization. The 'India Re-Worlded' exhibition similarly presented art practices that emerged from the framework of postcolonial Indian modernity. It attempted to explore the self-reflexivity of the Indian artist as a postcolonial subject and, as Lokhandwala described in the curatorial note, the artists’ resulting ‘sense of agency and renewed connection with the world at large’. The exhibition included works by Progressive Artists' Group core members F.N. Souza, S.H. Raza, M.F. Husain and their peers Krishen Khanna, Tyeb Mehta and V.S. Gaitonde, presented under the year in which they were produced. Other important and pioneering pieces included work from Somnath Hore’s paper pulp print series Wounds (1970); a blowtorch on plywood work by abstractionist Jeram Patel, who was one of the founding members of Group 1890 ; and a video documenting one of Rummana Husain’s last performances.
The methodology of their display removed the didactic, art historical preoccupation with chronology and classification, instead opting to intersperse them amongst contemporary works. This fits in with Lokhandwala’s curatorial impulses and vision: to disrupt and resist single narratives, to stage dialogues and interactions between the works, to offer overlaps, intersections and nuances in the stories, but also in the artistic impetuses.
Jeram Patel, Untitled, 1970, blowtorch Fourht World on plywood, 61 x 61 cm. Courtesy the artist and Gallery Odyssey
The show opened with Jitish Kallat’s Death of Distance (2006), then we have Arshiya, which through lenticular prints presented two overlaid found texts from 2005 and 2006. One was a harrowing news story of a twelve-year-old Indian girl committing suicide after her mother tells her she cannot afford one rupee – two US cents – for a school meal. The other one was a news clipping in which the head of the state-run telecommunications company announces a new one-rupee-per-minute tariff plan for interstate phone calls and declares the scheme as ‘the death of distance’. The images offer two realities that are distant from and at odds with each other. They highlight an economic disparity heightened by globalization. A rupee coin, enlarged to a human scale and covered in black lead, stood poised on the gallery floor in front of the prints.
Bose Krishnamachari chose 1962, the year of his birth, to discuss the relationship between memory and age. As a visual representation of the country’s past through a timeline, within which he situated his own identity-questioning experiences as an artist, his work epitomized the themes and intentions of the exhibition. In Shilpa Gupta’s single channel video projection 100 Hand drawn Maps of India (2007–8) ordinary Indian people sketch outlines of the country from memory. The subjective maps based on the author’s impression and perception of space show how each person sees the country and articulates its borders. The work seems to ask, what do these incongruent representations reveal about our collective identities and our ideas about nationhood?
The repetition of some of the years selected, or even the absence of certain years, suggested that the parameters set by the curatorial concept sought to guide rather than clamp down on. This allowed greater freedom for the artists and curator, and therefore more considered and wide responses.
Surekha’s photographic series To Embrace (2017) celebrated the Chipko tree-hugging movement that originated on 25 March 1974, when 27 women from Reni village in Uttar Pradesh in northern India staged a self-organised, non-violent resistance to the felling of trees by clinging to them and linking arms around them. The photographs showed women embracing the branches of the giant, 400-year-old Dodda Alada Mara (Big Banyan Tree) in rural Bengaluru – paying a homage to both the pioneering eco-feminist environmental movement and the grand old tree.
Anita Dube’s Missing Since 1992 (2017) hung from the ceiling like a ghost of a terrible, dark past. Its electrical wires and bulbs outlined a sombre dome to represent the demolition of the Babri Masjid on 6 December 1992, which Dube calls ‘the darkest day I have experienced as a citizen’. This piece was one of several works in the exhibition that dealt with this event and the many episodes of communal riots that followed. These works document a decade when the country witnessed economic reform and growth but also the rise of a religious right-wing.
Riyas Komu, Fourth World, 2017, rubber and metal, 244 x 45 cm each. Courtesy the artist and Gallery Odyssey
Near the end of the exhibition, Riyas Komu’s sculptural installation Fourth World (2017) alerted us to the divisive forces that are threatening to dismantle the ethical foundations of the Republic symbolized by its official emblem, the Lion Capital – a symbol seen also on the blackened rupee coin featured in Kallat’s work – and in a way rounded off the viewing experience.
The seventy works that attempted to represent seventy years of the country’s history built a dense and complicated network of voices and stories, and also formed a cross section of the art emerging during this period. Although the show’s juxtaposition of modern and contemporary art made it seem like an extension of the themes presented in the curator’s previous exhibition at the Queens Museum, here the curatorial concept made the process of staging the exhibition more democratic blurring the sequence of modern and contemporary Indian art. Furthermore, the multi-pronged curatorial intentions brought renewed criticality to the events of past and present, always underscoring the spirit of resistance and renegotiation as the viewer could actively de-world and re-world.
"""
names = get_human_names(text)
print ("LAST, FIRST")
namex = []
for name in names:
    last_first = HumanName(name).last + ' ' + HumanName(name).first
    print (last_first)
    namex.append(last_first)
print (namex)
print('Saving the data to the json file named Names')
try:
    with open('Names.json', 'w') as outfile:
        json.dump(namex, outfile)
except Exception as e:
    print(e)
So I would like to remove the hard-coded text from the code and have it process the texts from my CSV instead.
Thanks a lot :)
CSV stands for Comma-Separated Values and is a text format used to represent tabular data in plain text. Commas are used as column separators and line breaks as row separators. Your string does not look like a real CSV file. Never mind the extension; you can still read your text file like this:
with open('your_file.csv', 'r') as f:
    my_text = f.read()
Your text file is now available as my_text in the rest of your code.
Pandas has a read_csv function:
yourText = pandas.read_csv("csvFile.csv")
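A sketch of the read-and-loop step with the stdlib csv module. The one-text-per-row layout is an assumption, and the in-memory string stands in for the real file; the commented line marks where the asker's get_human_names(text) would plug in:

```python
import csv
import io

# Stands in for: open('your_file.csv', newline='') as f
csv_text = "Hello i am John\nTom has got a Dog\n"

all_names = []
for row in csv.reader(io.StringIO(csv_text)):
    text = row[0]                     # the article text in column 1
    # all_names.extend(get_human_names(text))  # plug the NLTK function in here
    all_names.append(text)            # placeholder so the sketch runs standalone

print(all_names)
```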

BeautifulSoup page number

I'm trying to extract text from the online version of The Wealth of Nations and create a data frame where each observation is a page of the book. I do it in a roundabout way, trying to imitate something similar I did in R, but I was wondering if there was a way to do this directly in BeautifulSoup.
What I do is first get the entire text from the page:
import pandas as pd
import requests
from bs4 import BeautifulSoup
import re
r = requests.get('https://www.gutenberg.org/files/38194/38194-h/38194-h.htm')
soup = BeautifulSoup(r.text,'html.parser')
But from here on, I'm just working with regular expressions and the text. I find the beginning and end of the book text:
beginning = [a.start() for a in re.finditer(r"BOOK I\.",soup.text)]
beginning
end = [a.start() for a in re.finditer(r"FOOTNOTES",soup.text)]
book = soup.text[beginning[1]:end[0]]
Then I remove the carriage returns and new lines, split on strings of the form "[Pg digits]", and put everything into a pandas data frame.
book = book.replace('\r',' ').replace('\n',' ')
l = re.compile(r'\[[Pp]g\s?\d{1,3}\]').split(book)
df = pd.DataFrame(l,columns=['col1'])
df['page'] = range(2,df.shape[0]+2)
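The split step can be checked on a small string; note the character class [Pp] rather than [P|p], since [P|p] would also match a literal '|':

```python
import re

book = "front matter [Pg 2] text of page two [pg 3] text of page three"
parts = re.split(r"\[[Pp]g\s?\d{1,3}\]", book)
print(parts)  # -> ['front matter ', ' text of page two ', ' text of page three']
```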
There are indicators in the HTML code for page numbers of the form <span class='pagenum'><a name="Page_vii" id="Page_vii">[Pg vii]</a></span>. Is there a way I can do the text extraction in BeautifulSoup by searching for text between these "spans"? I know how to search for the page markers using findall, but I was wondering how I can extract text between those markers.
To get the page markers and the text associated with them, you can use bs4 with re. To group the text between two markers, itertools.groupby can be used:
from bs4 import BeautifulSoup as soup
import requests
import re
import itertools
page_data = requests.get('https://www.gutenberg.org/files/38194/38194-h/38194-h.htm').text
final_data = [(i.find('a', {'name': re.compile(r'Page_\w+')}), i.text) for i in soup(page_data, 'html.parser').find_all('p')]
new_data = [list(b) for a, b in itertools.groupby(final_data, key=lambda x:bool(x[0]))][1:]
final_data = {new_data[i][0][0].text:'\n'.join(c for _, c in new_data[i+1]) for i in range(0, len(new_data), 2)}
Output (Sample, the actual result is too long for SO format):
{'[Pg vi]': "'In recompense for so many mortifying things, which nothing but truth\r\ncould have extorted from me, and which I could easily have multiplied to a\r\ngreater number, I doubt not but you are so good a christian as to return good\r\nfor evil, and to flatter my vanity, by telling me, that all the godly in Scotland\r\nabuse me for my account of John Knox and the reformation.'\nMr. Smith having completed, and given to the world his system of\r\nethics, that subject afterwards occupied but a small part of his lectures.\r\nHis attention was now chiefly directed to the illustration of\r\nthose other branches of science which he taught; and, accordingly, he\r\nseems to have taken up the resolution, even at that early period, of\r\npublishing an investigation into the principles of what he considered\r\nto be the only other branch of Moral Philosophy,—Jurisprudence, the\r\nsubject of which formed the third division of his lectures. At the\r\nconclusion of the Theory of Moral Sentiments, after treating of the\r\nimportance of a system of Natural Jurisprudence, and remarking that\r\nGrotius was the first, and perhaps the only writer, who had given any\r\nthing like a system of those principles which ought to run through,\r\nand be the foundation of the law of nations, Mr. Smith promised, in\r\nanother discourse, to give an account of the general principles of law\r\nand government, and of the different revolutions they have undergone\r\nin the different ages and periods of society, not only in what concerns\r\njustice, but in what concerns police, revenue, and arms, and whatever\r\nelse is the object of law.\nFour years after the publication of this work, and after a residence\r\nof thirteen years in Glasgow, Mr. Smith, in 1763, was induced to relinquish\r\nhis professorship, by an invitation from the Hon. Mr. Townsend,\r\nwho had married the Duchess of Buccleugh, to accompany the\r\nyoung Duke, her son, in his travels. 
Being indebted for this invitation\r\nto his own talents alone, it must have appeared peculiarly flattering\r\nto him. Such an appointment was, besides, the more acceptable,\r\nas it afforded him a better opportunity of becoming acquainted with\r\nthe internal policy of other states, and of completing that system of\r\npolitical economy, the principles of which he had previously delivered\r\nin his lectures, and which it was then the leading object of his studies\r\nto perfect.\nMr. Smith did not, however, resign his professorship till the day\r\nafter his arrival in Paris, in February 1764. He then addressed the\r\nfollowing letter to the Right Honourable Thomas Millar, lord advocate\r\nof Scotland, and then rector of the college of Glasgow:—", '[Pg vii]': "His lordship having transmitted the above to the professors, a meeting\r\nwas held; on which occasion the following honourable testimony\r\nof the sense they entertained of the worth of their former colleague\r\nwas entered in their minutes:—\n'The meeting accept of Dr. Smith's resignation in terms of the above letter;\r\nand the office of professor of moral philosophy in this university is therefore\r\nhereby declared to be vacant. The university at the same time, cannot\r\nhelp expressing their sincere regret at the removal of Dr. Smith, whose distinguished\r\nprobity and amiable qualities procured him the esteem and affection\r\nof his colleagues; whose uncommon genius, great abilities, and extensive\r\nlearning, did so much honour to this society. His elegant and ingenious\r\nTheory of Moral Sentiments having recommended him to the esteem of men\r\nof taste and literature throughout Europe, his happy talents in illustrating\r\nabstracted subjects, and faithful assiduity in communicating useful knowledge,\r\ndistinguished him as a professor, and at once afforded the greatest pleasure,\r\nand the most important instruction, to the youth under his care.'\nIn the first visit that Mr. 
Smith and his noble pupil made to Paris,\r\nthey only remained ten or twelve days; after which, they proceeded\r\nto Thoulouse, where, during a residence of eighteen months, Mr. Smith\r\nhad an opportunity of extending his information concerning the internal\r\npolicy of France, by the intimacy in which he lived with some of\r\nthe members of the parliament. After visiting several other places in\r\nthe south of France, and residing two months at Geneva, they returned\r\nabout Christmas to Paris. Here Mr. Smith ranked among his\r\nfriends many of the highest literary characters, among whom were\r\nseveral of the most distinguished of those political philosophers who\r\nwere denominated Economists."}
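The itertools.groupby step can be hard to follow; the same pairing of markers to text can be written as a single pass (sketched on simplified stand-in data in place of the real (anchor, text) tuples):

```python
# Each tuple is (page marker or None, paragraph text), as produced above.
final_data = [("[Pg vi]", ""), (None, "text a"), (None, "text b"),
              ("[Pg vii]", ""), (None, "text c")]

pages = {}
current = None
for marker, text in final_data:
    if marker:                      # a new page anchor starts a new bucket
        current = marker
        pages[current] = []
    elif current:
        pages[current].append(text)

pages = {k: "\n".join(v) for k, v in pages.items()}
print(pages)  # -> {'[Pg vi]': 'text a\ntext b', '[Pg vii]': 'text c'}
```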

Cleaning text string after getting body text using Beautifulsoup

I'm trying to get text from articles on various webpages and write them as clean text documents. I don't want all visible text, because that often includes irrelevant links on the side of webpages. I'm using BeautifulSoup to extract the information from pages. But extra links, not just on the side of the page but sometimes in the middle of the body text and at the bottom of the articles, make it into the final product.
Does anyone know how to deal with the problem of extra links that are converted into text that are not actually a part of the real article's text?
#Some of the imports are for other portions of the code not shown here.
#I'm new to Python and am bad at remembering which library has which functions.
import os
import sys
import urllib2
import webbrowser
from bs4 import BeautifulSoup
from os import path
from cookielib import CookieJar

#I made an opener to deal with proxies and put *** instead of my information
#cookielib helps me get articles from nytimes
proxy = urllib2.ProxyHandler({'http': '***' % '***'})
auth = urllib2.HTTPBasicAuthHandler()
cj = CookieJar()
opener = urllib2.build_opener(proxy, auth, urllib2.HTTPHandler, urllib2.HTTPCookieProcessor(cj))
urllib2.install_opener(opener)

#Uses a url string to open a webpage and pull out all of its information.
def baumeister(url):
    req = urllib2.Request(url)
    opened = urllib2.urlopen(req)
    html_doc = opened.read()
    soup = BeautifulSoup(html_doc)
    return soup

#Gets the body text from that html information.
def substanz(url):
    soup = baumeister(url)
    body = soup.find_all("p") #This is where I have tried to fix the problem and failed
    result = ""
    for e in body:
        i = e.getText().replace("\t", "").replace(" ", " ").strip().encode(errors="ignore")
        result += i + "\r\n\r\n"
    return result
One article that substanz cleans in exactly the way I want is:
http://blogs.hbr.org/2014/06/do-you-really-want-to-be-yourself-at-work/
I'm now testing with more articles from different sites, cleaning the result of substanz (which is one big string). The problem I have is with this article:
http://www.cnbc.com/id/101790001?__source=yahoo%7Cfinance%7Cheadline%7Cheadline%7Cstory&par=yahoo&doc=101790001%7CThink%20college%20is%20expensiv#.
I've just used print substanz(url) to see what the result looks like. With the CNBC article I get extra links turned into text that are not really part of the article, whereas in the Harvard Business Review article everything works out fine because the included links are part of the actual text.
I'm not going to attach the full result for each article here, because each is a full page of text long.
If you try exactly the code I have posted above, the opener is not going to work, so use whatever opener you like to access websites. I have to go through a certain proxy at work, so that's the format that works for me.
Final note: I'm using Python 3.4, and am writing the code in an IPython notebook.
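One side issue: urllib2 and cookielib exist only in Python 2, so they will not import under the Python 3.4 you mention. Assuming you do want Python 3, a minimal sketch of the equivalent opener setup (with the proxy and auth handlers left out, since yours are redacted) looks like this:

```python
# Python 3 replacements for the Python-2-only urllib2/cookielib modules:
# urllib.request provides the opener machinery, http.cookiejar the CookieJar.
import urllib.request
from http.cookiejar import CookieJar

cj = CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
urllib.request.install_opener(opener)

# then, inside baumeister():
#     html_doc = urllib.request.urlopen(url).read()
```

Add your own ProxyHandler and HTTPBasicAuthHandler to build_opener as needed; the call signatures mirror the urllib2 ones.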
import requests
from bs4 import BeautifulSoup
r = requests.get("http://www.cnbc.com/id/101790001?__source=yahoo%7Cfinance%7Cheadline%7Cheadline%7Cstory&par=yahoo&doc=101790001%7CThink%20college%20is%20expensiv#")
soup = BeautifulSoup(r.content)
text = [''.join(s.findAll(text=True)) for s in soup.findAll('p')]
print (text)
['>> View All Results for ""', 'Enter multiple symbols separated by commas', 'London quotes now available', 'Interest rates on loans to jump', "Because federal student loans are tied to the 10-year Treasury note, CNBC's Sharon Epperson reports borrowers will see the impact of the rise in Treasury yields over the past year.", ' Congratulations, graduates, on your diploma. Now what about that $29,000 student loan debt? ', ' More than 70 percent of graduates will carry student debt into the real world, according to the Institute for College Access and Success. And the average debt is just shy of $30,000. ', ' But the news will get worse next week when interest rates on student loans are set to rise again. ', ' Though federal student loan rates are fixed for the life of the loan, these rates reset for new borrowers every July 1, thanks to legislation that ties the rates to the performance of the financial markets. ', ' The interest rate on federal Stafford loans will go from its current fixed rate of just under 4 percent to 4.66 percent for loans that are distributed between July 1 and June 30, 2015. ', ' Read MoreStudent loan problem an easy fix: Sen. Warren ', ' For graduate students, the rate on Stafford loans will rise from just over 5 percent to 6.21 percent. ', ' Direct PLUS Loans for graduates and parents are still the most expensive, with rates rising to 7.21 percent.', 'Which college major pays off most?', "CNBC's Sharon Epperson reports majoring in engineering is the most lucrative. ", " The increase in monthly federal student loan payments can add up quickly, but shouldn't be too burdensome for most students. For every $10,000 in loans, new borrowers will pay about $4 more a month based on a 10-year repayment period. ", " Read MoreWhy millennial women don't save for retirement ", ' Still, experts warn that this is only just the beginning. 
', ' "Federal student loan rates will continue to increase in the next few years and will likely hit the maximum rate caps which are as high as 10.5 percent for some loans," said Mark Kantrowitz, senior vice president and publisher of Edvisors.com. ', ' For sophomore student Samantha Cook, the decision to go to George Washington University was a big one financially. She says she had doubts about it. ', ' "My parents wanted to assure me that no matter what I picked, we\'d find a way to make it work," Cook said. Like most families, Cook and her parents are making it work by combining their household savings, scholarships and grants—and student loans. ', ' Read MoreCramer: Offset high cost of higher education ', ' Despite rising tuition and borrowing costs, the Cook family decided against Samantha transferring to an in-state university. ', ' Despite the debt load she is taking on, she said, "the value of a GW degree for me at least would be more valuable when looking for jobs later on." ', " —By CNBC's Sharon Epperson ", 'Hosting a yard sale may not be the most profitable way to get rid of your old junk.', 'Many Americans with debit cards tied to their checking accounts are still confused about how these programs work. ', "Here's how to avoid these deadly sins if you're contemplating or already in a divorce.", "The IRS offers a lot of help for students. Problem is, the educational tax breaks and how they work together -- or don't -- are confusing.", 'Get the best of CNBC in your inbox', 'Tips for home buyers that will help you find the right home for your bank account.', 'Complaints about movers are down. How to find the right one—and save.', "Forget bathing suit season. Why it's really time to join the gym. 
", 'Drivers might see lower gas prices this year, but smart shopping tactics could help them save even more.', 'Data is a real-time snapshot *Data is delayed at least 15 minutesGlobal Business and Financial News, Stock Quotes, and Market Data and Analysis', '© 2014 CNBC LLC. All Rights Reserved.', 'A Division of NBCUniversal']
To pull only the main article text from the website in your link, restrict the search to the div elements that contain the article body:
import requests
from bs4 import BeautifulSoup
r = requests.get("http://www.cnbc.com/id/101790001?__source=yahoo%7Cfinance%7Cheadline%7Cheadline%7Cstory&par=yahoo&doc=101790001%7CThink%20college%20is%20expensiv#")
soup = BeautifulSoup(r.content)
text = [''.join(s.findAll(text=True)) for s in soup.findAll("div", {"class": "group"})]
print (text)
['\n Congratulations, graduates, on your diploma. Now what about that $29,000 student loan debt? \n More than 70 percent of graduates will carry student debt into the real world, according to the Institute for College Access and Success. And the average debt is just shy of $30,000. \n But the news will get worse next week when interest rates on student loans are set to rise again. \n Though federal student loan rates are fixed for the life of the loan, these rates reset for new borrowers every July 1, thanks to legislation that ties the rates to the performance of the financial markets. \n The interest rate on federal Stafford loans will go from its current fixed rate of just under 4 percent to 4.66 percent for loans that are distributed between July 1 and June 30, 2015. \n Read MoreStudent loan problem an easy fix: Sen. Warren \n For graduate students, the rate on Stafford loans will rise from just over 5 percent to 6.21 percent. \n Direct PLUS Loans for graduates and parents are still the most expensive, with rates rising to 7.21 percent.\n', '\n The increase in monthly federal student loan payments can add up quickly, but shouldn\'t be too burdensome for most students. For every $10,000 in loans, new borrowers will pay about $4 more a month based on a 10-year repayment period. \n Read MoreWhy millennial women don\'t save for retirement \n Still, experts warn that this is only just the beginning. \n "Federal student loan rates will continue to increase in the next few years and will likely hit the maximum rate caps which are as high as 10.5 percent for some loans," said Mark Kantrowitz, senior vice president and publisher of Edvisors.com. \n For sophomore student Samantha Cook, the decision to go to George Washington University was a big one financially. She says she had doubts about it. \n "My parents wanted to assure me that no matter what I picked, we\'d find a way to make it work," Cook said. 
Like most families, Cook and her parents are making it work by combining their household savings, scholarships and grants—and student loans. \n Read MoreCramer: Offset high cost of higher education \n Despite rising tuition and borrowing costs, the Cook family decided against Samantha transferring to an in-state university. \n Despite the debt load she is taking on, she said, "the value of a GW degree for me at least would be more valuable when looking for jobs later on." \n —By CNBC\'s Sharon Epperson \n']
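Even the div-based extraction still keeps CNBC's inline "Read More..." promo links, which you can see scattered through the output above. A small post-processing filter (a sketch, assuming those promo lines always start with "Read More" as they do in this output) can drop them while joining the paragraphs the same way substanz does:

```python
# Clean a list of extracted paragraph strings: strip whitespace,
# drop empty entries and inline "Read More..." promo lines, and
# join paragraphs with blank lines, as substanz does.
def clean_paragraphs(paragraphs):
    cleaned = []
    for p in paragraphs:
        p = p.strip()
        if p and not p.startswith("Read More"):
            cleaned.append(p)
    return "\r\n\r\n".join(cleaned)

sample = [' First paragraph. ', ' Read MoreStudent loan promo ', ' Second paragraph. ']
print(clean_paragraphs(sample))
# prints the two real paragraphs separated by a blank line
```

The same function works on the output of either findAll call above, since both produce a list of strings.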
