I want to check whether someone sent a link plus a code, like this link:
```
https://pastebin.com/gD0KD6u4
```
Everything after the last `/` is the code; here the code is gD0KD6u4.
Check this code; I believe it will be helpful:
```
link = 'https://pastebin.com/gD0KD6u4'

# reverse the string
reverse_link = link[::-1]
print(reverse_link)

# everything before the first '/' of the reversed string is the reversed code
position = reverse_link.find('/')
code_link = reverse_link[:position]
print(code_link)

# reverse the code back
code_link = code_link[::-1]
print(code_link)
```
Another solution is to use `rpartition()`:
```
url = 'https://pastebin.com/gD0KD6u4'
decompose = url.rpartition('/')
url = decompose[0]
code = decompose[-1]
print(url, code)
```
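For completeness, a shorter sketch getting the same code with `str.rsplit`, or with `urllib.parse` if you want to handle the URL structure properly; the URL is just the example from the question:
```
from urllib.parse import urlparse

link = 'https://pastebin.com/gD0KD6u4'

# split once from the right and take the last piece
code = link.rsplit('/', 1)[-1]
print(code)  # gD0KD6u4

# or parse the URL and take the last path segment
code = urlparse(link).path.rsplit('/', 1)[-1]
print(code)  # gD0KD6u4
```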
I have a problem with a function which should remove the TLD from a domain. If the domain has a subdomain, it works correctly. For example:
Input: asdf.xyz.example.com
Output: asdf.xyz.example
The problem is when the domain does not have a subdomain; then there is a dot in front of the domain:
Input: example.com
Output: .example
This is my code:
```
res = get_tld(domain, as_object=True, fail_silently=True, fix_protocol=True)
domain = '.'.join([res.subdomain, res.domain])
```
The function `get_tld` is from the `tld` library.
Could someone help me solve this problem?
With some very simple string manipulation, is this what you are looking for?
```
d1 = 'asdf.xyz.example.com'
output = '.'.join(d1.split('.')[:-1])
# output = 'asdf.xyz.example'

d2 = 'example.com'
output = '.'.join(d2.split('.')[:-1])
# output = 'example'
```
You can use filtering. It looks like `get_tld` works as intended, but the join is incorrect: when the subdomain is empty, joining still inserts the leading dot. Filter out the empty parts:
```
domain = '.'.join(filter(lambda x: len(x), [res.subdomain, res.domain]))
```
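As a side note, `filter(None, ...)` is the idiomatic spelling of the same filter, since empty strings are falsy. A minimal sketch with literal values standing in for `res.subdomain` and `res.domain`:
```
# hypothetical values standing in for res.subdomain / res.domain
print('.'.join(filter(None, ['', 'example'])))          # example
print('.'.join(filter(None, ['asdf.xyz', 'example'])))  # asdf.xyz.example
```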
Another simple version is this:
```
def remove_tld(url):
    *base, tld = url.split(".")
    return ".".join(base)

url = "asdf.xyz.example.com"
print(remove_tld(url))  # asdf.xyz.example

url = "example.com"
print(remove_tld(url))  # example
```
`*base, tld = url.split(".")` puts the TLD in `tld` and everything else in `base`. Then you just join that with `".".join(base)`.
I am a little bit confused about creating a full URL.
I have this code:
```
def flats(self):
    return [JsonFlatPage(property_data = flat, url = flat['propertyUrl'])
            for flat in self.data['properties']]
```
In `flat['propertyUrl']` I have `'/properties/75599853'`, but I need to get something like this:
`'https://www.rightmove.co.uk/properties/75599853#/'`
with the full path and `#/` at the end.
I know that I should keep a constant URI in a settings file, but then how can I combine them? Should I use f-strings?
I think since the base URL https://www.rightmove.co.uk is fixed, you can do something like the following to get what you need. Note that `flat['propertyUrl']` already starts with `/`, so the base URL should not end with one (otherwise you get a double slash):
```
def flats(self):
    baseUrl = 'https://www.rightmove.co.uk'  # no trailing slash: propertyUrl starts with '/'
    return [JsonFlatPage(property_data = flat, url = baseUrl + flat['propertyUrl'] + "#/")
            for flat in self.data['properties']]
```
You can also use f-strings, as you mentioned:
```
def flats(self):
    baseUrl = 'https://www.rightmove.co.uk'
    return [JsonFlatPage(property_data = flat, url = f"{baseUrl}{flat['propertyUrl']}#/")
            for flat in self.data['properties']]
```
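If you would rather not track the slashes by hand, `urllib.parse.urljoin` from the standard library normalizes the join; a minimal sketch with the example values from the question:
```
from urllib.parse import urljoin

base_url = 'https://www.rightmove.co.uk/'
property_url = '/properties/75599853'  # example value of flat['propertyUrl']

# urljoin collapses the duplicate slash between base and path
full_url = urljoin(base_url, property_url) + '#/'
print(full_url)  # https://www.rightmove.co.uk/properties/75599853#/
```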
I'm fairly new to Python and web scraping in general. The code below works, but it seems awfully slow for the amount of information it's actually going through. Is there any way to easily cut down on execution time? I'm not sure, but it does seem like I have typed out more / made it more difficult than I actually needed to; any help would be appreciated.
Currently the code starts at the sitemap, then iterates through a list of additional sitemaps. Within the new sitemaps it pulls information to construct a URL for the JSON data of a webpage. From the JSON data I pull an XML link that I use to search for a string. If the string is found, it appends it to a text file.
```
import io
import requests
from bs4 import BeautifulSoup

# global variables
start = 'https://www.govinfo.gov/wssearch/getContentDetail?packageId='
dash = '-'

urlSitemap = "https://www.govinfo.gov/sitemap/PLAW_sitemap_index.xml"
old_xml = requests.get(urlSitemap)
print(old_xml)
new_xml = io.BytesIO(old_xml.content).read()
final_xml = BeautifulSoup(new_xml)
linkToBeFound = final_xml.findAll('loc')
for loc in linkToBeFound:
    urlPLmap = loc.text
    old_xmlPLmap = requests.get(urlPLmap)
    print(old_xmlPLmap)
    new_xmlPLmap = io.BytesIO(old_xmlPLmap.content).read()
    final_xmlPLmap = BeautifulSoup(new_xmlPLmap)
    linkToBeFound2 = final_xmlPLmap.findAll('loc')
    for pls in linkToBeFound2:
        argh = pls.text.find('PLAW')
        theWanted = pls.text[argh:]
        # eval of the JSON body (the answer below uses .json() instead)
        thisShallWork = eval(requests.get(start + theWanted).text)
        print(requests.get(start + theWanted))
        dict1 = thisShallWork['download']
        finaldict = dict1['modslink'][2:]
        print(finaldict)
        url2 = 'https://' + finaldict
        try:
            old_xml4 = requests.get(url2)
            print(old_xml4)
            new_xml4 = io.BytesIO(old_xml4.content).read()
            final_xml4 = BeautifulSoup(new_xml4)
            references = final_xml4.findAll('identifier', {'type': 'Statute citation'})
            for sec in references:
                if sec.text == "106 Stat. 4845":
                    print(dash * 20)
                    print(sec.text)
                    print(dash * 20)
                    sec313 = open('sec313info.txt', 'a')
                    sec313.write("\n")
                    sec313.write(pls.text + '\n')
                    sec313.close()
        except:
            print('error at: ' + url2)
```
No idea why I spent so long on this, but I did. Your code was really hard to look through, so I started with that: I broke it up into two parts, getting the links from the sitemaps, and then everything else. I broke out a few bits into separate functions too.
This is checking about 2 URLs per second on my machine, which seems about right.
How this is better (you can argue with me about this part):
- You don't have to reopen and close the output file after each write.
- A fair bit of unneeded code is removed.
- Your variables have better names (this does not improve speed in any way, but please do this, especially if you are asking for help).
- Really the main thing: once you break it all up, it becomes fairly clear that what's slowing you down is waiting on the requests, which is pretty standard for web scraping. You can look into multithreading to avoid the wait (see the sketch after the code below). Once you get into multithreading, the benefit of breaking up your code will likely also become much more evident.
```
import requests
from bs4 import BeautifulSoup

# returns sitemap links
def get_links(s):
    old_xml = requests.get(s)
    new_xml = old_xml.text
    final_xml = BeautifulSoup(new_xml, "lxml")
    return final_xml.findAll('loc')

# gets the final url from your middle url and looks through it for the thing you are looking for
def scrapey(link):
    link_id = link[link.find("PLAW"):]
    r = requests.get('https://www.govinfo.gov/wssearch/getContentDetail?packageId={}'.format(link_id))
    print(r.url)
    try:
        r = requests.get("https://{}".format(r.json()["download"]["modslink"][2:]))
        print(r.url)
        soup = BeautifulSoup(r.text, "lxml")
        references = soup.findAll('identifier', {'type': 'Statute citation'})
        for ref in references:
            if ref.text == "106 Stat. 4845":
                return r.url
        return False
    except:
        print("bah" + r.url)
        return False

sitemap_links_el = get_links("https://www.govinfo.gov/sitemap/PLAW_sitemap_index.xml")
sitemap_links = map(lambda x: x.text, sitemap_links_el)
nlinks_el = map(get_links, sitemap_links)
links = [num.text for elem in nlinks_el for num in elem]

with open("output.txt", "a") as f:
    for link in links:
        url = scrapey(link)
        if url is False:
            print("no find")
        else:
            print("found on: {}".format(url))
            f.write("{}\n".format(url))
```
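As noted above, the bottleneck is waiting on HTTP requests, so threads let the waits overlap. A minimal sketch using `concurrent.futures` on top of the same `scrapey` and `links` defined above; the worker count of 10 is an arbitrary assumption, tune it to taste:
```
from concurrent.futures import ThreadPoolExecutor

with open("output.txt", "a") as f:
    # each worker thread blocks on its own HTTP requests, so the waits overlap
    with ThreadPoolExecutor(max_workers=10) as pool:
        for url in pool.map(scrapey, links):
            if url is False:
                print("no find")
            else:
                print("found on: {}".format(url))
                f.write("{}\n".format(url))
```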
I'm a teacher. I want a list of all the students who commented on the essay I assigned, and what they said. The Drive API stuff was too challenging for me, but I figured I could download them as a zip and parse the XML.
The comments are tagged in `w:comment` tags, with `w:t` for the comment text. It should be easy, but XML (etree) is killing me.
Via the tutorial (and the official Python docs):
```
z = zipfile.ZipFile('test.docx')
x = z.read('word/comments.xml')
tree = etree.XML(x)
```
Then I do this:
```
children = tree.getiterator()
for c in children:
    print(c.attrib)
```
Resulting in this:
```
{}
{'{http://schemas.openxmlformats.org/wordprocessingml/2006/main}author': 'Joe Shmoe', '{http://schemas.openxmlformats.org/wordprocessingml/2006/main}id': '1', '{http://schemas.openxmlformats.org/wordprocessingml/2006/main}date': '2017-11-17T16:58:27Z'}
{'{http://schemas.openxmlformats.org/wordprocessingml/2006/main}rsidR': '00000000', '{http://schemas.openxmlformats.org/wordprocessingml/2006/main}rsidDel': '00000000', '{http://schemas.openxmlformats.org/wordprocessingml/2006/main}rsidP': '00000000', '{http://schemas.openxmlformats.org/wordprocessingml/2006/main}rsidRDefault': '00000000', '{http://schemas.openxmlformats.org/wordprocessingml/2006/main}rsidRPr': '00000000'}
{}
{'{http://schemas.openxmlformats.org/wordprocessingml/2006/main}val': '0'}
{'{http://schemas.openxmlformats.org/wordprocessingml/2006/main}val': '0'}
{'{http://schemas.openxmlformats.org/wordprocessingml/2006/main}val': '0'}
```
And after this I am totally stuck. I've tried element.get() and element.findall() with no luck. Even when I copy/paste the value ('{http://schemas.openxmlformats.org/wordprocessingml/2006/main}val'), I get None in return.
Can anyone help?
You got remarkably far considering that OOXML is such a complex format.
Here's some sample Python code showing how to access the comments of a DOCX file via XPath:
```
from lxml import etree
import zipfile

ooXMLns = {'w': 'http://schemas.openxmlformats.org/wordprocessingml/2006/main'}

def get_comments(docxFileName):
    docxZip = zipfile.ZipFile(docxFileName)
    commentsXML = docxZip.read('word/comments.xml')
    et = etree.XML(commentsXML)
    comments = et.xpath('//w:comment', namespaces=ooXMLns)
    for c in comments:
        # attributes:
        print(c.xpath('@w:author', namespaces=ooXMLns))
        print(c.xpath('@w:date', namespaces=ooXMLns))
        # string value of the comment:
        print(c.xpath('string(.)', namespaces=ooXMLns))
```
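Since the original goal was a list of who commented and what they said, a small variant of the same XPath approach can collect (author, text) pairs instead of printing them. A sketch reusing the imports and `ooXMLns` mapping above; `test.docx` is just the example filename from the question:
```
def get_comment_authors_and_texts(docxFileName):
    docxZip = zipfile.ZipFile(docxFileName)
    et = etree.XML(docxZip.read('word/comments.xml'))
    # each w:comment carries its author as an attribute and its text as content
    return [(c.xpath('@w:author', namespaces=ooXMLns)[0],
             c.xpath('string(.)', namespaces=ooXMLns))
            for c in et.xpath('//w:comment', namespaces=ooXMLns)]

for author, text in get_comment_authors_and_texts('test.docx'):
    print('{}: {}'.format(author, text))
```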
Thank you @kjhughes for this amazing answer for extracting all the comments from the document file. I was facing the same issue as others in this thread: getting the text that the comment relates to. I took the code from @kjhughes as a base and tried to solve this using python-docx. So here is my take at this.
Sample document.
I will extract the comment and the paragraph in which it was referenced in the document.
```
from docx import Document
from lxml import etree
import zipfile

ooXMLns = {'w': 'http://schemas.openxmlformats.org/wordprocessingml/2006/main'}

# Function to extract all the comments of the document (same as the accepted answer).
# Returns a dictionary with the comment id as key and the comment string as value.
def get_document_comments(docxFileName):
    comments_dict = {}
    docxZip = zipfile.ZipFile(docxFileName)
    commentsXML = docxZip.read('word/comments.xml')
    et = etree.XML(commentsXML)
    comments = et.xpath('//w:comment', namespaces=ooXMLns)
    for c in comments:
        comment = c.xpath('string(.)', namespaces=ooXMLns)
        comment_id = c.xpath('@w:id', namespaces=ooXMLns)[0]
        comments_dict[comment_id] = comment
    return comments_dict

# Function to fetch all the comments in a paragraph
def paragraph_comments(paragraph, comments_dict):
    comments = []
    for run in paragraph.runs:
        comment_reference = run._r.xpath("./w:commentReference")
        if comment_reference:
            comment_id = comment_reference[0].xpath('@w:id', namespaces=ooXMLns)[0]
            comment = comments_dict[comment_id]
            comments.append(comment)
    return comments

# Function to fetch all comments with their referenced paragraph.
# This will return a list like this: [{'Paragraph text': [comment 1, comment 2]}]
def comments_with_reference_paragraph(docxFileName):
    document = Document(docxFileName)
    comments_dict = get_document_comments(docxFileName)
    comments_with_their_reference_paragraph = []
    for paragraph in document.paragraphs:
        if comments_dict:
            comments = paragraph_comments(paragraph, comments_dict)
            if comments:
                comments_with_their_reference_paragraph.append({paragraph.text: comments})
    return comments_with_their_reference_paragraph

if __name__ == "__main__":
    document = "test.docx"  # filepath for the input document
    print(comments_with_reference_paragraph(document))
```
The output for the sample document looks like this.
I have done this at a paragraph level. It could be done at the python-docx run level as well (a sketch follows below).
Hopefully it will be of help.
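A possible run-level variant, reusing `get_document_comments` and `ooXMLns` from above; this is an untested sketch following the same `commentReference` lookup used in `paragraph_comments`:
```
# Maps each commented run's text to its comment(s) instead of the whole paragraph.
def comments_with_reference_run(docxFileName):
    document = Document(docxFileName)
    comments_dict = get_document_comments(docxFileName)
    results = []
    for paragraph in document.paragraphs:
        for run in paragraph.runs:
            comment_reference = run._r.xpath("./w:commentReference")
            if comment_reference:
                comment_id = comment_reference[0].xpath('@w:id', namespaces=ooXMLns)[0]
                results.append({run.text: [comments_dict[comment_id]]})
    return results
```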
I used the Word Object Model to extract comments with replies from a Word document. Documentation on the Comments object can be found here. This documentation uses Visual Basic for Applications (VBA), but I was able to use the functions in Python with slight modifications. The only issue with the Word Object Model is that I had to use the win32com package from pywin32, which works fine on a Windows PC, but I'm not sure whether it will work on macOS.
Here's the sample code I used to extract comments with associated replies:
```
import win32com.client as win32
from win32com.client import constants

word = win32.gencache.EnsureDispatch('Word.Application')
word.Visible = False
filepath = r"path\to\file.docx"  # raw string so \t and \f are not treated as escapes

def get_comments(filepath):
    doc = word.Documents.Open(filepath)
    doc.Activate()
    activeDoc = word.ActiveDocument
    for c in activeDoc.Comments:
        if c.Ancestor is None:  # checking if this is a top-level comment
            print("Comment by: " + c.Author)
            print("Comment text: " + c.Range.Text)  # text of the comment
            print("Regarding: " + c.Scope.Text)  # text of the original document where the comment is anchored
            if len(c.Replies) > 0:  # if the comment has replies
                print("Number of replies: " + str(len(c.Replies)))
                for r in range(1, len(c.Replies) + 1):
                    print("Reply by: " + c.Replies(r).Author)
                    print("Reply text: " + c.Replies(r).Range.Text)  # text of the reply
    doc.Close()
```
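The snippet defines `get_comments` but never calls it; a minimal usage sketch, assuming the same `word` and `filepath` as above:
```
try:
    get_comments(filepath)
finally:
    word.Quit()  # release the Word COM instance when done
```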
If you also want the text the comments relate to:
```
def get_document_comments(docxFileName):
    comments_dict = {}
    comments_of_dict = {}
    docx_zip = zipfile.ZipFile(docxFileName)
    comments_xml = docx_zip.read('word/comments.xml')
    comments_of_xml = docx_zip.read('word/document.xml')
    et_comments = etree.XML(comments_xml)
    et_comments_of = etree.XML(comments_of_xml)
    comments = et_comments.xpath('//w:comment', namespaces=ooXMLns)
    comments_of = et_comments_of.xpath('//w:commentRangeStart', namespaces=ooXMLns)
    for c in comments:
        comment = c.xpath('string(.)', namespaces=ooXMLns)
        comment_id = c.xpath('@w:id', namespaces=ooXMLns)[0]
        comments_dict[comment_id] = comment
    for c in comments_of:
        comments_of_id = c.xpath('@w:id', namespaces=ooXMLns)[0]
        # runs between the matching commentRangeStart and commentRangeEnd
        parts = et_comments_of.xpath(
            "//w:r[preceding-sibling::w:commentRangeStart[@w:id=" + comments_of_id + "] and following-sibling::w:commentRangeEnd[@w:id=" + comments_of_id + "]]",
            namespaces=ooXMLns)
        comment_of = ''
        for part in parts:
            comment_of += part.xpath('string(.)', namespaces=ooXMLns)
        comments_of_dict[comments_of_id] = comment_of
    return comments_dict, comments_of_dict
```
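A brief usage sketch, pairing each comment with the text it covers; `test.docx` is just the example filename used earlier:
```
comments, referenced_text = get_document_comments('test.docx')
for comment_id, comment in comments.items():
    print('Comment: {}'.format(comment))
    # .get() guards against a comment without a matching commentRangeStart
    print('Refers to: {}'.format(referenced_text.get(comment_id, '')))
```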
I have never used Python before, but I am trying to use it, due to some restrictions in another (proprietary) language, to retrieve some values from a web service and return them in JSON format to a home automation processor. The relevant section of code below returns:
```
[u'Name:London', u'Mode:Auto', u'Name:Ling', u'Mode:Away']
["Name:London", "Mode:Auto", "Name:Ling", "Mode:Away"]
```
…which isn't valid JSON. I am sure this is a really dumb question, but I have searched here and haven't found an answer that helps me. Apologies if I missed something obvious, but can anyone tell me what I need to do to ensure the json.dumps command outputs data in the correct format?
```
CresData = []
for i in range(0, j):
    r = requests.get('http://xxxxxx.com/WebAPI/emea/api/v1/location/installationInfo?userId=%s&includeTemperatureControlSystems=True' % UserID, headers=headers)
    CresData.append("Name:" + r.json()[i]['locationInfo']['name'])
    r = requests.get('http://xxxxxx.com/WebAPI/emea/api/v1/location/%s/status?includeTemperatureControlSystems=True' % r.json()[i]['locationInfo']['locationId'], headers=headers)
    CresData.append('Mode:' + r.json()['gateways'][0]['temperatureControlSystems'][0]['systemModeStatus']['mode'])

Cres_json = json.dumps(CresData)
print(CresData)
print(Cres_json)
```
I wasn't able to test the code, as the link you mentioned is not a live link, but your solution should be something like this.
It looks like you are looking for JSON with key-value pairs. You need to pass a dict object into json.dumps(), which will return a string in the required JSON format:
```
CresData = dict()
key_str = "Location"
idx = 0
for i in range(0, j):
    data = dict()
    r = requests.get('http://xxxxxx.com/WebAPI/emea/api/v1/location/installationInfo?userId=%s&includeTemperatureControlSystems=True' % UserID, headers=headers)
    data["Name"] = r.json()[i]['locationInfo']['name']
    r = requests.get('http://xxxxxx.com/WebAPI/emea/api/v1/location/%s/status?includeTemperatureControlSystems=True' % r.json()[i]['locationInfo']['locationId'], headers=headers)
    data["Mode"] = r.json()['gateways'][0]['temperatureControlSystems'][0]['systemModeStatus']['mode']
    CresData[key_str + str(idx)] = data
    idx += 1

Cres_json = json.dumps(CresData)
print(CresData)
print(Cres_json)
```
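For illustration, with the sample values from the original output, the resulting JSON would look roughly like this (a sketch with hard-coded stand-in data, no web service involved):
```
import json

# hypothetical stand-ins for the values fetched from the web service
CresData = {
    "Location0": {"Name": "London", "Mode": "Auto"},
    "Location1": {"Name": "Ling", "Mode": "Away"},
}
print(json.dumps(CresData))
# {"Location0": {"Name": "London", "Mode": "Auto"}, "Location1": {"Name": "Ling", "Mode": "Away"}}
```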