This is a general question about whether it is possible, and if so how, to automate the download of a scribd.com search result document.
Scenario:
I have a Scribd account and find a document I want. Normally I then have to click the download button to start the download.
Any ideas for automating this? I'm using the Scribd API and Python to automatically extract document IDs based on automated queries, but once I have the doc_ids I still have to visit each document page and click the download button to get the actual txt/pdf file. I want to automate this step as well.
Any Ideas?
Looking at the python-scribd documentation or the Scribd API reference, any object that can give you a document ID or website URL can also give you a download URL. Or, if you already have a document ID, you can just call user.get to get an object that can give you a download URL.
Most likely, you've got a Document object, which has this method:
get_download_url(self, doc_type='original')
Returns a link that can be used to download a static version of the document.
So, wherever you're calling get_scribd_url, just call get_download_url.
And then, to download the result, Python has urllib2 (2.x) or urllib.request (3.x) built into the standard library, or you can use requests or any other third-party library instead.
Putting it all together as an example:
# do all the stuff to set up the api_key, get a `User` object, etc.
import os
import urllib.parse
import urllib.request

def is_document_i_want(document):
    return document.author == "Me"

urls = [document.get_download_url() for document in user.all()
        if is_document_i_want(document)]

for url in urls:
    path = urllib.parse.urlparse(url).path
    name = os.path.basename(path)
    u = urllib.request.urlopen(url)
    with open(name, 'wb') as f:  # write bytes; urlopen().read() returns bytes
        f.write(u.read())
    print('Wrote {} as {}'.format(url, name))
Presumably you're going to want to use something like user.find instead of user.all. Or, if you've already written the code that gets the document IDs and don't want to change it, you can use user.get with each one.
And if you want to post-filter the results, you probably want to filter on attributes beyond the basic ones (otherwise you could have just passed them to the query), which means you need to call load on each document before you can access them; add document.load() at the top of the is_document_i_want function. But really, there's nothing complicated here.
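For example, if you've already collected the document IDs, a minimal sketch reusing the user object and is_document_i_want from above (doc_ids is assumed to be your existing list of IDs):
# doc_ids is assumed to be the list of IDs your query code already produces.
for doc_id in doc_ids:
    document = user.get(doc_id)  # fetch a Document object by ID
    document.load()              # load the full attribute set before filtering
    if is_document_i_want(document):
        url = document.get_download_url()  # doc_type='original' by default; 'pdf'/'txt' may also be accepted
        # ...then download `url` exactly as in the loop above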
Related
Here's my problem: I am trying to download Excel xlsx files from a WordPress site. If I type the URL I assigned to a variable in my code (called stock) directly into the browser, Firefox downloads the file perfectly.
I'm trying to do this with Python, so I've made a script using requests and then pandas for processing and manipulation.
However, even though the file seems to download, reading it back returns an error. I tried using both open and with open as suggested on similar problems I've found here, but in my case it always fails with 'ValueError: Seek of closed file'. I attempted several variations of the code with no result; the outcome was always the same error.
Here is my code
import pandas as pd
import requests, os
import http.client
http.client.HTTPConnection._http_vsn = 10
http.client.HTTPConnection._http_vsn_str = 'HTTP/1.0'
# Url of the same link I used to manually fetch the file
stock = 'https://filmar.com/wp-content/uploads/2021/05/Apple-Lot-5-14-21.xlsx'
resp = requests.get(stock) # passed the GET method to the http request with the URL
print("Downloading...") # This works
# When I try to retrieve the file it fails
with open('Apple-Lot-5-14-21.xlsx', 'wb') as output:
    output.write(resp.content)
print('The file has been downloaded') # this is printed
# The error happens when I try to assign the file to the pd.read_excel method in Pandas
apple = pd.read_excel(output)
Addendum
After inspecting the resp object as suggested by @MattDMo, there apparently is a permission problem or something similar: the response object (requests.models.Response) returned a 404 Not Found, so either some protection or some redirection takes place on the server, and requests retrieves an empty file.
You can't pass output to pd.read_excel(), because when the with context manager exits, the file object (output) is closed. One option here, if you don't really need to save the Excel file for anything else, is to pass resp.content directly to read_excel(). Alternatively, if you want the Excel file for backup or other purposes, create a filename variable like so:
xl_file = 'Apple-Lot-5-14-21.xlsx'
then use that variable both when you're calling with open(... and when you're calling read_excel(), as that function can take both file names and file-like objects.
As an extra note, I'm not sure why you're using http.client, as requests doesn't look at any of those values to my knowledge.
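A minimal sketch of both options, using the same stock URL as above. Wrapping the bytes in io.BytesIO is just one way to hand them to read_excel() as a file-like object, and raise_for_status() surfaces the 404 mentioned in the addendum instead of silently writing an error page to disk:
import io

import pandas as pd
import requests

stock = 'https://filmar.com/wp-content/uploads/2021/05/Apple-Lot-5-14-21.xlsx'
resp = requests.get(stock)
resp.raise_for_status()  # fail loudly on a 404 instead of writing an error page

# Option 1: no file on disk, read the spreadsheet straight from memory
apple = pd.read_excel(io.BytesIO(resp.content))

# Option 2: keep a copy on disk and reuse the same name for writing and reading
xl_file = 'Apple-Lot-5-14-21.xlsx'
with open(xl_file, 'wb') as output:
    output.write(resp.content)
apple = pd.read_excel(xl_file)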
I'm a complete beginner in Python, but I know intermediate JavaScript. I have one project to complete; it is like a scraper, but I want to automate some work for me.
1) I have an Excel sheet with more than 1000 rows of data, and it also has URLs. I want Python code that visits every URL from that Excel sheet and searches the first page for some predefined search texts (a list of texts).
2) If my code finds any of the texts on that web page, it should return true, else false.
I want any idea or logic for doing this kind of process. Any help will make my head hurt less 😅
It is very heavy work, which is not a good idea to do in JavaScript; that's why I want to do it in Python.
An easy way to do this would be to get the requests module. Then learn how to use the csv module, which can read CSV files (export your Excel sheet to CSV first). Then here is roughly what you want to do:
import csv
import requests

URLS = []

def GetUrlFromCSVFile():
    global URLS
    # Figure out how to get the links from the csv file, then append them to the URLS list

GetUrlFromCSVFile()

for url in URLS:
    r = requests.get(url)  # you should probably also pass some browser-like headers here
    if whatever_keyword_u_looking_for in r.text:
        print("Found")
    else:
        print("Not here")
I suggest the following (a minimal sketch combining these steps follows the list):
Read about the csv library - to read the content of your spreadsheet (export the Excel sheet to CSV first, since csv cannot read xlsx directly).
Read about the requests library - to get a page's content from its URL.
Read about regular expressions in the re library.
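A minimal sketch combining those three suggestions, under some assumptions: the spreadsheet has been exported to a CSV file named urls.csv with the URL in the first column, and search_texts holds your predefined texts (both names are illustrative):
import csv
import re

import requests

search_texts = ["first phrase", "second phrase"]  # example values for your predefined texts
pattern = re.compile("|".join(re.escape(t) for t in search_texts), re.IGNORECASE)

results = {}
with open('urls.csv', newline='') as f:
    for row in csv.reader(f):
        url = row[0]  # assumes the URL is in the first column
        page = requests.get(url).text
        results[url] = bool(pattern.search(page))  # True if any of the texts was found

print(results)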
Is it possible to read the actual URL from a .url file, as described here, with Python?
In the following image, for instance, I can easily print mapbox - OSM Bright - A Carto template for OpenStreetMap data, but I would also like to extract the actual URL: https://github.com/mapbox/osm-bright
I need to extract hundreds of URLs and I am currently using a batch file to get the job done, but I would much rather do it with Python. I am using Python 2.7.11 but 3 would be a possibility as well.
I assume you mean a URL file as described here. Those files have a syntax similar to .ini-files and can probably be parsed like this:
import ConfigParser
config = ConfigParser.RawConfigParser()
config.read('example.url')
try:
    url = config.get('InternetShortcut', 'URL') # Classic URL Format
except ConfigParser.NoOptionError:
    url = config.get('DEFAULT', 'BASEURL') # Extended URL Format
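Since Python 3 is also an option for you, note that the module was renamed there. A minimal sketch of the same idea (example.url is just a placeholder filename):
# Python 3: the module is called configparser (lowercase)
import configparser

config = configparser.RawConfigParser()
config.read('example.url')
try:
    url = config.get('InternetShortcut', 'URL')   # classic .url format
except configparser.NoOptionError:
    url = config.get('DEFAULT', 'BASEURL')        # extended .url format
print(url)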
I basically have to make a program that takes a user-input web address, parses the HTML to find links, and then stores all the links in another HTML file in a certain format. I only have access to built-in Python modules (Python 3). I'm able to get the HTML code from the link using urllib.request and put it into a string. How would I actually go about extracting links from this string and putting them into a string array? Also, would it be possible to identify links (such as an image link or mp3 link) so I can put them into different arrays (then I could categorize them when creating the output file)?
You can use the re module to parse the HTML text for links. Particularly the findall method can return every match.
As far as sorting by file type goes, that depends on whether the URL actually contains the extension (e.g. .mp3, .js, .jpeg, etc.).
You could do a simple for loop like such:
import re
html = getHTMLText()
mp3s = []
other = []
for match in re.findall('<reexpression>', html):
    if match.endswith('.mp3'):
        mp3s.append(match)
    else:
        other.append(match)
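As a hedged guess at what could go in place of <reexpression>, here is a pattern that captures href values from anchor tags (an illustration only; regexes are brittle against arbitrary HTML, and both getHTMLText and the example.com URL are placeholders):
import re
import urllib.request

# Placeholder for however you already fetch the page into a string.
def getHTMLText():
    return urllib.request.urlopen('http://example.com/').read().decode('utf-8', 'replace')

html = getHTMLText()
# Captures the quoted value of href="..." in anchor tags; illustration only.
links = re.findall(r'<a[^>]+href="([^"]+)"', html, re.IGNORECASE)
mp3s = [u for u in links if u.lower().endswith('.mp3')]
other = [u for u in links if not u.lower().endswith('.mp3')]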
Try the html.parser module or the re library; they will help you do that.
I think you can use a regex like this to do it:
r'http[s]?://[^\s<>"]+|www\.[^\s<>"]+'
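Since regexes are fragile on real-world HTML and only built-in modules are allowed, here is a minimal sketch using the standard-library html.parser instead (the URL and the mp3 categorization are just examples):
from html.parser import HTMLParser
import urllib.request

class LinkCollector(HTMLParser):
    """Collects href values from <a> tags and src values from <img> tags."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.images = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == 'a' and 'href' in attrs:
            self.links.append(attrs['href'])
        elif tag == 'img' and 'src' in attrs:
            self.images.append(attrs['src'])

# Example usage; the URL is a placeholder.
html = urllib.request.urlopen('http://example.com/').read().decode('utf-8', 'replace')
parser = LinkCollector()
parser.feed(html)
mp3s = [u for u in parser.links if u.lower().endswith('.mp3')]
print(parser.links, parser.images, mp3s)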
Hello, I am trying to make a Python function to save a list of URLs in a .txt file.
Example: visit http://forum.domain.com/ and save every URL containing viewtopic.php?t= in a .txt file, like:
http://forum.domain.com/viewtopic.php?t=1333
http://forum.domain.com/viewtopic.php?t=2333
I use this code, but it does not save anything.
I am very new to Python; can someone help me create this?
web_obj = opener.open('http://forum.domain.com/')
data = web_obj.read()
fl_url_list = open('urllist.txt', 'r')
url_arr = fl_url_list.readlines()
fl_url_list.close()
This is far from trivial and can have quite a few corner cases (I suppose the page you're referring to is a web page).
To give you a few pointers, you need to:
download the web page: you're already doing it (in data)
extract the URLs: this is the hard part. Most probably you'll want to use an HTML parser, extract the <a> tags, fetch the href attribute and put that into a list, then filter that list to keep only the URLs formatted the way you like (say, containing viewtopic). Let's say you got them into urlList
then open a file for writing text (thus 'wt', not 'r')
write the content: f.write('\n'.join(urlList))
close the file
I advise you to try to follow these steps and ask specific questions when you're stuck on a particular issue. A minimal sketch of these steps follows.
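A minimal sketch of those steps, assuming Python 3 and only the standard library (the forum URL and the viewtopic filter come from the question; HTMLParser stands in for "an html parser"):
from html.parser import HTMLParser
import urllib.request

class TopicLinkParser(HTMLParser):
    """Collects href values of <a> tags that contain 'viewtopic'."""
    def __init__(self):
        super().__init__()
        self.urlList = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href' and value and 'viewtopic' in value:
                    self.urlList.append(value)

data = urllib.request.urlopen('http://forum.domain.com/').read().decode('utf-8', 'replace')
parser = TopicLinkParser()
parser.feed(data)

with open('urllist.txt', 'wt') as f:   # open for writing text, not 'r'
    f.write('\n'.join(parser.urlList))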