Is it possible to read the actual URL from a .url file, as described here, with Python?
In the following image, for instance, I can easily print "mapbox - OSM Bright - A Carto template for OpenStreetMap data", but I would also like to extract the actual URL: https://github.com/mapbox/osm-bright
I need to extract hundreds of URLs and I am currently using a batch file to get the job done, but I would much rather do it with Python. I am using Python 2.7.11 but 3 would be a possibility as well.
I assume you mean a URL file as described here. Those files have a syntax similar to .ini files and can probably be parsed like this:
import ConfigParser

config = ConfigParser.RawConfigParser()
config.read('example.url')
try:
    url = config.get('InternetShortcut', 'URL')  # classic URL format
except ConfigParser.NoOptionError:
    url = config.get('DEFAULT', 'BASEURL')  # extended URL format
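Since you mention Python 3 as an option and need to process hundreds of files, here is a minimal sketch of the same idea using Python 3's configparser, assuming all the .url files sit in one directory (the folder name is just a placeholder):

import configparser
import glob

urls = []
for path in glob.glob('shortcuts/*.url'):  # hypothetical folder of .url files
    config = configparser.RawConfigParser()
    config.read(path)
    try:
        urls.append(config.get('InternetShortcut', 'URL'))
    except (configparser.NoSectionError, configparser.NoOptionError):
        print('No URL found in', path)

print(urls)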
I'm a beginner in Python but I know intermediate JavaScript. I have a project to complete that is basically a scraper; I want to automate some work for me.
1) I have an Excel sheet with more than 1000 rows, and it also has URLs. I want to write Python code that visits every URL from that Excel sheet and searches the first page for some predefined search texts (a list of texts).
2) If my code finds any of the texts on that web page, it should return true, else false.
I'm looking for any idea or logic to do this kind of process. Any help will make my headache less 😅
It is very heavy work, which is not a good idea to do in JavaScript; that's why I want to do it in Python.
An easy way to do this would be to get the requests module. Then learn how to use the csv module, which can read CSV files (export your Excel sheet to CSV first). Then here is roughly what you want to do:
import csv
import requests

SEARCH_TEXTS = ["first keyword", "second keyword"]  # your predefined search texts

def get_urls_from_csv(filename):
    # read the URL column from a CSV export of the Excel sheet
    urls = []
    with open(filename, newline='') as f:
        for row in csv.reader(f):
            urls.append(row[0])  # assumes the URL is in the first column
    return urls

for url in get_urls_from_csv('links.csv'):
    # some sites block requests that come without a User-Agent header
    r = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
    if any(text in r.text for text in SEARCH_TEXTS):
        print("Found")
    else:
        print("Not here")
I suggest the following (a minimal sketch combining the three follows this list):
Read about the csv library - to read the content of a CSV export of your Excel file.
Read about the requests library - to get the page's content from its URL.
Read about regular expressions in the re library.
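For example, a minimal sketch assuming the exported CSV has a header row with a column named 'url' and that the search terms can be expressed as a regular expression (the filename, column name, and pattern are placeholders):

import csv
import re
import requests

pattern = re.compile(r'foo|bar|baz', re.IGNORECASE)  # placeholder search terms

with open('sheet.csv', newline='') as f:      # CSV export of the Excel sheet
    for row in csv.DictReader(f):
        page = requests.get(row['url']).text  # fetch the first page
        print(row['url'], bool(pattern.search(page)))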
I am writing code which creates several URLs, which are stored in a list.
The next step would be to open each URL, download the data (which is only text, formatted in XML or JSON), and save it.
My code works fine so far, thanks to the online community here. It gets stuck at the point of opening the URLs and downloading the data. I want urllib.request to loop through the list of my created URLs, call each URL separately, open it, display it, and move on to the next. But it only does the loop to create the URLs, and then nothing. No feedback, nothing.
import urllib.request

.... some calculations for llong and llat ....

#create the URLs and store in list
urls = []
for lat,long,lat1,long1 in (zip(llat, llong,llat[1:],llong[1:])):
    for pages in range (1,17):
        print ("https://api.flickr.com/services/rest/?method=flickr.photos.search&format=json&api_key=5.b&nojsoncallback=1&page={}&per_page=250&bbox={},{},{},{}&accuracy=1&has_geo=1&extras=geo,tags,views,description".format(pages,long,lat,long1,lat1))
print (urls)

#accessing the website
data = []
for amounts in urls:
    response = urllib.request.urlopen(urls)
    flickrapi = data.read()
    data.append(+flickrapi)
    data.close()
print (data)
What am I doing wrong?
The next step would be downloading the data and saving it to a file or somewhere else for further processing.
Since I will receive heaps of data, like a lot lot lot, I am not sure what would be the best way to store it to process it with R (or maybe Python? - I need to do some statistical work on it). Any suggestions?
You're not appending your generated urls to the url list, you are printing them:
print ("https://api.flickr.com/services/rest/?method=flickr.photos.search&format=json&api_key=5.b&nojsoncallback=1&page={}&per_page=250&bbox={},{},{},{}&accuracy=1&has_geo=1&extras=geo,tags,views,description".format(pages,long,lat,long1,lat1))
Should be:
urls.append("https://api.flickr.com/services/rest/?method=flickr.photos.search&format=json&api_key=5.b&nojsoncallback=1&page={}&per_page=250&bbox={},{},{},{}&accuracy=1&has_geo=1&extras=geo,tags,views,description".format(pages,long,lat,long1,lat1))
Then you can iterate over the urls as planned.
But then you'll run into an error on the following line:
response = urllib.request.urlopen(urls)
Here you are feeding the whole list of urls into urlopen, where you should be passing in a single url from urls, which you have named amounts, like so:
response = urllib.request.urlopen(amounts)
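Note that the lines after urlopen also reference the wrong objects: flickrapi = data.read() reads from the list instead of the response, and data.close() tries to close the list. A minimal corrected sketch of the download loop, keeping your variable names, might look like this:

data = []
for amounts in urls:
    response = urllib.request.urlopen(amounts)  # open one URL at a time
    flickrapi = response.read()                 # read the response body (bytes)
    data.append(flickrapi)                      # collect the raw JSON/XML
    response.close()
print(data)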
I basically have to make a program that takes a user-input web address, parses the HTML to find links, and then stores all the links in another HTML file in a certain format. I only have access to built-in Python modules (Python 3). I'm able to get the HTML code from the link using urllib.request and put it into a string. How would I actually go about extracting links from this string and putting them into a string array? Also, would it be possible to identify link types (such as an image link or an mp3 link) so I can put them into different arrays (then I could categorize them when I'm creating the output file)?
You can use the re module to parse the HTML text for links. In particular, the findall method returns every match.
As far as sorting by file type goes, that depends on whether the URL actually contains the extension (e.g. .mp3, .js, .jpeg, etc.).
You could do a simple for loop like so:
import re

html = getHTMLText()  # however you already obtain the HTML string
mp3s = []
other = []
# example pattern: grab the value of every href attribute
for match in re.findall(r'href=["\'](.*?)["\']', html):
    if match.endswith('.mp3'):
        mp3s.append(match)
    else:
        other.append(match)
Try the html.parser library or the re library; they will help you do that.
I think you can use a regex like this one:
r'http[s]?://[^\s<>"]+|www\.[^\s<>"]+'
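If you prefer the standard library's HTML parser over a regex, a minimal sketch (assuming html_string holds the page source you already downloaded with urllib.request) could look like this:

from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # collect the href of every anchor tag
        if tag == 'a':
            for name, value in attrs:
                if name == 'href' and value:
                    self.links.append(value)

parser = LinkCollector()
parser.feed(html_string)  # html_string: the page source you already downloaded
print(parser.links)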
This is a general question about whether it is possible, and if so how, to automate the download of a scribd.com search result document.
Scenario:
I have a Scribd account and find a document I want. Normally, I then have to click the download button to start the download.
Any ideas for automating this? I'm using the scribd api and python to automatically extract document IDs based on automated queries, but once I get the doc_id's I have to physically go to each doc page and click the download button to get the physical txt/pdf file. I want to automate this step as well.
Any Ideas?
Looking at the python-scribd documentation or the scribd API reference, any object that can give you a document ID or website URL can also give you a download URL. Or, if you already have a document ID, you can just call get to get an object that can give you a download URL.
Most likely, you've got a Document object, which has this method:
get_download_url(self, doc_type='original')
Returns a link that can be used to download a static version of the document.
So, wherever you're calling get_scribd_url, just call get_download_url.
And then, to download the result, Python has urllib2 (2.x) or urllib.request (3.x) built into the standard library, or you can use requests or any other third-party library instead.
Putting it all together as an example:
import os
import urllib.parse
import urllib.request

# do all the stuff to set up the api_key, get a `User` object, etc.

def is_document_i_want(document):
    return document.author == "Me"

urls = [document.get_download_url() for document in user.all()
        if is_document_i_want(document)]

for url in urls:
    path = urllib.parse.urlparse(url).path
    name = os.path.basename(path)
    u = urllib.request.urlopen(url)
    with open(name, 'wb') as f:  # the response body is bytes, so write in binary mode
        f.write(u.read())
    print('Wrote {} as {}'.format(url, name))
Presumably you're going to want to use something like user.find instead of user.all. Or, if you've already written the code that gets the document IDs and don't want to change it, you can use user.get with each one.
And if you want to post-filter the results, you probably want to use attributes beyond the basic ones (or you would have just passed them to the query), which means you need to call load on each document before you can access them (so add document.load() at the top of the is_document_i_want function). But really, there's nothing complicated here.
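For example, if you already have a list of document IDs from your existing query code, a sketch of the user.get variant (the IDs below are placeholders for whatever your query code produces) could look like this:

doc_ids = [12345, 67890]  # placeholder: the IDs your existing query code produces

urls = []
for doc_id in doc_ids:
    document = user.get(doc_id)  # fetch a Document object by ID
    document.load()              # load extra attributes before filtering
    if is_document_i_want(document):
        urls.append(document.get_download_url())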
I have the following code (doop.py), which strips a .html file of all the 'nonsense' HTML markup, outputting only the 'human-readable' text; e.g. it will take a file containing the following:
<html>
<body>
<a href="http://www.w3schools.com">
This is a link</a>
</body>
</html>
and give
$ ./doop.py
File name: htmlexample.html
This is a link
The next thing I need to do is add a function that, if any of the html arguments within the file represent a URL (a web address), the program will read the content of the designated webpage instead of a disk file. (For present purposes, it is sufficient for doop.py to recognize an argument beginning with http:// (in any mixture of letter-cases) as a URL.)
I'm not sure where to start with this - I'm sure it would involve telling python to open a URL, but how do I do that?
Thanks,
A
Apart from urllib2, which others have already mentioned, you can take a look at the Requests module by Kenneth Reitz. It has a more concise and expressive syntax than urllib2.
import requests
r = requests.get('https://api.github.com', auth=('user', 'pass'))
r.text
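To answer the "where to start" part: here is a minimal sketch of the dispatch the question describes, recognizing an argument that starts with http:// (in any mixture of letter-cases) and reading either the web page or a disk file (using requests here; urllib2 would work the same way, and the example URL is just a placeholder):

import requests

def read_source(arg):
    # treat anything starting with http:// (any letter case) as a URL
    if arg.lower().startswith('http://'):
        return requests.get(arg).text
    with open(arg) as f:
        return f.read()

page = read_source('http://www.w3schools.com')  # or a local .html path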
As with most things pythonic: there is a library for that.
Here you need the urllib2 library.
This allows you to open a URL like a file and read its content like a file.
The code you would need would look something like this:
import urllib2

urlString = "http://www.my.url"
try:
    f = urllib2.urlopen(urlString)  # open the url
    pageString = f.read()           # read the content
    f.close()                       # close the url
    readableText = getReadableText(pageString)  # your existing HTML-stripping function
    # continue using pageString as you wish
except IOError:
    print("Bad URL")
Update:
(I don't have a python interpreter to hand, so can't test that this code will work or not, but it should!!)
Opening the URL is the easy part, but first you need to extract the URLs from your html file. This is done using regular expressions (regexes), and unsurprisingly, Python has a library for that (re). I recommend that you read up on regexes; they are basically a pattern against which you can match text.
So what you need to do is write a regex that matches URLs:
(http|ftp|https)://[\w\-_]+(\.[\w\-_]+)+([\w\-\.,#?^=%&:/~\+#]*[\w\-\#?^=%&/~\+#])?
If you don't want to follow urls to ftp resources, then remove "ftp|" from the beginning of the pattern. Now you can scan your input file for all character sequences that match this pattern:
import re

with open('input.html') as f:  # open your input file
    input_file_str = f.read()  # and read its contents

# compile the pattern matcher (a raw string avoids double escaping)
pattern = re.compile(r"(http|ftp|https)://[\w\-_]+(\.[\w\-_]+)+([\w\-\.,#?^=%&:/~\+#]*[\w\-\#?^=%&/~\+#])?")

for match in pattern.finditer(input_file_str):  # iterate over every match
    urlString = match.group(0)                  # the full string that matched the pattern
    # use the code above to load the url using the matched string!
That should do it
You can use third-party libraries like BeautifulSoup or the standard HTMLParser. Here is a previous Stack Overflow question: html parser python
Other Links
http://unethicalblogger.com/2008/05/03/parsing-html-with-python.html
Standard Library
http://docs.python.org/library/htmlparser.html
Performance comparison
http://blog.ianbicking.org/2008/03/30/python-html-parser-performance/
While parsing, you need to look for attribute values that start with http.
Rather than write your own HTML parser/scraper, I would personally recommend Beautiful Soup, which you can use to load up your HTML, get the elements you want out of it, find all the links, and then use urllib to fetch the new links for you to parse and process further.
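A minimal sketch of that approach (assuming Beautiful Soup 4 is installed as bs4, and html_string is the page source you already downloaded with urllib):

from bs4 import BeautifulSoup

soup = BeautifulSoup(html_string, 'html.parser')

links = []
images = []
for tag in soup.find_all('a', href=True):
    links.append(tag['href'])   # every anchor's href
for tag in soup.find_all('img', src=True):
    images.append(tag['src'])   # every image's src, so you can categorize separately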