My Python script:
import wget

if windowsbit == x86:
    url = 'http://test.com/test_windows_2_56_30-STEST.exe'
    filename = wget.download(url)
else:
    url = 'http://test.com/test_windows-x64_2_56_30-STEST.exe'
    filename = wget.download(url)
In the above Python script, I am using the wget module to download a file from a URL, based on whether Windows is 32-bit or 64-bit. It's working as expected.
I want to use a regular expression to do the following:
if windowsbit == x86, it should download the file that starts with test_windows and ends with STEST.exe.
else it should download the file that starts with test_windows-x64 and ends with STEST.exe.
I am new to Python and have no idea how to do this. Could anyone guide me on this?
This doesn't look possible. The regular expression that would match what you're trying to do is something like:
import re

urlre = re.compile(r"""
    http://test\.com/test_windows    # base URL
    (?P<bit>-x64)?                   # captures -x64 if present
    _(?P<version_major>\d+)          # captures major version
    _(?P<version_minor>\d+)          # captures minor version
    _(?P<version_revision>\d+)       # captures revision version
    -STEST\.exe                      # ending filename""", re.X)
However you can't just throw that in wget. You can't use wildcards in requests -- the webserver would have to know how to process them and it doesn't. A better method might be:
base_url = "http://test.com/test_windows"
if windowsbit == x64:
    base_url += "-x64"

version = "2_56_30"
filename = "STEST.exe"

final_url = "{base}_{version}-{filename}".format(
    base=base_url, version=version, filename=filename)
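You can then hand the assembled URL to the wget module from the question (a small usage sketch):

import wget

# final_url is e.g. 'http://test.com/test_windows-x64_2_56_30-STEST.exe' on 64-bit
filename = wget.download(final_url)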
Maybe try this without a regular expression:
import wget

text = "http://test.com/test_windows"
if windowsbit == x86:
    url = '{}_2_56_30-STEST.exe'.format(text)
else:
    url = '{}-x64_2_56_30-STEST.exe'.format(text)
filename = wget.download(url)
With the version in a variable:
import wget

text = "http://test.com/test_windows"
version = '2_56_30'
if windowsbit == x86:
    url = '{}_{}-STEST.exe'.format(text, version)
else:
    url = '{}-x64_{}-STEST.exe'.format(text, version)
filename = wget.download(url)
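If windowsbit is not already defined elsewhere, one way to derive it is with the standard platform module (a sketch; the 'x86'/'x64' string values are illustrative and should match whatever the checks above compare against):

import platform

# platform.architecture() returns e.g. ('32bit', 'WindowsPE') or ('64bit', 'WindowsPE')
windowsbit = 'x86' if platform.architecture()[0] == '32bit' else 'x64'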
I'm still a newbie in Python, but I'm trying to make my first little program.
My intention is to print only the link ending with .m3u8 (if available) instead of printing the whole web page.
The code I'm currently using:
import requests
channel1 = requests.get('https://website.tv/user/111111')
print(channel1.content)
print('\n')
channel2 = requests.get('https://website.tv/user/222222')
print(channel2.content)
print('\n')
input('Press Enter to Exit...')
The link I'm looking for always has 47 characters in total, and it always follows the same pattern; only the stream id (represented as X) changes:
https://website.tv/live/streamidXXXXXXXXX.m3u8
Can anyone help me?
You can use regex for this problem.
Explanation:
In the expression, .*? lazily matches everything leading up to the mandatory part, and \.m3u8 must be present for a match (the backslash escapes the dot so it matches a literal period); \b anchors the match at a word boundary.
For example:
import re

link = "https://website.tv/live/streamidXXXXXXXXX.m3u8"
p = re.findall(r'.*?\.m3u8\b', link)
print(p)
OUTPUT:
['https://website.tv/live/streamidXXXXXXXXX.m3u8']
There are a few ways to go about this. One that springs to mind, which others have touched upon, is using regex with findall, which returns a list of matched URLs from our url_list.
Another option could be BeautifulSoup, but without more information regarding the HTML structure it may not be the best tool here.
Using Regex
from re import findall
from requests import get


def check_link(response):
    # response.text is the decoded page body; the dot before m3u8 is escaped
    result = findall(r'.*?\.m3u8\b', response.text)
    return result


def main(url):
    response = get(url)
    if response.ok:
        link_found = check_link(response)
        if link_found:
            print('link {} found at {}'.format(link_found, url))


if __name__ == '__main__':
    url_list = [
        'http://www.test_1.com',
        'http://www.test_2.com',
        'http://www.test_3.com',
    ]
    for url in url_list:
        main(url)
    print("All finished")
If I understand your question correctly I think you want to use Python's .split() string method. If your goal is to take a string like "https://website.tv/live/streamidXXXXXXXXX.m3u8" and extract just "streamidXXXXXXXXX.m3u8" then you could do that with the following code:
web_address = "https://website.tv/live/streamidXXXXXXXXX.m3u8"
specific_file = web_address.split('/')[-1]
print(specific_file)
Calling .split('/') on the string like that returns a list of strings, where each item in the list is a different part of the original string (the first part being "https:", etc.). The last of these (index [-1]) will be the filename you want.
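For instance, the intermediate list looks like this:

parts = web_address.split('/')
# ['https:', '', 'website.tv', 'live', 'streamidXXXXXXXXX.m3u8']
specific_file = parts[-1]  # 'streamidXXXXXXXXX.m3u8'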
This will extract all URLs from the webpage and filter only those which contain your required keyword ".m3u8":
import requests
import re


def get_desired_url(data):
    urls = []
    for url in re.findall(r'(https?://\S+)', data):
        if ".m3u8" in url:
            urls.append(url)
    return urls


channel1 = requests.get('https://website.tv/user/111111')
urls = get_desired_url(channel1.text)
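To cover the second channel from the question as well (a small usage sketch):

channel2 = requests.get('https://website.tv/user/222222')
urls += get_desired_url(channel2.text)
print(urls)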
Try this; I think it will be robust:
import re

links = [re.sub(r'^<[ ]*a[ ]+.*href[ ]*=[ ]*', '', re.sub(r'.*>$', '', link))
         for link in re.findall(r'<[ ]*a[ ]+.*href[ ]*=[ ]*"http[s]*://.+\.m3u8".*>', channel2.text)]
I'm currently using the following code to download a gz file. The URL of the gz file is constructed from pieces of information provided by the user:
generalUrl = theWebsiteURL + "/" + packageName
So generalUrl can contain something like: http://www.example.com/blah-0.1.0.tar.gz
res = requests.get(generalUrl)
res.raise_for_status()
The problem I have here is: I have a list of websites in the variable called theWebsiteURL. I need to check all of these websites to see which ones have the package in packageName available for download. I would prefer not to download the package during the confirmation.
Once the code goes through the list of websites to discover which ones have the package, I then want to pick the first website from the list of websites that were found to have the package and automatically download the package from it.
something like this:
#!/usr/bin/env python2.7

listOfWebsites = [ website1, website2, website3, website4, and so on ]

goodWebsites = []
for eachWebsite in listOfWebsites:
    genURL = eachWebsite + "/" + packageName
    res = requests.get(genURL)
    res.raise_for_status()
    if raise_for_status == "200"
        goodWebsites.append(genURL)
This is where my imagination stops. I need assistance completing this. Not even sure I'm going about it the right way.
You can try to send a HEAD request first in order to check that the URL is valid, and only then download the package via a GET request.
#!/usr/bin/env python2.7

listOfWebsites = [ website1, website2, website3, website4, and so on ]

goodWebsites = []
for eachWebsite in listOfWebsites:
    genURL = eachWebsite + "/" + packageName
    res = requests.head(genURL)
    if res.ok:
        goodWebsites.append(genURL)
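To then grab the package from the first site that passed the check (a minimal sketch; it streams the response into a local file named after packageName):

if goodWebsites:
    res = requests.get(goodWebsites[0], stream=True)
    res.raise_for_status()
    with open(packageName, 'wb') as f:
        for chunk in res.iter_content(chunk_size=8192):
            f.write(chunk)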
In a Django 1.8 simple tag, I need to resolve the path to the HTTP_REFERER found in the context. I have a piece of code that works, but I would like to know if a more elegant solution could be implemented using Django tools.
Here is my code:
from django.core.urlresolvers import resolve, Resolver404

# [...]

@register.simple_tag(takes_context=True)
def simple_tag_example(context):
    # The referer is a full path: http://host:port/path/to/referer/
    # We only want the path: /path/to/referer/
    referer = context.request.META.get('HTTP_REFERER')
    if referer is None:
        return ''
    # Build the string http://host:port/
    prefix = '%s://%s' % (context.request.scheme, context.request.get_host())
    path = referer.replace(prefix, '')
    resolvermatch = resolve(path)
    # Do something very interesting with this resolvermatch...
So I manually construct the string 'http://sub.domain.tld:port', then remove it from the full HTTP_REFERER found in context.request.META. It works, but it seems a bit cumbersome to me.
I tried to build an HttpRequest from the referer without success. Is there a class or type that I can use to easily extract the path from a URL?
You can use the urlparse module to extract the path:
try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse  # Python 2

parsed = urlparse('http://stackoverflow.com/questions/32809595')
print(parsed.path)
Output:
'/questions/32809595'
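Applied to the simple tag from the question, this removes the need to rebuild the scheme/host prefix by hand (a sketch of the relevant lines):

referer = context.request.META.get('HTTP_REFERER')
if referer is None:
    return ''
resolvermatch = resolve(urlparse(referer).path)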
I'm quite new to Python. I'm trying to parse a file of URLs to leave only the domain name.
Some of the URLs in my log file begin with http:// and some begin with www. Some begin with both.
This is the part of my code which strips the http:// part. What do I need to add to it to look for both http:// and www. and remove both?
line = re.findall(r'(https?://\S+)', line)
Currently when I run the code, only http:// is stripped. If I change the code to the following:
line = re.findall(r'(https?://www.\S+)', line)
Only domains starting with both are affected.
I need the code to be more conditional.
TIA
edit... here is my full code...
import re
import sys
from urlparse import urlparse

f = open(sys.argv[1], "r")
for line in f.readlines():
    line = re.findall(r'(https?://\S+)', line)
    if line:
        parsed = urlparse(line[0])
        print parsed.hostname
f.close()
I mistagged my original post as regex; it is indeed using urlparse.
It might be overkill for this specific situation, but I'd generally use urlparse.urlsplit (Python 2) or urllib.parse.urlsplit (Python 3).
try:
    from urllib.parse import urlsplit  # Python 3
except ImportError:
    from urlparse import urlsplit  # Python 2
import re

url = 'www.python.org'

# URLs must have a scheme
# www.python.org is an invalid URL
# http://www.python.org is valid
if not re.match(r'http(s?)\:', url):
    url = 'http://' + url

# url is now 'http://www.python.org'
parsed = urlsplit(url)
# parsed.scheme is 'http'
# parsed.netloc is 'www.python.org'
# parsed.path is '', since (strictly speaking) no path was given

host = parsed.netloc  # www.python.org

# Removing www.
# This is a bad idea, because www.python.org could
# resolve to something different than python.org
if host.startswith('www.'):
    host = host[4:]
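Plugged into the loop from the question, it would look something like this (a sketch in the question's Python 2 style):

import re
import sys
from urlparse import urlsplit

with open(sys.argv[1]) as f:
    for line in f:
        url = line.strip()
        if not url:
            continue
        if not re.match(r'http(s?)\:', url):
            url = 'http://' + url
        print urlsplit(url).hostname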
You can do without regexes here.
with open("file_path", "r") as f:
    lines = f.read()

lines = lines.replace("http://", "")
lines = lines.replace("www.", "")  # May replace some false positives ('www.com')

urls = [url.split('/')[0] for url in lines.split()]
print '\n'.join(urls)
Example file input:
http://foo.com/index.html
http://www.foobar.com
www.bar.com/?q=res
www.foobar.com
Output:
foo.com
foobar.com
bar.com
foobar.com
Edit:
There could be a tricky URL like foobarwww.com, and the above approach would strip its www. We would then have to revert to using regexes.
Replace the line lines = lines.replace("www.", "") with lines = re.sub(r'(www\.)(?!com)', '', lines). Of course, every possible TLD should be used in the not-match pattern.
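A quick demonstration of the difference:

import re

print re.sub(r'(www\.)(?!com)', '', 'www.bar.com/?q=res')  # bar.com/?q=res
print re.sub(r'(www\.)(?!com)', '', 'foobarwww.com')       # foobarwww.com (untouched)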
I came across the same problem. This is a solution based on regular expressions:
>>> import re
>>> rec = re.compile(r"https?://(www\.)?")
>>> rec.sub('', 'https://domain.com/bla/').strip().strip('/')
'domain.com/bla'
>>> rec.sub('', 'https://domain.com/bla/ ').strip().strip('/')
'domain.com/bla'
>>> rec.sub('', 'http://domain.com/bla/ ').strip().strip('/')
'domain.com/bla'
>>> rec.sub('', 'http://www.domain.com/bla/ ').strip().strip('/')
'domain.com/bla'
Check out the urlparse library, which can do these things for you automatically.
>>> import urlparse
>>> urlparse.urlsplit('http://www.google.com.au/q?test')
SplitResult(scheme='http', netloc='www.google.com.au', path='/q', query='test', fragment='')
You can use urlparse. Also, the solution should be generic enough to remove things other than 'www' before the domain name (i.e., handle cases like server1.domain.com). The following is a quick try that should work:
from urlparse import urlparse

url = 'http://www.muneeb.org/files/alan_turing_thesis.jpg'

o = urlparse(url)
domain = o.hostname

temp = domain.rsplit('.')
if len(temp) == 3:
    domain = temp[1] + '.' + temp[2]

print domain
I believe @Muneeb Ali's answer is the nearest to a solution, but the problem appears with something like frontdomain.domain.co.uk.
I suppose:
domain = ""
for i in range(1, len(temp) - 1):
    domain += temp[i] + "."
domain = domain + temp[-1]
Is there a nicer way to do this?
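For what it's worth, the accumulation can be written with a slice and join (assuming temp holds the dot-separated labels as above):

domain = ".".join(temp[1:])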
I have checked the Google Search APIs and it seems that they have not released any API for searching images. So, I was wondering if there exists a Python script/library through which I can automate the "search by image" feature.
This was annoying enough to figure out that I thought I'd throw a comment on the first Python-related Stack Overflow result for "script google image search". The most annoying part of all this is setting up your application and custom search engine (CSE) in Google's web UI, but once you have your API key and CSE, define them in your environment and do something like:
#!/usr/bin/env python
# save top 10 google image search results to current directory
# https://developers.google.com/custom-search/json-api/v1/using_rest
import requests
import os
import sys
import re
import shutil

url = 'https://www.googleapis.com/customsearch/v1?key={}&cx={}&searchType=image&q={}'
apiKey = os.environ['GOOGLE_IMAGE_APIKEY']
cx = os.environ['GOOGLE_CSE_ID']
q = sys.argv[1]

i = 1
for result in requests.get(url.format(apiKey, cx, q)).json()['items']:
    link = result['link']
    image = requests.get(link, stream=True)
    if image.status_code == 200:
        m = re.search(r'[^\.]+$', link)
        filename = './{}-{}.{}'.format(q, i, m.group())
        with open(filename, 'wb') as f:
            image.raw.decode_content = True
            shutil.copyfileobj(image.raw, f)
        i += 1
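Assuming the script is saved as, say, image_search.py (a name chosen here for illustration) and both environment variables are exported, running python image_search.py "fuzzy monkeys" saves the top results into the current directory.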
There is no API available, but you can parse the page and imitate the browser. I don't know how much data you need to parse, though, because Google may limit or block access.
You can imitate the browser by simply using urllib and setting the correct headers, but if you think parsing complex web pages may be difficult from Python, you can directly use a headless browser like PhantomJS; inside a browser it is trivial to get the correct elements using JavaScript/DOM.
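A minimal sketch of the urllib approach (Python 2's urllib2 here, to match the era of the question; the URL and User-Agent string are only illustrative):

import urllib2

req = urllib2.Request(
    'https://www.google.com/search?q=kittens&tbm=isch',  # illustrative image-search URL
    headers={'User-Agent': 'Mozilla/5.0'},               # browser-like header
)
html = urllib2.urlopen(req).read()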
Note: before trying any of this, check Google's TOS.
You can try this:
https://developers.google.com/image-search/v1/jsondevguide#json_snippets_python
It's deprecated, but seems to work.