I am trying to make a simple program that will help with the confusing part of rooting.
I need to download the file from tiny.cc/latestmagisk
I am using this Python code:
import requests
url = 'https://tiny.cc/latestmagisk'
r = requests.get(url)
r.content
The content it returns is the usual 403 Forbidden page from nginx.
I need this to work with the shortened URL. Is there any way to make that happen?
It's not necessary to import the requests lib.
All you need to do is import ssl and urllib and pass ssl._create_unverified_context() as the context while you're sending the request!
Your code should look like this:
import ssl
from urllib.request import urlopen

certcontext = ssl._create_unverified_context()

# fetch the image from the url and save it as `image.jpg`
with open('image.jpg', 'wb') as f:  # create the placeholder file
    f.write(urlopen("https://i.stack.imgur.com/IKh7E.png", context=certcontext).read())
Note: it will save the image as an image.jpg file.
Contrary to the other answer, you really should use requests for this, as requests has better support for redirects.
To fetch a page through a redirect with requests:
r = requests.get(url, allow_redirects=True)
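As a quick sanity check, you can inspect the response to see where the redirect led:
print(r.history)  # the intermediate redirect responses, if any
print(r.url)      # the final URL after all redirects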
For downloading files through redirects:
r = requests.get(url, allow_redirects=True, stream=True)
with open(filename, 'wb') as f:
    for chunk in r.iter_content(chunk_size=1024):
        if chunk:
            f.write(chunk)
However, in this case, either tiny.cc or XDA does not allow a simple requests.get; the 403 Forbidden is most likely triggered by the User-Agent or another default header, since this method works well with bit.ly and other shortlink generators. You may need to fake the headers, as in the sketch below.
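For example, a minimal sketch that fakes a browser User-Agent (the header string is just an illustrative value, not a magic one):
import requests

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}  # pretend to be a browser
r = requests.get('https://tiny.cc/latestmagisk', headers=headers, allow_redirects=True)
print(r.status_code)  # hopefully 200 instead of 403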
Related
Hey guys, I need some help. I am trying to download videos from this site https://ttdownloader.com/dl.php?v=YTo0OntzOjk6IndhdGVybWFyayI7YjowO3M6NzoidmlkZW9JZCI7czoxOToiNjkxMjEwNzYyNzY1MjY5NzM1MCI7czozOiJ1aWQiO3M6MzI6Ijk0MTdiOWE3NWU2MmE3MDQ1NjZhYzk0MzJjMThlY2VlIjtzOjQ6InRpbWUiO2k6MTYxMTQ5NzE1ODt9 using Python.
This is the code I have tried:
import requests

url = '''https://ttdownloader.com/dl.php?v=YTo0OntzOjk6IndhdGVybWFyayI7YjowO3M6NzoidmlkZW9JZCI7czoxOToiNjkxMjEwNzYyNzY1MjY5NzM1MCI7czozOiJ1aWQiO3M6MzI6Ijk0MTdiOWE3NWU2MmE3MDQ1NjZhYzk0MzJjMThlY2VlIjtzOjQ6InRpbWUiO2k6MTYxMTQ5NzE1ODt9'''
page = requests.get(url)
with open('output.mp4', 'wb') as file:
    file.write(page.content)
But it doesn't work as expected; when I check page.content, all I see is b''.
❌ The link that you are using is NOT an HTML page.
❌ Therefore it doesn't return anything as HTML.
✅ Your link is a media link.
✅ Therefore you must stream it and download it. Something like this:
import requests

url = '/your/valid/ttdownloader/url'
with requests.get(url, stream=True) as r:
    with open('output.mp4', 'wb') as f:
        for chunk in r.iter_content(chunk_size=8192):
            f.write(chunk)
NOTE:
The link that you posted in the question is now invalid.
Please try the above code with a newly generated link.
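As a quick check before saving anything, you can also confirm that the link really serves media rather than HTML (a small sketch; requests.head only fetches the headers):
r = requests.head(url, allow_redirects=True)
print(r.headers.get('Content-Type'))  # expect something like video/mp4 for a media link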
You should use request.urlretrieve to directly save the URL to a file:
from urllib import request

url = '''https://ttdownloader.com/dl.php?v=YTo0OntzOjk6IndhdGVybWFyayI7YjowO3M6NzoidmlkZW9JZCI7czoxOToiNjkxMjEwNzYyNzY1MjY5NzM1MCI7czozOiJ1aWQiO3M6MzI6Ijk0MTdiOWE3NWU2MmE3MDQ1NjZhYzk0MzJjMThlY2VlIjtzOjQ6InRpbWUiO2k6MTYxMTQ5NzE1ODt9'''
request.urlretrieve(url, 'output.mp4')
However, this code gave me urllib.error.HTTPError: HTTP Error 403: Forbidden. It appears that this link is not publicly available without authentication.
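If the 403 is caused by headers rather than real authentication, one possible workaround (a sketch only, not verified against this server) is to drop urlretrieve, which accepts no custom headers, and send a browser-like User-Agent through urlopen instead:
from urllib import request

req = request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})  # illustrative header value
with request.urlopen(req) as resp, open('output.mp4', 'wb') as f:
    f.write(resp.read())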
I am new to Python and I have a requirement to download multiple CSV files from a website that authenticates with a username and password.
I wrote the piece of code below to download a single file, but unfortunately the contents of the downloaded file are not the same as those of the original file.
Could you please let me know what I am doing wrong here and how to achieve this?
import requests
import shutil
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

url = "https:xxxxxxxxxxxxxxxxxxxx.aspx/20-02-2019 124316CampaignExport.csv"
r = requests.get(url, auth=('username', 'Password'), verify=False, stream=True)
r.raw.decode_content = True
with open("D:/20-02-2019 124316CampaignExport.csv", 'wb') as f:
    shutil.copyfileobj(r.raw, f)
The following code worked for me (only indenting the last line):
import requests
import shutil
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

url = "linkToDownload"
r = requests.get(url, auth=('username', 'Password'), verify=False, stream=True)
r.raw.decode_content = True
with open("filename", 'wb') as f:
    shutil.copyfileobj(r.raw, f)
This means the problem stems from your URL or authentication rather than the Python code itself.
Your URL has a space in it, which is likely causing the error. I can't confirm for sure, as I don't have your URL. If you have write access to it, try renaming the file with a "_" instead of a space.
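If renaming isn't an option, another thing worth trying (a sketch; the host and file name below are placeholders based on the question) is to percent-encode the path before requesting it:
from urllib.parse import quote

filename = "20-02-2019 124316CampaignExport.csv"  # placeholder file name from the question
url = "https://example.com/" + quote(filename)    # the space becomes %20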
The problem I am currently having is trying to download an image that displays as an animated GIF but appears to be encoded as a JPG. I say that it appears to be encoded as a JPG because the file extension and MIME type are .jpg and image/jpeg, respectively.
When downloading the file to my local machine (Mac OSX), then attempting to open the file I get the error:
The file could not be opened. It may be damaged or use a file format that Preview doesn’t recognize.
While I realize that some people might just ignore that image, if it can be fixed I'm looking for a solution to do that, not just ignore it.
The url in question is here:
http://www.supergrove.com/wp-content/uploads/2017/03/gif-images-22-1000-about-gif-on-pinterest.jpg
Here is my code, and I am open to suggestions:
from PIL import Image
import requests

# media is the image URL above; uploadedFile is the local destination path
response = requests.get(media, stream=True)
response.raise_for_status()
with open(uploadedFile, 'wb') as img:
    for chunk in response.iter_content(chunk_size=1024):
        if chunk:
            img.write(chunk)
According to Wheregoes, the link of the image:
http://www.supergrove.com/wp-content/uploads/2017/03/gif-images-22-1000-about-gif-on-pinterest.jpg
receives a 302 redirect to the page that contains it:
http://www.supergrove.com/gif-images/gif-images-22-1000-about-gif-on-pinterest/
Therefore, your code is trying to download a web page as an image.
I tried:
r = requests.get(the_url, headers=headers, allow_redirects=False)
But it returns zero content and status_code = 302.
(Indeed, in hindsight, it was obvious that this would happen...)
This server is configured in a way that it will never fulfill that request.
Bypassing that limitation sounds difficult, to the best of my (perhaps limited) knowledge.
I had to answer my own question in this case, but the answer to this problem was to add a referer to the request. Most likely an htaccess file on the image's server prevents direct file access unless the request comes from their own site.
from fake_useragent import UserAgent
import requests

# Set url
mediaURL = 'http://www.supergrove.com/wp-content/uploads/2017/03/gif-images-22-1000-about-gif-on-pinterest.jpg'

# Create a user agent
ua = UserAgent()

# Create a request session
s = requests.Session()

# Set some headers for the request
s.headers.update({'User-Agent': ua.chrome, 'Referer': mediaURL})

# Make the request to get the image from the url
response = s.get(mediaURL, allow_redirects=False)

# The request was about to be redirected
if response.status_code == 302:
    # Get the location we would have been redirected to
    location = response.headers['Location']

    # Set that page's url as the referer
    s.headers.update({'Referer': location})

    # Try the request again, this time with a referer
    response = s.get(mediaURL, allow_redirects=False, cookies=response.cookies)

print(response.headers)
Hat tip to @raratiru for suggesting the use of allow_redirects.
Also noted in their answer is that the image's server might be intentionally blocking access to prevent general scrapers from viewing their images. Hard to tell, but regardless, this solution works.
How can I get part of the response of a GET/POST request with the python-requests library? I need to fetch the URL content and analyze it quickly, but the web server may return a very large response (500 MB, for example).
Set stream to True.
example_url = 'http://www.example.com/somethingbig'
r = requests.get(example_url, stream=True)
At this point only the response headers have been downloaded and the connection remains open.
Documentation: body-content-workflow
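For instance, a minimal sketch that reads only the beginning of the body and then releases the connection (assuming the response has at least one chunk):
r = requests.get(example_url, stream=True)
first_chunk = next(r.iter_content(chunk_size=4096))  # read only the first 4 KB
r.close()  # drop the connection without downloading the rest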
I need to download a CSV file, which works fine in browsers using:
http://www.ftse.com/objects/csv_to_csv.jsp?infoCode=100a&theseFilters=&csvAll=&theseColumns=Mw==&theseTitles=&tableTitle=FTSE%20100%20Index%20Constituents&dl=&p_encoded=1&e=.csv
The following code works for any other file (URL) with a fully qualified path; however, with the above URL it downloads 800 bytes of gibberish.
import urllib2

def getFile(self, URL):
    proxy_support = urllib2.ProxyHandler({'http': 'http://proxy.REMOVED.com:8080/'})
    opener = urllib2.build_opener(proxy_support)
    urllib2.install_opener(opener)
    response = urllib2.urlopen(URL)
    print response.geturl()
    newfile = response.read()
    output = open("testFile.csv", 'wb')
    output.write(newfile)
    output.close()
urllib2 uses httplib under the hood, so the best way to diagnose this is to turn on HTTP connection debugging. Add this code before you access the URL and you should get a nice summary of exactly what HTTP traffic is being generated:
import httplib
httplib.HTTPConnection.debuglevel = 1
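If you ever port this to Python 3, where urllib2 and httplib were renamed, one way to get the same debug output (a small sketch) is to build an opener with a debugging handler:
import urllib.request

handler = urllib.request.HTTPHandler(debuglevel=1)  # prints the HTTP traffic to stdout
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.example.com/')  # placeholder URL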