Launching multiple URLs in Firefox from Python 3

I am trying to launch multiple URLs from a text file in Firefox via Python. I am using Windows 7 and Python 3. I need some direction on how to pass the argument to Firefox.
import os
import subprocess
f = open ('C:\\Users\\test\\Desktop\\urls.txt','r')
data = f.read()
print(data)
# I need some help here to pass this argument to Firefox.
f.close()
urls.txt
http://www.abc.com
http://www.xyz.com/test
http://www.abc.net/test.html
http://www.test.com
http://www.msn.com

Use the webbrowser module.
import webbrowser
firefox = webbrowser.get('firefox')
for url in data.split('\n'):
    firefox.open_new_tab(url)
If you don't want to enforce a particular browser and just start the default one, use webbrowser.open_new_tab.
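For instance, a minimal sketch with the default browser, reusing the data variable from the question:
import webbrowser

for url in data.split('\n'):
    webbrowser.open_new_tab(url)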
The webbrowser module isn't very reliable, especially on Windows, so you might have to start the process manually using the subprocess module:
import subprocess
firefox_path = 'C:/Program Files/Firefox/firefox' # change this line accordingly
for url in data.split('\n'):
    subprocess.Popen([firefox_path, url])
Also, Firefox accepts multiple URLs on the command line, so the following solution is better for it:
import subprocess
urls = open('C:/Users/test/Desktop/urls.txt').read().split('\n')
subprocess.Popen(['C:/Program Files/Firefox/firefox']+urls)
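One caveat with both snippets: data.split('\n') yields empty strings for blank lines or a trailing newline, which would open blank tabs. A variant that filters them out (the Firefox install path below is an assumption; adjust it for your machine):
import subprocess

with open('C:/Users/test/Desktop/urls.txt') as f:
    # Keep only non-empty lines, stripped of surrounding whitespace.
    urls = [line.strip() for line in f if line.strip()]

# Assumed install path; change to match your system.
subprocess.Popen(['C:/Program Files/Mozilla Firefox/firefox.exe'] + urls)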

Related

How to download a file with .torrent extension from link with Python

I tried using wget:
import wget
url = "https://yts.lt/torrent/download/A4A68F25347C709B55ED2DF946507C413D636DCA"
wget.download(url, 'c:/path/')
The result was that I got a file with the name A4A68F25347C709B55ED2DF946507C413D636DCA and without any extension.
Whereas when I put the link in the browser's address bar and press Enter, a torrent file gets downloaded.
EDIT:
The answer must be generic, not case-dependent.
There must be a way to download .torrent files with their original names.
You can get the filename from the content-disposition header, i.e.:
import re, requests, traceback
try:
    url = "https://yts.lt/torrent/download/A4A68F25347C709B55ED2DF946507C413D636DCA"
    r = requests.get(url)
    d = r.headers['content-disposition']
    fname = re.findall('filename="(.+)"', d)
    if fname:
        with open(fname[0], 'wb') as f:
            f.write(r.content)
except:
    print(traceback.format_exc())
The code above is for Python 3. I don't have Python 2 installed, and I normally don't post code without testing it.
Have a look at https://stackoverflow.com/a/11783325/797495, the method is the same.
I found a way to get torrent files downloaded with their original names, just as if they were downloaded by putting the link in the browser's address bar.
The solution consists of opening the user's browser from Python:
import webbrowser
url = "https://yts.lt/torrent/download/A4A68F25347C709B55ED2DF946507C413D636DCA"
webbrowser.open(url, new=0, autoraise=True)
Read more:
Call to operating system to open url?
However, the downside is:
I don't get the option to choose the folder where I want to save the file (unless I change it in the browser; but still, if I want to save torrents that match some criteria in another path, it won't be possible).
And of course, your browser goes insane opening all those links XD
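If choosing the folder matters, the content-disposition approach from the first answer can be pointed at any directory; a minimal sketch (save_dir is a hypothetical target folder):
import os
import re
import requests

url = "https://yts.lt/torrent/download/A4A68F25347C709B55ED2DF946507C413D636DCA"
save_dir = "C:/torrents"  # hypothetical target folder

r = requests.get(url)
# Recover the original filename from the Content-Disposition header, with a fallback.
match = re.search('filename="(.+)"', r.headers.get('content-disposition', ''))
fname = match.group(1) if match else "download.torrent"

with open(os.path.join(save_dir, fname), 'wb') as f:
    f.write(r.content)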

Send Rust variable to Python?

I am building a Rust program in which the user types in a command, and then the program reads the command and responds accordingly. One of these commands is to download a file from a set site.
I have a .py file with the following code that I made a while ago that downloads files from a set site:
import urllib
import urllib2
import requests

url = 'http://www.blog.pythonlibrary.org/wpcontent/uploads/2012/06/wxDbViewer.zip'

print "downloading with urllib"
urllib.urlretrieve(url, "code.zip")

print "downloading with urllib2"
f = urllib2.urlopen(url)
data = f.read()
with open("code2.zip", "wb") as code:
    code.write(data)

print "downloading with requests"
r = requests.get(url)
with open("code3.zip", "wb") as code:
    code.write(r.content)
The URLs in the code are not ones that I will be using; they are examples.
If the Rust program sets the site it needs to go to as a variable, is there a way that I could send the variable to my Python program? I know you can send Python to Rust:
Passing a list of strings from Python to Rust
http://www.joesacher.com/blog/2017/08/24/ptr-types/
Is there a way to do this in the other direction?
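One common approach is to pass the URL as a command-line argument: the Rust side spawns the Python interpreter with std::process::Command and appends the URL as an argument, and the Python side reads it from sys.argv. A minimal sketch of the Python side (the script name and argument position are assumptions):
import sys
import urllib.request

# Invoked from Rust as, e.g.: python download.py http://example.com/file.zip
url = sys.argv[1]
urllib.request.urlretrieve(url, "code.zip")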

How to read remote page content using Python

I need to read remote file content using Python, but I am facing some challenges. My code is below:
import subprocess
path = 'http://securityxploded.com/remote-file-inclusion.php'
subprocess.Popen(["rsync", host-ip+path], stdout=subprocess.PIPE)
for line in ssh.stdout:
    line
Here I am getting the error NameError: name 'host' is not defined. I don't know what the host-ip value should be, because I am running my Python file from the terminal (python sub.py). I need to read the content of the remote file http://securityxploded.com/remote-file-inclusion.php.
You need the urllib library. Also, you are referencing names (host, ssh) that are never defined.
Try something like this:
import urllib.request
fp = urllib.request.urlopen("http://www.stackoverflow.com")
mybytes = fp.read()
mystr = mybytes.decode("utf8")
fp.close()
print(mystr)
Note: this is for Python 3.
For Python 2.7, use this:
import urllib
fp = urllib.urlopen("http://www.stackoverflow.com")
myfile = fp.read()
print myfile
If you want to read remote content via HTTP, requests and urllib2 are both good choices.
For Python 2, using requests:
import requests
resp = requests.get('http://example.com/')
print resp.text
will work.
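For Python 3, the same idea only needs print as a function; a minimal sketch:
import requests

resp = requests.get('http://example.com/')
print(resp.text)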

Basic http file downloading and saving to disk in python?

I've been going through the Q&A on this site for an answer to my question. However, I'm a beginner and I find it difficult to understand some of the solutions. I need a very basic solution.
Could someone please explain a simple solution to 'Downloading a file through http' and 'Saving it to disk, in Windows', to me?
I'm not sure how to use the shutil and os modules, either.
The file I want to download is under 500 MB and is a .gz archive file. If someone can explain how to extract the archive and utilise the files in it as well, that would be great!
Here's a partial solution that I wrote from various answers combined:
import requests
import os
import shutil

global dump

def download_file():
    global dump
    url = "http://randomsite.com/file.gz"
    file = requests.get(url, stream=True)
    dump = file.raw

def save_file():
    global dump
    location = os.path.abspath("D:\folder\file.gz")
    with open("file.gz", 'wb') as location:
        shutil.copyfileobj(dump, location)
    del dump
Could someone point out errors (beginner level) and explain any easier methods to do this?
A clean way to download a file is:
import urllib
testfile = urllib.URLopener()
testfile.retrieve("http://randomsite.com/file.gz", "file.gz")
This downloads a file from a website and names it file.gz. This is one of my favorite solutions, from Downloading a picture via urllib and python.
This example uses the urllib library, and it will directly retrieve the file from a source.
For Python 3+, URLopener is deprecated, and when used you will get an error like the one below:
url_opener = urllib.URLopener()
AttributeError: module 'urllib' has no attribute 'URLopener'
So, try:
import urllib.request
urllib.request.urlretrieve(url, filename)
As mentioned here:
import urllib
urllib.urlretrieve("http://randomsite.com/file.gz", "file.gz")
EDIT: If you still want to use requests, take a look at this question or this one.
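For reference, a minimal requests-based sketch (Python 3, streamed so the whole file is never held in memory; the URL is the example one from the question):
import requests

url = "http://randomsite.com/file.gz"
with requests.get(url, stream=True) as r:
    r.raise_for_status()
    with open("file.gz", "wb") as f:
        # Write the response body in chunks rather than all at once.
        for chunk in r.iter_content(chunk_size=8192):
            f.write(chunk)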
Four methods using wget, urllib and requests.
#!/usr/bin/python
import requests
from StringIO import StringIO
from PIL import Image
import profile as profile
import urllib
import wget

url = 'https://tinypng.com/images/social/website.jpg'

def testRequest():
    image_name = 'test1.jpg'
    r = requests.get(url, stream=True)
    with open(image_name, 'wb') as f:
        for chunk in r.iter_content():
            f.write(chunk)

def testRequest2():
    image_name = 'test2.jpg'
    r = requests.get(url)
    i = Image.open(StringIO(r.content))
    i.save(image_name)

def testUrllib():
    image_name = 'test3.jpg'
    testfile = urllib.URLopener()
    testfile.retrieve(url, image_name)

def testwget():
    image_name = 'test4.jpg'
    wget.download(url, image_name)

if __name__ == '__main__':
    profile.run('testRequest()')
    profile.run('testRequest2()')
    profile.run('testUrllib()')
    profile.run('testwget()')
testRequest - 4469882 function calls (4469842 primitive calls) in 20.236 seconds
testRequest2 - 8580 function calls (8574 primitive calls) in 0.072 seconds
testUrllib - 3810 function calls (3775 primitive calls) in 0.036 seconds
testwget - 3489 function calls in 0.020 seconds
I use wget. It is a simple and good library. For example:
import wget
file_url = 'http://johndoe.com/download.zip'
file_name = wget.download(file_url)
The wget module supports Python 2 and Python 3.
Exotic Windows Solution
import subprocess
subprocess.run("powershell Invoke-WebRequest {} -OutFile {}".format(your_url, filename), shell=True)
import urllib.request
urllib.request.urlretrieve("https://raw.githubusercontent.com/dnishimoto/python-deep-learning/master/list%20iterators%20and%20generators.ipynb", "test.ipynb")
downloads a single raw Jupyter notebook to a file.
For text files, you can use:
import requests
url = 'https://WEBSITE.com'
req = requests.get(url)
path = "C:\\YOUR\\FILE.html"
with open(path, 'wb') as f:
    f.write(req.content)
I started down this path because ESXi's wget is not compiled with SSL, and I wanted to download an OVA from a vendor's website directly onto an ESXi host on the other side of the world.
I had to either disable the firewall (lazy) or enable outbound HTTPS by editing the rules (the proper way).
Then I created this Python script:
import ssl
import shutil
import urllib.request

# Skip certificate verification; the vendor cert could not be validated from the ESXi host.
context = ssl._create_unverified_context()
dlurl = 'https://somesite/path/whatever'
with urllib.request.urlopen(dlurl, context=context) as response:
    with open("file.ova", 'wb') as tmp_file:
        shutil.copyfileobj(response, tmp_file)
ESXi libraries are kind of pared down, but the open-source Weasel installer seemed to use urllib for HTTPS, so it inspired me to go down this path.
Another clean way to save the file is this (Python 2):
import urllib
urllib.urlretrieve("your url goes here", "output.csv")
