I'm trying to use ConfigParser to edit a string that stores the contents of a .ini file.
To import the string I use ConfigParser.read_string(my_string).
What I'm looking for is the exact opposite, so that I can later set my string's value back from the content of my config object.
I could use an f-string in some loop, but that wouldn't be ideal in my opinion.
import configparser
config = configparser.ConfigParser()
sample_config = """
[settings]
user = jeff
help = pls
"""
config.read_string(sample_config)
config["settings"]["user"] = "Orili"
sample_config = config.write_string()
The function write_string() does not exist.
What I want:
print(sample_config)
output:
[settings]
user = Orili
help = pls
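For reference, ConfigParser.write() accepts any file-like object, so an in-memory io.StringIO buffer can stand in for a file; a minimal sketch of that workaround, reusing sample_config from above:
import configparser
import io
config = configparser.ConfigParser()
config.read_string(sample_config)
config["settings"]["user"] = "Orili"
# write() expects a file-like object; StringIO collects the output in memory
buf = io.StringIO()
config.write(buf)
sample_config = buf.getvalue()
print(sample_config)
Note that write() adds a blank line after each section, so the output may not match the input byte for byte.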
I have a project for my company where I need to go to an internal website that downloads an Excel file with factory data. I have two files: link_file.py, which stores the links to the websites, and main.py. The script compares two factories to each other, and I need to input different factory codes for each run, so I was wondering: what is the most Pythonic way to get variables from main.py into the dynamic HTTP addresses?
Currently I have f-strings in main.py, but it looks cluttered with all of the links:
#main.py
import webbrowser
# the codes must be defined before the f-strings that interpolate them
factory_code1 = "abc"
factory_code2 = "xyz"
link1 = f"www.factorycode.com/{factory_code1}"
link2 = f"www.factorycode.com/{factory_code2}"
webbrowser.open(link1)
webbrowser.open(link2)
I tried a solution using .format(), but it still looks cluttered:
# link_file.py
# plain strings with named placeholders, filled in later via .format()
link1 = "www.factorycode.com/{factory_code1}"
link2 = "www.factorycode.com/{factory_code2}"
# main.py
import link_file
import webbrowser
factory_code1 = "abc"
factory_code2 = "xyz"
link1 = link_file.link1.format(factory_code1=factory_code1)
link2 = link_file.link2.format(factory_code2=factory_code2)
webbrowser.open(link1)
webbrowser.open(link2)
Try:
# link_file.py
link_prefix = "www.factorycode.com/"
# main.py
from link_file import link_prefix
import webbrowser
factory_code1 = "abc"
factory_code2 = "xyz"
link1 = f'{link_prefix}{factory_code1}'
link2 = f'{link_prefix}{factory_code2}'
webbrowser.open(link1)
webbrowser.open(link2)
This is what functions are for:
import webbrowser

def open_link(code):
    link = f"www.factorycode.com/{code}"
    webbrowser.open(link)

codes = ['abc', 'xyz', 'def', 'jkl']
for code in codes:
    open_link(code)
I am using a Jupyter Notebook and want to get docid=PE209374738 as my output using a regex. The URL is currently stored in a dictionary in this format:
{'Url': 'https://backtoschool.com/document.php?docid=PE209374738&datasource=PHE&vid=3326&referrer=api'}.
This is my code:
import re

results = xmldoc.getElementsByTagName("result")
dict = {}
for a in results:
    url = 'Url'
    dict[url] = a.getElementsByTagName("url")[0].childNodes[0].nodeValue
docid = re.search(r'\?(.*?)&')
Does anyone have any suggestions on how to print that id?
The standard library already has methods for parsing URLs properly, no need for regex.
In Python 3:
from urllib.parse import urlparse, parse_qs
url = 'https://backtoschool.com/document.php?docid=PE209374738&datasource=PHE&vid=3326&referrer=api'
print(parse_qs(urlparse(url).query)['docid'][0]) # PE209374738
In Python 2 the first line is:
from urlparse import urlparse, parse_qs
@alex-hall is correct: you should probably parse this with a proper URL parser.
That said, your original question was about doing it with regexps, so here is the solution (which you nearly nailed already):
import re
s = 'https://backtoschool.com/document.php?docid=PE209374738&datasource=PHE&vid=3326&referrer=api'
m = re.search(r'\?docid=(.*?)&', s)
print(m.group(1))
This will print the desired PE209374738.
I'm doing this:
urlparse.urljoin('http://example.com/mypage', '?name=joe')
And I get this:
'http://example.com/?name=joe'
While I want to get this:
'http://example.com/mypage?name=joe'
What am I doing wrong?
You could use urlparse.urlunparse:
import urlparse
parsed = list(urlparse.urlparse('http://example.com/mypage'))
parsed[4] = 'name=joe'  # index 4 is the query component
urlparse.urlunparse(parsed)
You're experiencing a known bug which affects Python 2.4-2.6.
If you can't change or patch your version of Python, #jd's solution will work around the issue.
However, if you need a more generic solution that behaves like a standard urljoin, you can use a wrapper that applies the workaround for this specific case and defaults to the standard urljoin() otherwise.
For example:
import urlparse

def myurljoin(base, url, allow_fragments=True):
    if url[0] != "?":
        return urlparse.urljoin(base, url, allow_fragments)
    if not allow_fragments:
        url = url.split("#", 1)[0]
    parsed = list(urlparse.urlparse(base))
    parsed[4] = url[1:]  # assign the query field (index 4)
    return urlparse.urlunparse(parsed)
I solved it by bundling Python 2.6's urlparse module with my project. I also had to bundle namedtuple which was defined in collections, since urlparse uses it.
Are you sure? On Python 2.7:
>>> import urlparse
>>> urlparse.urljoin('http://example.com/mypage', '?name=joe')
'http://example.com/mypage?name=joe'
If I open a file using urllib2, like so:
remotefile = urllib2.urlopen('http://example.com/somefile.zip')
Is there an easy way to get the file name other than by parsing the original URL?
EDIT: changed openfile to urlopen... not sure how that happened.
EDIT2: I ended up using:
filename = url.split('/')[-1].split('#')[0].split('?')[0]
Unless I'm mistaken, this should strip out all potential queries as well.
Did you mean urllib2.urlopen?
You could potentially lift the intended filename if the server was sending a Content-Disposition header by checking remotefile.info()['Content-Disposition'], but as it is I think you'll just have to parse the url.
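For instance, a rough sketch of that idea (real Content-Disposition values can be more involved than this naive split):
import urllib2
remotefile = urllib2.urlopen('http://example.com/somefile.zip')
# getheader() returns None when the server sent no such header
disposition = remotefile.info().getheader('Content-Disposition')
if disposition and 'filename=' in disposition:
    filename = disposition.split('filename=')[1].strip('" ')
else:
    # fall back to parsing the URL itself
    filename = remotefile.geturl().split('/')[-1].split('?')[0]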
You could use urlparse.urlsplit, but if you have any URLs like at the second example, you'll end up having to pull the file name out yourself anyway:
>>> urlparse.urlsplit('http://example.com/somefile.zip')
('http', 'example.com', '/somefile.zip', '', '')
>>> urlparse.urlsplit('http://example.com/somedir/somefile.zip')
('http', 'example.com', '/somedir/somefile.zip', '', '')
Might as well just do this:
>>> 'http://example.com/somefile.zip'.split('/')[-1]
'somefile.zip'
>>> 'http://example.com/somedir/somefile.zip'.split('/')[-1]
'somefile.zip'
If you only want the file name itself, assuming that there's no query variables at the end like http://example.com/somedir/somefile.zip?foo=bar then you can use os.path.basename for this:
[user@host]$ python
Python 2.5.1 (r251:54869, Apr 18 2007, 22:08:04)
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.path.basename("http://example.com/somefile.zip")
'somefile.zip'
>>> os.path.basename("http://example.com/somedir/somefile.zip")
'somefile.zip'
>>> os.path.basename("http://example.com/somedir/somefile.zip?foo=bar")
'somefile.zip?foo=bar'
Some other posters mentioned using urlparse, which will work, but you'd still need to strip the leading directory from the file name. If you use os.path.basename() then you don't have to worry about that, since it returns only the final part of the URL or file path.
I think that "the file name" isn't a very well defined concept when it comes to http transfers. The server might (but is not required to) provide one as "content-disposition" header, you can try to get that with remotefile.headers['Content-Disposition']. If this fails, you probably have to parse the URI yourself.
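If the header is present, cgi.parse_header can split it into its value and parameters instead of splitting the string by hand; a minimal sketch, assuming remotefile is the result of urlopen():
import cgi
# parse_header turns 'attachment; filename="somefile.zip"'
# into ('attachment', {'filename': 'somefile.zip'})
disposition = remotefile.headers.get('Content-Disposition', '')
value, params = cgi.parse_header(disposition)
filename = params.get('filename')  # None if the server gave no filename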
Just saw this; I normally do:
filename = url.split("?")[0].split("/")[-1]
Using urlsplit is the safest option:
import urlparse
url = 'http://example.com/somefile.zip'
urlparse.urlsplit(url).path.split('/')[-1]
Do you mean urllib2.urlopen? There is no function called openfile in the urllib2 module.
Anyway, use the urllib2.urlparse functions:
>>> from urllib2 import urlparse
>>> print urlparse.urlsplit('http://example.com/somefile.zip')
('http', 'example.com', '/somefile.zip', '', '')
Voila.
You could also combine the two best-rated answers: use urllib2.urlparse.urlsplit() to get the path part of the URL, then os.path.basename for the actual file name.
The full code would be:
import os
import urllib2
remotefile = urllib2.urlopen(url)
try:
    filename = remotefile.info()['Content-Disposition']
except KeyError:
    filename = os.path.basename(urllib2.urlparse.urlsplit(url).path)
The os.path.basename function works not only for file paths but also for URLs, so you don't have to parse the URL yourself. Also, note that you should use result.url instead of the original url in order to follow redirect responses:
import os
import urllib2
result = urllib2.urlopen(url)
real_url = urllib2.urlparse.urlparse(result.url)
filename = os.path.basename(real_url.path)
I guess it depends what you mean by parsing. There is no way to get the filename without parsing the URL, i.e. the remote server doesn't give you a filename. However, you don't have to do much yourself, there's the urlparse module:
In [9]: urlparse.urlparse('http://example.com/somefile.zip')
Out[9]: ('http', 'example.com', '/somefile.zip', '', '', '')
Not that I know of.
But you can parse it easily enough like this:
url = 'http://example.com/somefile.zip'
print url.split('/')[-1]
This uses requests, but you can do it just as easily with urllib(2):
import requests
from urllib import unquote
from urlparse import urlparse

filename = False
sample = requests.get(url)
if sample.status_code == 200:
    # has_key does not work here, and this also avoids problems with names
    if 'content-disposition' in sample.headers.keys():
        filename = sample.headers['content-disposition'].split('filename=')[-1].replace('"', '').replace(';', '')
    else:
        filename = urlparse(sample.url).query.split('/')[-1].split('=')[-1].split('&')[-1]
    if not filename:
        if url.split('/')[-1] != '':
            filename = sample.url.split('/')[-1].split('=')[-1].split('&')[-1]
    filename = unquote(filename)
You can probably use a simple regular expression here. Something like:
In [26]: import re
In [27]: pat = re.compile(r'.+[\/\?#=]([\w-]+\.[\w-]+(?:\.[\w-]+)?$)')
In [28]: test_set
Out[28]:
['http://www.google.com/a341.tar.gz',
'http://www.google.com/a341.gz',
'http://www.google.com/asdasd/aadssd.gz',
'http://www.google.com/asdasd?aadssd.gz',
'http://www.google.com/asdasd#blah.gz',
'http://www.google.com/asdasd?filename=xxxbl.gz']
In [30]: for url in test_set:
....: match = pat.match(url)
....: if match and match.groups():
....: print(match.groups()[0])
....:
a341.tar.gz
a341.gz
aadssd.gz
aadssd.gz
blah.gz
xxxbl.gz
Using PurePosixPath, which is not operating-system-dependent and handles URLs gracefully, is the Pythonic solution:
>>> from pathlib import PurePosixPath
>>> path = PurePosixPath('http://example.com/somefile.zip')
>>> path.name
'somefile.zip'
>>> path = PurePosixPath('http://example.com/nested/somefile.zip')
>>> path.name
'somefile.zip'
Notice how there is no network traffic here or anything (i.e. those URLs don't go anywhere); this just uses standard parsing rules.
import os,urllib2
resp = urllib2.urlopen('http://www.example.com/index.html')
my_url = resp.geturl()
os.path.split(my_url)[1]
# 'index.html'
This is not openfile, but maybe still helps :)