Does anyone know how to get a list of DNS search suffixes on a client, both ones that have been manually added and ones assigned by DHCP? I'd prefer a cross-platform solution, but a Windows-only solution will work. I couldn't find anything in pywin32 or other modules...
After a bit of investigation, it doesn't look like there is a cross-platform way, since each OS stores this information differently. On Windows, I ended up querying the information via the registry:
import _winreg

def getLocalDomainSuffix():
    domainSuffixSet = set()
    netKey = _winreg.OpenKey(_winreg.HKEY_LOCAL_MACHINE,
                             'SYSTEM\\CurrentControlSet\\Services\\Tcpip\\Parameters')
    for keyName in ("DhcpDomain", "SearchList"):
        try:
            value, valueType = _winreg.QueryValueEx(netKey, keyName)
        except WindowsError:
            continue  # the value may be absent, e.g. no DHCP-assigned domain
        if value:
            # SearchList is a comma-separated string; DhcpDomain is a single suffix
            for item in value.split(','):
                domainSuffixSet.add(item)
    return domainSuffixSet
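A minimal usage sketch (the exact suffixes printed depend entirely on the machine's DNS configuration):
if __name__ == '__main__':
    # prints one suffix per line, e.g. corp.example.com
    for suffix in getLocalDomainSuffix():
        print suffix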
I am working on a huge email-address dataset in Python and need to retrieve the organization name.
For example, email@organizationName.com is easy to extract, but what about email@info.organizationName.com or even email@organizationName.co.uk?
I need a universal extractor that should be able to handle all different possibilities accordingly.
If the organization name always comes right before the .com (or another single-label ending), this may work:
email_str.split('@')[1].split('.')[-2]
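When that assumption doesn't hold, for instance with a .co.uk address, the split picks the wrong label:
email_str = 'email@organizationName.co.uk'
# split('.') yields ['organizationName', 'co', 'uk'], so [-2] returns 'co', not the organization
print(email_str.split('@')[1].split('.')[-2])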
A regex won't work well here. To do this reliably, you need to use a library that knows what constitutes a valid public suffix.
Otherwise, how would the extractor be able to distinguish email@info.organizationName.com from email@organizationName.co.uk?
This can be done using tldextract:
Example:
import tldextract

emails = ['email@organizationName.com',
          'email@info.organizationName.com',
          'email@organizationName.co.uk',
          'email@info.organizationName.co.uk',
          ]

for addr in emails:
    print(tldextract.extract(addr))
Output:
ExtractResult(subdomain='', domain='organizationName', suffix='com')
ExtractResult(subdomain='info', domain='organizationName', suffix='com')
ExtractResult(subdomain='', domain='organizationName', suffix='co.uk')
ExtractResult(subdomain='info', domain='organizationName', suffix='co.uk')
To access just the domain, use tldextract.extract(addr).domain.
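So, for the original task of pulling the organization name out of each address, a minimal sketch (assuming the organization name is the registered domain without its suffix) could be:
import tldextract

def organization_name(addr):
    # the public suffix list lets tldextract treat 'co.uk' as a suffix,
    # so the same attribute works for all of the cases above
    return tldextract.extract(addr).domain

print(organization_name('email@info.organizationName.co.uk'))  # organizationName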
I'm trying to get all the values from a section of my ini file (via configparser) as variables:
hue310section = dict(parser.items('HUE_310'))
for keys, value in hue310section.items():
    pairs = keys + ' = ' + value
    print(pairs)
This gives me partnewfilepath = http://some_site:PORT/about, but I don't know how to turn that output into a Python variable so that I can use partnewfilepath somewhere in my code. Of course, a section will have more than one value, and I want to do this for all of them. I've been trying to find a solution, but I think I'm missing something because my Python knowledge isn't there yet. I suspect I need to rebuild my for statement, but I don't have a clue how to do it for this particular problem.
My config.ini file looks like:
[HUE_310]
partNewFilePath = ${common:domain}/about
otherValues = something
nextvalue = another something
UPDATE:
I should elaborate on what I want to achieve. In another part of my code I check the version of the site I want to process. If the site is, let's say, version 3.10, I want to read all the values from the HUE_310 section of my ini file and use them as Python variables. The rest of my code uses those variables, and if the site version changes I can read the values from a different section of the ini file and use those instead. I assume that some values will change from version to version, and that's why I want to prepare my code for it. It also gives me some freedom to modify values if the site changes.
I hope it is clearer now.
You don't need a new variable or a for loop; you already have the hue310section dict.
You can just use
hue310section['partNewFilePath']
which will be equal to
"http://some_site:PORT/about"
Note that after hue310section = dict(parser.items('HUE_310')), the otherValues and nextvalue keys will also be defined.
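To tie this back to the version check described in the update, a minimal sketch might look like the following. The version string, the HUE_<version> section-naming scheme, and the use of ExtendedInterpolation (implied by the ${common:domain} syntax, which also needs a [common] section in the file) are all assumptions here:
import configparser

parser = configparser.ConfigParser(interpolation=configparser.ExtendedInterpolation())
parser.read('config.ini')

site_version = '3.10'                               # determined elsewhere in your code
section = 'HUE_' + site_version.replace('.', '')    # -> 'HUE_310'

settings = dict(parser.items(section))
part_new_file_path = settings['partnewfilepath']    # configparser lowercases option names by default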
from configobj import ConfigObj
parser_data = ConfigObj(config_path)
current = parser_data['HUE_310'].get('partNewFilePath', 'http://www.default.com')
config_path is the path to the config file.
http://www.default.com is the default value returned in case that particular key is not found.
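If you want the whole section at once rather than a single key, the section object already behaves like a dict (note that, unlike configparser, ConfigObj preserves the case of the keys), so a small sketch would be:
hue310 = dict(parser_data['HUE_310'])
for key, value in hue310.items():
    print(key, '=', value)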
I am trying to access the worklogs in Python using the jira Python library. I am doing the following:
issues = jira.search_issues("key=MYTICKET-1")
print(issues[0].fields.worklogs)
issue = jira.search_issues("MYTICKET-1")
print(issue.fields.worklogs)
as described in the documentation, chapter 2.1.4. However, I get the following error (for both cases):
AttributeError: type object 'PropertyHolder' has no attribute 'worklogs'
Is there something I am doing wrong? Is the documentation outdated? How do I access worklogs (or other fields, like comments)? And what is a PropertyHolder? How do I access it (it's not described in the documentation!)?
This happens because jira.JIRA.search_issues doesn't seem to fetch all "builtin" fields, like worklog, by default (the documentation only offers the vague description "fields - [...] Default is to include all fields" - "all" out of what?).
You either have to use jira.JIRA.issue:
client = jira.JIRA(...)
issue = client.issue("MYTICKET-1")
or explicitly list fields which you want to fetch in jira.JIRA.search_issues:
client = jira.JIRA(...)
issue = client.search_issues("key=MYTICKET-1", fields=[..., 'worklog'])[0]
Also be aware that this way you will get at most 20 worklog items attached to your JIRA issue instance. If you need all of them you should use jira.JIRA.worklogs:
client = jira.JIRA(...)
issue = client.issue("MYTICKET-1")
worklog = issue.fields.worklog
all_worklogs = client.worklogs(issue) if worklog.total > 20 else worklog.worklogs
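From there you can iterate over the entries as usual; for example (author, timeSpent and started are standard worklog fields, but check them against your own instance):
for entry in all_worklogs:
    # each entry is a jira Worklog resource
    print(entry.author, entry.timeSpent, entry.started)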
This question is similar to yours, and someone has posted a workaround.
There is also a similar question on GitHub in relation to attachments (not worklogs); the last answer in the comments has a workaround that might assist.
I'm cleaning up some localisation and translation settings in our PyGTK application. The app is only intended to be used under GNU/Linux systems. One of the features we want is for users to select the language used for the application (some prefer their native language, some prefer English for consistency, some like French because it sounds romantic, etc.).
For this to work, I need to actually show a combo box with the various languages available. How can I get this list? In fact, I need a list of pairs of the language code ("en", "ru", etc.) and the language name in the native language ("English (US)", "Русский").
If I had to implement a brute force method, I'd do something like: look in the system locale dir (eg. "/usr/share/locale") for all language code dirs (eg. "en/") containing the relative path "LC_MESSAGES/OurAppName.mo".
Is there a more programmatic way?
You can use gettext to find whether a translation is available and installed, but you need babel (which was available on my Ubuntu system as the package python-pybabel) to get the names. Here is a code snippet which returns the list that you want:
import gettext
import babel
messagefiles = gettext.find('OurAppName',
                            languages=babel.Locale('en').languages.keys(),
                            all=True)
messagefiles.sort()
languages = [path.split('/')[-3] for path in messagefiles]
langlist = zip(languages,
               [babel.Locale.parse(lang).display_name for lang in languages])
print langlist
To change languages in the middle of your program, see the relevant section of the Python docs. This probably entails reconstructing all your GTK widgets, although I'm not sure.
For more information on gettext.find, here is the link to that too.
Here's a function inspired by gettext.find, but it looks at which files actually exist rather than needing a list of languages from Babel. It returns the locale codes; you'll still have to use babel to get the display_name for each.
import os
import gettext
from glob import glob

def available_langs(self, domain=None, localedir=None):
    if domain is None:
        domain = gettext._current_domain
    if localedir is None:
        localedir = gettext._default_localedir
    # every compiled catalog for the domain, e.g. /usr/share/locale/*/LC_MESSAGES/OurAppName.mo
    files = glob(os.path.join(localedir, '*', 'LC_MESSAGES', '%s.mo' % domain))
    # the locale code is the directory three levels up from the .mo file
    langs = [path.split(os.path.sep)[-3] for path in files]
    return langs
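Pairing that with babel to get the (code, native name) pairs from the earlier answer might look like this; it assumes the method above is called standalone (hence the None for self) and that your gettext domain is OurAppName:
import babel

langs = available_langs(None, domain='OurAppName')
langlist = [(code, babel.Locale.parse(code).display_name) for code in langs]
print(langlist)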
I have a large number of email addresses to validate. Initially I parse them with a regexp to throw out the completely crazy ones. I'm left with the ones that look sensible but still might contain errors.
I want to find which addresses have valid domains, so given me@abcxyz.com I want to know whether it's even possible to send emails to abcxyz.com.
I want to test the domain to see if it corresponds to a valid A or MX record. Is there an easy way to do it using only the Python standard library? I'd rather not add an additional dependency to my project just to support this feature.
There is no DNS interface in the standard library, so you will either have to roll your own or use a third-party library.
This is not a fast-changing area though, so the external libraries are stable and well tested.
The one I've used successfully for the same task as your question is PyDNS.
A very rough sketch of my code is something like this:
import DNS, smtplib

DNS.DiscoverNameServers()
mx_hosts = DNS.mxlookup(hostname)

# Just doing the mxlookup might be enough for you,
# but do something like this to test for an SMTP server
for mx in mx_hosts:
    smtp = smtplib.SMTP()
    # .. if this doesn't raise an exception it is a valid MX host...
    try:
        smtp.connect(mx[1])
    except smtplib.SMTPConnectError:
        continue  # try the next MX server in the list
Another library that might be better/faster than PyDNS is dnsmodule, although it looks like it hasn't had any activity since 2002, compared to PyDNS's last update in August 2008.
Edit: I would also like to point out that email addresses can't be easily parsed with a regexp. You are better off using the parseaddr() function in the standard library email.utils module (see my answer to this question for example).
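For example, a quick sketch of splitting out the address with parseaddr before doing the DNS check:
from email.utils import parseaddr

name, addr = parseaddr('Jane Doe <me@abcxyz.com>')   # addr is 'me@abcxyz.com'
domain = addr.rsplit('@', 1)[-1]                     # 'abcxyz.com', the part to check for A/MX records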
The easy way to do this NOT in the standard library is to use the validate_email package:
from validate_email import validate_email
is_valid = validate_email('example@example.com', check_mx=True)
For faster results when processing a large number of email addresses (e.g. a mailing list), you could stash the verified domains and only do a check_mx lookup when the domain isn't already there. Something like:
emails = ["email#example.com", "email#bad_domain", "email2#example.com", ...]
verified_domains = set()
for email in emails:
domain = email.split("#")[-1]
domain_verified = domain in verified_domains
is_valid = validate_email(email, check_mx=not domain_verified)
if is_valid:
verified_domains.add(domain)
An easy and effective way is to use a Python package named validate_email. This package provides both facilities.
Check this article, which will help you check whether an email address actually exists or not.