What is the easiest way of changing the server that is queried when using dnspython?
I've been using the following:
dns.resolver.query(hostname, type)
However, from the documentation it appears you can only change the file it reads its nameserver configuration from.
Any ideas?
Resolver can load its configuration either from the Windows registry or from /etc/resolv.conf. If you want to set the DNS servers manually, skip reading the system configuration (create the resolver with configure=False) and configure it yourself.
Resolver has a nameservers property, which is a list of DNS server IPs (as strings).
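A minimal sketch of that approach (the nameserver IP and hostname are just examples):

import dns.resolver

# Create a resolver without reading /etc/resolv.conf or the registry
resolver = dns.resolver.Resolver(configure=False)
# Point it at the DNS server of your choice
resolver.nameservers = ["8.8.8.8"]

# Note: in dnspython >= 2.0 this method is named resolve()
answer = resolver.query("example.com", "A")
for rdata in answer:
    print(rdata.address)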
I'm working with a simple website (a few HTML files and one Python script) that's running on my LAN. In Chrome I can pull up the HTML files and Python scripts through port 80 as normal. I'm using WSGIScriptAlias commands in /Library/Server/Web/Config/apache2/httpd_wsgi.conf that are working, and I've set up the site and allowed it to use Python apps through the Server GUI application.
For several reasons, I'm using a different port number for this site. If I go to http://mycomputer.lan:1234/myfile.html, I can see the HTML file. But if I go to http://mycomputer.lan:1234/MyWSGIApplicationScript, the server (the latest version, installed today) reports:
Not Found
The requested URL /LandSearch was not found on this server.
I've seen this work before on other servers. I remember setting it up on another system running OS X so that the WSGI scripts worked fine on a non-standard port, but I no longer have access to the notes I took at the time. That makes me suspect it's probably a simple configuration option I need to change for the server to find and use the Python scripts on a different port.
What do I need to reconfigure to get it to use wsgi scripts on a non-standard port?
Even AppleCare didn't have an answer for this one.
When I first set up the site, I enabled the 'Python "Hello World" app at /wsgi' option. This is in the advanced settings.
I did that just for testing, so when I set up the site again, I didn't bother with it. It turns out that this one setting does more than enable a single WSGI application. By default, the file /Library/Server/Web/Config/apache2/httpd_wsgi.conf is not read by Apache while setting up a virtual host. Checking the box to enable this one WSGI webapp means that the following line:
Include /Library/Server/Web/Config/apache2/httpd_wsgi.conf
will be included in the configuration file for this particular virtual host. Any script aliases defined with the WSGIScriptAlias command in that file will then be available to your website, no matter what port your website is on.
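As a sketch, the pieces involved might look like this (the /LandSearch path comes from the error above; the script location and contents are only an illustration):

WSGIScriptAlias /LandSearch /Library/Server/Web/Data/WebApps/LandSearch.wsgi

and a minimal LandSearch.wsgi to test that the alias is wired up:

def application(environ, start_response):
    # Simplest possible WSGI callable: returns a plain-text response
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'LandSearch is reachable']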
I have a default installation of Elasticsearch which I am trying to query from a third party server. However, it seems that by default this is blocked.
Can anyone tell me how to configure Elasticsearch so that I can query it from a different server?
When Elasticsearch is installed and run without any configuration changes, by default it binds to localhost only. To access the Elasticsearch REST API endpoint remotely, the following changes have to be made on the server where Elasticsearch is installed.
Elasticsearch Configuration Change
Update the network.host property in elasticsearch.yml as per the guidelines in the Elasticsearch documentation.
For example, to bind to all IPv4 addresses on the local machine, set:
network.host: 0.0.0.0
Firewall Rules Update
Update the Linux firewall to allow access to port 9200. Please refer to your Linux distribution's documentation for adding firewall rules.
For example, to allow access from all servers (public) on CentOS, use firewall-cmd:
sudo firewall-cmd --zone=public --permanent --add-port=9200/tcp
sudo firewall-cmd --reload
Note: in production environments public access is discouraged; restricted access should be preferred.
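For example, a rule restricted to a single trusted host might look like this (the source address is just an example):

sudo firewall-cmd --zone=public --permanent --add-rich-rule='rule family="ipv4" source address="203.0.113.5" port port="9200" protocol="tcp" accept'
sudo firewall-cmd --reload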
In config/elasticsearch.yml, put network.host: 0.0.0.0.
Also add an inbound rule in your firewall for the Elasticsearch port (9200 by default).
This worked for me on Elasticsearch version 2.3.0.
Edit: As Sisso mentions in his comment below, as of version 2.0 Elasticsearch binds to localhost by default. See https://www.elastic.co/guide/en/elasticsearch/reference/2.0/modules-network.html for more information.
As Damien mentions in his answer, by default ES allows all access to port 9200. In fact, you need external tools to add authentication in front of the ES resource - something like a webapp frontend, or just plain nginx with Basic Auth turned on.
Things that can prevent you from accessing a remote system (you probably know these):
network configuration problems
ES host firewall blocks incoming requests on port 9200
remote host firewall blocks outgoing requests to ES host and/or port 9200
ES is configured to bind to the wrong IP address (by default, however, it binds to all available IPs)
Best guess? Check that you can connect from remote host to ES host, then check firewall on both systems. If you can't diagnose further, maybe someone on the ES mailing list (https://groups.google.com/forum/#!forum/elasticsearch) or IRC channel (#elasticsearch on Freenode) can help.
There is no restriction by default; Elasticsearch exposes a standard HTTP API on port 9200.
From your third-party server, are you able to run curl http://es_hostname:9200/?
To allow remote access with one default node, settings\elasticsearch.yml should have:
network.host: 0.0.0.0
http.port: 9200
In my case I needed three instances. For each instance, it's also necessary to declare the port range used:
network.host: 0.0.0.0
http.port: 9200-9202
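Once the node is reachable, a quick way to verify from the third-party server is the official elasticsearch Python client (the hostname is an example; the plain curl check above works just as well):

from elasticsearch import Elasticsearch

# Connect to the remote node and print basic cluster info
es = Elasticsearch(["http://es_hostname:9200"])
print(es.info())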
I have the following problem: I have a local HTTP server (BottlePy or Django), and when I use http://localhost/ or http://127.0.0.1/ it loads immediately. But when I use my local IP (192.168.1.100), it takes a very long time to load (several minutes). What could be the problem?
The server runs on Ubuntu 11.
You should take a look at
Slow Python HTTP server on localhost
where similar issues have been resolved.
It looks like you have a problem with DNS. Can you test this idea by running host 192.168.1.100 on the server? Please also check that other DNS queries are processed quickly.
Check /etc/hosts file for a quick-and-dirty solution.
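A sketch of such an entry, mapping the LAN address to a name so the server doesn't stall on a reverse lookup (the hostname is just an example):

192.168.1.100   mydevbox.lan mydevbox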
I'm trying to connect to a database in a domain from my virtual machine.
It works on XP, but somehow does not work on Win7, quitting with:
"OperationalError: (1042, "Can't get hostname for your address")"
I tried disabling the firewall and such, but that made no difference.
I don't need the DNS resolution, which will only slow everything down.
So I want to use the skip-name-resolve option, but there is no my.ini
or my.cnf when using MySQLdb for Python, so how can I still use this option?
Thanks for your help
-Alex
Add the following line (skip-name-resolve) to the /etc/mysql/my.cnf file:
[mysqld]
port = 3306
socket = /tmp/mysql.sock
skip-locking
skip-name-resolve
And restart the MySQL server.
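On a typical Ubuntu/Debian system the restart might look like this (the exact command depends on your distribution and init system):

sudo service mysql restart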
This is an option which needs to be set in the MySQL configuration file on the server. It can't be set by client APIs such as MySQLdb. This is because of the potential security implications.
That is, I may want to deny access from a particular hostname. With skip-name-resolve enabled, this won't work. (Admittedly, access control via hostname is probably not the best idea anyway.)
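Once skip-name-resolve is enabled on the server, the client side needs no changes. A minimal sketch of connecting from Python (the host, credentials and database name are examples):

import MySQLdb

# Connect by IP; with skip-name-resolve the server no longer
# performs a reverse DNS lookup on the client's address
conn = MySQLdb.connect(host="192.168.1.10", user="alex",
                       passwd="secret", db="mydb")
print(conn.get_server_info())
conn.close()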
I'm attempting to use fabric for the first time and I really like it so far, but at a certain point in my deployment script I want to clone a mercurial repository. When I get to that point I get an error:
err: abort: http authorization required
My repository requires http authorization and fabric doesn't prompt me for the user and password. I can get around this by changing my repository address from:
https://hostname/repository
to:
https://user:password@hostname/repository
But for various reasons I would prefer not to go this route.
Are there any other ways in which I could bypass this problem?
Here are four options with various security trade-offs and requiring various amounts of sys admin mojo:
With newer Mercurial versions you could put the password in the [auth] section of the local user's .hgrc file (a sketch follows this list of options). The password will still be on disk in plaintext, but at least not in the URL.
Or
You could locally set up an HTTP proxy that presents as no-auth locally and does the auth for you when communicating with the remote.
Or
If you're able to alter configuration on the hosting server, you could set it (Apache?) to not require a user/pass when accessed from localhost, and then use an SSH tunnel to make the local machine's requests look like they come from localhost when accessing the server:
ssh -L 8080:localhost:80 user@hostname # run in background and leave running
and then have fabric connect to http://localhost:8080/repository
Or
Newer Mercurial versions support client-side certificates for authentication, so you could configure your Apache to honor those for authorization/authentication and then tweak your local hg to provide the certificate.
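For the first option, the [auth] section might look like this (the hostname, username and password are examples; 'repo' is an arbitrary label you choose):

[auth]
repo.prefix = hostname/repository
repo.username = user
repo.password = password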
Depending on your fabfile, you might be able to reframe the problem. Instead of doing a hg clone on the remote system you could do your mercurial commands on your local system, and then ship the artifact you've constructed across with fabric.
Specifically, you could clone the mercurial repository using fabric's local() command, and run an 'hg archive' command to prepare a tarball. Then you can use fabric's put() to upload that tarball, and fabric's run() to unpack it in the correct location.
A code snippet for the clone, pack, put might look a bit like the following:
from fabric.api import local, lcd, put

def task():
    # Clone locally, where our credentials are available
    local("hg clone ssh://hg@host/repo tmpdir")
    with lcd("tmpdir"):
        # Pack the working copy into a tarball one level up
        local("hg archive ../repo.tgz")
    local("rm -rf tmpdir")
    # Upload the tarball to the remote host
    put("repo.tgz")