I am attempting to initialize a context for GSSAPI server-side authentication, using python-kerberos (1.0.90-3.el6). My problem is that myserver.localdomain gets converted to myserver - a part of my given principal gets chopped off somewhere. Why does this happen?
Example failure:
>>> import kerberos
>>> kerberos.authGSSServerInit("HTTP@myserver.localdomain")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
kerberos.GSSError: (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Unknown error', 0))
>>>
With the help of KRB5_TRACE I get the reason:
[1257] 1346344556.406343: Retrieving HTTP/myserver@LOCALDOMAIN from WRFILE:/etc/krb5.keytab (vno 0, enctype 0) with result: -1765328203/No key table entry found for HTTP/myserver@LOCALDOMAIN
I cannot generate a keytab for plain HTTP/myserver@LOCALDOMAIN because it would force the users to access the server with such an address as well. I need to get the function to work with the proper FQDN name. As far as I can see, authGSSServerInit is supposed to work with the FQDN without mutilating it.
I think the python-kerberos method calls the following functions provided by krb5-libs (1.9-33.el6); the problem might also be in those:
maj_stat = gss_import_name(&min_stat, &name_token, GSS_C_NT_HOSTBASED_SERVICE, &state->server_name);
maj_stat = gss_acquire_cred(&min_stat, state->server_name, GSS_C_INDEFINITE, GSS_C_NO_OID_SET, GSS_C_ACCEPT, &state->server_creds, NULL, NULL);
Kerberos is properly configured on this host and confirmed to work: I can, for instance, kinit as a user and authenticate with the resulting tickets. It is just authGSSServerInit that fails to function properly.
Some of the documentation is misleading:
def authGSSServerInit(service):
    """
    Initializes a context for GSSAPI server-side authentication with the given service principal.
    authGSSServerClean must be called after this function returns an OK result to dispose of
    the context once all GSSAPI operations are complete.

    @param service: a string containing the service principal in the form 'type@fqdn'
        (e.g. 'imap@mail.apple.com').
    @return: a tuple of (result, context) where result is the result code (see above) and
        context is an opaque value that will need to be passed to subsequent functions.
    """
In fact the API expects only the service type, for instance "HTTP". The rest of the principal is generated with the help of resolver(3). Although the rest of the Kerberos stack is happy with short names, the resolver produces the FQDN, but only if dnsdomainname is properly set.
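Putting that together, the call that should succeed looks like this (a minimal sketch; AUTH_GSS_COMPLETE is the module's success constant):

import kerberos

# Pass only the service type; the GSSAPI library expands it to
# HTTP/<fqdn>@REALM via resolver(3), so dnsdomainname must be set correctly.
result, context = kerberos.authGSSServerInit("HTTP")
assert result == kerberos.AUTH_GSS_COMPLETE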
A bit more info for completeness. Include the following environment variables in the Python command:
This is optional -> KRB5_TRACE=/path-to-log/file.log
Usually this path -> KRB5_CONFIG=/etc/krb5.conf
Usually this path -> KRB5_KTNAME=/etc/security/keytabs/foo.keytab
For example:
KRB5_TRACE=/path-to-log/file.log KRB5_CONFIG=/etc/krb5.conf KRB5_KTNAME=/etc/security/keytabs/foo.keytab /opt/anaconda3.5/bin/python3.6
In python run:
import kerberos
kerberos.authGSSServerInit("user")
Considerations:
In your keytab the principal must be user/host@REALM (see the klist check after this list)
The "user" part must be identical in the keytab and in the call
The full principal will be composed by your Kerberos client config
If the return code is 0 you are done! Congratz!
If not go to the log file and enjoy debugging :P
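To double-check what the keytab actually contains before calling the function, the standard MIT klist tool lists its entries; the principal shown must match what the KRB5_TRACE log says the library is looking up:

klist -k -t /etc/security/keytabs/foo.keytab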
I raised a feature request on the CDK GitHub repo recently and was pointed in the direction of Core.Token as being pretty much the exact functionality I was looking for. I'm now having some issues implementing it and getting similar errors; here's the feature request I raised previously: https://github.com/aws/aws-cdk/issues/3800
So my current code looks something like this:
fargate_service = ecs_patterns.LoadBalancedFargateService(
    self, "Fargate",
    cluster=cluster,
    memory_limit_mib=core.Token.as_number(ssm.StringParameter.value_from_lookup(self, parameter_name='template-service-memory_limit')),
    execution_role=fargate_iam_role,
    container_port=core.Token.as_number(ssm.StringParameter.value_from_lookup(self, parameter_name='port')),
    cpu=core.Token.as_number(ssm.StringParameter.value_from_lookup(self, parameter_name='template-service-container_cpu')),
    image=ecs.ContainerImage.from_registry(ecrRepo)
)
When I try to synthesise this code I get the following error:
jsii.errors.JavaScriptError:
Error: Resolution error: Supplied properties not correct for "CfnSecurityGroupEgressProps"
fromPort: "dummy-value-for-template-service-container_port" should be a number
toPort: "dummy-value-for-template-service-container_port" should be a number.
Object creation stack:
To me it seems to get past the validation that requires a number to be passed into the FargateService, but when it then tries to create the resources after that ("CfnSecurityGroupEgressProps") it can't resolve the dummy string as a number. I'd appreciate any help solving this, or alternative suggestions for passing in values from AWS Systems Manager parameters (I thought it might be possible to pull the values in via a file fetched from S3 during the build pipeline or something along those lines, but that seems hacky).
With some help I think we've cracked this!
The problem was that I was using "ssm.StringParameter.value_from_lookup". The solution is to obtain the token with "ssm.StringParameter.value_for_string_parameter": when this is synthesised it stores a token, and upon deployment the value held in Systems Manager Parameter Store is substituted.
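The change is a one-liner; a sketch using the container-port parameter from the error message above:

# before: a synth-time context lookup, which yields "dummy-value-..." on the first run
container_port = core.Token.as_number(
    ssm.StringParameter.value_from_lookup(self, parameter_name='template-service-container_port'))

# after: a deploy-time token that CloudFormation resolves against Parameter Store
container_port = core.Token.as_number(
    ssm.StringParameter.value_for_string_parameter(self, 'template-service-container_port'))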
(We also came up with another approach for achieving something similar, which we're probably going to use instead of the SSM approach; I've detailed it below the code snippet if you're interested.)
See the complete code below:
from aws_cdk import (
    aws_ec2 as ec2,
    aws_ssm as ssm,
    aws_iam as iam,
    aws_ecs as ecs,
    aws_ecs_patterns as ecs_patterns,
    core,
)


class GenericFargateService(core.Stack):

    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # Deploy-time token, resolved against SSM Parameter Store
        containerPort = core.Token.as_number(ssm.StringParameter.value_for_string_parameter(
            self, 'template-service-container_port'))

        vpc = ec2.Vpc(
            self, "cdk-test-vpc",
            max_azs=2
        )

        cluster = ecs.Cluster(
            self, 'cluster',
            vpc=vpc
        )

        fargate_iam_role = iam.Role(
            self, "execution_role",
            assumed_by=iam.ServicePrincipal("ecs-tasks.amazonaws.com"),
            managed_policies=[iam.ManagedPolicy.from_aws_managed_policy_name(
                "AmazonEC2ContainerRegistryFullAccess")]
        )

        fargate_service = ecs_patterns.LoadBalancedFargateService(
            self, "Fargate",
            cluster=cluster,
            memory_limit_mib=1024,
            execution_role=fargate_iam_role,
            container_port=containerPort,
            cpu=512,
            image=ecs.ContainerImage.from_registry(
                "000000000000.dkr.ecr.eu-west-1.amazonaws.com/template-service-ecr")
        )

        fargate_service.target_group.configure_health_check(
            path=self.node.try_get_context("health_check_path"), port="9000")


app = core.App()
GenericFargateService(app, "generic-fargate-service",
                      env={'account': '000000000000', 'region': 'eu-west-1'})
app.synth()
Solutions to problems are like buses: apparently you spend ages waiting for one and then two arrive together. And I think this new bus is the option we're probably going to run with.
The plan is to have developers provide an override for the cdk.json file within their code repos, which can then be parsed into the CDK pipeline where the generic code will be synthesised. This file will contain some "context", and the context will then be used within the CDK to set our variables for the LoadBalancedFargate service.
I've included some code snippets for setting cdk.json file and then using its values within code below.
Example cdk.json:
{
    "app": "python3 app.py",
    "context": {
        "container_name": "template-service",
        "memory_limit": 1024,
        "container_cpu": 512,
        "health_check_path": "/gb/template/v1/status",
        "ecr_repo": "000000000000.dkr.ecr.eu-west-1.amazonaws.com/template-service-ecr"
    }
}
Python example for assigning context to variables:
memoryLimitMib = self.node.try_get_context("memory_limit")
I believe we could also assign some default values if the developer doesn't provide them in their cdk.json file, as sketched below. (Note that try_get_context returns None rather than raising when a key is missing, so a fallback expression does the job instead of a try/except block.)
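A minimal sketch of that defaulting pattern, using the keys from the cdk.json above:

# try_get_context returns None for a missing key, so "or" supplies the default
memoryLimitMib = self.node.try_get_context("memory_limit") or 1024
containerCpu = self.node.try_get_context("container_cpu") or 512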
I hope this post has provided some useful information to those looking for ways to create a generic template for deploying CDK code! I don't know if we're doing the right thing here, but this tool is so new it feels like some common patterns don't exist yet.
I'm trying to use ipython-cypher to run Neo4j Cypher queries (and return a Pandas dataframe) in a Python program. I have no trouble forming a connection and running a query when using IPython Notebook, but when I try to run the same query outside of IPython, as per the documentation:
http://ipython-cypher.readthedocs.org/en/latest/introduction.html#usage-out-of-ipython
import cypher
results = cypher.run("MATCH (n)--(m) RETURN n.username, count(m) as neighbors",
                     "http://XXX.XXX.X.XXX:xxxx")
I get the following error:
neo4jrestclient.exceptions.StatusException: Code [401]: Unauthorized. No permission -- see authorization schemes.
Authorization Required
and
Format: (http|https)://username:password@hostname:port/db/name, or one of dict_keys([])
Now, I was just guessing that that was how I should enter a Connection object as the last parameter, because I couldn't find any additional documentation explaining how to connect to a remote host using Python, and in IPython, I am able to do:
%load_ext cypher
results = %cypher http://XXX.XXX.X.XXX:xxxx MATCH (n)--(m) RETURN n.username, count(m) as neighbors
Any insight would be greatly appreciated. Thank you.
The documentation has a section on the API. When using it outside of IPython and needing to connect to a different host, passing the connection string through the conn parameter should work.
import cypher
results = cypher.run("MATCH (n)--(m) RETURN n.username, count(m) as neighbors",
                     conn="http://XXX.XXX.X.XXX:xxxx")
But also consider that with the new authentication support in Neo4j 2.2, you need to set the new password before connecting from ipython-cypher. I will fix this as soon as I implement the forced password change mechanism in neo4jrestclient, the library underneath.
I'm using the gdata Python library to do batched deletes of contacts, and I just get the "If-Match or If-None-Match header or entry etag attribute required" error.
I think the problem started when I had to enable the Contacts API in the console (which until a few days ago wasn't required? *).
EDIT:
It's actually failing for both updating and deleting operations. Batched insert works fine.
Tried specifying the If-Match header, but it's still failing:
custom_headers = atom.client.CustomHeaders(**{'If-Match': '*'})
request_feed = gdata.contacts.data.ContactsFeed()
request_feed.AddDelete(entry=contact, batch_id_string='delete')
response_feed = self.gd_client.ExecuteBatch(
    request_feed,
    'https://www.google.com/m8/feeds/contacts/default/full/batch',
    custom_headers=custom_headers
)
Also created a ticket on the project page, but I doubt it will get any attention there.
EDIT 2:
Using the Batch method with force=True (which just adds the If-Match: * header) gives the same result.
response_feed = self.gd_client.Batch(
    request_feed,
    uri='https://www.google.com/m8/feeds/contacts/default/full/batch',
    force=True
)
* Can someone verify this? I never had to enable it in the console before and my app was able to use the Contacts API without problem, and I believe it wasn't even available before. I was surprised to see it yesterday.
Copying answer from the Google code ticket.
Basically, you need to patch the client's Post method to modify the request feed slightly. Here's one way to do it without directly modifying the library source:
import atom.http_core
import gdata.client

def patched_post(client, entry, uri, auth_token=None, converter=None, desired_class=None, **kwargs):
    if converter is None and desired_class is None:
        desired_class = entry.__class__
    http_request = atom.http_core.HttpRequest()
    entry_string = entry.to_string(gdata.client.get_xml_version(client.api_version))
    # rewrite the ns1 namespace prefix to gd -- where the magic happens
    entry_string = entry_string.replace('ns1', 'gd')
    http_request.add_body_part(entry_string, 'application/atom+xml')
    return client.request(method='POST', uri=uri, auth_token=auth_token,
                          http_request=http_request, converter=converter,
                          desired_class=desired_class, **kwargs)

# when it comes time to do a batched delete/update,
# instead of calling client.ExecuteBatch, call patched_post directly
patched_post(client_instance, entry_feed, 'https://www.google.com/m8/feeds/contacts/default/full/batch')
The ticket referenced in the original post has some updated information and a temporary work around that allows batch deletes to succeed. So far it's working for me!
http://code.google.com/p/gdata-python-client/issues/detail?id=700
You can also specify the etag attribute to get around it. This works in the batch request payload:
<entry gd:etag="*">
    <batch:id>delete</batch:id>
    <batch:operation type="delete"/>
    <id>urlAsId</id>
</entry>
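In the Python client the same wildcard etag can be set on the entry object before adding it to the batch feed; a sketch (assuming the v3 gdata entry classes, which expose an etag attribute):

contact.etag = '*'  # equivalent to gd:etag="*" in the payload above
request_feed.AddDelete(entry=contact, batch_id_string='delete')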
I'm using Pyramid with Cornice to create an API for a Backbone.js application to consume. My current code is working perfectly for GET and POST requests, but it is returning 404 errors when it receives PUT requests. I believe that this is because Backbone sends them as http://example.com/api/clients/ID, where ID is the id number of the object in question.
My Cornice setup code is:
clients = Service(name='clients', path='/api/clients', description="Clients")
@clients.get()
def get_clients(request):
    ...

@clients.post()
def create_client(request):
    ...

@clients.put()
def update_client(request):
    ...
It seems that Cornice only registers the path /api/clients and not /api/clients/{id}. How can I make it match both?
The documentation gives an example of a service that has both an object path (/users/{id}) and a collection path (/users). Would this work for you?
@resource(collection_path='/users', path='/users/{id}')
A quick glance at the code for the resource decorator shows that it mainly creates two Services: one for the object and one for the collection. Your problem can probably be solved by adding another Service:
client = Service(name='client', path='/api/clients/{id}', description="Client")
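Then hang the PUT view off that new service; a minimal sketch (the handler body is a placeholder, and request.matchdict is standard Pyramid route matching):

@client.put()
def update_client(request):
    client_id = request.matchdict['id']  # the {id} segment of /api/clients/{id}
    ...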
I'm trying to integrate OpenERP and Asterisk with the asterisk_click2dial module. Calling from softphone to softphone works, but I can't call from OpenERP to a softphone.
manager.conf:
[general]
enabled = yes
webenabled = yes
port = 5038
bindaddr = 0.0.0.0
[openerp]
secret = openerp
deny=0.0.0.0/0.0.0.0
permit=0.0.0.0/0.0.0.0
read = system,call,log,verbose,command,agent,user
write = system,call,log,verbose,command,agent,user
Asterisk server config (screenshot)
I'm sure the user settings are OK.
It doesn't work when the AMI login is a phone number, like in the softphone config.
Python debug:
[2012-04-17 14:17:44,072][asterisk] INFO:asterisk_click2dial:Asterisk Click2Dial from 103 to 101
[2012-04-17 14:17:44,078][asterisk] WARNING:web-services:The method action_dial_phone of the object crm.lead can not return `None` !
Asterisk server debug:
== connect attempt from '192.168.1.106' unable to authenticate
While capturing SIP packets with Wireshark I saw only the receiver number (101@192.168.1.100). I didn't see the OpenERP user number (103), only Unknown@192.168.1.106. But this was my first time using Wireshark, so maybe it doesn't matter.
The question is: why can't OpenERP call the softphone, when softphone to softphone works?
Sorry for my English :)
You need to concentrate on the authentication side. If OpenERP (which I am not familiar with) can only send a phone number (or extension number) as a username, then you need to set that as your username in manager.conf. The username portion is what's between the [ and ] above (in this case it's [openerp]). If you do not have the flexibility to set an actual username on the client side of OpenERP, then you'll need to simply replace [openerp] with the phone number or extension number, as sketched below.
Then it should authenticate fine. Wireshark isn't likely to be terribly helpful in this instance.
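For example, if the extension is 103 (the calling extension shown in the OpenERP log above), the renamed section would look like this; everything but the section name stays the same:

[103]
secret = openerp
deny=0.0.0.0/0.0.0.0
permit=0.0.0.0/0.0.0.0
read = system,call,log,verbose,command,agent,user
write = system,call,log,verbose,command,agent,user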