How to generate self-signed cert using subjectAltName with dirName using OpenSSL? - python

I am attempting to generate a self-signed cert with a SubjectAltName of type DirName. Other types of SubjectAltName, like DNS, work just fine, but DirName will not. The code to reproduce is fairly simple (Python 3.8.5):
import string
from OpenSSL import crypto

def _create_csr():
    key = crypto.PKey()
    key.generate_key(crypto.TYPE_RSA, 2048)
    csr = crypto.X509Req()
    csr.set_pubkey(key)
    works = "DNS:abc.xyz"
    fails = "dirName:MyGeneratedCert"
    csr.add_extensions([crypto.X509Extension(b"subjectAltName", False, fails.encode("ascii"))])
    csr.sign(key, "sha256")

if __name__ == "__main__":
    _create_csr()
The exception I receive is the following:
Traceback (most recent call last):
  File "tests/createcert.py", line 16, in <module>
    _create_csr()
  File "tests/createcert.py", line 12, in _create_csr
    csr.add_extensions([crypto.X509Extension(b"subjectAltName", False, fails.encode("ascii"))])
  File "/usr/lib/python3/dist-packages/OpenSSL/crypto.py", line 779, in __init__
    _raise_current_error()
  File "/usr/lib/python3/dist-packages/OpenSSL/_util.py", line 54, in exception_from_error_queue
    raise exception_type(errors)
OpenSSL.crypto.Error: [('X509 V3 routines', 'X509V3_get_section', 'operation not defined'), ('X509 V3 routines', 'do_dirname', 'section not found'), ('X509 V3 routines', 'a2i_GENERAL_NAME', 'dirname error'), ('X509 V3 routines', 'X509V3_EXT_nconf', 'error in extension')]
The call is making it into OpenSSL's do_dirname function (per the stack trace). I assume the value is not being passed in the correct way, but I cannot work out how to pass it as desired.
Any help would be appreciated.

You cannot, via Python, because dirName references a value in the configuration database, and pyOpenSSL does not provide an interface for creating a configuration database.
Background: dirName references a section in the database, which could be a config file. See the x509v3_config documentation, for example (https://github.com/openssl/openssl/blob/master/doc/man5/x509v3_config.pod), where you may use a config file:
[extensions]
subjectAltName = dirName:dir_sect
[dir_sect]
C = UK
O = My Org
OU = My Unit
CN = My Name
Note how dirName simply refers to a different section of the configuration database.
But pyOpenSSL has no provision for creating such a database, so your dirName reference won't be found -- hence your error.
Note that this is a known limitation. The Python source itself mentions the lack of a configuration database; see the comment at approximately line 754 of https://github.com/pyca/pyopenssl/blob/master/src/OpenSSL/crypto.py:
# We have no configuration database - but perhaps we should (some
# extensions may require it).
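If you do need a dirName SAN from Python, one possible workaround (my suggestion, not part of the original answer) is to build the CSR with the pyca/cryptography library, which models a directory name directly as x509.DirectoryName and needs no configuration database. A minimal sketch, with the RDN attribute values assumed for illustration:

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# The directory name to carry in the SAN; these attribute values are examples.
dir_name = x509.DirectoryName(x509.Name([
    x509.NameAttribute(NameOID.ORGANIZATION_NAME, u"My Org"),
    x509.NameAttribute(NameOID.COMMON_NAME, u"MyGeneratedCert"),
]))

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"abc.xyz")]))
    .add_extension(x509.SubjectAlternativeName([dir_name]), critical=False)
    .sign(key, hashes.SHA256())
)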

(Python3) How do you get the time stamp of a remote file in python (i.e. a web link)?

I have already seen the examples on here of using Python's os library to get a local file's timestamp by passing it a local path (e.g. /var/www/html/etc.../filename.txt), but when I try to pass getmtime a link, it cannot process it.
Here is what the code looks like:
import os
print(os.path.getmtime('https://www.sec.gov/Archives/edgar/data/1474439/000169655519000022/xslF345X03/wf-form4_156772823294389.xml'))
Here is the error I get:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python3.7/genericpath.py", line 55, in getmtime
return os.stat(filename).st_mtime
FileNotFoundError: [Errno 2] No such file or directory: 'https://www.sec.gov/Archives/edgar/data/1474439/000169655519000022/xslF345X03/wf-form4_156772823294389.xml'
I know that this link exists, so it obviously doesn't like me passing it a link. Is there another function I can pass a link to in order to get the last modification time of a remote file?
A URL is not necessarily a file. You can ask the remote server to tell you about the link; the server may provide a Last-Modified header, or may not, at its discretion. It could also lie, if so instructed. To do this you need to make an HTTP request; the easiest way from Python is the nice requests library.
import requests
import dateutil.parser

# The URL from the question.
url = 'https://www.sec.gov/Archives/edgar/data/1474439/000169655519000022/xslF345X03/wf-form4_156772823294389.xml'

response = requests.head(url)
last_modified = response.headers.get('Last-Modified')
if last_modified:
    last_modified = dateutil.parser.parse(last_modified)
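Note that some servers refuse HEAD requests outright. A hedged fallback (my own addition, not in the original answer) is to issue a streaming GET, which returns as soon as the headers arrive, and close the connection without downloading the body:

if last_modified is None:
    response = requests.get(url, stream=True)  # headers arrive before the body is read
    header = response.headers.get('Last-Modified')
    last_modified = dateutil.parser.parse(header) if header else None
    response.close()  # discard the unread body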

AWS - OS Error permission denied Lambda Script

I'm trying to execute a Lambda script in Python with an imported library, however I'm getting permission errors.
I am also getting some alerts about the database, but the database queries are called after the subprocess, so I don't think they are related. Could someone explain why I get this error?
Alert information
Alarm:Database-WriteCapacityUnitsLimit-BasicAlarm
State changed to INSUFFICIENT_DATA at 2016/08/16. Reason: Unchecked: Initial alarm creation
Lambda Error
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 36, in lambda_handler
    xml_output = subprocess.check_output(["./mediainfo", "--full", "--output=XML", signed_url])
  File "/usr/lib64/python2.7/subprocess.py", line 566, in check_output
    process = Popen(stdout=PIPE, *popenargs, **kwargs)
  File "/usr/lib64/python2.7/subprocess.py", line 710, in __init__
    errread, errwrite)
  File "/usr/lib64/python2.7/subprocess.py", line 1335, in _execute_child
    raise child_exception
OSError: [Errno 13] Permission denied
Lambda code
import logging
import subprocess

import boto3

SIGNED_URL_EXPIRATION = 300  # The number of seconds that the Signed URL is valid
DYNAMODB_TABLE_NAME = "TechnicalMetadata"

DYNAMO = boto3.resource("dynamodb")
TABLE = DYNAMO.Table(DYNAMODB_TABLE_NAME)

logger = logging.getLogger('boto3')
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    """
    :param event:
    :param context:
    """
    # Loop through records provided by S3 Event trigger
    for s3_record in event['Records']:
        logger.info("Working on new s3_record...")
        # Extract the Key and Bucket names for the asset uploaded to S3
        key = s3_record['s3']['object']['key']
        bucket = s3_record['s3']['bucket']['name']
        logger.info("Bucket: {} \t Key: {}".format(bucket, key))
        # Generate a signed URL for the uploaded asset
        signed_url = get_signed_url(SIGNED_URL_EXPIRATION, bucket, key)
        logger.info("Signed URL: {}".format(signed_url))
        # Launch MediaInfo
        # Pass the signed URL of the uploaded asset to MediaInfo as an input
        # MediaInfo will extract the technical metadata from the asset
        # The extracted metadata will be outputted in XML format and
        # stored in the variable xml_output
        xml_output = subprocess.check_output(["./mediainfo", "--full", "--output=XML", signed_url])
        logger.info("Output: {}".format(xml_output))
        save_record(key, xml_output)

def save_record(key, xml_output):
    """
    Save record to DynamoDB
    :param key: S3 Key Name
    :param xml_output: Technical Metadata in XML Format
    :return:
    """
    logger.info("Saving record to DynamoDB...")
    TABLE.put_item(
        Item={
            'keyName': key,
            'technicalMetadata': xml_output
        }
    )
    logger.info("Saved record to DynamoDB")

def get_signed_url(expires_in, bucket, obj):
    """
    Generate a signed URL
    :param expires_in: URL Expiration time in seconds
    :param bucket:
    :param obj: S3 Key name
    :return: Signed URL
    """
    s3_cli = boto3.client("s3")
    presigned_url = s3_cli.generate_presigned_url('get_object',
                                                  Params={'Bucket': bucket, 'Key': obj},
                                                  ExpiresIn=expires_in)
    return presigned_url
I'm fairly certain that this is a restriction imposed by the Lambda execution environment, but it can be worked around by executing the command through the shell.
Try providing shell=True to your subprocess call. Note that with shell=True the command should be a single string rather than a list:
xml_output = subprocess.check_output("./mediainfo --full --output=XML " + signed_url, shell=True)
I encountered a similar situation. I was receiving the error:
2016-11-28T01:49:01.304Z d4505c71-b50c-11e6-b0a1-65eecf2623cd Error: Command failed: /var/task/node_modules/youtube-dl/bin/youtube-dl --dump-json -f best https://soundcloud.com/bla/blabla
python: can't open file '/var/task/node_modules/youtube-dl/bin/youtube-dl': [Errno 13] Permission denied
For my (and every other) Node Lambda project containing third-party libraries, there will be a directory called "node_modules" (most tutorials detail how this directory is created) that holds all the third-party packages and their dependencies. The same principle applies to the other supported languages (currently Python and Java). These are the files that Amazon actually puts on the Lambda AMIs and attempts to use. So, to fix the issue, run this on the node_modules directory (or whatever directory your third-party libraries live in):
chmod -R 777 /Users/bla/bla/bla/lambdaproject/node_modules
This command makes the files readable, writable, and executable by all users, which is apparently what the servers that execute Lambda functions need in order to work. Hopefully this helps!
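If repackaging with fixed permissions is not convenient, another approach (my own sketch, not from the original answers; the paths are assumptions) is to copy the bundled binary to /tmp, the only writable location in the Lambda environment, and mark it executable at runtime before invoking it:

import os
import shutil
import stat
import subprocess

def ensure_executable(bundled="/var/task/mediainfo", target="/tmp/mediainfo"):
    # Copy the packaged binary to /tmp and set its execute bits once per container.
    if not os.path.exists(target):
        shutil.copyfile(bundled, target)
        mode = os.stat(target).st_mode
        os.chmod(target, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
    return target

xml_output = subprocess.check_output([ensure_executable(), "--full", "--output=XML", signed_url])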

read certificate(.crt) and key(.key) file in python

So I'm using the JIRA-Python module to connect to my company's instance of JIRA, and it requires me to pass the certificate and key for this.
However, using the OpenSSL module, I'm unable to read my local certificate and key to pass along with the request.
The code for reading is below:
import OpenSSL.crypto
c = open('/Users/mpadakan/.certs/mpadakan-blr-mpsot-20160704.crt').read()
cert = OpenSSL.crypto.dump_certificate(OpenSSL.crypto.FILETYPE_PEM, c)
The error I get is:
Traceback (most recent call last):
  File "flaskApp.py", line 19, in <module>
    cert = OpenSSL.crypto.dump_certificate(OpenSSL.crypto.FILETYPE_PEM, c)
TypeError: must be X509, not str
Could someone tell me how to read my local .crt and .key files into X509 objects?
@can-ibanoglu was right on:
import OpenSSL.crypto

cert = OpenSSL.crypto.load_certificate(
    OpenSSL.crypto.FILETYPE_PEM,
    open('/tmp/server.crt').read()
)

>>> cert
<OpenSSL.crypto.X509 object at 0x7f79906a6f50>
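The question also asks about the .key file; the matching call for that is load_privatekey (a brief sketch, assuming a PEM-encoded key; the path is an example):

key = OpenSSL.crypto.load_privatekey(
    OpenSSL.crypto.FILETYPE_PEM,
    open('/tmp/server.key').read()
)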
Which format is your .crt file in? Is it:
text starting with -----BEGIN CERTIFICATE-----,
base64 text starting with the characters MI, or
binary data starting with the byte \x30?
In the first two cases you have PEM format, but in the second one you are missing the starting line; just add it to get a correct PEM file, or base64-decode the file to binary to get the third case.
In the third case you have DER format, so to load it you should use OpenSSL.crypto.FILETYPE_ASN1.
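For the DER case, the load call looks like this (a sketch; the path and the binary 'rb' mode are my assumptions, since DER is a binary encoding):

der_cert = OpenSSL.crypto.load_certificate(
    OpenSSL.crypto.FILETYPE_ASN1,
    open('/tmp/server.der', 'rb').read()
)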

openchange provision failure

I am currently looking at OpenChange because I find it fascinating that there is actually something out there that can effectively work as an Exchange server. I followed the directions verbatim; however, I keep running into the same problem.
When I get to the part where I need to provision OpenChange, detailed here:
http://www.openchange.org/cookbook/configuring.html
I am directed to type in the following command:
./setup/openchange_provision --standalone
I keep getting the following error:
Error: "(53, 'schema_data_add: updates are not allowed: reject request\n')" when adding element:
dn: CN=ms-Exch-Access-Control-Map,CN=Schema,CN=Configuration,DC=domain,DC=local
objectClass: top
objectClass: attributeSchema
cn: ms-Exch-Access-Control-Map
distinguishedName: CN=ms-Exch-Access-Control-Map,CN=Schema,CN=Configuration,DC=domain,DC=local
attributeID: 1.2.840.113556.1.4.7000.102.64
attributeSyntax: 2.5.5.12
isSingleValued: TRUE
showInAdvancedViewOnly: TRUE
adminDisplayName: ms-Exch-Access-Control-Map
adminDescription: ms-Exch-Access-Control-Map
oMSyntax: 64
searchFlags: 0
lDAPDisplayName: msExchAccessControlMap
name: ms-Exch-Access-Control-Map
#schemaIDGUID: 8ff54464-b093-11d2-aa06-00c04f8eedd8
isMemberOfPartialAttributeSet: FALSE
objectCategory: CN=Attribute-Schema,CN=Schema,CN=Configuration,DC=domain,DC=local
[!] error while provisioning the Exchange schema classes (53): schema_data_add: updates are not allowed: reject request
Traceback (most recent call last):
  File "./setup/openchange_provision", line 90, in <module>
    openchange.provision(setup_path, provisionnames, lp, creds)
  File "python/openchange/provision.py", line 742, in provision
    install_schemas(setup_path, names, lp, creds, reporter)
  File "python/openchange/provision.py", line 441, in install_schemas
    provision_schema(sam_db, setup_path, names, reporter, schema['path'], schema['description'], schema['modify_mode'])
  File "python/openchange/provision.py", line 227, in provision_schema
    sam_db.add_ldif(el, ['relax:0'])
  File "/usr/local/samba/lib/python2.7/site-packages/samba/__init__.py", line 224, in add_ldif
    self.add(msg, controls)
_ldb.LdbError: (53, 'schema_data_add: updates are not allowed: reject request\n')
I am at a complete loss as to what might be wrong, I have rebuilt this many times and keep running into the same roadblock. Any help with this would be greatly appreciated.
If you look at the output of OpenChange, you can find the root cause of the problem:
[!] error while provisioning the Exchange schema classes (53): schema_data_add: updates are not allowed: reject request
Add the following line to the [global] section of your smb.conf to allow schema updates:
dsdb:schema update allowed=true
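In context, the relevant portion of smb.conf would look something like this (the comment stands in for whatever settings your file already contains):

[global]
    # ... your existing settings ...
    dsdb:schema update allowed = true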

Google Cloud Storage using Python

I set up the required environment for Google Cloud Storage according to the manual.
I have installed gsutil and set up all paths.
My gsutil works perfectly; however, when I try to run the code below,
#!/usr/bin/python
import StringIO
import os
import shutil
import tempfile
import time

from oauth2_plugin import oauth2_plugin

import boto

# URI scheme for Google Cloud Storage.
GOOGLE_STORAGE = 'gs'
# URI scheme for accessing local files.
LOCAL_FILE = 'file'

uri = boto.storage_uri('sangin2', GOOGLE_STORAGE)
try:
    uri.create_bucket()
    print "done!"
except boto.exception.StorageCreateError, e:
    print "failed"
It gives a "403 Access denied" error:
Traceback (most recent call last):
  File "/Volumes/WingIDE-101-4.0.0/WingIDE.app/Contents/MacOS/src/debug/tserver/_sandbox.py", line 23, in <module>
  File "/Users/lsangin/gsutil/boto/boto/storage_uri.py", line 349, in create_bucket
    return conn.create_bucket(self.bucket_name, headers, location, policy)
  File "/Users/lsangin/gsutil/boto/boto/gs/connection.py", line 91, in create_bucket
    response.status, response.reason, body)
boto.exception.GSResponseError: GSResponseError: 403 Forbidden
<?xml version='1.0' encoding='UTF-8'?><Error><Code>AccessDenied</Code><Message>Access denied.</Message></Error>
Since I am new to this, it is kinda hard for me to figure out why.
Can someone help me?
Thank you.
The boto library should automatically find and use your $HOME/.boto file. One thing to check: make sure the project you're using is set as your default project for legacy access (at the API console, click on "Storage Access" and verify that it says "This is your default project for legacy access"). When I have that set incorrectly and follow the create-bucket example you referenced, I also get a 403 error. However, it doesn't make sense that this would work for you in gsutil but not with direct use of boto.
Try adding "debug=2" when you instantiate the storage_uri object, like this:
uri = boto.storage_uri(name, GOOGLE_STORAGE, debug=2)
That will generate some additional debugging information on stdout, which you can then compare with the debug output from an analogous, working gsutil example (via gsutil -D mb).
