Error creating AWS Lambda function using "sam build" - Python

When running sam build --use-container to create an AWS Python 3.8 Lambda function that uses a downloaded library, I am getting an error:
pip._vendor.requests.exceptions.SSLError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Max retries exceeded with url: /packages/d0/32/6c367f54699bd51961cf3e10299f6dee976f0f6813210052a4d8c2bd1d2b/pymemcache-3.2.0-py2.py3-none-any.whl (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate is not yet valid (_ssl.c:1108)')))
I checked the certificate on https://files.pythonhosted.org, and the cert is marked as starting on 7/13/2020; it's currently 7/14/2020.
I see that I can set the trusted-hosts option to hopefully avoid this (similar to: pip install fails with "connection error: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:598)"), but when pip is being run from within a container via a script, I'm not sure how to set it.
It looks like I can use an environment variable to set the pip trusted hosts as well, but I am not sure how to set that in the Docker image used by SAM.
(Running on a Windows 10 system.)
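pip honors a PIP_TRUSTED_HOST environment variable (multiple hosts are space-separated), and newer SAM CLI releases can pass environment variables into the build container. A sketch, assuming a SAM CLI version that supports --container-env-var:

$ sam build --use-container \
      --container-env-var "PIP_TRUSTED_HOST=pypi.org files.pythonhosted.org"

Also worth noting: "certificate is not yet valid" usually means the clock inside the build container is behind the certificate's start date (Docker Desktop on Windows is known to drift after the host sleeps), so checking the container's clock may resolve this without weakening pip's verification at all.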

Related

Python kopf operator throwing SSL certificate error

When I try to deploy my operator on a Kubernetes cluster, it throws an SSL certificate error like:
ClientConnectorCertificateError(ConnectionKey(host='10.xxxxxx',
port=443, is_ssl=True, ssl=None, proxy=None, proxy_auth=None,
proxy_headers_hash=xxxxxxxxxx), SSLCertVerificationError(1, '[SSL:
CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get
issuer certificate (_ssl.c:1131)'))
Basically it's trying to make a GET request to the Kubernetes API and hitting this error.
Do I need to configure anything in kopf?
When I deployed this operator on minikube it worked, but when it is deployed on the Kubernetes cluster it throws this SSL certificate error: it tries to make a GET request to the Kubernetes API but cannot connect.
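If the in-cluster CA is not being picked up automatically, kopf's custom-authentication hook can be used to point it at the service account's CA bundle explicitly. A minimal sketch, assuming the standard in-cluster mount paths; the handler name is arbitrary and this is a diagnostic starting point, not a verified fix:

import kopf

@kopf.on.login()
def login_fn(**kwargs):
    # Trust the service-account CA explicitly instead of relying on
    # auto-detection; these paths are the standard in-cluster mounts.
    return kopf.ConnectionInfo(
        server='https://kubernetes.default.svc',
        ca_path='/var/run/secrets/kubernetes.io/serviceaccount/ca.crt',
        scheme='Bearer',
        token=open('/var/run/secrets/kubernetes.io/serviceaccount/token').read(),
    )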

youtube-dl: Unable to get local issuer certificate - CERTIFICATE_VERIFY_FAILED

I'm trying to use youtube-dl with FFmpeg to download an m3u8 stream. Just recently I started receiving this error:
ERROR: Unable to download webpage: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED]
certificate verify failed: unable to get local issuer certificate (_ssl.c:992)>
(caused by URLError(SSLCertVerificationError(1,
'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed:
unable to get local issuer certificate (_ssl.c:992)')))
I know youtube-dl supports a --no-check-certificate option, but with it enabled the target machine refuses the connection after a couple of minutes. When trying the same m3u8 stream on another computer, I could download it without any issues.
I know that someone in the Youtube-dl CERTIFICATE_VERIFY_FAILED thread suggested fixing "your system's CA certificate list". What is the process for doing this?
I tried upgrading/reinstalling Python and installing the latest Windows update.
I also want to mention that there hadn't been any issue with downloading for the past year; I only stumbled upon this after switching proxy providers. But because the same setup works on another PC without any issue, that's probably not the reason.
The system the program is running on is Windows.
Edit: Another note is that downloading other public m3u8 streams works perfectly fine, so the problem is probably with the system's CA certificates.
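One way to refresh the CA list that Python consults (a sketch, assuming youtube-dl runs under this Python installation; the bundle path is whatever certifi prints on your machine):

$ pip install --upgrade certifi
$ python -c "import certifi; print(certifi.where())"

Then, on Windows, point Python's SSL stack at the printed bundle for the current session:

> set SSL_CERT_FILE=<path printed above>

SSL_CERT_FILE is read by OpenSSL-backed default contexts, so whether youtube-dl picks it up depends on how it was installed; treat this as a diagnostic step rather than a guaranteed fix.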

SSL error only in Python command window with apify request

I am trying to use an endpoint from apify.com. When I run my request in a web browser with the token everything is fine, but if I run the request via the requests library from the Python console I get the following error:
SSLError: HTTPSConnectionPool(host='', port=443): Max retries exceeded with url: /endpoint?token=token (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1131)')))
Moreover, if I set verify=False in my request, then the request works. Does anyone have an idea what could be wrong? Thanks in advance.
I had this issue come up a few weeks ago.
$ pip install certifi
$ python -m certifi
I'm not certain that one needs to actually call the module to get its functionality, but I did and it solved the error. More info on Certifi here. It is also a recommended companion package to requests on their website. I added those last bits because I was wary of installing a package that ostensibly was never called after installation.
The solution was to install an internal company SSL package for managing SSL connections from Python. There had been a recent change.
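For the "self signed certificate in certificate chain" case (typically a corporate proxy re-signing TLS traffic), an alternative to verify=False is to hand requests the internal CA bundle directly. A sketch; the endpoint and bundle path below are placeholders, not the asker's actual values:

import requests

# Hypothetical endpoint and CA path - substitute your own.
response = requests.get(
    "https://api.apify.com/v2/acts",
    params={"token": "<token>"},
    verify="/path/to/internal-ca-bundle.pem",  # trust the re-signing CA
)
print(response.status_code)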

What exactly causes the "unable to get local issuer certificate" error when accessing an otherwise accessible (via browser) website URL?

I'm on macOS Monterey 12.3 running Python 3.9.7 installed via brew. Given this minimal replication of my production code:
import requests

try:
    response = requests.get(website)  # website holds the (withheld) HTTPS URL
except requests.exceptions.SSLError as e:
    print("Error: " + str(e))
... it spits out this error:
Error: HTTPSConnectionPool(host='<SNIP>', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)')))
Unfortunately, the website URL is something I cannot share, but it's definitely accessible via HTTPS in Chrome. I'm aware of workarounds and have successfully applied this one, but I have deployed the identical code on a Linux server and it errors out all the same (so I'm assuming this isn't a macOS-specific issue). Is this a misconfiguration of the SSL cert on the server? And if so, how is it fixed?
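A common culprit when a site works in a browser but fails in Python is a server that sends only its leaf certificate: browsers can fetch the missing intermediate on the fly (via the certificate's AIA field), while Python's ssl module will not. One way to inspect the served chain (example.com stands in for the withheld host):

$ openssl s_client -connect example.com:443 -servername example.com -showcerts </dev/null

If only a single certificate block is printed, the intermediate is missing and the fix is to install the full chain on the server.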

Databricks CLI: SSLError, can't find local issuer certificate

I have installed and configured the Databricks CLI, but when I try using it I get an error indicating that it can't find a local issuer certificate:
$ dbfs ls dbfs:/databricks/cluster_init/
Error: SSLError: HTTPSConnectionPool(host='dbc-12345678-1234.cloud.databricks.com', port=443): Max retries exceeded with url: /api/2.0/dbfs/list?path=dbfs%3A%2Fdatabricks%2Fcluster_init%2F (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1123)')))
Does the above error indicate that I need to install a certificate, or somehow configure my environment so that it knows how to find the correct certificate?
My environment is Windows 10 with WSL (Ubuntu 20.04); the command above is from the WSL/Ubuntu command line.
The Databricks CLI was installed into an Anaconda environment including the following certificates and SSL packages:
$ conda list | grep cert
ca-certificates 2020.6.20 hecda079_0 conda-forge
certifi 2020.6.20 py38h32f6830_0 conda-forge
$ conda list | grep ssl
openssl 1.1.1g h516909a_1 conda-forge
pyopenssl 19.1.0 py_1 conda-forge
I get a similar error when I attempt to use the REST API with curl:
$ curl -n -X GET https://dbc-12345678-1234.cloud.databricks.com/api/2.0/clusters/list
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
This problem can be solved by disabling SSL certificate verification. In the Databricks CLI you can do so by specifying insecure = True in your Databricks configuration file, .databrickscfg.
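For instance, a sketch of .databrickscfg with placeholder host and token (note this disables verification entirely, so it removes the protection TLS is meant to provide):

[DEFAULT]
host = https://dbc-12345678-1234.cloud.databricks.com
token = <personal-access-token>
insecure = True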
I established trust to my Databricks instance by setting the environment variable REQUESTS_CA_BUNDLE.
➜ databricks workspace list
Error: SSLError: HTTPSConnectionPool(host='HOSTNAME.azuredatabricks.net', port=443): Max retries exceeded with url: /api/2.0/workspace/list?path=%2F (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)')))
➜ export REQUESTS_CA_BUNDLE=path/to/ca-bundle
➜ databricks workspace list
Users
Shared
Repos
From a GitHub issue:
Download the root CA certificate used to sign the Databricks certificate. Determine the path to the CA bundle and set the environment variable REQUESTS_CA_BUNDLE. See SSL Cert Verification for more information.
There is a similar issue on GitHub for the Azure CLI. The solution is practically the same. Combining that with Erik's answer:
1. Download the certificate using your browser and save it to disk:
   - Open Chrome and go to the Databricks website
   - Press CTRL + SHIFT + I to open the dev tools
   - Click the Security tab
   - Click the View certificate button
   - Click the Details tab
   - In the Certification Hierarchy (the top panel), click the highest node in the tree
   - Click Export the selected certificate
   - Choose where to save it (e.g. /home/cert/certificate.crt)
2. Use the SET command on Windows or export on Linux to create an environment variable called REQUESTS_CA_BUNDLE and point it at the file downloaded in Step 1. (Keep in mind that this needs to be done on the machine where you are running dbfs, not on the cluster.) For instance:
Linux
export REQUESTS_CA_BUNDLE=/home/cert/certificate.crt
Windows
set REQUESTS_CA_BUNDLE=c:\temp\cert\certificate.crt
3. Try running your command dbfs ls dbfs:/databricks/cluster_init/ again:
$ dbfs ls dbfs:/databricks/cluster_init/
It should work!
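As a side note, the same exported certificate should also satisfy the curl invocation from the question, since curl accepts an explicit CA file:

$ curl --cacert /home/cert/certificate.crt -n -X GET https://dbc-12345678-1234.cloud.databricks.com/api/2.0/clusters/list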
