I am having trouble trying to set up a connector to Kusto using the Kusto client library for Python.
I managed to make it work using the with_aad_device_authentication method, by doing the following:
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder  # azure-kusto-data package

KCSB = KustoConnectionStringBuilder.with_aad_device_authentication(KUSTO_CLUSTER)
KCSB.authority_id = AAD_TENANT_ID
client = KustoClient(KCSB)
KUSTO_QUERY = "Table | take 10"
RESPONSE = client.execute(KUSTO_DATABASE, KUSTO_QUERY)
which requires me to authenticate by going to a web page and entering a code provided by the library.
However, when I try to connect to the database using the with_aad_application_key_authentication method, it throws
KustoServiceError: (KustoServiceError(...), [{'error': {'code': 'Forbidden', 'message': 'Caller is not authorized to perform this action', '#type': 'Kusto.DataNode.Exceptions.UnauthorizedDatabaseAccessException' ...
which I don't understand, since I have granted my application the following permissions: Azure Data Explorer (with Multifactor Authentication) and Azure Data Explorer.
I have been struggling with this for a while and couldn't come up with a solution. Does anyone have any idea what the problem could be here?
There are two possible reasons:
1) You did not give the app permission on the database itself. Permissions on the Azure Data Explorer resource (we call it the 'control plane'), granted via the "Access control (IAM)" button, allow your app to do management operations on the cluster (such as adding and removing databases), while permissions on the database itself (we call it the 'data plane') allow operations within the database, such as creating tables and running queries. Please note that you can also grant permissions to all databases in the cluster by clicking the "Permissions" button in the cluster blade.
In order to fix it, click on the database in the Azure portal, and once you are in the database blade, click on the 'Permissions' button and give the app a permission (Admin, User, Viewer, etc.). See the screenshot below.
2) You did not provide all three of the required values correctly (appId, appKey and authority id).
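For completeness, here is a minimal sketch of the application-key flow; the cluster URL, app id, app key, tenant id and database name are placeholders for your own values:

from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

KUSTO_CLUSTER = "https://<yourcluster>.<region>.kusto.windows.net"
CLIENT_ID = "<AAD application id>"
CLIENT_SECRET = "<AAD application key>"
AAD_TENANT_ID = "<AAD tenant id>"

# Authenticate with the app's credentials instead of the device-code prompt
KCSB = KustoConnectionStringBuilder.with_aad_application_key_authentication(
    KUSTO_CLUSTER, CLIENT_ID, CLIENT_SECRET, AAD_TENANT_ID)
client = KustoClient(KCSB)
RESPONSE = client.execute("<database>", "Table | take 10")

If all three values are correct and the app has a role on the database itself, this behaves just like the device-code flow.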
Here is the relevant screenshot for adding permission in a specific database:
Adding some more context.
Granting your application delegated permissions to ADX only allows your application to perform user authentication for ADX resources, but it does not grant the application itself any roles on your specific ADX resource.
The answer above walks you through how to do that.
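As a side note, if you already have an identity with admin rights on the database, the same role assignment can be done with a management command instead of the portal. A rough sketch from Python, where admin_client is a KustoClient authenticated as a database admin and the database name, app id and tenant id are placeholders:

# Grant the AAD application the 'viewers' role on the database
admin_client.execute_mgmt(
    "<database>",
    ".add database ['<database>'] viewers ('aadapp=<application id>;<tenant id>')")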
Related
I have a BigQuery database connected to a Google Sheet, to which I have read-only access.
I want to get data from a table; the query works perfectly fine in the BigQuery editor, but I want to create a Google Cloud Function so I can expose it as an API and run the query directly from a URL.
I have created a Service Account using this command:
gcloud iam service-accounts create connect-to-bigquery
gcloud projects add-iam-policy-binding test-24s --member="serviceAccount:connect-to-bigquery@test-24s.iam.gserviceaccount.com" --role="roles/owner"
and I have created a Google Cloud Function as follows:
Creating Cloud Function
Service account settings
Here is my code for the main.py file:
from google.cloud import bigquery

def hello_world(request):
    client = bigquery.Client()
    query = "SELECT order_id, MAX(status) AS last_status FROM `test-24s.Dataset.Order_Status` GROUP BY order_id"
    query_job = client.query(query)
    print("The query data:")
    for row in query_job:
        print("name={}, count={}".format(row[0], row["last_status"]))
    return 'The query ran successfully'
And for the requirements.txt file:
# Function dependencies, for example:
# package>=version
google-cloud-bigquery
The function deploys successfully; however, when I try to test it I get this error:
Error: function terminated. Recommended action: inspect logs for termination reason. Additional troubleshooting documentation can be found at https://cloud.google.com/functions/docs/troubleshooting#logging Details:
500 Internal Server Error: The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
And when reading the log file I found this error
403 Access Denied: BigQuery BigQuery: Permission denied while getting Drive credentials.
Please help me solve this; I have already tried all the solutions I found on the net, without any success.
Based on this: "Permission denied while getting Drive credentials" - I would say that your service account's IAM permissions do not extend to Drive: while that service account probably has the relevant access to BigQuery, it does not have access to the underlying spreadsheet maintained on Drive...
I would try - either
extend the scope of the service account's credentials (if possible, but that may not be very straightforward; a rough sketch of this is shown after the second option below). Here is an article by Zaar Hai with some details - Google Auth — Dispelling the Magic - and a comment from Guillaume - "Yes, my experience isn't the same";
or (preferably from my point of view)
make a copy (maybe with regular updates) of the original spreadsheet-based table as a native BigQuery table, and use the latter in your cloud function. A side effect of this approach is a significant performance improvement (and cost savings).
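For the first option, here is a rough sketch of what extending the scopes could look like inside the Cloud Function; this assumes the spreadsheet itself has also been shared with the service account, and the scope URLs are the standard BigQuery and Drive ones:

import google.auth
from google.cloud import bigquery

def hello_world(request):
    # Request Drive access in addition to BigQuery so the Sheets-backed table can be read
    credentials, project = google.auth.default(
        scopes=[
            "https://www.googleapis.com/auth/bigquery",
            "https://www.googleapis.com/auth/drive",
        ]
    )
    client = bigquery.Client(credentials=credentials, project=project)
    query = "SELECT order_id, MAX(status) AS last_status FROM `test-24s.Dataset.Order_Status` GROUP BY order_id"
    rows = list(client.query(query))
    return "Fetched {} rows".format(len(rows))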
I am getting an error while deploying my Azure function from the local system.
I went through some blogs, and they state that my function is unable to connect to the Azure storage account that holds the function's metadata.
Also, the function on the portal is showing the error: Azure Functions runtime is unreachable.
Earlier my function was running, but after integrating it with an Azure Premium App Service plan it has stopped working. My assumption is that my App Service plan has some restriction on the inbound/outbound traffic rules, and because of this it is unable to establish a connection with the function's associated storage account.
Also, I would like to highlight that if a function is using the Premium plan then we have to add a few other configuration properties.
WEBSITE_CONTENTAZUREFILECONNECTIONSTRING = "DefaultEndpointsProtocol=https;AccountName=blob_container_storage_acc;AccountKey=dummy_value==;EndpointSuffix=core.windows.net"
WEBSITE_CONTENTSHARE = "my-function-name"
For the WEBSITE_CONTENTSHARE property I have added the function app name, but I am not sure about the value.
Following is the Microsoft document reference for the function properties
Microsoft Function configuration properties link
Can you please help me resolve the issue?
Note: I am using python for the Azure functions.
I have created a new function app with a Premium plan and selected Python as the runtime. When Python is selected, the OS is automatically Linux.
Below is the message we get when creating functions for a Premium plan function app:
Your app is currently in read only mode because Elastic Premium on Linux requires running from a package.
PortalScreenshot
We need to create, deploy and run the function app from a package; refer to the documentation on how to run functions from a package.
Documentation
Make sure to add all your local.settings.json configurations to the Application Settings of the function app.
I am not sure what kind of Azure Function you are using, but usually when there is a Storage Account associated, we need to specify the AzureWebJobsStorage field in the serviceDependencies.json file inside the Properties folder. When I faced the same error, the cause was that, while publishing the Azure function from local, some settings from local.settings.json were missing in the Application Settings of the app service under the Configuration blade.
There are a few more things you can recheck:
Does the storage account you are trying to use still exist, or has it been deleted by any chance?
While publishing the application from local using the web deploy method, is the publish profile correct, or does it have any issues?
Try disabling the function app and then stopping the app service before redeploying it.
Hope one of the above helps you diagnose and solve the issue.
The thing is that there is a difference in how the function is deployed with a Consumption vs. a Premium service plan.
Consumption - works out of the box.
Premium - you need to add WEBSITE_RUN_FROM_PACKAGE = 1 to the function's Application settings (see https://learn.microsoft.com/en-us/azure/azure-functions/run-functions-from-deployment-package for full details).
What I understand is that, in order to access AWS services such as Redshift, the way to do it is:
client = boto3.client("redshift", region_name="someRegion", aws_access_key_id="foo", aws_secret_access_key="bar")
response = client.describe_clusters(ClusterIdentifier="mycluster")
print(response)
This code runs fine both locally through PyCharm and on AWS Lambda.
However, am I correct that this aws_access_key_id and aws_secret_access_key are both from me, i.e. my IAM user's security access keys? Is this supposed to be the case, or am I supposed to create a different user/role in order to access Redshift via boto3?
The more important question is: how do I properly store and retrieve aws_access_key_id and aws_secret_access_key? I understand that this could potentially be done via Secrets Manager, but I am still faced with the problem that, if I run the code below, I get an error saying that it is unable to locate credentials.
client = boto3.client("secretsmanager", region_name="someRegion")
# Met with the problem that it is unable to locate my credentials.
The proper way to do this would be to create an IAM role which allows the desired Redshift functionality, and then attach that role to your lambda.
When you create the role, you have the flexibility to create a policy to fine-grain access permissions to certain actions and/or certain resources.
After you have attached the IAM role to your lambda, you will simply be able to do:
>>> client = boto3.client("redshift")
From the docs: the first and second options are not secure, since you mix the credentials with the code.
If the code runs on an AWS EC2 instance, the best way is to use an assumed role, where you grant the EC2 instance the permissions. If the code runs outside AWS, you will have to pick another option, such as using ~/.aws/credentials.
Boto3 will look in several locations when searching for credentials. The mechanism in which Boto3 looks for credentials is to search through a list of possible locations and stop as soon as it finds credentials. The order in which Boto3 searches for credentials is:
Passing credentials as parameters in the boto3.client() method
Passing credentials as parameters when creating a Session object
Environment variables
Shared credential file (~/.aws/credentials)
AWS config file (~/.aws/config)
Assume Role provider
Boto2 config file (/etc/boto.cfg and ~/.boto)
Instance metadata service on an Amazon EC2 instance that has an IAM role configured.
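In practice this means that on Lambda or EC2, with a role attached, you can drop the explicit keys entirely and let that chain resolve them. A minimal sketch; the region and profile name are placeholders:

import boto3

# On Lambda/EC2 the attached role is picked up automatically through the chain above
client = boto3.client("redshift", region_name="us-east-1")
print(client.describe_clusters(ClusterIdentifier="mycluster"))

# Locally, you can point boto3 at a named profile from ~/.aws/credentials instead
session = boto3.Session(profile_name="my-profile")
secrets = session.client("secretsmanager")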
I am working on an app with a locally stored MongoDB instance, and I am struggling with the design of how app-users should be stored in order to implement in-app login.
On the one hand, MongoDB provides solid access control and authentication for db-users, with the ability to define roles, actions and privileges, so I feel tempted to leverage this to implement my app-user storage.
On the other hand, considering it uses a system collection, I get the feeling (and, judging by at least this thread, I am getting it right) that the user management provided by MongoDB should be used to manage db-user accounts only (that is, the software that accesses the database), not app-user accounts (the people who use the software that accesses the database).
So I am thinking my storage schema should look something like this:
system.
    users             # for db-users (apps and services)
    other system cols
    ...
myappdb.
    users             # for app-users (actual people using the app)
    other app cols
    ...
So, in order to log someone into my app, I first need a set of credentials (a db-user) so the app can log into the database, retrieve the app-user's credentials, and then log that person in when they type their own credentials.
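To make that flow concrete, here is a rough sketch of what I have in mind (using pymongo and bcrypt; all names are placeholders, and for now the db-user credentials are read from the environment, which is exactly what question 2 below is about):

import os
import bcrypt
from pymongo import MongoClient

# db-user credentials: read from the environment for now rather than hardcoded
client = MongoClient(
    "mongodb://localhost:27017",
    username=os.environ["APP_DB_USER"],
    password=os.environ["APP_DB_PASSWORD"],
    authSource="admin",
)
app_users = client["myappdb"]["users"]

def login(username, password):
    # app-user documents store a bcrypt hash of the password, never the plain text
    doc = app_users.find_one({"username": username})
    return doc is not None and bcrypt.checkpw(password.encode(), doc["password_hash"])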
Question 1: does this make sense?
Question 2: if yes, how do I hide my db-user credentials then? because I get the feeling this should not be hardcoded and I am not finding a way to make the connection to the database without it being so.
Question 3: if not, what would be an appropriate way to deal with this? Links and articles are welcome.
I'm learning Microsoft Azure and using Python 3. I got the following error:
C:\Python\python.exe D:/Phyton/Restapi/a.py
Cannot find resource group sgelastic. Check connection/authorization.
{
"error": {
"code": "AuthorizationFailed",
"message": "The client '22273c48-3d9d-4f31-9316-210135595353' with object id '22273c48-3d9d-4f31-9316-210135595353' does not have authorization to perform action 'Microsoft.Resources/subscriptions/resourceGroups/read' over scope '/subscriptions/0f3e0eec-****-****-b9f9-************resourceGroups/sgelastic'."
}
}
Process finished with exit code 0
What should I do? Should I create a new subscription, or something else?
Thank you.
The credentials you are using do not have the necessary permissions to read the resource group "sgelastic".
You can add the "Contributor" role to these credentials, or a more fine-grained permission on this specific resource group, depending on your needs.
You should read the documentation on RBAC in Azure for that; the current version is here:
https://learn.microsoft.com/azure/active-directory/role-based-access-control-what-is
The list of available actions (and the names of the built-in roles that have them) is here:
https://learn.microsoft.com/azure/active-directory/role-based-access-built-in-roles
As @Laurent Mazuel said, try following the steps in the figure below to add the necessary permission.
Click the Subscription tab on Azure portal.
Select the subscription of the related resource group.
Move to the Access control (IAM) tab.
Click the + Add button.
Select a role, such as Contributor, in the Add permission dialog.
Search for the name of your user or application, and select the one you used from the result list.
Save it.
Or you can use Azure CLI 2.0 to create a service principal to do it.
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/mySubscriptionID/resourceGroups/myResourceGroupName"
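Once the service principal exists and has the role on the resource group, your Python code can authenticate with it. A minimal sketch using the azure-identity and azure-mgmt-resource packages; all ids are placeholders:

from azure.identity import ClientSecretCredential
from azure.mgmt.resource import ResourceManagementClient

credential = ClientSecretCredential(
    tenant_id="<tenant id>",
    client_id="<service principal appId>",
    client_secret="<service principal password>",
)
rm_client = ResourceManagementClient(credential, "<subscription id>")

# This is the call that previously failed with AuthorizationFailed
group = rm_client.resource_groups.get("sgelastic")
print(group.location)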
Hope it helps.