I have a running MLflow server on a GCP VM instance. I have created a bucket to log the artifacts.
This is the command I'm running to start the server and to specify the bucket path:
mlflow server --default-artifact-root gs://gcs_bucket/artifacts --host x.x.x.x
But I'm facing this error:
TypeError: stat: path should be string, bytes, os.PathLike or integer, not ElasticNet
Note: the MLflow server runs fine with the specified host alone. The problem appears only when I specify the storage bucket path.
I have granted permission for the Storage API by using these commands:
gcloud auth application-default login
gcloud auth login
Also, on printing the artifact URI, this is what I'm getting:
mlflow.get_artifact_uri()
Output:
gs://gcs_bucket/artifacts/0/122481bf990xxxxxxxxxxxxxxxxxxxxx/artifacts
So, in the above path, where is the 0/122481bf990xxxxxxxxxxxxxxxxxxxxx/artifacts part coming from, and why isn't it getting auto-created at gs://gcs_bucket/artifacts?
After debugging further, I can't see why it's not able to get the local path from the VM.
This is the error I'm getting on the VM:
WARNING:root:Malformed experiment 'mlruns'. Detailed error Yaml file './mlruns/mlruns/meta.yaml' does not exist.
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/mlflow/store/tracking/file_store.py", line 197, in list_experiments
experiment = self._get_experiment(exp_id, view_type)
File "/usr/local/lib/python3.6/dist-packages/mlflow/store/tracking/file_store.py", line 256, in _get_experiment
meta = read_yaml(experiment_dir, FileStore.META_DATA_FILE_NAME)
File "/usr/local/lib/python3.6/dist-packages/mlflow/utils/file_utils.py", line 160, in read_yaml
raise MissingConfigException("Yaml file '%s' does not exist." % file_path)
mlflow.exceptions.MissingConfigException: Yaml file './mlruns/mlruns/meta.yaml' does not exist.
Can I get a solution to this? What am I missing?
I think the main issue comes from the architecture you want to deploy. For your use case, the suitable architecture is the one described here: you are missing the URI used to store the backend metadata. So please set up a SQL database (PostgreSQL, ...) first, then pass its URI to --backend-store-uri.
In case you want to use MLflow as a model registry and store images on GCS, you can use the architecture described here, adding the --artifacts-only and --serve-artifacts flags.
Hope this can help you.
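For reference, a minimal sketch of such a server invocation could look like the following; the PostgreSQL connection string (user, password, database name) is a placeholder assumption, not something taken from your setup:
mlflow server --backend-store-uri postgresql://mlflow_user:mlflow_pass@localhost:5432/mlflow_db --default-artifact-root gs://gcs_bucket/artifacts --host x.x.x.x
With a database backend in place, run metadata goes to PostgreSQL, while artifacts are written under gs://gcs_bucket/artifacts/<experiment_id>/<run_id>/artifacts. That layout is also where the 0/122481bf990.../artifacts suffix in your printed URI comes from: 0 is the default experiment ID and the long hex string is the run ID.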
I have been trying to work with polyglot and build a simple Python processor. I followed the polyglot recipe but could not get the stream to deploy. I originally deployed the same processor that is used in the example and got the following errors:
Unknown command line arg requested: spring.cloud.stream.bindings.input.destination
Unknown environment variable requested: SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS
Traceback (most recent call last):
File "/processor/python_processor.py", line 10, in
consumer = KafkaConsumer(get_input_channel(), bootstrap_servers=[get_kafka_binder_brokers()])
File "/usr/local/lib/python2.7/dist-packages/kafka/consumer/group.py", line 353, in init
self._client = KafkaClient(metrics=self._metrics, **self.config)
File "/usr/local/lib/python2.7/dist-packages/kafka/client_async.py", line 203, in init
self.cluster = ClusterMetadata(**self.config)
File "/usr/local/lib/python2.7/dist-packages/kafka/cluster.py", line 67, in init
self._bootstrap_brokers = self._generate_bootstrap_brokers()
File "/usr/local/lib/python2.7/dist-packages/kafka/cluster.py", line 71, in _generate_bootstrap_brokers
bootstrap_hosts = collect_hosts(self.config['bootstrap_servers'])
File "/usr/local/lib/python2.7/dist-packages/kafka/conn.py", line 1336, in collect_hosts
host, port, afi = get_ip_port_afi(host_port)
File "/usr/local/lib/python2.7/dist-packages/kafka/conn.py", line 1289, in get_ip_port_afi
host_and_port_str = host_and_port_str.strip()
AttributeError: 'NoneType' object has no attribute 'strip'
Exception AttributeError: "'KafkaClient' object has no attribute '_closed'" in <bound method KafkaClient.__del__ of <kafka.client_async.KafkaClient object at 0x7f8b7024cf10>> ignored
I then attempted to pass the environment and binding arguments through the stream deployment, but that did not work. When I manually inserted the SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS and spring.cloud.stream.bindings.input.destination parameters into Kafka's consumer, I was able to deploy the stream as a workaround. I am not entirely sure what is causing the issue; would deploying this on Kubernetes be any different, or is this an issue with Polyglot and Data Flow? Any help with this would be appreciated.
Steps to reproduce:
Attempt to deploy the polyglot-processor stream from the polyglot recipe on a local Data Flow server. I am also using the same stream definition as in the example: http --server.port=32123 | python-processor --reversestring=true | log.
Additional context:
I am attempting to deploy the stream on a local installation of SCDF and Kafka, since I had some issues deploying custom Python applications with Docker.
The recipe you have posted above expects the SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS environment variable to be present as part of the server configuration (since the streams are managed via the Skipper server, you would need to set this environment variable in your Skipper server configuration).
You can check the documentation on how to set SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS as an environment property in the Skipper server deployment.
You can also pass this property as a deployer property when deploying the python-processor stream app. You can refer to the documentation on how to pass a deployment property to set the Spring Cloud Stream properties (here, the binder configuration property) at the time of stream deployment.
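If you need a stopgap on the processor side in the meantime (similar to the manual workaround you described), you could read the broker and destination defensively with fallbacks instead of letting the Kafka client crash on a None value. This is only a sketch: the fallback values below are placeholder assumptions for a local setup, and it does not use the recipe's helper functions.

import os
import sys

from kafka import KafkaConsumer

def get_cli_arg(name, default):
    # Return the value of a "--name=value" command line argument, or the default.
    prefix = "--{}=".format(name)
    for arg in sys.argv[1:]:
        if arg.startswith(prefix):
            return arg[len(prefix):]
    return default

# Environment variable from the error above; "localhost:9092" is a placeholder fallback.
brokers = os.environ.get("SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS", "localhost:9092")
# Binding property from the error above; "python-processor-input" is a placeholder fallback.
input_topic = get_cli_arg("spring.cloud.stream.bindings.input.destination", "python-processor-input")

consumer = KafkaConsumer(input_topic, bootstrap_servers=[brokers])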
I'm trying to upload a file into Azure Blob Storage. My application is hosted on an Azure App Service Linux server.
Now, when I request a file upload from a remote machine, I want the file to be uploaded from the given path.
I have three request parameters which take their values from the GET request:
https://testApp.azurewebsites.net/blobs/fileUpload/
containerName:test
fileName:testFile.txt
filePath:C:\Users\testUser\Documents
#app.route("/blobs/fileUpload/")
def fileUpload():
container_name = request.form.get("containerName")
print(container_name)
local_file_name =request.form.get("fileName")
print(local_file_name)
local_path =request.form.get('filePath')
ntpath.normpath(local_path)
print(local_path)
full_path_to_file=ntpath.join(local_path,local_file_name)
print(full_path_to_file)
# Upload the created file, use local_file_name for the blob name
block_blob_service.create_blob_from_path(container_name,
local_file_name, full_path_to_file)
return jsonify({'status': 'fileUploaded'})
The value which I get from the request for local_path = request.form.get('filePath') is C:\Users\testUser\Documents\,
because of which I get this error:
OSError: [Errno 22] Invalid argument: 'C:\Users\testUser\Documents\testFile.txt'
All I want is to use the same path that I send in the request. Since the application is hosted on a Linux machine, it treats the path as a Unix file system path if I use os.path.
Please help me with this.
As the error message says, the local path 'C:\Users\testUser\Documents\testFile.txt' is invalid. It means that there is no such file path on the machine where the code is running (the App Service Linux server), only on the remote machine.
If you want to use the create_blob_from_path method, you should download the file to the machine running the code first, then use the method to upload it to blob storage.
Alternatively, you can get the stream / text of the file from the remote machine, then use the create_blob_from_stream / create_blob_from_text method respectively.
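For example, here is a minimal sketch along those lines, assuming the remote machine sends the file content itself in a multipart POST (the account name, key, and route below are placeholders, and the older BlockBlobService API from your snippet is assumed):

from flask import Flask, request, jsonify
from azure.storage.blob import BlockBlobService

app = Flask(__name__)
# Placeholder credentials; replace with your own account name and key.
block_blob_service = BlockBlobService(account_name='myaccount', account_key='mykey')

@app.route("/blobs/fileUpload/", methods=['POST'])
def file_upload():
    container_name = request.form.get("containerName")
    uploaded = request.files["file"]  # the file sent by the remote machine
    blob_name = request.form.get("fileName") or uploaded.filename
    # Upload directly from the request stream; no server-side file path is needed.
    block_blob_service.create_blob_from_stream(container_name, blob_name, uploaded.stream)
    return jsonify({'status': 'fileUploaded'})

The remote machine then posts the file itself (for example as multipart form data) instead of a path that only exists on its own disk.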
blob.upload_from_filename(source) gives the error
raise exceptions.from_http_status(response.status_code, message, response=response)
google.api_core.exceptions.Forbidden: 403 POST https://www.googleapis.com/upload/storage/v1/b/bucket1-newsdata-bluetechsoft/o?uploadType=multipart: ('Request failed with status code', 403, 'Expected one of', <HTTPStatus.OK: 200>)
I am following the Google Cloud example written in Python here.
from google.cloud import storage

def upload_blob(bucket, source, des):
    # Note: this client created from the service account JSON is never used below;
    # the default-credentials client on the next line is what actually performs the upload.
    client = storage.Client.from_service_account_json('/path')
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(bucket)
    blob = bucket.blob(des)
    blob.upload_from_filename(source)
I used gsutil to upload files, which is working fine.
I also tried listing the bucket names using a Python script, which works fine as well.
I have necessary permissions and GOOGLE_APPLICATION_CREDENTIALS set.
This whole thing wasn't working because the service account I am using in GCP didn't have the Storage Admin permission.
Granting Storage Admin to my service account solved my problem.
As other answers have indicated, this is related to a permissions issue. I have found the following command to be a useful way to create application default credentials for the currently logged-in user.
Assuming you got this error while running the code on some machine, the following steps should be sufficient:
SSH to the VM where the code is running or will be running. Make sure you are logged in as a user who has permission to upload to Google Cloud Storage.
Run the following command:
gcloud auth application-default login
The above command will ask you to create a token by clicking on a URL. Generate the token and paste it into the SSH console.
That's it. Any Python application started as that user will use this as the default credential for interacting with storage buckets.
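For instance, here is a minimal sketch assuming the application default credentials are now in place (the bucket, blob, and file names are placeholders):

from google.cloud import storage

# With application default credentials set, no explicit key file is needed.
client = storage.Client()
bucket = client.get_bucket("gcs_bucket")          # placeholder bucket name
blob = bucket.blob("artifacts/testFile.txt")      # placeholder destination blob name
blob.upload_from_filename("/tmp/testFile.txt")    # placeholder local file path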
Happy GCP'ing :)
This question is more appropriate for a support case.
As you are getting a 403, most likely you are missing a permission in IAM; the Google Cloud Platform support team will be able to inspect your resources and configurations.
This is what worked for me when the Google documentation didn't. I was getting the same error even with the appropriate permissions.
import pathlib
import google.cloud.storage as gcs

client = gcs.Client()

# set target file to write to
target = pathlib.Path("local_file.txt")
# set file to download
FULL_FILE_PATH = "gs://bucket_name/folder_name/file_name.txt"

# open filestream with write permissions
with target.open(mode="wb") as downloaded_file:
    # download and write file locally
    client.download_blob_to_file(FULL_FILE_PATH, downloaded_file)
I've built the following script:
import boto
import sys
import gcs_oauth2_boto_plugin

def check_size_lzo(ds):
    CLIENT_ID = 'myclientid'
    CLIENT_SECRET = 'mysecret'
    # URI scheme for Cloud Storage.
    GOOGLE_STORAGE = 'gs'
    dir_file = 'date_id={ds}/apollo_export_{ds}.lzo'.format(ds=ds)
    gcs_oauth2_boto_plugin.SetFallbackClientIdAndSecret(CLIENT_ID, CLIENT_SECRET)
    uri = boto.storage_uri('my_bucket/data/apollo/prod/' + dir_file, GOOGLE_STORAGE)
    key = uri.get_key()
    if key.size < 45379959:
        raise ValueError('umg lzo file is too small, investigate')
    else:
        print('umg lzo file is %sMB' % round((key.size / 1e6), 2))

if __name__ == "__main__":
    check_size_lzo(sys.argv[1])
It works fine locally, but when I try to run it on a Kubernetes cluster I get the following error:
boto.exception.GSResponseError: GSResponseError: 403 Access denied to 'gs://my_bucket/data/apollo/prod/date_id=20180628/apollo_export_20180628.lzo'
I have updated the .boto file on my cluster and added my OAuth client ID and secret, but I'm still having the same issue.
Would really appreciate help resolving this issue.
Many thanks!
If it works in one environment and fails in another, I assume that you're getting your auth from a .boto file (or possibly from the OAUTH2_CLIENT_ID environment variable), but your kubernetes instance is lacking such a file. That you got a 403 instead of a 401 says that your remote server is correctly authenticating as somebody, but that somebody is not authorized to access the object, so presumably you're making the call as a different user.
Unless you've changed something, I'm guessing that you're getting the default Kubernetes Engine auth, which means a service account associated with your project. That service account probably hasn't been granted read permission for your object, which is why you're getting a 403. Grant it read/write permission for your GCS resources, and that should solve the problem.
Also note that by default the default credentials aren't scoped to include GCS, so you'll need to add that as well and then restart the instance.
Currently I'm trying to write files into a Google Cloud Storage bucket. For this, I have used the django-storages package.
I have deployed my code, and I get into the running container through the kubectl utility to check that writing to the GCS bucket works.
$ kubectl exec -it foo-pod -c foo-container --namespace=testing python manage.py shell
I am able to read the bucket, but if I try to write into the bucket, it shows the traceback below.
>>> from django.core.files.storage import default_storage
>>> f = default_storage.open('storage_test', 'w')
>>> f.write('hi')
2
>>> f.close()
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 946, in upload_from_file
client, file_obj, content_type, size, num_retries)
File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 867, in _do_upload
client, stream, content_type, size, num_retries)
File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 700, in _do_multipart_upload
transport, data, object_metadata, content_type)
File "/usr/local/lib/python3.6/site-packages/google/resumable_media/requests/upload.py", line 98, in transmit
self._process_response(result)
File "/usr/local/lib/python3.6/site-packages/google/resumable_media/_upload.py", line 110, in _process_response
response, (http_client.OK,), self._get_status_code)
File "/usr/local/lib/python3.6/site-packages/google/resumable_media/_helpers.py", line 93, in require_status_code
status_code, u'Expected one of', *status_codes)
google.resumable_media.common.InvalidResponse: ('Request failed with status code', 403, 'Expected one of', <HTTPStatus.OK: 200>)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/usr/local/lib/python3.6/site-packages/storages/backends/gcloud.py", line 75, in close
self.blob.upload_from_file(self.file, content_type=self.mime_type)
File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 949, in upload_from_file
_raise_from_invalid_response(exc)
File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 1735, in _raise_from_invalid_response
raise exceptions.from_http_response(error.response)
google.api_core.exceptions.Forbidden: 403 POST https://www.googleapis.com/upload/storage/v1/b/foo.com/o?uploadType=multipart: Insufficient Permission
>>> default_storage.url('new docker')
'https://storage.googleapis.com/foo.appspot.com/new%20docker'
>>>
It seems like it is completely related to the bucket permissions, so I assigned the Storage Admin and Storage Object Creator roles to the Google Cloud Build service account (through bucket -> manage permissions), but it still shows the same error.
A possible explanation for this would be that you haven't assigned the correct scope to your cluster. If this is the case, the nodes in the cluster would not have the required authorisation/permission to write to Google Cloud Storage, which could explain the 403 error you're seeing.
If no scope is set when the cluster is created, the default scope is assigned and this only provides read permission for Cloud Storage.
In order to check the cluster's current scopes using the Cloud SDK, you could try running a 'describe' command from Cloud Shell, for example:
gcloud container clusters describe CLUSTER-NAME --zone ZONE
The oauthScopes section of the output contains the current scopes assigned to the cluster/nodes.
The default read only Cloud Storage scope would display:
https://www.googleapis.com/auth/devstorage.read_only
If the Cloud Storage read/write scope is set the output will display:
https://www.googleapis.com/auth/devstorage.read_write
The scope can be set during cluster creation using the --scopes switch followed by the desired scope identifier. In your case, this would be "storage-rw". For example, you could run something like:
gcloud container clusters create CLUSTER-NAME --zone ZONE --scopes storage-rw
The storage-rw scope, combined with your service account should then allow the nodes in your cluster to write to Cloud Storage.
Alternatively, if you don't want to recreate the cluster, you can create a new node pool with the new desired scopes and then delete your old node pool. See the accepted answer to Is it necessary to recreate a Google Container Engine cluster to modify API permissions? for information on how to achieve this.