I am reading a file from SharePoint and writing it to a GCS bucket, but when I run the function it gives me the error "Error [Errno 30] Read-only file system: 'test.xlsx'".
Here is the code:
response = File.open_binary(ctx, location/file)
blob = bucket.blob('/sharepoint/' + doc)
print('connection')
with open("test.xlsx", "wb") as local_file:
    blob.upload_from_file(local_file)
local_file.close()
Please help if anyone knows the solution to this error.
Opening the file with open("test.xlsx", "wb") truncates it, destroying its contents before anything is uploaded. The [Errno 30] message itself means the directory you are writing to is mounted read-only; on Cloud Functions, for example, only /tmp is writable.
Change your code to open the file in read mode:
with open("test.xlsx", "rb") as local_file:
blob.upload_from_file(local_file)
local_file.close()
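To make the whole flow work end to end, the download also has to be written somewhere writable first. A minimal sketch, assuming the function runs where only /tmp is writable and that the SharePoint response exposes the downloaded bytes as response.content (as the Office365 client's File.open_binary does):
# Sketch: save the SharePoint download under /tmp, then upload it to GCS.
# `ctx`, `location`, `file`, `bucket`, and `doc` are assumed to be defined
# as in the question; how `location` and `file` combine into the
# server-relative URL is an assumption.
response = File.open_binary(ctx, location + '/' + file)

local_path = '/tmp/test.xlsx'
with open(local_path, 'wb') as local_file:
    local_file.write(response.content)  # persist the downloaded bytes

blob = bucket.blob('sharepoint/' + doc)
blob.upload_from_filename(local_path)   # upload the local copy to GCS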
I am having trouble opening a binary file in a Jupyter notebook on GCP. I have tried the two following methods, but I always get an error.
Code 1
with open('gs://cloud-ai-platform-fcf9f6d9-ccf6-4e8b-bdf2-6cc69a369809/rank_table.bin', 'rb') as file:
    binary_content = file.read()
rank_table_file = binary_content
self.rank_table = np.fromfile(rank_table_file, dtype=np.int32)
Error 1 (obviously I checked, the file path is correct)
FileNotFoundError: [Errno 2] No such file or directory: 'gs://cloud-ai-platform-fcf9f6d9-ccf6-4e8b-bdf2-6cc69a369809/rank_table.bin'
The second attempt used the blob method, as shown in the GCP documentation:
from google.cloud import storage

storage_client = storage.Client()
bucket = storage_client.bucket(bucket_name)
blob = bucket.blob('rank_table.bin')
with blob.open("rb") as f:
    binary_content = f.read()
rank_table_file = binary_content
self.rank_table = np.fromfile(rank_table_file, dtype=np.int32)
But instead, I got the following error message:
io.UnsupportedOperation: fileno
I understand that this error comes from the fact that my file does not support the method applied, but I can't figure out why.
Do you have any suggestion I could use?
Cheers
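For what it's worth: the first attempt fails because the built-in open() does not understand gs:// URLs, and the fileno error in the second comes from np.fromfile, which needs a real OS-level file descriptor that the file-like object returned by blob.open("rb") does not provide. A minimal workaround sketch, assuming the goal is simply to parse the blob's bytes as int32 values, is to read them into memory and use np.frombuffer instead:
import numpy as np
from google.cloud import storage

# Sketch: read the raw bytes from GCS, then parse them in memory.
# np.frombuffer works on a bytes object, so no fileno() is needed.
storage_client = storage.Client()
bucket = storage_client.bucket("cloud-ai-platform-fcf9f6d9-ccf6-4e8b-bdf2-6cc69a369809")
blob = bucket.blob("rank_table.bin")

with blob.open("rb") as f:
    binary_content = f.read()

rank_table = np.frombuffer(binary_content, dtype=np.int32)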
I have a task that requires reading a tar.gz file from an FTP server and storing it in blob storage.
The way I think I can accomplish this is to first create a temp file in the Azure Function's temp directory, write all the content to it, close it, and then upload it to blob storage.
What I have done so far is:
fp = tempfile.NamedTemporaryFile()
filesDirListInTemp = listdir(tempFilePath)
logging.info(filesDirListInTemp)
try:
    with open('/tmp/fp', 'w+') as fp:
        data = BytesIO()
        save_file = ftp.retrbinary('RETR ' + filesDirListInTemp, data.write, 1024)
        data.seek(0)
        blobservice = BlobClient.from_connection_string(conn_str=connection_string, container_name=container_name, blob_name=filename, max_block_size=4*1024*1024, max_single_put_size=16*1024*1024)
        blobservice.upload_blob(gzip.decompress(data.read()))
        print("File Uploaded!")
except Exception as X:
    logging.info(X)
But I am getting the error: expected str, bytes or os.PathLike object, not list.
Please tell me what I am doing wrong here.
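The problem is that filesDirListInTemp is a list (the result of listdir()), but it is used where a single filename string is expected; both 'RETR ' + filesDirListInTemp and any path call receiving that list will raise a TypeError. A minimal sketch of the flow handling one filename at a time, assuming ftp, connection_string, and container_name are configured as in the question; the temp file is not actually needed, since the download can stay in memory:
import gzip
import logging
from io import BytesIO
from azure.storage.blob import BlobClient

# Sketch: fetch each file by its individual name (RETR needs a single
# filename string, not a list) and upload straight from memory.
for filename in ftp.nlst():  # one remote name at a time
    data = BytesIO()
    ftp.retrbinary('RETR ' + filename, data.write, 1024)
    data.seek(0)

    blob_client = BlobClient.from_connection_string(
        conn_str=connection_string,
        container_name=container_name,
        blob_name=filename,
    )
    # Decompress the gzip layer, as the original code does, then upload.
    blob_client.upload_blob(gzip.decompress(data.read()))
    logging.info("Uploaded %s", filename)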
I'm using this to connect to an Azure File Share and upload a file. I would like to choose what extension the file will have, but I can't; I get the error shown below. If I remove .txt, everything works fine. Is there a way to specify the file extension while uploading?
Error:
Exception: ResourceNotFoundError: The specified parent path does not exist.
Code:
def main(blobin: func.InputStream):
    file_client = ShareFileClient.from_connection_string(conn_str="<con_string>",
                                                         share_name="data-storage",
                                                         file_path="outgoing/file.txt")
    f = open('/home/temp.txt', 'w+')
    f.write(blobin.read().decode('utf-8'))
    f.close()
    # Operation on file here
    f = open('/home/temp.txt', 'rb')
    string_to_upload = f.read()
    f.close()
    file_client.upload_file(string_to_upload)
I believe you're getting this error because the outgoing folder doesn't exist in your file share. I took your code and ran it both with and without the extension, and in both cases I got the same error.
Then I created the folder and tried the upload again, and it succeeded.
Here's the final code I used:
from azure.storage.fileshare import ShareFileClient, ShareDirectoryClient

conn_string = "DefaultEndpointsProtocol=https;AccountName=myaccountname;AccountKey=myaccountkey;EndpointSuffix=core.windows.net"

share_directory_client = ShareDirectoryClient.from_connection_string(conn_str=conn_string,
                                                                     share_name="data-storage",
                                                                     directory_path="outgoing")
file_client = ShareFileClient.from_connection_string(conn_str=conn_string,
                                                     share_name="data-storage",
                                                     file_path="outgoing/file.txt")

# Create the folder first.
# This operation will fail if the directory already exists.
print("creating directory...")
share_directory_client.create_directory()
print("directory created successfully...")

# Operation on file here
f = open('D:\\temp\\test.txt', 'rb')
string_to_upload = f.read()
f.close()

# Upload file
print("uploading file...")
file_client.upload_file(string_to_upload)
print("file uploaded successfully...")
file_name = "r1.csv"
client = storage.Client()
bucket = client.get_bucket('upload-testing')
blob = bucket.get_blob(file_name)
blob.download_to_filename("csv_file")
I want to open the r1.csv file in read-only mode, but I am getting this error:
with open(filename, 'wb') as file_obj:
Error: [Errno 30] Read-only file system: 'csv_file'
So the function download_to_filename opens files in wb mode. Is there any way through which I can open r1.csv in read-only mode?
As mentioned in the previous answer, you need to use r mode; however, you don't need to specify it, since that's the default mode.
To read the file itself, you'll need to download it first, then read its content and treat the data as you want. The following example downloads the GCS file to a temporary folder, opens the downloaded object, and reads all its data:
storage_client = storage.Client()
bucket = storage_client.get_bucket("<BUCKET_NAME>")
blob = bucket.blob("<CSV_NAME>")
blob.download_to_filename("/tmp/test.csv")
with open("/tmp/test.csv") as file:
    data = file.read()
    # <TREAT_DATA_AS_YOU_WISH>
This example is meant to run inside GAE.
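Alternatively, if you only need the contents in memory, a recent google-cloud-storage client can skip the local file entirely with download_as_bytes (older releases call it download_as_string):
# Read the CSV straight into memory; no writable file system needed.
data = blob.download_as_bytes().decode("utf-8")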
If you want to open a file read-only you should use 'r' mode; 'wb' means write binary:
with open(filename, 'r') as file_obj:
I am using the following code to upload a SQLite3 database file. For some reason, the script does not completely upload the file (the uploaded file size is less than the original).
FTP = ftplib.FTP('HOST', 'USERNAME', 'PASSWORD')
FTP.cwd('/public_html/')
FILE = 'Database.db'
FTP.storbinary("STOR " + FILE, open(FILE, 'r'))
FTP.quit()
When I go to open the uploaded file in SQLite Browser, it says it is an invalid file.
What am I doing incorrectly?
In the open() call, you need to specify that the file is a binary file, like so:
FTP.storbinary("STOR " + FILE, open(FILE, 'rb'))