Download a file to a Google Drive folder using Selenium on Heroku - Python

Can I use the URL of a Google Drive folder as a download_path in Selenium, for instance like this? Heroku's error says file not found:
mudopy.download_path(r"https://drive.google.com/drive/folders/1IPNuefeIXXCKm8Xxm3u1ekltxTF3dCT0")

According to the code of the mudopy module, it saves the downloaded file into a local directory, so you could later upload that file to Google Drive. I'm not sure Heroku allows storing files locally.
Moreover, the function mudopy.download_path does not download anything by itself; it only creates a local file that stores the path to the folder where the downloaded files should be saved. And it looks like Heroku doesn't even permit creating this file.
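If writing to the dyno's filesystem is acceptable (Heroku's filesystem is ephemeral, so anything written there is lost on restart), one option is to let Selenium download to a local path and then push the file into the Drive folder yourself. Below is a minimal sketch with google-api-python-client, assuming a service account that has been granted access to the folder from the question; the key file and file names are placeholders:

from google.oauth2 import service_account
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

# Placeholder credentials: a service-account key with Drive access
creds = service_account.Credentials.from_service_account_file(
    "service_account.json",
    scopes=["https://www.googleapis.com/auth/drive.file"],
)
drive = build("drive", "v3", credentials=creds)

file_metadata = {
    "name": "report.pdf",  # placeholder file name
    "parents": ["1IPNuefeIXXCKm8Xxm3u1ekltxTF3dCT0"],  # folder id from the question
}
media = MediaFileUpload("/tmp/report.pdf")  # wherever Selenium saved the file
drive.files().create(body=file_metadata, media_body=media, fields="id").execute()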

Related

Unable to render aspx files when uploaded to SharePoint programmatically

I am new to SharePoint. I have written a simple python script that basically connects to SharePoint and uploads files (aspx and other frontend files) from a folder on my local machine to a specific folder on SharePoint site.
To let the script communicate with SharePoint, I have created an App principal under SharePoint using the SharePoint App-Only model. I did this by calling appregnew.aspx, for example: https://spo.test.com/sites/MYSITE/_layouts/15/appregnew.aspx
Then, I granted the permissions below to the App principal through appinv.aspx, for example: https://spo.test.com/sites/MYSITE/_layouts/15/appinv.aspx
<AppPermissionRequests AllowAppOnlyPolicy="true">
<AppPermissionRequest Scope="http://sharepoint/content/sitecollection/web" Right="FullControl"/>
</AppPermissionRequests>
Next, I use the Client ID and Client Secret in the Python script to establish communication with SharePoint and to upload files to a specific folder (the folder already exists and is not created by the program), for example: https://spo.test.com/sites/MYSITE/Shared%20Documents/TeamDocs2
Note: This script uses Python library 'Office365-REST-Python-Client' to communicate with SharePoint
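For reference, the upload part of such a script looks roughly like the sketch below. This is not the asker's exact code, the site URL and credentials are placeholders, and the upload_file signature varies a little between versions of Office365-REST-Python-Client:

from office365.sharepoint.client_context import ClientContext
from office365.runtime.auth.client_credential import ClientCredential

site_url = "https://spo.test.com/sites/MYSITE"
ctx = ClientContext(site_url).with_credentials(
    ClientCredential("client-id", "client-secret"))  # placeholders

# Target an existing folder and upload the file contents into it
folder = ctx.web.get_folder_by_server_relative_url("Shared Documents/TeamDocs2")
with open("index.aspx", "rb") as local_file:
    folder.upload_file("index.aspx", local_file.read()).execute_query()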
The script can successfully authenticate itself and upload the files to the folder on SharePoint. But when I manually go to the SharePoint folder and click on an aspx file, for example index.aspx, the file gets downloaded instead of being rendered.
There is no issue with the file, i.e. it is not corrupted, because when I manually upload the same file to the same folder, it renders fine.
Regarding the permissions for the App principal, I have already given 'FullControl' at the 'sitecollection/web' scope. I also tried changing the scope from 'http://sharepoint/content/sitecollection/web' to 'http://sharepoint/content/sitecollection', but this didn't work either.
Can somebody please help me with this? Thanks in advance.
The reason the .aspx page is being downloaded is security risk mitigation in SharePoint. If you consider that JavaScript (.js) files and .aspx files are executable in the browser, it should be self-evident that allowing users to upload such files to SharePoint poses a risk. Because of this, Microsoft has disabled custom script on all modern sites by default. You can choose to overrule this setting, but it should be done with extreme caution.

How to download a file from Google Drive without showing this action in the change history

If I have access to a folder with some files in Google Drive, is it possible to download this folder without any trace showing up in the history? The owner of the folder should not see that the file was downloaded. If this is possible in Python, please tell me how to do it.
I have been thinking about a script (using the Google API) or some service/site to solve this problem.
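For reference, a plain Drive API v3 download with google-api-python-client looks like the sketch below; the file id and credentials are placeholders. Note that whether a given access method shows up in the owner's activity feed is decided on Google's side and cannot be controlled from the client:

import io
from googleapiclient.discovery import build
from googleapiclient.http import MediaIoBaseDownload

# creds: any valid OAuth or service-account credentials with access to the file
drive = build("drive", "v3", credentials=creds)
request = drive.files().get_media(fileId="FILE_ID")  # placeholder file id
buf = io.BytesIO()
downloader = MediaIoBaseDownload(buf, request)
done = False
while not done:
    _, done = downloader.next_chunk()  # streams the file contents into buf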

Dump files downloaded by Google Colab in a temporary location to Google Drive

I have a json file with over 16k urls of images, which I parse using a Python script, and I use urllib.request.urlretrieve in it to retrieve the images. I uploaded the json file to Google Drive and ran the Python script in Google Colab.
Though the files were downloaded (I checked this using a print line in the try block around urlretrieve) and it took substantial time to download them, I am unable to see where it has stored these files. When I ran the same script on my local machine, it stored the files in the current folder.
As an answer to this question suggests, the files may be downloaded to some temporary location, say, on some cloud. Is there a way to dump these temporary files to Google Drive?
(Note: I had mounted the drive in the Colab notebook, but the files still don't appear to be stored in Google Drive.)
Colab stores files in a temporary location that is new every time you run the notebook. If you want your data to persist across sessions, you need to store it in Google Drive. For that, you need to mount a Google Drive folder in your notebook and use it as the path; you also need to give Colab permission to access your Google Drive.
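The mount itself is two lines (the mount point name /content/gdrive is your choice):

from google.colab import drive
drive.mount('/content/gdrive')  # prompts for authorization in the notebook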
After mounting GDrive, you need to move the files from Colab to GDrive with a command such as:
!mv /content/filename /content/gdrive/My\ Drive/
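Alternatively, you can skip the move entirely and have urlretrieve write straight into the mounted Drive folder. A sketch, where image_urls stands for the list parsed from the json file and the folder name is illustrative:

import os
from urllib.request import urlretrieve

target_dir = "/content/gdrive/My Drive/images"  # any folder in your Drive
os.makedirs(target_dir, exist_ok=True)
for url in image_urls:  # image_urls: the url list parsed from the json file
    filename = url.rsplit("/", 1)[-1]
    urlretrieve(url, os.path.join(target_dir, filename))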

Download directory from Nextcloud with Python requests (WebDAV)

I am running a Nextcloud instance and I am trying to download a directory with an API call using the requests library.
I can download a zip file using an API call. Now what I would like to do is to have an unzipped directory on my Nextcloud instance and download it via an API call. It doesn't matter to me if I get a zip file back from the API call; I just want the directory to be unzipped in the cloud.
For instance, I can put an unzipped directory there, and when I download it in my browser, Nextcloud gives me back a zip file. This is the behaviour I want in an API call.
Now if I put a zipped file on there I can download the file like so:
import requests

# Fetch the zipped file from the WebDAV endpoint
response = requests.request(
    method="get",
    url=f"https://mycloud/remote.php/dav/files/setup_user/{name_dir}/",
    auth=("my_user", my_token),
)
if response.status_code == 200:
    with open("/path/to/my/dir/file_name.zip", "wb") as file:
        file.write(response.content)
That writes the zip file stored in the cloud to a local file_name.zip. My problem is that if I have an unzipped directory in the cloud, this doesn't work; "doesn't work" meaning that I get a file back with the content:
This is the WebDAV interface. It can only be accessed by WebDAV clients such as the Nextcloud desktop sync client.
I also tried this with wget (wget --recursive --no-parent https://path/to/my/dir) and got the same file with the same content back.
So I assume the WebDAV API of Nextcloud doesn't allow me to do it the way I want. Now I am wondering what I am doing wrong, or whether what I want is doable at all. I also don't get why this works fine in the browser: I just select the unzipped folder and can download it with a click. In the Nextcloud community it has been suggested to use Rclone (https://help.nextcloud.com/t/download-complete-directory-from-nextcloud-instance/77828), but I would prefer not to use a dependency that I have to set up on every machine where I want to run this code.
Any help is very much appreciated! Thanks a bunch in advance!
PS: In case anyone wonders why I would want to do this: it's way more convenient when I want to change just a single file in my directory in the cloud. Otherwise I have to unzip, change the file, zip, and upload again.
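One dependency-free direction, sketched below under the assumption that plain WebDAV is enough for your use case: list the directory with a PROPFIND request and download each file individually with GET, recursing into subdirectories. Host, credentials, and paths are placeholders, and this uses only generic WebDAV, not a Nextcloud-specific API:

import os
import xml.etree.ElementTree as ET
from urllib.parse import unquote, urljoin

import requests

HOST = "https://mycloud"        # placeholder host
AUTH = ("my_user", "my_token")  # placeholder credentials

def download_dir(dav_path, local_dir):
    # Depth: 1 lists the collection itself plus its direct children
    resp = requests.request(
        method="PROPFIND",
        url=urljoin(HOST, dav_path),
        auth=AUTH,
        headers={"Depth": "1"},
    )
    resp.raise_for_status()
    os.makedirs(local_dir, exist_ok=True)
    for href in ET.fromstring(resp.content).iter("{DAV:}href"):
        item = unquote(href.text)
        if item.rstrip("/") == unquote(dav_path).rstrip("/"):
            continue  # skip the entry for the collection itself
        name = item.rstrip("/").rsplit("/", 1)[-1]
        if item.endswith("/"):
            download_dir(item, os.path.join(local_dir, name))  # subdirectory
        else:
            file_resp = requests.get(urljoin(HOST, item), auth=AUTH)
            file_resp.raise_for_status()
            with open(os.path.join(local_dir, name), "wb") as fh:
                fh.write(file_resp.content)

download_dir("/remote.php/dav/files/setup_user/my_dir/", "/path/to/local/my_dir")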

Python and Dropbox: How to change SmartSync setting for locally created files?

I have a working Python script that generates and saves hi-res image files to a local Dropbox folder (synced through the Windows Dropbox app). Is there a way in Python to change the SmartSync setting for the newly created image from "Local" to "Online Only" so that I can save space on my local hard drive? I know I could use the Dropbox API v2 to just upload the file and then delete the temporary local files after, but I'm wondering if there is a way to directly change the file settings since it already gets saved to the synced Dropbox folder.
Thanks!
No, unfortunately Dropbox doesn't offer an API for managing Smart Sync settings like this, but I'll pass this along as a feature request.
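For completeness, the upload-then-delete fallback mentioned in the question would look roughly like this with the official dropbox package; the access token and both paths are placeholders:

import os
import dropbox

dbx = dropbox.Dropbox("ACCESS_TOKEN")  # placeholder access token
local_path = "out/image_hires.png"     # placeholder local file
with open(local_path, "rb") as f:
    dbx.files_upload(
        f.read(),
        "/images/image_hires.png",  # destination path in Dropbox
        mode=dropbox.files.WriteMode.overwrite,
    )
os.remove(local_path)  # reclaim the local disk space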
