This is my code to download and unzip files from Google Drive.
# drive is an authenticated pydrive GoogleDrive instance
fileId = drive.CreateFile({'id': '1tQq-ihnTbRj6GlObBrm17Ob6j1XHHJL2'})
print(fileId['title'])
fileId.GetContentFile('tweets_research.zip')
!unzip tweets_research.zip -d ./
But some of the files already exist and I want to replace them. unzip prompts me to choose what to do with each file (replace, rename, and so on), but no matter what I press on my keyboard, nothing happens.
Use the -o option to overwrite files, e.g.,
!unzip -o tweets_research.zip -d ./
You can also pipe your choice into the command as input. For example, in case of rename:
!echo "r"| unzip tweets_research.zip -d ./
curl -F user=aditya -F password=1234 -F date=20220516 -F format=csv -F report=sales -F type=IT -F family=SAAS -F version=4 https://www.yahoo.jsp > file.zip
Note: I have changed the data here, but the format is the same. I need the file to be downloaded from the website and saved to my Desktop. I have 4 classifications of files, each with multiple subclassifications.
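No answer is shown for this one, but one way to script it is to loop over the classification values in Python. A sketch using the requests library, where the classification lists and output filenames are hypothetical placeholders for your real ones (curl -F sends multipart/form-data, which requests can emulate by passing each field as a (None, value) tuple in files):

import requests

# hypothetical classification values; substitute your own
for family in ['SAAS', 'PAAS']:
    for type_ in ['IT', 'HR']:
        fields = {'user': 'aditya', 'password': '1234', 'date': '20220516',
                  'format': 'csv', 'report': 'sales', 'type': type_,
                  'family': family, 'version': '4'}
        # (None, value) makes requests send each field as multipart/form-data,
        # the same encoding curl -F uses
        resp = requests.post('https://www.yahoo.jsp',
                             files={k: (None, v) for k, v in fields.items()})
        with open(f'file_{family}_{type_}.zip', 'wb') as f:
            f.write(resp.content)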
I have a bash script that extracts a tar file:
tar --no-same-owner -xzf "$FILE" -C "$FOLDER"
--no-same-owner is needed because this script runs as root in Docker, and I want the extracted files to be owned by root rather than by the original uid/gid that created the tar.
I have changed the script to a Python script and need to add the --no-same-owner behaviour, but I can't see an option for it in the docs:
with tarfile.open(file_path, "r:gz") as tar:
    tar.extractall(extraction_folder)
Is this possible? Or do I need to run the bash command as a subprocess?
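It is possible without a subprocess. When running as root, tarfile restores ownership from each member's uid/gid, so resetting those before extraction gives the --no-same-owner behaviour. A minimal sketch of that approach:

import tarfile

with tarfile.open(file_path, "r:gz") as tar:
    for member in tar.getmembers():
        # emulate --no-same-owner: keep files owned by root instead of
        # restoring the archived uid/gid
        member.uid = 0
        member.gid = 0
        member.uname = "root"
        member.gname = "root"
    tar.extractall(extraction_folder)

On Python 3.12+, the extraction-filter mechanism (e.g. tar.extractall(extraction_folder, filter="data")) also discards the archived ownership information.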
I want to download climate data from the CHELSA database.
One way to do so programmatically is to use wget, following their guidelines:
Download the file (envidatS3paths.txt), install wget and then run the command: wget --no-host-directories --force-directories --input-file=envidatS3paths.txt .
However, for each file that is downloaded, I would like to perform an operation on it (basically trimming the data, because the files are quite big).
I looked at the wget manual, but I could not find anything about running an intermediary script in between downloads.
I could possibly run a second background command that finds any newly downloaded file and trims it, but I wonder if the first solution could be more straightforward.
You can run a for loop over the input file and, for each URL, run wget -O $new_file_name $url.
Try something like this -
Bash:
for url in $(cat envidatS3paths.txt); do wget -O $(echo $url | sed "s/\//_/g").out $url ; done
Python:
import subprocess
for url in opened_file:
    url = url.strip()
    subprocess.run(["wget", "-O", url.rsplit("/", 1)[-1], url])
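Since the goal is to trim each file between downloads, you can also call the processing step right after each wget finishes. A sketch where trim_file is a hypothetical stand-in for your own trimming logic:

import subprocess

def trim_file(path):
    # hypothetical placeholder: put your trimming logic here
    ...

with open("envidatS3paths.txt") as urls:
    for url in (line.strip() for line in urls):
        filename = url.rsplit("/", 1)[-1]
        # subprocess.run blocks, so the download is complete before trimming
        subprocess.run(["wget", "-O", filename, url], check=True)
        trim_file(filename)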
I'm trying to download some data using a bash script in a Jupyter notebook and having some problems.
I added quotes to the file paths after I received a 'SyntaxError: unexpected character after line continuation character' error.
However, I'm stumped on how to fix the same error on the Wget command.
This is the contents of the cell as I have it now.
%%bash
FILE=apple2orange
URL="https\://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/$FILE.zip"
ZIP_FILE="./datasets/$FILE.zip"
TARGET_DIR="./datasets/$FILE/"
wget -N \$URL -O \$ZIP_FILE
mkdir $TARGET_DIR
unzip $ZIP_FILE -d ./datasets/
rm $ZIP_FILE
I have changed your script a little. Now it looks like this:
%%bash
FILE=apple2orange
URL="https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/$FILE.zip"
ZIP_FILE="./datasets/$FILE.zip"
TARGET_DIR="./datasets/$FILE/"
mkdir -p $TARGET_DIR
wget -N $URL -O $ZIP_FILE
unzip $ZIP_FILE -d ./datasets/
rm $ZIP_FILE
In bash, : doesn't need to be escaped inside a double-quoted string, and \$URL passes a literal $URL to wget instead of expanding the variable; I think those stray backslashes were the error.
It works on my end. Give it a try.
I want to use Fabric to chown all the files in a directory, including hidden files. Since Fabric uses the sh shell rather than bash, and sh doesn't know shopt, I can't do:
local('shopt -s dotglob')
local('sudo chown -R name dir')
I don't think there is a way to use the bash shell in Fabric. Is there another way to do this?
How about using another strategy to recursively chown everything in the directory, including hidden files and directories:
local(r'sudo find dir -exec chown name {} \;')
Hope that helps.
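If you'd rather avoid the shell quirks altogether, the recursion can also be done in Python itself. A minimal sketch using only the standard library (it has to run with enough privileges to chown, and os.walk visits hidden files like any other entry):

import os
import shutil

def chown_recursive(path, user):
    # chown the directory itself, then every file and subdirectory
    # underneath it, dotfiles included
    shutil.chown(path, user=user)
    for root, dirs, files in os.walk(path):
        for name in dirs + files:
            shutil.chown(os.path.join(root, name), user=user)

chown_recursive('dir', 'name')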