I have to use cURL on Windows from a Python script. My goal is to fetch all files from a remote directory, preferably into a local directory. After that I will compare each file with the files stored locally. I am able to get one file at a time, but I need to get all of the files from the remote directory.
Could someone please advise how to get multiple files?
I use this command:
curl.exe -o file1.txt sftp:///dir1/file1.txt -k -u user:password
thanks
I haven't tested this, but I think you could just launch each shell command as a separate process so they run simultaneously. Obviously, this might be a bad idea if you have a large set of files, so you might need to manage that more carefully. Here's some untested code; you'd need to edit the cmd variable in the get_file function, of course.
from multiprocessing import Process
import subprocess

def get_file(filename):
    cmd = '''curl.exe -o {} sftp:///dir1/{} -k -u user:password'''.format(filename, filename)
    subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)  # run the shell command

files = ['file1.txt', 'file2.txt', 'file3.txt']
for filename in files:
    p = Process(target=get_file, args=(filename,))  # create a process which passes filename to get_file()
    p.start()
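If the set of files is large, one way to manage it more carefully is a fixed-size worker pool, so only a few transfers run at once. Here's a minimal, equally untested sketch along the same lines; the pool size of 4 is arbitrary, and the URL and credentials are the same placeholders as above:

from multiprocessing import Pool
import subprocess

def get_file(filename):
    # Same curl invocation as above; edit the URL and credentials
    cmd = 'curl.exe -o {} sftp:///dir1/{} -k -u user:password'.format(filename, filename)
    subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)

if __name__ == '__main__':
    files = ['file1.txt', 'file2.txt', 'file3.txt']
    with Pool(processes=4) as pool:  # at most 4 downloads run at a time
        pool.map(get_file, files)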
I have to use the following command-line tool: ncftp.
After that I need to execute the following commands:
"open ftp://..."
"get -R Folder"
but I need to do this automatically. How do I achieve this using Python or the command line?
You can use the Python subprocess module for this.
from subprocess import Popen, PIPE

# if you don't want your script to print the output of the ncftp
# commands, use Popen(['ncftp'], stdin=PIPE, stdout=PIPE)
with Popen(['ncftp'], stdin=PIPE) as proc:
    proc.stdin.write(b"open ...\n")  # must terminate each command with \n
    proc.stdin.write(b"get -R Folder\n")
    # ...etc
See the documentation for subprocess for more information. It can be a little tricky to get the hang of this library, but it's very versatile.
Alternatively, you can use the non-interactive commands ncftpget (docs) and ncftpput (docs) from the NcFTP package.
I recommend reading through the documentation on these commands before proceeding.
In the comments, you said you needed to get some files, delete them, and afterwards upload some new files. Here's how you can do that:
$ ncftpget -R -DD -u username -p password ftp://server path/to/local/directory path/to/remote/directory/Folder
$ ncftpput -R -u username -p password ftp://server path/to/remote/directory path/to/local/directory/Folder
-DD will delete all files after downloading, but it will leave the directory and any subdirectories in place.
If you need to delete the empty folder, you can run the ncftpget command again without -R (but the folder must be completely empty, i.e. no subdirectories, so rinse and repeat as necessary).
You can do this in a bash script or using subprocess.run in Python.
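If you go the Python route, here's a minimal sketch using subprocess.run, with the same placeholder server, credentials and paths as the commands above:

import subprocess

# Download recursively, deleting the remote files afterwards (-DD)
subprocess.run(['ncftpget', '-R', '-DD', '-u', 'username', '-p', 'password',
                'ftp://server', 'path/to/local/directory',
                'path/to/remote/directory/Folder'], check=True)

# Upload the new files
subprocess.run(['ncftpput', '-R', '-u', 'username', '-p', 'password',
                'ftp://server', 'path/to/remote/directory',
                'path/to/local/directory/Folder'], check=True)

check=True makes the script raise an error if either command fails, rather than silently continuing.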
I have a bash script that I can run flawlessly in my Rpi terminal in its folder:
./veye_mipi_i2c.sh -r -f mirrormode -b 10
it works like this: Usage: ./veye_mipi_i2c.sh [-r/w] [-f] function name -p1 param1 -p2 param2 -b bus
options:
-r read
-w write
-f [function name] function name
-p1 [param1] param1 of each function
-p2 [param2] param2 of each function
-b [i2c bus num] i2c bus number
When I try to run it in Python (2) via my Spyder editor with os.system, I get a "0" return value, which I interpret as "successfully executed", but in fact the script has not been executed and its functions have not been performed. I know this because the script is supposed to change the camera's behaviour, and by checking the images I take afterwards I can see that nothing has changed.
import os
status = os.system('/home/pi/VeyeMipi/Camera_Folder/veye_mipi_i2c.sh -w -f mirrormode -p1 0x04 -b 10')
print status
Any idea what causes this? The bash script uses two other scripts that lie in the same folder (read and write). Could it be that it cannot execute these additional scripts when started through Python? It does not make sense to me, but neither do a lot of things...
Many thanks
Ok, I understand that my question was not exemplary because of the lack of a minimal reproducible example, but as I did not understand what the problem was, I was not able to create one.
I have found out what the problem was. The script I am calling requires two more scripts that are in the same folder, namely the "write" script and the "read" script. When executing it in a terminal from inside that folder there is no problem, because the folder is the working directory.
I tried to execute the script within the Spyder editor and added the file location to the PATH in the user interface, but it was still not able to execute the "write" script in the folder.
Simply executing it in the terminal did the trick.
It would help if you fixed your scripts so they don't depend on the current working directory (that's a very bad practice).
In the meantime, running
import subprocess
p = subprocess.run(['./veye_mipi_i2c.sh', '-r', '-f', 'mirrormode', '-b', '10'], cwd='/home/pi/VeyeMipi/Camera_Folder')
print(p.returncode)
which sets the working directory for the subprocess, should help.
Use subprocess and capture the output:
import subprocess
output = subprocess.run(stuff, capture_output=True)
Check output.stderr and output.stdout
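For example, with the script from this question (a sketch; capture_output requires Python 3.7+, and text=True decodes the output to str):

import subprocess

output = subprocess.run(
    ['./veye_mipi_i2c.sh', '-w', '-f', 'mirrormode', '-p1', '0x04', '-b', '10'],
    cwd='/home/pi/VeyeMipi/Camera_Folder',  # run from the script's own folder
    capture_output=True, text=True)
print(output.returncode)
print(output.stdout)
print(output.stderr)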
I have hundreds of XML files and I would like to parse them into CSV files. I have already written the conversion program.
To execute the Python program I use this command (in VS Code):
python ConvertXMLtoCSV.py -i Alarm120.xml -o Alarm120.csv
My question is: how do I change this script, or wrap it in a sort of for loop, so that the program is executed for each of the XML files?
UPDATE
If my files and folders are organized like in the picture:
I tried this and executed the .bat file on Windows 10, but it does nothing:
#!/bin/bash
for xml_file in XML_Files/*.xml
do
    csv_file=${xml_file/.xml/.csv}
    python ConvertXMLtoCSV.py -i XML_Files/$xml_file -o CSV_Files/$csv_file
done
Ideally the for loop would be included inside your ConvertXMLtoCSV.py itself. You can use this to find all xml files in a given directory:
for file in os.listdir(directory_path):
    if file.endswith(".xml"):
        # And here you can do your conversion
You could change the arguments given to the script to be the path of the directory the xml files are located in and the path for an output folder for the .csv files. For renaming, you can leave the files with the same name but give the .csv extension. i.e.
csv_name = file.replace(".xml", ".csv")
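A minimal sketch of that approach; convert() here is a hypothetical stand-in for the conversion logic already in ConvertXMLtoCSV.py, and the two command-line arguments (input and output directories) are assumptions:

import os
import sys

def convert(xml_path, csv_path):
    # Placeholder for the existing conversion logic in ConvertXMLtoCSV.py
    print("converting", xml_path, "->", csv_path)

input_dir, output_dir = sys.argv[1], sys.argv[2]
for file in os.listdir(input_dir):
    if file.endswith(".xml"):
        csv_name = file.replace(".xml", ".csv")
        convert(os.path.join(input_dir, file), os.path.join(output_dir, csv_name))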
If you want to keep your Python script as-is (process one file), and add the looping externally in bash, you could do:
#!/bin/bash
for xml_file in *.xml
do
    csv_file=${xml_file/.xml/.csv}
    python ConvertXMLtoCSV.py -i $xml_file -o $csv_file
done
After discussion, it appears that you wish to use an external script so as to leave the original ConvertXMLtoCSV.py script unmodified (as required by other projects), but that although you tagged bash in the question, it turned out that you were not in fact able to use bash to invoke python when you tried it in your setup.
This being the case, it is possible to adapt Rolv Apneseth's answer so that you do the looping in Python, but inside a separate script (let's suppose that this is called convert_all.py), which then runs the unmodified ConvertXMLtoCSV.py as an external process. This way, the ConvertXMLtoCSV.py will still be set up to process only one file each time it is run.
To call an external process, you could either use os.system or subprocess.Popen, so here are two options.
Using os.system:
import os
import sys

directory_path = sys.argv[1]

for file in os.listdir(directory_path):
    if file.endswith(".xml"):
        csv_name = file.replace(".xml", ".csv")
        os.system(f'python ConvertXMLtoCSV.py -i {file} -o {csv_name}')
Note: for versions of Python too old to support f-strings, that last line could be changed to
os.system('python ConvertXMLtoCSV.py -i {} -o {}'.format(file, csv_name))
Using subprocess.Popen:
import os
import subprocess
import sys

directory_path = sys.argv[1]

for file in os.listdir(directory_path):
    if file.endswith(".xml"):
        csv_name = file.replace(".xml", ".csv")
        p = subprocess.Popen(['python', 'ConvertXMLtoCSV.py',
                              '-i', file,
                              '-o', csv_name])
        p.wait()
You could then run it using some command such as:
python convert_all.py C:/Users/myuser/Desktop/myfolder
or whatever the folder is where you have the XML files.
I have been trying to use the output of a system command as part of the command in the next portion. However, I cannot seem to join them up properly, and hence I am not able to run the second command properly. The OS used is Kali Linux with Python 2.7.
# IMPORTS
import commands, os, subprocess

os.system('mkdir -p ~/Desktop/TOOLS')
checkdir = commands.getoutput('ls ~/Desktop')

if 'TOOLS' in checkdir:
    currentwd = subprocess.check_output('pwd', shell=True)
    cmd = 'cp -R {}/RAW ~/Desktop/TOOLS/'.format(currentwd)
    os.system(cmd)

os.system('cd ~/Desktop/TOOLS')
os.system('pwd')
The errors are:
cp: missing destination file operand after ‘/media/root/ARSENAL’
Try 'cp --help' for more information.
sh: 2: /RAW: not found
/media/root/ARSENAL
It seems that the reading of the first command is alright but it can't join with the RAW portion. I have read many other solutions, but they seem to be for shell scripting instead.
Assuming you haven't called os.chdir() anywhere prior to the cp -R, you can use a relative path. Changing the code to...
if 'TOOLS' in checkdir:
    cmd = 'cp -R RAW ~/Desktop/TOOLS'
    os.system(cmd)
...should do the trick.
Note that the line...
os.system('cd ~/Desktop/TOOLS')
...will not do what you expect. os.system() spawns a subshell, so it will just change the working directory for that process and then exit. The calling process's working directory will remain unchanged.
If you want to change the working directory for the calling process, use...
os.chdir(os.path.expanduser('~/Desktop/TOOLS'))
However, Python has all this functionality built-in, so you can do it without spawning any subshells...
import os, shutil
# Specify your path constants once only, so it's easier to change
# them later
SOURCE_PATH = 'RAW'
DEST_PATH = os.path.expanduser('~/Desktop/TOOLS/RAW')
# Recursively copy the files, creating the destination path if necessary.
shutil.copytree(SOURCE_PATH, DEST_PATH)
# Change to the new directory
os.chdir(DEST_PATH)
# Print the current working directory
print os.getcwd()
In Python I'm using subprocess to call gsutil copy and move commands, but am currently unable to select multiple extensions.
The same gsutil command works at the terminal, but not in python:
cmd_gsutil = "sudo gsutil -m mv gs://xyz-ms-media-upload/*.{mp4,jpg} gs://xyz-ms-media-upload/temp/"
p = subprocess.Popen(cmd_gsutil, shell=True, stderr=subprocess.PIPE)
output, err = p.communicate()
If, say, there are four file types to move but the bucket is empty, the gsutil error returned in the terminal is:
4 files/objects could not be transferred.
Whereas the error returned when run through subprocess is:
1 files/objects could not be transferred.
So clearly subprocess is mucking up the command somehow...
I could always inefficiently repeat the command for each of the filetypes, but would prefer to get to the bottom of this!
It seems /bin/sh (the default shell) doesn't support the {mp4,jpg} brace-expansion syntax.
Pass executable='/bin/bash' to run it as a bash command instead.
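For example (a sketch reusing the command from the question):

import subprocess

cmd_gsutil = "sudo gsutil -m mv gs://xyz-ms-media-upload/*.{mp4,jpg} gs://xyz-ms-media-upload/temp/"
# executable='/bin/bash' tells subprocess to use bash as the shell,
# which understands the {mp4,jpg} brace expansion, unlike /bin/sh
p = subprocess.Popen(cmd_gsutil, shell=True, executable='/bin/bash',
                     stderr=subprocess.PIPE)
output, err = p.communicate()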
You could also run the command without the shell e.g., using glob or fnmatch modules to get the filenames to construct the gsutil command. Note: you should pass the command as a list in this case instead.