I have a NAS server and I am able to get/upload files on it. Now I have a situation where I need to read the location of .png files on the server and pass it to the UI thread to show the image. Right now I am only aware of the get method, which needs a local location to save to. I don't want the file to be saved on my local machine; I just want to be able to show that image in my application.
I have gone through http://docs.paramiko.org/en/2.1/api/sftp.html but didn't find a relevant method to use.
Code is:
import paramiko
paramiko.util.log_to_file(r'D:\TechnoThrone\Builds\paramiko.log')
# Open a transport
host = "stedgela01.TechnoThrone.com"
port = 2222
transport = paramiko.Transport((host, port))
# Auth
password = "xxx"
username = "xxxx"
transport.connect(username = username, password = password)
# Go!
sftp = paramiko.SFTPClient.from_transport(transport)
# Download
filepath = '/A/B/C/pic_ex.png'
localpath = r'D:\picfolder\pic_ex.png'
sftp.get(filepath, localpath)
I don't quite get the problem, so I will try to guess a bit.
You should be able to look at content in paths on the remote server without needing to download the file locally.
get is not the right method if you don't want to download, because per the documentation:
get(remotepath, localpath, callback=None)
    Copy a remote file (remotepath) from the SFTP server to the local host as localpath. Any exception raised by operations will be passed through. This method is primarily provided as a convenience.
    Parameters:
        remotepath (str) – the remote file to copy
        localpath (str) – the destination path on the local host
        callback (callable) – optional callback function (form: func(int, int)) that accepts the bytes transferred so far and the total bytes to be transferred
There are other methods which can get the filenames in a remote directory, and their attributes, without the need to download anything.
These are listdir, listdir_attr and listdir_iter, for example.
For example, listdir_attr will:
    Return a list containing SFTPAttributes objects corresponding to files in the given path. The list is in arbitrary order. It does not include the special entries '.' and '..' even if they are present in the folder.
    The returned SFTPAttributes objects will each have an additional field: longname, which may contain a formatted string of the file's attributes, in unix format. The content of this string will probably depend on the SFTP server implementation.
    Parameters: path (str) – path to list (defaults to '.')
    Returns: list of SFTPAttributes objects
You could use something along these lines:
list_png_files = []
for file_attributes in sftp.listdir_attr("remote_path"):
    if file_attributes.filename.endswith('.png'):
        list_png_files.append(file_attributes.filename)
Check whether it gives you a relative or an absolute path, of course.
Similarly you could try listdir, etc.
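That said, if the real goal is to show the image without ever saving it locally, paramiko can also read the remote file straight into memory. A minimal sketch, assuming the sftp session from the question is already connected (the path is the question's example, and PIL is only one option for the UI side):
import io

# open() on the SFTP client returns a file-like object for the
# remote file; reading it pulls the bytes over the wire without
# creating a local file.
with sftp.open('/A/B/C/pic_ex.png', 'rb') as remote_file:
    png_bytes = remote_file.read()

# Wrap the bytes so image libraries (e.g. PIL.Image.open) or a UI
# toolkit can treat them as a file.
image_stream = io.BytesIO(png_bytes)
sftp.getfo(remotepath, fl) does the same copy into an already-open file-like object, if you prefer that form.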
def _list_remote_dir(self, dir="."):
    # assumes `import logging` at module level
    sftp_session = self.ssh_session.open_sftp()  # ssh_session is paramiko.SSHClient()
    sftp_session.chdir(dir)  # change to the requested directory, not always "."
    cwd = sftp_session.getcwd()
    print(cwd)
    remote_file_attrs = sftp_session.listdir_attr()
    for i in remote_file_attrs:
        logging.info(i.filename)
This snippet may help: you get the current working directory and the file names, so you can build the absolute paths of the files on the server.
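For instance, a small sketch of building those absolute paths (posixpath is used because SFTP paths are POSIX-style regardless of the client OS):
import posixpath

remote_paths = []
for attr in sftp_session.listdir_attr():
    # join the session's working directory with each entry name
    remote_paths.append(posixpath.join(cwd, attr.filename))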
How to download all files in a folder from a GCS cloud bucket using the Python client API? Files like .docx and .pdf.
requirements.txt
google-cloud-storage
main.py
from google.cloud import storage
from os import makedirs
# use a downloaded credentials file to create the client, see
# https://cloud.google.com/storage/docs/reference/libraries#setting_up_authentication
# this docs tells you to export the file location, but I personally
# prefer the method used below as it allows for different credentials
# within the same application.
# IMHO separation of what each serviceaccount can access increases
# security by tenfold. It's also usefull when dealing with different
# projects in the same app.
#
#
# Note that you'll also have to give the serviceaccount the
# permission "Storage Object Viewer", or one with more permissions.
# Always use the least needed to due to security considerations
# https://cloud.google.com/storage/docs/access-control/iam-roles
cred_json_file_path = 'path/to/file/credentials.json'
client = storage.Client.from_service_account_json(cred_json_file_path)
def download_blob(bucket: storage.Bucket, remotefile: str, localpath: str='.'):
    """downloads from remotepath to localpath"""
    localrelativepath = '/'.join(remotefile.split('/')[:-1])
    totalpath = f'{localpath}/{localrelativepath}'
    filename = f'{localpath}/{remotefile}'
    makedirs(totalpath, exist_ok=True)
    print(f'Current file details:\n remote file: {remotefile}\n local file: {filename}\n')
    blob = storage.Blob(remotefile, bucket)
    blob.download_to_filename(filename, client=client)
def download_blob_list(bucketname: str, bloblist: list, localpath: str='.'):
    """downloads a list of blobs to localpath"""
    bucket = storage.Bucket(client, name=bucketname)
    for blob in bloblist:
        download_blob(bucket, blob, localpath)
def list_blobs(bucketname: str, remotepath: str=None, filetypes: list=[]) -> list:
    """returns a list of blobs filtered by remotepath and filetypes
    remotepath and filetypes are optional"""
    result = []
    blobs = list(client.list_blobs(bucketname, prefix=remotepath))
    for blob in blobs:
        name = str(blob.name)
        # skip "folder" names
        if not name.endswith('/'):
            # do we need to filter file types?
            if len(filetypes) > 0:
                for filetype in filetypes:
                    if name.endswith(filetype):
                        result.append(name)
            else:
                result.append(name)
    return result
bucketname = 'bucketnamegoeshere'
foldername = 'foldernamegoeshere'
filetypes = ['.pdf', '.docx'] # list of extensions to return
bloblist = list_blobs(bucketname, remotepath=foldername, filetypes=filetypes)
# I'm just using the bucketname for localpath for download location.
# should work with any path
download_blob_list(bucketname, bloblist, localpath=bucketname)
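If you don't actually want a local copy, the blob content can also be fetched straight into memory. A hedged sketch using the same client (fetch_blob_bytes is a hypothetical helper; download_as_bytes needs a reasonably recent google-cloud-storage, older versions call it download_as_string):
def fetch_blob_bytes(bucketname: str, remotefile: str) -> bytes:
    """returns the blob's content as bytes, without writing a file"""
    bucket = storage.Bucket(client, name=bucketname)
    blob = storage.Blob(remotefile, bucket)
    return blob.download_as_bytes(client=client)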
I have a peculiar issue that I can't seem to fix on my own.
I'm attempting to FTP a list of files in a directory over to an iSeries IFS using Python's ftplib library.
Note, the files are in a single subdirectory down from the python script.
Below is an excerpt of the code that is giving me trouble:
from ftplib import FTP
import os
localpath = os.getcwd() + '/Files/'
def putFiles():
    hostname = 'host.name.com'
    username = 'myuser'
    password = 'mypassword'
    myftp = FTP(hostname)
    myftp.login(username, password)
    myftp.cwd('/STUFF/HERE/')
    for file in os.listdir(localpath):
        if file.endswith('.csv'):
            try:
                file = localpath + file
                print 'Attempting to move ' + file
                myftp.storbinary("STOR " + file, open(file, 'rb'))
            except Exception as e:
                print(e)
The specific error that I am getting throw is:
Attempting to move /home/doug/Files/FILE.csv
426-Unable to open or create file /home/doug/Files to receive data.
426 Data transfer ended.
What I've done so far to troubleshoot:
Initially I thought this was a permissions issue on the directory containing my files. I used chmod 777 /home/doug/Files and re-ran my script, but the same exception occurred.
Next I assumed there was an issue between my machine and the iSeries. I validated that I could indeed put files by using ftp. I was successfully able to put the file on the iSeries IFS using the shell FTP.
Thanks!
Solution
from ftplib import FTP
import os
localpath = os.getcwd() + '/Files/'
def putFiles():
    hostname = 'host.name.com'
    username = 'myuser'
    password = 'mypassword'
    myftp = FTP(hostname)
    myftp.login(username, password)
    myftp.cwd('/STUFF/HERE/')
    for csv in os.listdir(localpath):
        if csv.endswith('.csv'):
            try:
                myftp.storbinary("STOR " + csv, open(localpath + csv, 'rb'))
            except Exception as e:
                print(e)
As written, your code is trying to execute the following FTP command:
STOR /home/doug/Files/FILE.csv
Meaning it is trying to create /home/doug/Files/FILE.csv on the IFS. Is this what you want? I suspect that it isn't, given that you bothered to change the remote directory to /STUFF/HERE/.
If you are trying to issue the command
STOR FILE.csv
then you have to be careful how you deal with the Python variable that you've named file. In general, it's not recommended that you reassign a variable that is the target of a for loop, precisely because this type of confusion can occur. Choose a different variable name for localpath + file, and use that in your open(..., 'rb').
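A minimal sketch of that rename, using os.path.join instead of string concatenation (everything else is the question's own code):
for name in os.listdir(localpath):
    if name.endswith('.csv'):
        local_file = os.path.join(localpath, name)  # full local path, new variable
        with open(local_file, 'rb') as fp:
            # STOR gets only the bare filename, so the upload
            # lands in the current remote directory /STUFF/HERE/
            myftp.storbinary("STOR " + name, fp)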
Incidentally, it looks like you're using Python 2, since there is a bare print statement with no parentheses. I'm sure you're aware that Python 3 is recommended by now, but if you do stick to Python 2, it's recommended that you avoid using file as a variable name, because it actually means something in Python 2 (it's the name of a type; specifically, the return type of the open function).
I have a remote server with some files.
smb://ftpsrv/public/
I can be authorized there as an anonymous user. In Java I could simply write this code:
SmbFile root = new SmbFile(SMB_ROOT);
And get the ability to work with the files inside (that one line is all I need!), but I can't find how to manage this task in Python 3. There are a lot of resources, but I think they are not relevant to my problem, because they are frequently tailored to Python 2 and other old approaches. Is there some simple way, similar to the Java code above?
Or can somebody provide a real working solution if, for example, I want to access the file fgg.txt in the smb://ftpsrv/public/ folder? Is there really a handy lib to tackle this problem?
For example, from the pysmb site:
import tempfile
from smb.SMBConnection import SMBConnection
# There will be some mechanism to capture userID, password, client_machine_name, server_name and server_ip
# client_machine_name can be an arbitary ASCII string
# server_name should match the remote machine name, or else the connection will be rejected
conn = SMBConnection(userID, password, client_machine_name, server_name, use_ntlm_v2 = True)
assert conn.connect(server_ip, 139)
file_obj = tempfile.NamedTemporaryFile()
file_attributes, filesize = conn.retrieveFile('smbtest', '/rfc1001.txt', file_obj)
# Retrieved file contents are inside file_obj
# Do what you need with the file_obj and then close it
# Note that the file obj is positioned at the end-of-file,
# so you might need to perform a file_obj.seek() if you need
# to read from the beginning
file_obj.close()
Do I seriously need to provide all of these details: conn = SMBConnection(userID, password, client_machine_name, server_name, use_ntlm_v2 = True)?
A simple example of opening a file using urllib and pysmb in Python 3
import urllib.request
from smb.SMBHandler import SMBHandler
opener = urllib.request.build_opener(SMBHandler)
fh = opener.open('smb://host/share/file.txt')
data = fh.read()
fh.close()
I haven't got an anonymous SMB share ready to test it with, but this code should work.
urllib2 is the Python 2 package; in Python 3 it was merged into urllib.request and some things were moved around.
I think you were asking for Linux, but for completeness I'll share how it works on Windows.
On Windows, it seems that Samba access is supported out of the box with Python's standard library functions:
import glob, os

with open(r'\\USER1-PC\Users\Public\test.txt', 'w') as f:
    f.write('hello')  # write a file on a distant Samba share

for f in glob.glob(r'\\USER1-PC\Users\**\*', recursive=True):
    print(f)  # glob works too
    if os.path.isfile(f):
        print(os.path.getmtime(f))  # we can get filesystem information
I'm really confused. I'm sure I'm missing something simple, but I can't understand why setting proxy environment variables works for some functions but not others. Is it that the libraries respond to these variables differently?
For example, I'm round-tripping a file via FTP. When I download with wget, I set the proxy environment variables and it downloads, but then, say, I want to put it back using ftplib: it gets [Errno 11001]. Do I need to specifically pass these proxy details through ftplib?
Say I set it up this way, I can download the file just fine:
# setup proxy
os.environ["ftp_proxy"] = "http://****:****#proxyfarm.****.com:8080"
os.environ["http_proxy"] = "http://****:****#proxyfarm.****.com:8080"
os.environ["https_proxy"] = "http://****:****#proxyfarm.****.com:8080"
src = "ftp://****:****#ftp.blackrock.com/****/****.csv"
out = "C:\\outFolder\\outFileName.txt"  # out is optional
# create output folder if it doesn't exist
outFolder, _ = os.path.split(out)
try:
    os.makedirs(outFolder)
except OSError as exc:  # Python >2.5
    if exc.errno == errno.EEXIST and os.path.isdir(outFolder):
        pass
    else:
        raise
# download
filename = wget.download(src, out)
Now, immediately below that, I switch to ftplib and get the [Errno 11001]. Do I need to set the proxy parameters for ftplib specifically?
session = ftplib.FTP('ftp.blackrock.com','****','****')
file = open(filename,'rb') # file to send
session.storbinary('STOR '+ remotePath + filename, file) # send the file
file.close() # close file and FTP
session.quit()
The ftp_proxy environment variable (and the others) is a feature specific to wget.
You cannot expect it to work with any other FTP library/software.
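If the proxy is a real FTP proxy that understands the common user@host login convention, you can sometimes drive it from ftplib directly. A heavily hedged sketch, with placeholder host, port, and credentials taken from the question's masked values; an HTTP-only proxy (which is what an http:// proxy URL usually implies) will not accept this at all:
import ftplib

# connect to the proxy itself rather than to the target server
session = ftplib.FTP()
session.connect('proxyfarm.****.com', 8080)

# many classic FTP proxies route to the real server when the
# login name embeds the target as user@host
session.login('****@ftp.blackrock.com', '****')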
I have made a program, and there is a function where it gets a text file called news_2014.txt from an FTP server. I currently have this code:
def getnews():
    server = 'my ftp server ip'
    ftp = ftplib.FTP(server)
    username = 'news2'
    password = ' '
    ftp.login(username, password)
    filename = 'ftp://my ftp server ip/news/news_2014.txt'
    path = 'news'
    ftp.cwd(path)
    ftp.retrlines('RETR' + filename, open(filename, "w").open)
I want to make the program display the lines using readlines on a Tkinter label. But if I try calling the function above, it says:
IOError: [Errno 22] invalid mode ('w') or filename: 'ftp://news/news_2014.txt'
RETR wants just the remote path name, not a URL. Similarly, you cannot open a URL; you need to pass it a valid local filename.
Changing it to filename = 'news_2014.txt' should fix this problem trivially.
The retrlines method retrieves the lines and optionally performs a callback. You have specified a callback to open a local file for writing, but that's hardly something you want to do for each retrieved line. Try this instead:
textlines = []
ftp.retrlines('RETR ' + filename, textlines.append)
then display the contents of textlines. (Notice the space between the RETR command and its argument, too.)
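Putting both fixes together, a minimal sketch of the corrected function (server address and credentials are the question's placeholders):
import ftplib

def getnews():
    ftp = ftplib.FTP('my ftp server ip')
    ftp.login('news2', ' ')
    ftp.cwd('news')
    textlines = []
    # plain remote filename, with a space after RETR
    ftp.retrlines('RETR news_2014.txt', textlines.append)
    ftp.quit()
    return textlines  # e.g. show '\n'.join(textlines) in a Tkinter label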
I would argue that the example in the documentation is confusing for a newcomer. Someone should file a bug report.