I am looking for a script that will upload files from a Unix server to an HTTP location.
I want to use only these modules: cookielib, urllib2, urllib, mimetypes, mimetools, traceback.
To make the question clearer:
The HTTP location mentioned is a common shared-folder link that a number of people can use by accessing it with a username/password.
For example, my target location has:
http://commonfolder.mydomain.com/somelocation/345333
username = someuid, password = xxxxx
My source location has:
a my.txt file containing some data
I want to run a shell or Python script that will transfer the file to the target location mentioned above, so that many people can start using the latest info I upload regularly.
-shijo
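The modules listed in the question (cookielib, urllib2, mimetools) are Python 2; a minimal sketch of the same idea using their Python 3 successor, urllib.request, is below. The URL, username, password, and the form field name `file` are taken from the question or assumed for illustration.

```python
import urllib.request
import uuid

def build_multipart(fields, files):
    """Build a multipart/form-data body from plain fields and (name, filename, data) files."""
    boundary = uuid.uuid4().hex
    lines = []
    for name, value in fields.items():
        lines += ['--' + boundary,
                  'Content-Disposition: form-data; name="%s"' % name,
                  '', value]
    for name, filename, data in files:
        lines += ['--' + boundary,
                  'Content-Disposition: form-data; name="%s"; filename="%s"' % (name, filename),
                  'Content-Type: application/octet-stream',
                  '', data]
    lines += ['--' + boundary + '--', '']
    body = '\r\n'.join(lines).encode('utf-8')
    content_type = 'multipart/form-data; boundary=' + boundary
    return content_type, body

# Target and credentials from the question; the 'file' field name is an assumption
# about what the receiving server expects.
url = 'http://commonfolder.mydomain.com/somelocation/345333'
content_type, body = build_multipart({}, [('file', 'my.txt', 'some data')])

password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, url, 'someuid', 'xxxxx')
opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(password_mgr))

request = urllib.request.Request(url, data=body,
                                 headers={'Content-Type': content_type})
# opener.open(request)  # uncomment to actually send the upload
```

The same body-building logic ports directly back to urllib2 if you are stuck on Python 2; only the import names change.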
I want to script the download of 8 sheets using the smartsheet-python-sdk module.
Later I want to automate the download, scheduled at a specific time once a week with Windows Task Scheduler.
But I am having trouble getting the Python script to download the sheets via the API.
This is what I have in a .py file:
import os, smartsheet
token=os.environ['SMARTSHEET_ACCESS_TOKEN']
smartsheet_client = smartsheet.Smartsheet(token)
smartsheet_client.errors_as_exceptions(True)
smartsheet_client.Reports.get_report_as_excel(
    8729488427892475,
    'C:/Desktop',
    'MyFileName.xlsx'
)
I notice a couple of issues with the code you've posted.
First, smartsheet_client.Reports.get_report_as_excel should be smartsheet_client.Sheets.get_sheet_as_excel if you want to retrieve a sheet (not a report).
Next, try specifying the full path for where Desktop actually lives on the file system, i.e., instead of just using the alias C:\Desktop, specify the actual file-system path, for example c:/users/USERNAME/desktop (on a Windows machine), where USERNAME is the name of the currently logged-in user. You can also derive the logged-in user dynamically; see How to make Python get the username in windows and then implement it in a script.
The following code successfully downloads the specified sheet to my Desktop, with the filename MyFileName.xlsx. (I'm logged in as kbrandl so the path to my Desktop is specified as: c:/users/kbrandl/desktop.)
sheetId = 3932034054809476
smartsheet_client.Sheets.get_sheet_as_excel(
    sheetId,
    'c:/users/kbrandl/desktop',
    'MyFileName.xlsx'
)
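The linked idea of deriving the logged-in user dynamically can be sketched like this, assuming the per-user Desktop folder lives directly under the home directory (the Windows default):

```python
import getpass
import os.path

# Derive the logged-in user's name and build the Desktop path from it,
# instead of hard-coding a path like 'c:/users/kbrandl/desktop'.
username = getpass.getuser()
desktop = os.path.join(os.path.expanduser('~'), 'Desktop')

print(username, desktop)
```

That `desktop` value can then be passed as the download-path argument to get_sheet_as_excel.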
I would like to change the default location used by the browser when uploading a file.
I mean, I have a website, and when the client clicks on "upload file", I would like the file dialog to point to a target directory (e.g. Downloads) instead of Desktop.
How can I do it?
My website uses Flask / Python.
That's probably impossible. It's the client who chooses where its files live, not the server.
You'd need to know the client computer's folder paths, and that's called hacking, which is illegal.
I am trying to write code that will download all the data from a server that holds .rar files about imaginary cadastral parcels for student projects. What I have for now is the query for the server, which only needs the number of a specific parcel as input; accessing it as a URL downloads the .rar file.
url = 'http://www.pg.geof.unizg.hr/geoserver/wfs?request=getfeature&version=1.0.0&service=wfs&&propertyname=broj,naziv_ko,kc_geom&outputformat=SHAPE-ZIP&typename=gf:katastarska_cestica&filter=<Filter+xmlns="http://www.opengis.net/ogc"><And><PropertyIsEqualTo><PropertyName>broj</PropertyName><Literal>1900/1</Literal></PropertyIsEqualTo><PropertyIsEqualTo><PropertyName>naziv_ko</PropertyName><Literal>Suma Striborova Stara (9997)</Literal></PropertyIsEqualTo></And></Filter>'
This is the "url" I want to open with the webbrowser module for parcel "1900/1", but this way I get an error:
This XML file does not appear to have any style information associated with it. The document tree is shown below.
When I manually input this URL, it downloads the file without a problem.
How can I make this Python web application work?
I used webbrowser.open_new(url), which does not work.
You're using the wrong tool. webbrowser is for controlling a native web browser. If you just want to download a file, use the requests module (or urllib.request if you can't install Requests).
import requests
r = requests.get('http://www.pg.geof.unizg.hr/geoserver/wfs', params={
    'request': 'getfeature',
    ...
    'filter': '<Filter xmlns=...>'
})
print(r.content)  # or write it to a file, or whatever
Note requests will handle encoding GET parameters for you -- you don't need to worry about escaping the request yourself.
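Under the hood this is ordinary URL encoding; you can see what the encoded query string looks like with the standard library (the truncated filter value here is a placeholder, not the full WFS filter from the question):

```python
from urllib.parse import urlencode

params = {
    'request': 'getfeature',
    'typename': 'gf:katastarska_cestica',
    'filter': '<Filter xmlns="http://www.opengis.net/ogc">...</Filter>',
}

# requests does the equivalent of this when it appends params to the URL:
# reserved characters like ':', '<', and '"' are percent-encoded automatically.
query = urlencode(params)
print(query)
```

This is why passing the filter as a plain dict value is safer than pasting a hand-built URL.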
I am attempting to write a bash shell script that will evaluate the modified date of a file on a remote web site and download the file if it is more recent than the local copy. Part of the script is already written; the part that has been developed uses the Last-Modified response header. I need an alternative in case the Last-Modified header is not available. Does anyone know of a way, using bash shell scripting or Python, to get the last-modified date of a file on a website without using the Last-Modified header?
Thanks.
James
As others have mentioned here, it's tough to trust the Last-Modified header on when the file was last updated.
If you don't mind downloading the full contents of the file, you could store the md5 hash of the file. If it's different on subsequent calls, you know the contents of the file changed.
From a Bash shell, you could do:
curl -s www.google.com | md5    # use md5sum instead of md5 on Linux
Using the excellent Python requests library:
import requests
import hashlib
r = requests.get('http://www.example.com')
hash = hashlib.md5(r.content).hexdigest()  # hash the raw bytes; md5() does not accept str
If you're retrieving data over http, there is no guarantee that what you're requesting corresponds to a physical file or anything else with a concept of a "last modified" date, so within the http protocol there's no way (other than Last-Modified) to know. You will probably want to retrieve the file if you don't have a sufficiently recent local copy - and you will have to decide for your purposes what "sufficiently recent" is.
If you have a user account on the host and can remotely log in via ssh or similar, it may be possible to inspect an actual file for mod date.
As I see it, you are basically maintaining a cache. HTTP has more than just the Last-Modified header to facilitate caching, but the logic is not all that simple. W3C has a discussion of how to implement a cache that you may find helpful.
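One of those other caching mechanisms is the ETag validator. A minimal staleness check under the assumption that you store the ETag alongside your local copy (the helper function is hypothetical, but the header is standard HTTP):

```python
def needs_refresh(cached_etag, response_headers):
    """Decide whether a cached copy is stale, given the headers of a fresh HEAD/GET.

    If either side lacks an ETag we cannot tell, so err on the side of refreshing.
    """
    etag = response_headers.get('ETag')
    if etag is None or cached_etag is None:
        return True
    return etag != cached_etag

print(needs_refresh('"abc"', {'ETag': '"abc"'}))   # same validator: cache is fresh
print(needs_refresh('"abc"', {'ETag': '"def"'}))   # validator changed: refetch
print(needs_refresh('"abc"', {}))                  # no ETag: refetch to be safe
```

A real client would also send If-None-Match with the cached ETag and let the server answer 304 Not Modified, which avoids downloading the body at all.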
I am writing a script that will run on my server. Its purpose is to serve a document for download: if any person hits a particular URL, he/she should be able to download the document. I am using urllib.urlretrieve, but that downloads the document on the server side, not on the client. How do I make Python send the file to the client?
If the script runs on your server, its purpose is to serve a document, not to download it (the latter would be the urllib solution).
Depending on your needs you can:
Set up static file serving with e.g. Apache
Make the script execute on a certain URL (e.g. with mod_wsgi), then the script should set the Content-Type (provides document type such as "text/plain") and Content-Disposition (provides download filename) headers and send the document data
As your question is not more specific, this answer can't be either.
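The second option can be sketched as a bare WSGI app; the filename and document contents below are placeholders, and in a real deployment the app would be mounted under mod_wsgi or similar:

```python
def serve_document(environ, start_response):
    # In a real app this would be read from disk; placeholder contents here.
    document = b'some data'
    headers = [
        ('Content-Type', 'text/plain'),                            # document type
        ('Content-Disposition', 'attachment; filename="my.txt"'),  # download filename
        ('Content-Length', str(len(document))),
    ]
    start_response('200 OK', headers)
    return [document]

# Minimal check without a web server: call the app directly with a fake start_response.
captured = {}
def fake_start_response(status, headers):
    captured['status'] = status
    captured['headers'] = dict(headers)

body = b''.join(serve_document({}, fake_start_response))
print(captured['status'], captured['headers']['Content-Disposition'])
```

The Content-Disposition header is what makes the browser save the response as a file instead of displaying it.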
Set the appropriate Content-type header, then send the file contents.
If the document is on your server and your intention is that the user should be able to download this file, couldn't you just serve the URL to that resource as a hyperlink in your HTML code? Sorry if I have been obtuse, but this seems the most logical step given your explanation.
You might want to take a look at the SocketServer module.