Is it possible to upload and manipulate a photo in the browser with GitHub Pages? The photo doesn't need to be stored beyond that session.
PS. I'm new to this area, and I am using Python to manipulate the photo.
GitHub Pages allows users to create static HTML sites. This means you have no control over the server which hosts the HTML files - it is essentially a file server.
Even if you did have full control over the server (e.g. if you hosted your own website), you could not have the client run Python code in the browser, since browsers natively execute only JavaScript.
Therefore the easiest solution is to rewrite your code in JavaScript.
Failing that, you could offer a download link to your Python script and rely on users trusting you enough to run it on their own computers.
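If you go the download-link route, the script users run locally might look like this minimal sketch (the function name and the Pillow dependency are illustrative assumptions, not from the question):

```python
def make_grayscale(src_path, dest_path):
    # Hypothetical local photo-manipulation script; assumes the Pillow
    # package (pip install Pillow), imported lazily inside the function.
    from PIL import Image
    with Image.open(src_path) as im:
        im.convert("L").save(dest_path)  # "L" = 8-bit grayscale

# Users would run e.g.: make_grayscale("photo.jpg", "photo_gray.jpg")
```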
I am trying to make a python gui application.
What I want to do is open a web browser by clicking a button in a Tkinter GUI.
Once the browser opens, I log in.
After logging in, it redirects to another page.
That page's URL contains a code as a query parameter, which I need to use later in my code.
I used webbrowser.open_new('') to open a web browser.
But the limitation is that it only opens the browser; there is no way to get the final redirected URL I need.
Is there a way to open a web browser, do something on that page, and finally read that final URL?
I am using python.
There are a few main approaches for automating interactions with browsers:
1. Telling a program how and what to click, like a human would, sometimes using desktop OS automation tools like AppleScript
2. Parsing the files that contain browser data (this varies browser to browser; here is Firefox's)
3. Using a tool or library that relies on the WebDriver protocol (e.g. selenium, puppeteer)
4. Accessing the browser's local SQLite database and running queries against it
Sounds like 3 is what you need, assuming you're not against bringing in a new dependency.
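With selenium (option 3), the driver stays under your program's control, so you can wait for the user to finish logging in and then read the redirected URL. A minimal sketch - the function names, the redirect prefix, and the `code` parameter name are assumptions based on the question:

```python
from urllib.parse import urlparse, parse_qs

def extract_code(url):
    """Pull the `code` query parameter out of a redirect URL."""
    params = parse_qs(urlparse(url).query)
    return params.get("code", [None])[0]

def login_and_get_code(login_url, redirect_prefix, timeout=120):
    # Hypothetical flow (pip install selenium): open the login page,
    # let the user log in by hand, then wait for the redirect.
    from selenium import webdriver
    from selenium.webdriver.support.ui import WebDriverWait

    driver = webdriver.Chrome()
    driver.get(login_url)
    # Block until the browser lands on the redirect target.
    WebDriverWait(driver, timeout).until(
        lambda d: d.current_url.startswith(redirect_prefix)
    )
    final_url = driver.current_url
    driver.quit()
    return extract_code(final_url)
```

For example, `extract_code("https://example.com/cb?code=abc123&state=x")` returns `"abc123"`.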
I am working on an application where users can sign in with Dropbox and pick a file, but I am unable to actually get the file. When the user chooses the desired file from Dropbox and clicks Choose, nothing happens. Can anyone tell me whether this is possible, and if so, how to do it? I am at beginner level, so please explain the whole process in detail.
Are you using public Dropbox files?
Then you just need to fetch the item by URL and download it. If this happens in a browser tool, you'll need JavaScript, not Python, to download it.
Or you can leave out JS and just use Python to render an HTML page with a button for each Dropbox file, where clicking the button triggers a download of the file. That is a generic HTML task you can search for.
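For the public-file route, the usual trick is to flip the share link's dl=0 parameter to dl=1 so Dropbox serves the file bytes directly. A stdlib-only sketch (fetch needs network access):

```python
import urllib.request
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def direct_download_url(share_url):
    """Rewrite a public Dropbox share link (?dl=0) so it serves the
    file bytes directly (?dl=1)."""
    parts = urlparse(share_url)
    query = dict(parse_qsl(parts.query))
    query["dl"] = "1"
    return urlunparse(parts._replace(query=urlencode(query)))

def fetch(share_url, dest_path):
    # Needs network access; saves the file to dest_path.
    urllib.request.urlretrieve(direct_download_url(share_url), dest_path)
```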
If you need users to sign in to Dropbox and view private files, consider using a Python library built around the Dropbox API.
See the Python Dropbox guide. Are you using a library like that? Please share, as your question was vague.
https://www.dropbox.com/developers/documentation/python#
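If you use the SDK from that guide, downloading a file from the signed-in account is short. A sketch assuming pip install dropbox and a valid OAuth2 access token (the path arguments are placeholders):

```python
def download_private_file(access_token, dropbox_path, local_path):
    # Assumes the official `dropbox` SDK (pip install dropbox) and a
    # valid OAuth2 access token; imported lazily inside the function.
    import dropbox
    dbx = dropbox.Dropbox(access_token)
    # e.g. dropbox_path="/reports/data.csv", local_path="data.csv"
    dbx.files_download_to_file(local_path, dropbox_path)
```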
Also, please share an explanation of your logic or a small code snippet. I can't yet see what you are doing, so I don't know where you are missing something or making a mistake.
From the screenshot, I see you're using the Dropbox Chooser. That's a pre-built way to let your end-users select files from their Dropbox accounts and give them to your app.
Make sure you implement the success and cancel callback methods as documented there for the Chooser.
In the success callback, you'll get the information for the selected file(s). That happens in JavaScript in the browser, though, so if you need it on your server, you'll have to write some JavaScript to send it up, e.g. via an AJAX call, a form, or whatever means you use in your app.
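On the server side, the receiving endpoint can be very small. A stdlib-only sketch (the route and port are assumptions; the field names follow the Chooser's documented success payload, where each entry has "name", "link", etc.):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

received_links = []  # filled in as Chooser selections arrive

class ChooserHandler(BaseHTTPRequestHandler):
    """Accepts the JSON array that the page's JS posts from the
    Chooser success callback (each entry has "name", "link", ...)."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        files = json.loads(self.rfile.read(length))
        received_links.extend(f["link"] for f in files)
        self.send_response(204)  # no body needed in the reply
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

# To serve: HTTPServer(("localhost", 8080), ChooserHandler).serve_forever()
```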
I am working on a scraper built in RSelenium. A number of tasks are more easily accomplished using Python, so I've set up a .Rmd file with access to R and Python code chunks.
The R side of the scraper opens a website in Chrome, logs in, and accesses and scrapes various pages behind the login wall. (This is being done with the permission of the website owners, who would rather have users scrape the data themselves than put together a downloadable dataset.)
I also need to download files from these pages, a task which I keep trying in RSelenium but repeatedly come back to Python solutions.
I don't want to take the time to rewrite the code in Python, as it's fairly robust, but my attempts to use Python end up opening a new driver, which starts a fresh session that is no longer logged in. Is there a way for Python code chunks to access an existing driver/session being driven by RSelenium?
(I will open a separate question with my RSelenium download issues if this solution doesn't pan out.)
As far as I can tell, and with help from user Jortega, Selenium does not support interaction with already open browsers, and Python cannot access an existing session created via R.
My solution has been to rewrite the scraper using Python.
I'm writing a Python script that works with images on the user's computer. My plan is to display these images on a webpage served on localhost via bottle.py (which can change if need be). The images and the script could be located anywhere, but a page served from localhost can't display images with a file:/// path, for security reasons, so I'm stuck as to how I can achieve this. I basically want bottle's static_file function, but for multiple files and within an HTML template. Is this at all possible without moving the images?
No, the browser sandbox won't and shouldn't allow this. Think about it this way: if a page could display images from the user's computer without their explicit approval, those images would become part of the DOM - and what would stop a script from manipulating them, or sending them to a server, without the user's knowledge?
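What the sandbox blocks is file:/// paths; images served by your own localhost app are fine. A sketch of a wildcard route that hands any file under a chosen directory to static_file, assuming the bottle package and a single known image directory (both placeholders):

```python
import os

IMAGE_ROOT = os.path.expanduser("~/Pictures")  # placeholder directory

def make_app(image_root=IMAGE_ROOT):
    # Assumes the bottle package (pip install bottle), imported lazily.
    from bottle import Bottle, static_file

    app = Bottle()

    @app.route("/images/<filename:path>")
    def serve_image(filename):
        # static_file rejects paths that escape image_root, so the
        # template never needs file:/// URLs - just /images/<name>.
        return static_file(filename, root=image_root)

    return app

# make_app().run(host="localhost", port=8080)
# Template usage: <img src="/images/photo.jpg">
```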
I want to download a zip file using python.
With this type of url,
http://server.com/file.zip
this is quite simple using urllib2.urlopen and writing the response to a local file.
But in my case I have this type of url:
http://server.com/customer/somedata/download?id=121&m=zip,
the download is launched after a form validation.
It may be useful to mention that in my case I want to deploy on Heroku, so I can't use spynner, which is built with C++. This download is launched after a scrape that uses scrapy.
From a browser the download works well and I get a good zip file with its name; using Python I just get HTML and header data...
Is there any way to get a file from this type of URL in Python?
This site is serving JavaScript which then invokes the download.
You have no choice but to either: a) evaluate the JavaScript in a simulated browser environment, or b) manually parse what the JS does and re-implement it in Python - e.g. extract the URL and download key from the page source, possibly make an AJAX-style request, and finally download the file.
I generally recommend Mechanize for webpage-related automation, but it cannot deal with JavaScript either, so I guess you can stick with Scrapy if you want to go for plan b).
When you do the download in the browser, open up the network tab of the developer console and record the HTTP method (probably POST), the POST parameters, the cookies, and everything else that is part of the validation; then use a library to replicate that request.
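A stdlib-only sketch of that replay, assuming the validation is a plain form POST that sets a session cookie (the URLs and field names are placeholders - copy the real ones from the network tab):

```python
import urllib.parse
import urllib.request
from http.cookiejar import CookieJar

def download_zip(form_url, form_fields, file_url, dest_path):
    """Replay the validation POST while keeping cookies, then fetch
    the file. form_url, form_fields and file_url are placeholders -
    copy the real values from the browser's network tab."""
    jar = CookieJar()
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
    data = urllib.parse.urlencode(form_fields).encode()
    opener.open(form_url, data=data)  # POST the form; cookies land in jar
    # The session cookie is sent automatically on this second request.
    with opener.open(file_url) as resp, open(dest_path, "wb") as out:
        out.write(resp.read())
```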