I'm using a Python script based on this Mint API to pull personal finance info every hour.
To connect to Mint I do mint = mintapi.Mint(email, password), which opens a Chrome instance via Selenium, logs into Mint, and creates an object of <class 'mintapi.api.Mint'>.
To refresh info I just need to do mint.initiate_account_refresh().
But every time I run the script, it does the whole logging in thing again.
Can I store the mint object on disk somehow so I can skip that step and just do the account refresh?
Ah the wonders of open source.
Curious, I went and looked at the mintapi you linked, to see if there was anything obvious and simple I could do to recreate the object instance without the arduous setup.
It turns out there isn't, really. :(
Here is what is called when you instantiate the Mint object:
def __init__(self, email=None, password=None):
    if email and password:
        self.login_and_get_token(email, password)
As you can see, if you don't give it a truthy email and password, it doesn't do anything. (As a side note, it should really be checking is None, but whatever.)
So we can skip the setup process easily enough, but now we need to figure out how to fake that setup using previously saved data.
Looking at .login_and_get_token(), we see the following:
def login_and_get_token(self, email, password):
    if self.token and self.driver:
        return
    self.driver = get_web_driver(email, password)
    self.token = self.get_token()
Nice and simple, again. If it already has a token and a driver, it's done, so it returns early. If not, it sets a driver, and sets .token by calling .get_token().
This makes the whole process really easy to override. Simply instantiate a Mint object with no arguments like so:
mint = mintapi.Mint()
Then set the .token on it:
mint.token = 'something magical'
Now you have an object that is in an almost ready state. The problem is that it relies on self.driver for basically every method call, including your .initiate_account_refresh():
def initiate_account_refresh(self):
    self.post(
        '{}/refreshFILogins.xevent'.format(MINT_ROOT_URL),
        data={'token': self.token},
        headers=JSON_HEADER)

...

def post(self, url, **kwargs):
    return self.driver.request('POST', url, **kwargs)
This looks like a simple POST that we could replace with a requests.post() call, but seeing as it goes through the web browser, I suspect it relies on some manner of cookies or session storage.
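If you want to experiment with that, one common trick (a sketch of my own, not something mintapi provides) is to copy the cookies out of the Selenium driver into a requests.Session, so that plain HTTP calls reuse the browser's authenticated session. session_from_driver is a name I made up; it only assumes Selenium's standard get_cookies() method:

```python
import requests

def session_from_driver(driver):
    # Copy Selenium's cookies into a requests.Session so that
    # plain HTTP calls reuse the browser's authenticated session.
    sess = requests.Session()
    for cookie in driver.get_cookies():
        sess.cookies.set(cookie['name'], cookie['value'])
    return sess
```

Whether that session alone is enough for Mint's endpoints is exactly the open question; it depends on whether the site also checks headers or tokens that the browser supplies.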
If you wanted to experiment, you could subclass Mint like so:
import requests

class HeadlessMint(Mint):
    def post(self, url, **kwargs):
        return requests.post(url, **kwargs)
But my guess is that there will be more issues with this that will surface over time.
The good news is that this mintapi project looks reasonably simple, and rewriting it to not rely on a web browser doesn't look like an unreasonable project for someone with a little experience, so keep that in your back pocket.
As for pickling, I don't believe that will work, for the same reason that I don't believe subclassing will - I think the existence of the browser is important. Even if you pickle your mint instance, it will be missing its browser when you try to load it.
The simplest solution might very well be to make the script a long-running one, and instead of running it every hour, you run it once, and it does what it needs to, then sleeps for an hour, before doing it again. This way, you'd log in once at the very beginning, and then it could keep that session for as long as it's running.
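That idea is simple to sketch. refresh_loop below is my own helper name, not part of mintapi; the max_runs parameter exists only so the loop can be stopped (e.g. in tests) instead of running forever:

```python
import time

def refresh_loop(mint, interval=3600, max_runs=None):
    # The login happens once, before this loop is entered;
    # afterwards we just keep the session alive and trigger a
    # refresh on a fixed interval.
    runs = 0
    while max_runs is None or runs < max_runs:
        mint.initiate_account_refresh()
        runs += 1
        time.sleep(interval)
    return runs
```

You'd call it once after logging in: refresh_loop(mint), and let it run as a long-lived process instead of an hourly cron job.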
To store objects in Python you can use the pickle module.
Let's say you have an object mint
import pickle

mint = Something.Somefunc()
with open('data.pickle', 'wb') as storage:
    pickle.dump(mint, storage)
The object will be saved as a sequence of binary bytes in a file named data.pickle.
To access it just use the pickle.load() function.
import pickle

with open('data.pickle', 'rb') as storage:
    mint = pickle.load(storage)

>>> mint
<class 'something' object>
NOTE:
Although it doesn't matter here, the pickle module has a flaw: it can execute code while loading objects from a file, so never unpickle data that comes from an untrusted third-party source.
Use the pickle library to save and load the object.
SAVE
import pickle

mint = mintapi.Mint(email, password)
with open('mint.pkl', 'wb') as output:
    pickle.dump(mint, output, pickle.HIGHEST_PROTOCOL)
LOAD
import pickle

with open('mint.pkl', 'rb') as infile:
    mint = pickle.load(infile)
mint.initiate_account_refresh()
In the Google App Engine datastore, there is a BlobKey (labeled as csv). The key is in the following format: encoded_gs_file:we1o5o7klkllfekomvcndhs345uh5pl31l. I would like to provide a download button to save this information. My question is: what endpoint can I use to access this? More information about the BlobKey is below.
The web app is run using dev_appserver.py and uses Python 2.7 (Django) as the backend. Currently a button exists, but clicking it returns a 404 error. The download link the button provides is:
https://localhost:8080/data?key=encoded_gs_file:dwndjndwamwljioihkm
My question is: how can I use the BlobKey to generate a URL that can be downloaded, or how can I check my code base to find how the usable URL is being generated?
class BlobstoreDataServer(blobstore_handlers.BlobstoreDownloadHandler):
    def get(self):
        k = str(urllib.unquote(self.request.params.get('key', '')))
        logging.debug(k)
        blob_info = blobstore.BlobInfo.get(k)
        logging.debug(blob_info)
        if (not blob_info) or (not blob_info.size):
            self.error(404)
            return
        self.response.headers['X-Blob-Size'] = str(blob_info.size)
        self.response.headers['Content-Type'] = blob_info.content_type
        self.response.headers['Content-Disposition'] = (u'attachment; filename=%s' % blob_info.filename).encode('utf-8')
        self.send_blob(blob_info)
Do you have a Request Handler for the route /data that does something like this?
from google.appengine.ext import blobstore

class DisplayBlob(blobstore_handlers.BlobstoreDownloadHandler):
    def get(self):
        blob_key = self.request.GET['key']
        self.send_blob(ndb.BlobKey(blob_key))
        self.response.headers['Content-Type'] = 'text/plain'
EDIT:
OK, so the 404 is probably being thrown by this line of yours: self.error(404), right? Add logging.warn('BlobstoreDataServer is throwing 404') right before it to make sure. Also, are you seeing the logging.debug(k) line print? (I want to confirm that BlobstoreDataServer is even getting hit.) You may need to call logging.getLogger().setLevel(logging.DEBUG) to see it.
So that means blobstore.BlobInfo.get(k) is returning None. Focus on making sure that is working first, you can do this in the interactive console.
Go to http://localhost:8000/blobstore
Open one of them and copy the Key (encoded_gs_file:dwndjndwamwljioih...)
Go to the Interactive console (http://localhost:8000/console), enter code along the lines of blob_info = blobstore.BlobInfo.get('<paste_encoded_gs_file_key_here>') followed by print(blob_info), hit 'EXECUTE', and make sure it is able to find it.
If that step didn't work, then something is up with your dev_appserver.py's blobstore emulator.
If that works, then just manually paste that same key at the end of your download link:
https://localhost:8080/data?key=<paste_encoded_gs_file_key_here>
If this step didn't work then something is up with your download handler, maybe this line is transforming the key somehow str(urllib.unquote(self.request.params.get('key','')))
If this step worked then something is up with your code that generates this link https://localhost:8080/data?key=..., maybe you're actually writing to a different gcs_filename than what you are constructing a different BlobKey for.
I've served a directory using
python -m http.server
It works well, but it only shows file names. Is it possible to show created/modified dates and file size, like you see in ftp servers?
I looked through the documentation for the module but couldn't find anything related to it.
Thanks!
http.server is meant for dead-simple use cases, and to serve as sample code.[1] That's why the docs link right to the source.
That means that, by design, it doesn't have a lot of configuration settings; instead, you configure it by reading the source and choosing what methods you want to override, then building a subclass that does that.
In this case, what you want to override is list_directory. You can see how the base-class version works, and write your own version that does other stuff—either use scandir instead of listdir, or just call stat on each file, and then work out how you want to cram the results into the custom-built HTML.
Since there's little point in doing this except as a learning exercise, I won't give you complete code, but here's a skeleton:
class StattyHandler(http.server.SimpleHTTPRequestHandler):
    def list_directory(self, path):
        try:
            dirents = os.scandir(path)
        except OSError:
            # blah blah blah
        # etc. up to the end of the header-creating bit
        for dirent in dirents:
            fullname = dirent.path
            displayname = linkname = dirent.name
            st = dirent.stat()
            # pull stuff out of st
            # build a table row to append to r
[1] Although really, it's sample code for an obsolete and clunky way of building servers, so maybe that should be "to serve as sample code to understand legacy code that you probably won't ever need to look at but just in case…".
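For the stat-pulling part, here is a standalone sketch (independent of the handler machinery, with names of my own choosing) of turning directory entries into HTML table rows with size and modification time:

```python
import html
import os
import time

def directory_rows(path):
    # Build one HTML table row per entry: linked name, size in
    # bytes, and modification time pulled from DirEntry.stat().
    rows = []
    for dirent in sorted(os.scandir(path), key=lambda d: d.name):
        st = dirent.stat()
        mtime = time.strftime('%Y-%m-%d %H:%M',
                              time.localtime(st.st_mtime))
        name = html.escape(dirent.name)
        rows.append('<tr><td><a href="%s">%s</a></td>'
                    '<td>%d</td><td>%s</td></tr>'
                    % (name, name, st.st_size, mtime))
    return rows
```

In a real override you'd splice these rows into the HTML that list_directory builds, in place of the plain name-only links.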
I have what seems like a very easy problem with an easy solution just beyond my reach.
My setup:
A) Driver file (runs the test script)
B) Connection file (using Requests)
C) Parameters file
The parameters file has 6 variables with things like server IP, login, pass, etc.
The driver file has a parser which reads the properties file and fills in the blanks:
driver.py parametersfile.csv
This works fine. However, I added a PORT variable to the parameters file which needs to be seen by B) Connection file. This connections file is never called explicitly, rather just imported into the driver file for its connection and cookie methods.
How do I carry over the parsed variables (from sys.argv) from parametersfile.csv to the connections file (or any other file that is used to run my script)?
Thank you stackoverflow community
Edit:
I got it to work using the obvious way of passing on the arguments into the class (self.foo) of whatever module/file I needed.
My question from before was along the lines of this idea:
You do something like
loadproperties(propertiesfile)
then from any other python script you could just do
import propertyloader
which would load a list of immutable properties into the current space
Seems very convenient to just do
url = propertyloader.url
instead of
class Connect(host, port, passwd, url):
    self.url = url
    loader = requests(secure, url)
    blah blah blah...
Seems like a headache free way of sharing common parameters between different parts of the script.
Maybe there's still a way of doing this (extra credit question)
From the driver.py file, import the connections file as a module and pass the arguments you've parsed to the methods inside the module. Something like this:
# === inside driver.py ===
import connections

params = parseFile(sys.argv)   # get parameters from the CSV file passed on the command line
connections.connect(params)    # pass them to whatever method you need to call from connections
EDIT: It sounds like you're not writing your code in a modular way. You should stop thinking about your code in terms of files, but instead think of them in terms of modules: bits and pieces of interchangeable code that you can use in many different places. The main flaw with your design that I see (forgive me if I didn't understand correctly) is that you're hard-coding a value inside the connections file that you use to create connection objects. I'm guessing this is what your code looks like (or at least captures the spirit of your code adequately):
# connections.py
MY_SERVER = ???  # WHAT DO I SET THIS TO?

class Connection:
    def __init__(self):
        self.server = MY_SERVER

def connect():
    connection = Connection()  # create a new connection object
The above code is not designed well since you're defining a variable MY_SERVER that shouldn't be defined there in the first place! The connections class doesn't know or care what server it should use, it should work with any server. So where do you get the server variable? You pass it in via a constructor or a method. You could do something like this:
# connections.py
class Connection:
    def __init__(self, server):
        self.server = server

def connect(server):
    connection = Connection(server)  # create a new connection object with the server passed to the method
With this design, the Connection object becomes much more flexible. It is basically saying "I am a connection object that can handle any server. If you want to use me, just tell me what server you want to use!".
This way, in your drivers file you can first parse the server from your csv, and then simply call the method connections.connect by passing it the server you want!
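Putting the pieces together, here is a minimal end-to-end sketch; the CSV layout and the parse_server helper are my own assumptions, and only the pass-the-server-in pattern comes from the answer above:

```python
# connections.py-style pieces plus a driver, condensed into one file.
class Connection:
    def __init__(self, server):
        self.server = server

def connect(server):
    # create a connection for whatever server the caller supplies
    return Connection(server)

def parse_server(path):
    # hypothetical parser: assume the first CSV field on the
    # first line is the server address
    with open(path) as fh:
        return fh.readline().strip().split(',')[0]

def main(csv_path):
    server = parse_server(csv_path)
    return connect(server)
```

The point of the design is that connections never reads the file itself; the driver parses once and hands the value in.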
Is it possible to run python code in Eclipse (PyDev) and use variables computed in previously executed code (similar to using console and interpret code in real time as we enter it)?
Details: I want to use python for experimenting with signal processing and to the signal are applied 2 computationally intensive filters in a row. Each filter take some time and it would be nice to remember the result of the first filter without the need to recompute it at each launch.
Or just do what's suggested in Password Protection Python:
import pickle

Reading a "cache" / database:

with open('database.db', 'rb') as fh:
    db = pickle.load(fh)

Adding to it:

db = {}
db['new_user'] = 'password'
with open('database.db', 'wb') as fh:
    pickle.dump(db, fh)
Decorate your functions with Simple Cache and it will save a parameters/result hash to disk. I should point out that it works only when arguments are of an immutable type (no lists, dictionaries...). Otherwise, you can handle cache results with the API exposed by Simple Cache, or use pickle to serialize results to disk and load them later (which is what simple_cache does, actually).
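The same idea can be sketched by hand in a few lines. disk_cache below is my own minimal stand-in, not Simple Cache's actual API; it keys the cache on the function name and its (hashable) arguments, and pickles the whole cache dict to disk:

```python
import functools
import os
import pickle

def disk_cache(path):
    # Decorator factory: memoize a function's results to a pickle
    # file so repeat calls across runs skip the computation.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args):
            cache = {}
            if os.path.exists(path):
                with open(path, 'rb') as fh:
                    cache = pickle.load(fh)
            key = (func.__name__, args)  # args must be hashable
            if key not in cache:
                cache[key] = func(*args)
                with open(path, 'wb') as fh:
                    pickle.dump(cache, fh)
            return cache[key]
        return wrapper
    return decorator
```

For the signal-processing use case, you'd decorate each filter function and the second run of the script would load the first filter's result from disk instead of recomputing it.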
I'm trying to uninstall collective.carousel's archetypes schemaextender (I'm only interested in the portlet from that package, not in adding Carousel as a source to every PloneFormGen field, etc.).
I've tried to unregister the adapter using a import-step, but have so far failed.
def unregister_carousel_extender(site):
    from collective.carousel.schemaextender import ContentTypeExtender
    from archetypes.schemaextender.interfaces import ISchemaExtender
    from Products.ATContentType.interfaces import IATContentType
    sm = site.getSiteManager()
    sm.unregisterAdapter(factory=ContentTypeExtender, provided=(ISchemaExtender,), required=(IATContentType), name=u'')
I've also spent time in pdb without any success. I'm able to get hold of the registered adapters and can see that collective.carousel.schemaextender.ContentTypeExtender is registered as an unnamed adapter.
You can't unregister in an import step. Import steps only run when you import the profile; zcml declarations, in contrast, are parsed and executed every time you start your instance. So make sure you unregister after the adapter has been registered, every time.
The 'required' parameter needs to be a sequence of interfaces rather than a single interface. So, required=[IATContentType] or required=(IATContentType,) (note the comma!) rather than required=(IATContentType).
You can check the return value from unregisterAdapter to find out if it was successful...if it's False, it didn't find the adapter you specified (which usually means one of the parameters is incorrect).
What you want is to undo some zcml of collective.carousel when Zope starts up. You can do that with the z3c.unconfigure package.
(Note that I am not sure if the portlet of collective.carousel still works correctly when you have unconfigured this part of the zcml.)