Class not refreshing on 2nd url call - python

I have a web page with a link to a URL, e.g. /customer/showitem?id=7, which displays details of a specific customer in a child window using the method showitem() in the class customer. The method may set the value of a customer class attribute that controls an alert displayed when the page loads (e.g. self.onloadalert = "Warning! Customer is in debt.").
If the customer window is closed and then opened again (perhaps with a different id, e.g. /customer/showitem?id=8), details of the new customer are displayed correctly, but the onload warning above still appears, because customer.onloadalert has not changed since the last call (I've verified this via debugging). It looks as though, even though the method runs from scratch on the second URL call, the customer class (and all its attribute values) persists from the previous call.
I can solve the problem for this particular attribute by resetting it at the beginning of showitem(), but what about the other customer attributes? (Especially if there are a lot of them; I can't reset them all by name!) How can I ensure that the class reloads (and hence re-initialises) on each URL call?
I am using CherryPy (3.2.0rc1), although I guess the question applies to other frameworks that use the same /class/method?params URL format.
Any help would be appreciated.
Alan

If you want data to persist for just one request, stick it on the cherrypy.request object:
cherrypy.request.onloadalert = "Warning!"
The cherrypy.request object is completely destroyed and recreated for each request, even though it's safely importable. Figuring out how is left as an exercise for the reader. ;)
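A minimal sketch of what that might look like in a handler (the lookup and render helpers here are invented for illustration, not part of the original post):

import cherrypy

class Customer(object):
    @cherrypy.expose
    def showitem(self, id):
        # Per-request state lives on cherrypy.request, not on self,
        # so it cannot leak into the next call.
        if customer_is_in_debt(id):  # hypothetical lookup
            cherrypy.request.onloadalert = "Warning! Customer is in debt."
        return render_customer(id)  # hypothetical template call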

Related

Calling python functions / files from PHP - requests.Session - Login/Logout functionality

I am working on my scraping project, which requires logging in to the page, and two questions have just appeared in front of me. OK, maybe two blocks of questions. (Please have mercy on me, as I am a beginner learning new things, and I have probably already poorly designed the whole project.)
The design is: as the page requires login, I wanted to use Python's requests.Session() object and, via a "with" context manager, stay logged in.
So my first question is:
1. If I want to have, let's say, a main.php file which calls a Python file with the login function (the one with the context manager), do I have to keep the rest of the functions in this same Python file (all the functions for the specific scraping)? More precisely, all functions within this "with" block? What if I want to have each important function in a separate Python file (one for login.py, another update.py, table.py, etc.)? If I call functions other than the login function, how will the session still be valid so that I won't get logged out? Or should I just call the functions with the "s" object somehow?
Second question:
2. It's related to sessions. I went through some questions and I'm not sure I found the correct answer. When I logged into the page, I saw a session ID in the Network tab in Chrome. But when I checked the session ID via what_in_session = s.cookies.get_dict(), the ID was different. When I moved around the page, the session ID in the Network tab stayed the same, yet the one from Python was different every time.
Am I doing something wrong, or is this correct behaviour? I found this, but I am not sure: Why are the IDs different in each request?
I guess that once the "with requests.Session()" block is used, I don't need to care about the rest? Then, back to the first question: how do I manage these Python functions?
Hope you'll understand the questions. (If not, I can try to rephrase them.)
many thanks
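One common way to handle the first question is to create the session in one module and pass the "s" object into functions in the other files; a rough sketch, with invented file names, URLs, and form fields:

# login.py
import requests

def login(username, password):
    s = requests.Session()
    s.post("https://example.com/login", data={"user": username, "pass": password})
    return s

# update.py
def update(s):
    # Reuses the login cookies as long as the same Session object is passed in.
    return s.get("https://example.com/update")

# main flow
from login import login
from update import update

with login("alice", "secret") as s:
    update(s)

As long as every function takes the same Session object, the cookies (and therefore the login) travel with it, so the functions can live in as many files as you like.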

Posting data to database through a "workflow" (Ex: on field changed to 20, create new record)

I'm looking to post new records on a user-triggered basis (i.e. a workflow). I've spent the last couple of days researching the best way to approach this, and so far I've come up with the following ideas:
(1) Utilize Django signals to check for conditions on a field change, and then post data originating from my Django app.
(2) Utilize JS/AJAX on the front-end to post data to the app based upon a user changing certain fields.
(3) Utilize a prebuilt workflow app like http://viewflow.io/, again based upon changes triggered by users.
Of the three above options, is there a best practice? Are there any other options I'm not considering for how to take this workflow based approach to post new records?
The second approach, monitoring the changes on the front end and then calling a backend view to update the database, would be better, because doing the monitoring on the backend (or on another site) would put that processing on the server and slow the site down, whereas the second approach is more of a client-side solution and keeps the server relieved.
I do not think there will be data loss; you are just trying to monitor a change, and as soon as it changes your view will update the database. You can also use cookies or sessions to keep appending values to a list and update the database when the site closes. Also, if Django gives HTTP errors, you could put proper try/except handling around that case as well. Anyway, cookies would be a good approach, I think.
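For approach (2), the server side could be as small as a single Django view that the front-end JavaScript calls when the field changes; a sketch with an invented model and the trigger value from the title:

from django.http import JsonResponse
from django.views.decorators.http import require_POST

@require_POST
def field_changed(request):
    value = request.POST.get("new_value")
    triggered = value == "20"  # the "field changed to 20" condition from the title
    if triggered:
        Record.objects.create(source_value=value)  # hypothetical model
    return JsonResponse({"created": triggered})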
For anyone that finds this post: I ended up deciding to take the signals route. Essentially I'm utilizing signals to track when users change a field, and based on the field that changes I'm performing certain actions on the database.
For testing purposes this has been working well. When I reach production with this project I'll try to update this post with any challenges I run into.
Example:
from django.db.models.signals import pre_save
from django.dispatch import receiver

@receiver(pre_save, sender=subTaskChecklist)  # subTaskChecklist is the model being tracked
def do_something_if_changed(sender, instance, **kwargs):
    try:
        obj = sender.objects.get(pk=instance.pk)  # the "old" object, as it was before the change
    except sender.DoesNotExist:
        pass  # object is new; there is no previous value to compare against
    else:
        previous_Value = obj.FieldToTrack
        new_Value = instance.FieldToTrack  # instance is the "new", post-change object
        if previous_Value != new_Value:
            DoSomethingWithChangedField(new_Value)

Lock the system

I am writing a mini-CRM system in which two users can log in at the same time and answer received messages. However, the problem is that they might respond to the same message, because messages only disappear when a user clicks the "Response" button. Are there any suggestions for how to lock the system?
This sounds like a great case for an 'optimistic locking' approach. Here are two methods I've used with much success. Often, I combine the two methods to ensure no data is lost by mis-matched object instances on POSTs.
The easy way: Add a version field to your model. On POST, check the POSTed version number vs. the object's version number. If they don't match, raise a validation error. If they do match, increment the version by 1.
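A minimal sketch of the version-field method in Django (the model, field, and view names are made up):

from django import forms
from django.db import models
from django.http import HttpResponse

class Message(models.Model):
    body = models.TextField()
    version = models.IntegerField(default=0)

def respond(request, pk):
    msg = Message.objects.get(pk=pk)
    if int(request.POST["version"]) != msg.version:
        raise forms.ValidationError("Someone else already handled this message.")
    msg.version += 1  # bump so any stale form still open elsewhere will fail
    msg.save()
    return HttpResponse("Responded")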
More elegant approach: Django's Generic Relations (part of the content types framework). A table which stores the content_type and object_id of the object that's locked, along with the user who 'owns' that lock. Check this lock on GET requests, and disable POSTing if it's 'locked' by another user. 'Release' the lock on a page unload, session end, or browser exit. You can get very creative with this approach.
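And a sketch of the lock table for the second approach, using the content types framework (model and field names are illustrative):

from django.conf import settings
from django.contrib.contenttypes.fields import GenericForeignKey
from django.contrib.contenttypes.models import ContentType
from django.db import models

class EditLock(models.Model):
    content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE)
    object_id = models.PositiveIntegerField()
    locked_object = GenericForeignKey("content_type", "object_id")
    owner = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    created = models.DateTimeField(auto_now_add=True)  # lets stale locks expire

On each GET, look up (or create) the EditLock row for the object; if one exists and belongs to another user, render the page read-only.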
Add a boolean field (answered, is_answered, etc.) and, on every "Response" click, check whether the message has already been answered.
Hope it will help.

Way to alter behavior based on caller using property getter without reflection or making caller a member in Python

So I'm writing a test framework for a large website and am currently working on modeling the UI driver. I have a UI Session that abstracts a number of emulators and drivers for browsers and mobile devices. This object also holds state information about the browser session, including a history of page objects, the top of which is the current page. My problem is that I want my page objects to lazy-load data from the browser (properties set up with getters that check whether we've extracted the data; if not, whether this is the current page; and if so, pull the data from the browser and cache it in the page object), but that means they need access to the abstraction layer provided by the session.
The first idea, of course, was just sticking the session in the page, but that creates a circular reference, which is bad. I also considered trying to define multiple interfaces for page, but I couldn't see how that would really work well. What I really want is to be able to intercept the call in session and then have session redirect the call down to page, to a hidden getter that also takes session as an argument, but I haven't been able to figure out how to do that.
Is there a good solution or design pattern that I could use here to get the info and access I need where I need it without having circular references or giving up lazy loading? I don't want to make the interface more complicated by forcing users to sometimes call methods and sometimes call properties or members to access the same info (the idea is to hide the fancy footwork behind the scenes and make using the framework dead simple).
I actually do have a CurrentPage class; all it does is store the current_page in a class variable, which is set by session on navigate events. So I suppose I could make current_page a magic global-esque accessor that would be the only way to actually get data off the page, and have it use a special interface and store the session object as well, but that seems clunky to me.
Anyone have any suggestions?
You said:
"The first idea of course was just sticking the session in the page, but that creates a circular reference, which is bad."
But that's not true. Circular references won't get garbage collected instantly the way non-circular ones will, and __del__ methods won't get called, but as long as you're aware of both of those limitations and don't have needs that conflict with them, circular references are fine. See:
http://docs.python.org/reference/datamodel.html#object.__del__
Having pages store a reference to the session should be fine.
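With that back-reference in place, the lazy loading the question describes can be an ordinary property; a sketch with invented driver and attribute names:

class Page(object):
    def __init__(self, session, url):
        self.session = session
        self.url = url
        self._title = None

    @property
    def title(self):
        # Fetch from the browser only on first access, and only while
        # this page is still the session's current page.
        if self._title is None and self.session.current_page is self:
            self._title = self.session.driver.get_title()  # hypothetical driver call
        return self._title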
Edit: if session and page are in different files and circular imports become an issue, you can use an approach like this:
# session.py
import page

class Session(object):
    def __init__(self):
        self.page_list = []

    def addPage(self, url):
        self.page_list.append(page.Page(self, url))

# page.py
class Page(object):
    def __init__(self, session, url):
        self.session = session
        self.url = url
As long as you don't need to construct Sessions inside of page.py you won't need to import it at all, since there won't be a need for capital-S Session (the class). If that changes, you can do something like:
# page.py
class Page(object):
    # ...
    def something(self):
        import session
        self.other_session = session.Session()

Download link for a file dynamically created by a Trac wiki macro

I've been given the task to write a plugin for Trac.
It should provide burndown data for the ticket count and estimations filed in the issue-tracking system.
The user writes his request as a wiki macro and is provided a link/button for downloading the burndown as a CSV file; output as a chart is also planned, but has lower priority.
I've got a working solution for processing the data but I'm left with the following problem.
My Question
How can I provide a download link/button on the wiki page for a file which is dynamically created by the user's request?
I've seen some attempts to send files in the Trac source itself and in other plugins, but since I'm new to web programming that doesn't really help.
Update 1
I've been trying to solve the problem the way Felix suggested, which opened up a new problem for me.
This (stupid) example should demonstrate my problem.
My macro generates the following URL and adds it as a link to the wiki page:
http://servername.com/projectname/wiki/page_name?teddy=bear
But the RequestHandler doesn't react, even if the condition returns true.
Edit: This piece of code now shows the working version for the example.
New URL:
# example url:
# http://127.0.0.1:8000/prove/files/new

from trac.core import Component, implements
from trac.web.api import IRequestHandler, RequestDone

class CustomRequestHandlerModule(Component):
    implements(IRequestHandler)

    def match_request(self, req):
        # old, not working:
        # return "teddy=bear" == str(req.path_info).split('?')[1]
        # new:
        accept = "/files/new" == str(req.path_info)
        return accept

    def process_request(self, req):
        csvfile = self.create_csv()
        req.send_response(200)
        req.send_header('Content-Type', 'text/csv')
        req.send_header('Content-Length', len(csvfile))
        req.send_header('Content-Disposition', 'filename=lala.csv')
        req.end_headers()
        req.write(csvfile)
        raise RequestDone
Update 2
Inserting logging statements shows that match_request never gets called.
What am I doing wrong? (Yes, create_csv() already exists.)
Update 3: Thanks for helping =)
If match_request isn't getting called, then process_request never has a chance to execute. Assuming that there's nothing wrong with your plugin that's preventing Trac from loading it correctly, what's probably happening is that another handler is matching the URL before your version of match_request gets called. Try increasing your log level to "Debug" and see if it provides enough information to tell who is processing that request.
Another option is to create a custom "namespace" for your auto-generated files. Try replacing 'wiki' in the generated URLs with something like 'files'. This should prevent any of the built-in handlers from handling the request before your plugin's match_request method gets called.
Basically you need to write your own IRequestHandler which handles a specific URL and returns your dynamically created data. Afterwards your macro should return a URL which is configured for your request handler.
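For the macro side, something along these lines might work: a macro that emits a link pointing at the custom /files/new namespace handled above (the class name and link label are invented, and the exact expand_macro signature varies a little across Trac versions):

from trac.util.html import tag
from trac.wiki.macros import WikiMacroBase

class BurndownLinkMacro(WikiMacroBase):
    """Renders a download link for the dynamically generated CSV."""

    def expand_macro(self, formatter, name, content):
        # formatter.req.href builds a project-relative URL, so the link
        # ends up as /projectname/files/new and matches match_request above.
        return tag.a("Download burndown CSV", href=formatter.req.href("files", "new"))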
