Is it possible to use Python as the web server while the front end is CodeIgniter?
My reasons:
Database security: when saving data, CodeIgniter would pass the data to a Python BaseHTTPServer, or maybe Flask (though I have not used Flask before).
Preventing SQL injection.
It works like this, for example: the front end is CodeIgniter; a form sends data.
The back end is a Python web service that receives the data and serves as an API, and Python is the one in charge of saving the data to MySQL via MySQLdb.
In theory I don't see why this would be impossible.
You could easily write a web application using CodeIgniter and have the controllers just pass data along to a Python-based web service. If you're interested in a fully decoupled front end/back end, you could also use a queuing layer (such as RabbitMQ) between the data-entry facilities in your CI program and the persistence web services in Python.
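If you do build it that way, a minimal receiving endpoint in Flask might look like the sketch below; the route, payload shape, and port are invented for illustration:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/api/records', methods=['POST'])
def save_record():
    # JSON payload forwarded by the CodeIgniter controller
    data = request.get_json()
    # ...validate `data`, then persist it with parameterized queries...
    return jsonify({'status': 'saved'}), 201

if __name__ == '__main__':
    app.run(port=5000)
```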
That said, I'm not clear on why you would want to. CodeIgniter is PHP, and includes some excellent data modelling components that integrate fully into the overall framework. Long story short, if you're using CodeIgniter, just have it connect to MySQL and do the data persistence for you.
Likewise, if you'd prefer to code your persistence in Python, why not just use Django? It's a fully realized Python web framework, and also features an excellent ORM and support for MySQL.
I don't really see how either technology gives you clear database-security benefits, provided they are both used properly. Both have built-in methodologies for "cleansing" user-provided data to prevent SQL injection (see the notes for Django and the notes for CodeIgniter).
There are a great many other posts on StackOverflow dealing with preventing SQL injection in CodeIgniter and other frameworks. Just using Python, or decoupling your front end and back end, will not provide you any additional security or protection guarantees. The only way to get those is to carefully architect your interactions with databases, using all the tools provided by whatever framework you are using, or creating their equivalents if none are available (or switching to a better toolset).
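For illustration, parameter binding in plain Python with MySQLdb, which is essentially what both frameworks' escaping mechanisms boil down to, looks like this (connection details and table are invented):

```python
import MySQLdb  # the driver the question already mentions

# Pretend these values came from the CodeIgniter form post
name, email = 'alice', 'alice@example.com'

conn = MySQLdb.connect(user='app', passwd='secret', db='appdb')
cur = conn.cursor()
# The driver escapes bound parameters itself; never build the SQL
# string by interpolating user input into it.
cur.execute('INSERT INTO users (name, email) VALUES (%s, %s)',
            (name, email))
conn.commit()
conn.close()
```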
Edit - expansion
Based on the comments above I figured it was actually worth writing a little more about the potential advantages and real challenges of a decoupled infrastructure.
In principle, it's easy to decouple a front end from a more isolated backend. In either Django or likely CodeIgniter (although I haven't personally seen it done in CI, just in Django), you could leverage the existing model infrastructure, but deal with model objects in memory only on the frontend, and extend the existing ORM functionality to use your backend services to actually store and retrieve data from a persistence layer (your database).
Practically, this can become quite a bit of work to do right. To gain the security advantages you desire, your decoupled backend needs to treat the frontend as if it were, in principle, "hostile", or at the least untrustworthy. So be sure to implement a method for the frontend to reliably authenticate itself to the backend. Ensure that all traffic between the frontend and the backend uses SSL at a minimum. Consider your services architecture carefully (the SOA layer in front of your backend logic) and make sure your APIs, where possible, are MECE (mutually exclusive, collectively exhaustive).
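As a sketch of what "reliably authenticate itself" could mean in practice, here is one possible approach using a shared secret and an HMAC; the envelope format and field names are my own invention, not a standard:

```python
import hashlib
import hmac
import json
import time

SHARED_SECRET = b'provisioned-out-of-band'  # never embed in client-side code

def sign_request(payload):
    # Frontend: attach a timestamp and an HMAC so the backend can check
    # both the origin and the freshness of every call.
    body = json.dumps(payload, sort_keys=True)
    ts = str(int(time.time()))
    sig = hmac.new(SHARED_SECRET, (ts + body).encode(),
                   hashlib.sha256).hexdigest()
    return {'body': body, 'timestamp': ts, 'signature': sig}

def verify_request(envelope, max_age=300):
    # Backend: recompute the signature and reject stale or forged calls.
    expected = hmac.new(SHARED_SECRET,
                        (envelope['timestamp'] + envelope['body']).encode(),
                        hashlib.sha256).hexdigest()
    fresh = time.time() - int(envelope['timestamp']) < max_age
    return fresh and hmac.compare_digest(expected, envelope['signature'])
```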
I'm sure I'm missing some basic principles, but having recently participated in the design and build of a system along these lines I can assure you that the complexity can very quickly explode, so careful architectural discipline and adherence to both MECE and MVP (minimum viable product) is critical. A decoupled infrastructure can be an amazing end-product if it fits the need, and in use cases I've worked with it has been extremely effective. It isn't a one-size-fits-all, though, and hopefully some of the extra description here can help you make a more informed choice.
Hopefully this helps round out the topic answer. Basic principles: Design for what you need. Don't conflate complicated with secure. Simple can be secure, as can complex, but complexity breeds room for hard-to-plug security vulnerabilities, and simple gives the illusion of security by seeming easy. No approach guarantees a positive outcome, so don't try to cut corners; spend as much time as you can in research and design to minimize your time building, refactoring, and fixing.
Related
I was tasked with building a framework to unify half a dozen Python webapps that share some common functionality.
The current setup consists of an Apache web server serving up a specific CGI script (one per app) that talks to a simple XML-RPC service (again, a different service for each app), which in turn runs DB queries, writes files to disk, runs external scripts, etc.
The idea was to do away with redundant code (such as six different RPC services creating and managing six different connections to the same DB, executing very similar queries, etc.), as well as the whole XML-RPC thing. Also, to keep things in Python land and maintain a double layer of abstraction: the client-side app can't talk directly to the API that runs the queries and writes to disk; it must talk to an intermediate API (currently the CGI scripts) that in turn talks to the processes that perform all the querying and disk writing.
Since this is my first major Python undertaking, I am somewhat struggling to find the correct, DRY, modular, and Pythonic way of architecting this framework.
Ideally, I would like to construct a REST API that the code on the client side (currently implemented in React.js) can talk to directly.
This API should then be able to interface with another, internal REST API (currently this is handled by the XML-RPC processes) that will do all the heavy lifting: the aforementioned DB querying (perhaps using an ORM), executing tasks, writing files, and the like.
However, I'm not sure how I should implement the inter-API communication: should it be over HTTP (even if they live on the same server), or some other protocol I'm not aware of?
Also, how should I go about consolidating the common bits of code?
Should there be a separate module/process that keeps a live connection to our DB, with the disparate APIs connecting to it?
How can I implement an ORM in such a scheme? I don't want new developers querying the DB directly; I would like them to have an explicit model that they talk to instead.
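To make that last point concrete, what I'm picturing is a single shared models module along these lines (all names are invented):

```python
# models.py - one shared module; services import these models instead of
# opening their own raw database connections or writing SQL by hand.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Item(Base):
    __tablename__ = 'items'
    id = Column(Integer, primary_key=True)
    name = Column(String(120), nullable=False)

# A single pooled engine replaces the six per-app connections;
# sessionmaker hands out units of work to each service.
engine = create_engine('mysql://app:secret@localhost/appdb', pool_size=5)
Session = sessionmaker(bind=engine)
```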
I know this is a long and convoluted way of saying "I'm stuck", and for that I apologize; I would happily provide more detail if asked. However, what I'm asking for is more a set of general guidelines:
How should different REST APIs (implemented using Flask + SQLAlchemy, for example) talk to each other, whether they live on the same server or on different ones? (See the sketch after this list.)
How should I abstract away common tasks, like connecting to a DB, into a separate process?
Finally, what are some good, common-sense rules for keeping this whole thing modular?
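For the first question, here is the kind of plain-HTTP wiring I'm imagining between two Flask services (routes and ports invented); as far as I can tell, the same code works whether they share a host or not:

```python
# internal_api.py - the heavy-lifting service (Flask + SQLAlchemy)
from flask import Flask, jsonify

internal = Flask(__name__)

@internal.route('/items/<int:item_id>')
def get_item(item_id):
    # ...query the database through the shared SQLAlchemy models...
    return jsonify({'id': item_id})

# public_api.py - the client-facing service calls it over plain HTTP
import requests

INTERNAL_BASE = 'http://127.0.0.1:5001'  # same host today, another box later

def fetch_item(item_id):
    resp = requests.get('%s/items/%d' % (INTERNAL_BASE, item_id), timeout=5)
    resp.raise_for_status()
    return resp.json()
```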
I am currently working on a complex web interface and backend that will need to address several issues:
- Scalability
  - multiple deployments of varying load demands
- Very structured authorization groups
  - different views for different user groups
- Admin panel
  - user/content management
- Large managed database
  - current data
  - long-term stored data (histories)
- Data updates
  - polling (e.g. search queries, static pages/files, report generation per request)
  - pushing, likely via WebSockets (e.g. real-time notifications)
- Varying protocols (e.g. HTTP, SSL, WebSockets)
I would like to use Python, because I have grown to really enjoy the language, and I am considering some combo of Django and Twisted.
I have some experience with Django, which I love for its MVT style of application programming, its authorization models, its admin panel, and its database API. However, it is not so strong on some of the data requirements I have, in particular the real-time aspects.
Now, I have not really used Twisted before, but I have seen many interesting things about it, in particular its async aspects and its ability to run many protocols.
The problem in getting the two to work together is obvious: Django is a blocking server and Twisted is designed to be non-blocking. I have seen some topics stating that using the two together is possible and that people have had success with it. It also seems possible to run both and proxy different URLs to each, but getting authentication working across the two may become tricky.
Having said all of that, I would like to ask if I am on the right track for implementing this system, and for suggestions on how to use the two together, alternatives, or whether I should just kick one out (at this point, I guess it would have to be Django, because the real-time stuff is necessary). I should mention that I have already written some of the preliminary data models and views in Django.
I am quite experienced on the client side of things (JS, CSS, HTML), but I am not so savvy on the server side. Any input would be helpful; thanks.
You can definitely use Twisted with Django. Several projects have used the two together to good effect. twistd web --wsgi provides a basic way to get it set up, and there's a great example with more bells and whistles, like static content, by Alex Clemesha on GitHub.
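For reference, the same wiring can be done programmatically with Twisted's WSGIResource; a minimal sketch, assuming your Django project exposes its WSGI callable at mysite.wsgi:

```python
from twisted.internet import reactor
from twisted.web.server import Site
from twisted.web.wsgi import WSGIResource

from mysite.wsgi import application  # Django's WSGI callable (placeholder path)

# Run the blocking Django app in Twisted's thread pool while the
# reactor stays free for non-blocking protocols.
resource = WSGIResource(reactor, reactor.getThreadPool(), application)
reactor.listenTCP(8080, Site(resource))
reactor.run()
```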
I need to create a project that has a web frontend to manage synchronous task execution (a la Fabric), async tasks (AMQP), and long-polling/Ajax for tabular viewing of results and queues / large, frequently changing datasets (think tail -f syslog). I have an existing Python codebase for a lot of the implementation-specific stuff.
After looking at a bunch of existing frameworks, the obvious answer appears to be Django + Celery. However, I do not want to "learn Django", nor do I need 95% of its functionality. I just need simple auth, maybe SQLAlchemy, and easy Ajax; AMQP and XML-RPC would be helpful.
I would consider using Mongrel2, but I have a strong preference for RabbitMQ over 0MQ (for a few implementation-specific reasons).
I originally spent a great deal of time learning Twisted, and ended up getting a few hundred useful LOC out of it, but I found that I was twisting (lol) too much of my platform code to fit its callback model. It actually fit the bill very well (except for its own AMQP implementation), but it was so frustrating, and I went through so many iterations of code (one for each "Twisted aha moment"), that it's 100% out.
Can somebody please help me wade through the mire? Tornado? Pylons? repoze? Pyramid? Flask? Bottle? CherryPy? Web2py? Paster/WebOb? Anything else from http://wiki.python.org/moin/WebFrameworks?
Edit:
To be clear, integration with RabbitMQ (or another AMQP provider) is of the utmost importance, and is really the crux of the problem.
I don't have a full overview of Python web frameworks, but I want to share my point of view on two of them:
Bottle is light and works fine. If you want something easy to learn and easy to use, it may be the right choice. I used it for quite simple front-end apps running locally and liked it very much.
Tornado seems to me a very good non-blocking server for real-time web apps. Combined with TornadIO, it makes Ajax long-polling quite easy. However, it may be a little harder to learn than Bottle. I would recommend having a look at the chat app in the examples folder of TornadIO.
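To give a sense of the learning curve, a minimal Tornado app is still only a few lines; TornadIO's long-polling sits on top of something like this:

```python
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write('hello from tornado')

application = tornado.web.Application([(r'/', MainHandler)])

if __name__ == '__main__':
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
```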
I hope this helps.
If you are going to use AMQP long-term, then I would steer clear of Celery, because it uses AMQP in a weird way that suggests the developers did not understand the AMQP model.
Bottle is a nice framework for knocking together RESTful apps (I use it to create mock servers for testing), and if you already have the code that does the real work, you may be surprised at how short a Bottle app can be.
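For a sense of scale, a complete RESTful endpoint in Bottle is about ten lines; the route and payload here are invented:

```python
from bottle import Bottle, request, run

app = Bottle()

@app.post('/tasks')
def create_task():
    payload = request.json  # parsed JSON body, or None
    # ...hand `payload` off to your existing worker code here...
    return {'status': 'queued', 'task': payload}  # dicts are returned as JSON

run(app, host='localhost', port=8080)
```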
I'm currently building Python apps using RabbitMQ, with amqplib by way of kombu. I originally chose kombu in case I wanted to swap libraries and use pika or something else, but now I wish I had just gone with amqplib and built a proper Pythonic AMQP model on top of that.
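For what it's worth, publishing a message through kombu looks roughly like this (the broker URL, exchange, and queue names are examples):

```python
from kombu import Connection, Exchange, Queue

task_exchange = Exchange('tasks', type='direct')
task_queue = Queue('tasks', task_exchange, routing_key='tasks')

with Connection('amqp://guest:guest@localhost//') as conn:
    producer = conn.Producer(serializer='json')
    # declare=[...] ensures the queue exists before the first publish
    producer.publish({'job': 'resize', 'id': 42},
                     exchange=task_exchange,
                     routing_key='tasks',
                     declare=[task_queue])
```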
Do spend some time on the RabbitMQ site reading some of the blogs and slide presentations on AMQP before you get too deep into coding, or you won't really understand the AMQP model and will make things harder for yourself.
Please don't use XML-RPC unless you have to talk to other apps. Bottle makes simple RESTful apps so simple that XML-RPC is just unnecessary complexity.
A couple of suggestions.
CherryPy is a great low-level framework. It doesn't provide a lot of functionality, but it provides a very easy system for mapping HTTP requests to function calls.
web.py is another extremely lightweight and easy-to-use framework. It is more comprehensive than CherryPy, including templates and other features.
Plain WSGI is not a bad choice if your needs are extremely simple, although it is a little more complicated for doing simple stuff than CherryPy or web.py. WSGI is the lowest common denominator; these days most Python web frameworks are built on top of it.
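For comparison, the whole of a "plain WSGI" hello-world, runnable with just the standard library, is:

```python
# A complete WSGI application: one callable, no framework.
def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello, WSGI!']

if __name__ == '__main__':
    # wsgiref ships with the standard library; handy for local testing
    from wsgiref.simple_server import make_server
    make_server('localhost', 8000, application).serve_forever()
```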
Is there a difference between using FAPWS3 and mod_wsgi when dealing with Django?
FAPWS3 seems a lot faster when serving requests to Python scripts. I would like to know if I'm missing out on anything. :)
Any ideas?
The underlying web server is not the bottleneck; your application and database access are. The differences between underlying web servers are going to be very minimal or nonexistent in the context of an actual full application stack. You cannot base decisions on hello-world-style tests, as they are pretty meaningless. Decisions should therefore be based on the quality and stability of the hosting solutions under load, as well as on ease of configuration and support, including your own competence to manage a particular setup. If you have no idea how to configure and support a particular web server properly (e.g., Apache), then why would you use it?
Here is the best explanation I have seen on the web at the moment:
http://nichol.as/benchmark-of-python-web-servers
Quote from nichol.as:
When you are just interested in quickly hosting your threaded application you really can't go wrong with Apache ModWSGI. Even though Apache ModWSGI might put a little more strain on your memory requirements, there is a lot to go for in terms of functionality. For example, protecting part of your website by using an LDAP server is as easy as enabling a module. Standalone CherryPy also shows great performance and functionality and is really a viable (fully Python) alternative which can lower memory requirements.

When you are a little more adventurous you can look at uWSGI and FAPWS3. They are relatively new compared to CherryPy and ModWSGI, but they show a significant performance increase and do have lower memory requirements.
I'm looking to implement data synchronization between servers and distributed clients. The data source on the server is MySQL with Django on top. The client can vary. Updates can take place on either the client or the server, and the connection between server and client is not reliable (e.g., changes made on a disconnected cell phone should get synced when the phone has a connection again).
S. Lott suggests using a version-control design pattern in this question, which makes sense. I'm wondering if there are any existing packages or implementations of this I can use. Or should I directly make use of svn/git/etc.?
Are there other alternatives? There must be synchronization frameworks or detailed descriptions of algorithms out there, but I'm not having a lot of luck finding them. I'd appreciate it if you could point me in the right direction.
Perhaps using plain old rsync is enough.
AFAIK there isn't any generic solution to this, mainly due to the diverse requirements for synchronization.
In one of our earlier projects, we implemented a Spring Batch-based sync mechanism that relies on a last-updated timestamp field on each of the tables that take part in the sync.
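In Python terms, the core of that timestamp approach is a per-table query like the sketch below (table and column names invented); conflict detection still has to be layered on top:

```python
def pull_changes(cursor, last_sync):
    # Fetch rows touched since the previous sync run; assumes each synced
    # table carries an indexed `updated_at` column maintained by the
    # application or a database trigger.
    cursor.execute(
        'SELECT id, payload, updated_at FROM items WHERE updated_at > %s',
        (last_sync,),
    )
    return cursor.fetchall()
```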
I have heard about SyncML but don't have much experience with it.
If you have a single server and multiple clients, you could consider a JMS-based approach.
The data is bundled and placed in queues (or topics) and is pulled by clients.
In your case, since updates are bidirectional, you need to handle conflict detection as well. This brings additional complexity.