Is there a way to receive and process packets intercepted by HTTP Toolkit programmatically using Python?
Is there any internal API I can access?
Ideally I would like to receive the packets in a JSON or HAR format.
Within HTTP Toolkit itself, this isn't possible right now, but it is planned for the future. You can +1 the issue to vote for it here: https://github.com/httptoolkit/httptoolkit/issues/37. With that, you'd be able to add your own scripts within HTTP Toolkit which could process or store packets elsewhere any way you like, including sending them to a Python process.
In the meantime, this may be possible using Mockttp. Mockttp is the internals of HTTP Toolkit, packaged as an open-source JavaScript library that you can use to build your own fully scriptable proxy; once that's working, you can easily add logic on top to forward packets to Python. There's a getting started guide here: https://httptoolkit.tech/blog/javascript-mitm-proxy-mockttp/.
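If you go that route, the Python end can stay very small. A sketch of just the receiving side, assuming your Mockttp proxy forwards each intercepted exchange as a JSON POST (the /packets route, the port, and Flask itself are illustrative assumptions, not part of HTTP Toolkit or Mockttp):

    from flask import Flask, request

    app = Flask(__name__)
    captured = []

    @app.route("/packets", methods=["POST"])
    def receive_packet():
        entry = request.get_json()   # e.g. a HAR-style entry built by the proxy
        captured.append(entry)       # or write it to disk / a database instead
        return "", 204

    if __name__ == "__main__":
        app.run(port=9000)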
Related
Currently I am designing and programming a piece of software that controls a series of devices. The software is planned to have a REST interface through which you would be able to control the software (and the devices) remotely.
Now, a very basic abstraction of the architecture could look something like this:
As you can see, the system is composed of a master controller, which then handles and monitors different modules that are not dependent on each other. The Front End module is shown as a concrete example in the diagram, while the others are general abstractions; they could be anything (a Database module, a MessageBus module, etc.).
For the actual REST interface, data retrieval, data storage, and control commands are all being implemented.
My "problem" is that I can't decide how these commands should be propagated down the line.
Some cases of possible commands:
A command requesting to turn on/off, restart, or otherwise control a device handled by another module
A command requesting to restart/reload the software
A command to retrieve data from another module
Now I see a few possible ways of implementing the actual logic:
All received REST commands are dispatched through a message bus. In this case each request would receive a unique identifier, which could then be used to retrieve the status of the request
All received REST commands make direct calls to other modules
Both of these have pros and cons:
The second method could very easily degenerate into spaghetti code and would be hard to debug and extend, since there is heavy use of multithreading across the different modules. But it is possibly the fastest way of handling a command and retrieving data, which matters since the project requires speed and responsiveness.
The first method lacks the advantages of the second; however, it would help keep the code and architecture clean and free of cross-module dependencies. Furthermore, a Console channel is also planned, which could in theory use the same mechanism (see the sketch below).
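For what it's worth, a minimal sketch of what option 1 could look like: every REST command gets a unique id, is put on a bus (a plain in-process queue here), and the id is later used to look up the request's status. All names are illustrative, not taken from the project:

    import queue
    import uuid

    bus = queue.Queue()
    statuses = {}

    def submit(command):
        """Called by the REST layer; returns an id the client can poll with."""
        request_id = str(uuid.uuid4())
        statuses[request_id] = "pending"
        bus.put((request_id, command))
        return request_id

    def worker():
        """Runs in the module that owns the command (e.g. in its own thread)."""
        while True:
            request_id, command = bus.get()
            # ... hand the command to the right device / module here ...
            statuses[request_id] = "done"

    def status(request_id):
        return statuses.get(request_id, "unknown")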
There's another method that I thought of while brainstorming about the problem:
To force the REST channel to forward the incoming requests to the actual FrontEnd module and then "wait" until it receives a response. The FrontEnd module would then have to directly call other modules for any information or actions requested.
This method, however, is not that "different" from method no. 2.
Could anyone offer any advice? Perhaps ideas on the implementation or design decisions?
In case you are wondering, the software is being written in Python, but I don't think this is relevant to the question.
So basically we decided to ditch the RESTful approach and simply went with sockets (WebSockets in particular).
The commands sent through the WebSockets are formatted as JSON and resemble REST in a way (basically a request contains a "URI", an "Action" [get, put, post, etc.] and a "body").
A command comes to the front-end control part of the system and is then pushed to a message bus, where another part of the system has already subscribed to these commands. After it processes the data or executes the command, the result is returned through the message bus and dispatched to the client through the WebSocket.
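To make the format concrete, a request frame might look roughly like this (the field names are the ones described above; the values are made up):

    import json

    command = {
        "URI": "/devices/12/power",
        "Action": "put",
        "body": {"state": "on"},
    }
    frame = json.dumps(command)   # sent to the front end over the WebSocket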
A coworker and I are currently using OpenCPU to expose analytics we write in R to other applications via a REST API. There has been a need recently to leverage some python libraries in a similar manner. From the OpenCPU description:
OpenCPU is a system for embedded scientific computing and reproducible research. The OpenCPU server provides a reliable and interoperable HTTP API for data analysis based on R. You can either use the public servers or host your own.
Basically we update an R library on the server and it automatically exposes the updated and new functions at REST endpoints. It takes care of marshalling the data from JSON to S3 (R's object system) and then back to JSON. There is no need to manually configure routes with OpenCPU.
My question then, assuming we are operating in a secure environment, is: does an equivalent for Python exist? I've tried searching but have had little luck so far.
Thanks!
Have you looked at IPython? I'm not sure about supporting library updates.
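If nothing turnkey fits, the pattern itself can be sketched by hand. This is not an OpenCPU equivalent, just an illustration of the idea in Python using Flask (the framework, the route, and the module/function names are all assumptions): expose whatever functions a module defines at a generic endpoint and let the framework do the JSON marshalling, so redeploying the package "exposes" new functions without any route configuration. Access control over which functions may be called is of course entirely up to you here.

    import importlib
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/library/<module_name>/<func_name>", methods=["POST"])
    def call(module_name, func_name):
        module = importlib.import_module(module_name)   # e.g. your analytics package
        func = getattr(module, func_name)
        result = func(**(request.get_json() or {}))     # kwargs come from the JSON body
        return jsonify(result)                          # result marshalled back to JSON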
I am trying to develop a Python PyQt program that allows users to enter personal particulars and review them at a later time for processing purposes.
The program will be used by fewer than 5 people at the same time. So I am thinking of using an SQLite3 database, as I believe it should be able to cope with that amount of traffic.
The framework I have in mind is that the clients will each have their own copy of my Python PyQt program on their machine. Whenever they perform any operation that requires a data read/write, it will connect to the server through the internet and read/write from the sqlite.db on the server.
Basically, the server will be nothing but remote data storage.
Currently, I am able to create the required GUI for data input by using various widgets like QLineEdit, QComboBox, QTextEdit and so on.
But I have never done network programming before, so I have no idea how to implement a server that stores the SQLite data file for my software. So my questions are:
(1) If I have a PC with a 24/7 internet connection, how do I set it up so that it can act as a server that stores the data file for my software?
(2) In what way can/should my program communicate with that server over the internet?
Even if you can't give me an exact answer, I would appreciate any pointers so that I can look them up and study them.
Any constructive advice will be appreciated.
FYI: all the PCs will be running Windows XP SP3 32-bit.
There are different ways for a client to communicate with a server.
You can use
XMLRPC to create an object with methods that are called on the server side
You can use HTTP and REST for the server, with the library requests or urllib for the client (see the sketch after this list)
For the latter you can use Flask, Bottle, Django or other frameworks to create a website that serves the content (there are plenty of tutorials for each)
You can use Pyro to remotely access the objects on the server. Useful if the clients should also communicate with each other.
You can create your own protocol. You will learn a lot and value the other options.
The list is not complete
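To make option 2 concrete, a minimal sketch (the /records route, the field names, and the server address are made up for illustration):

    # server.py -- run on the always-on PC
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    records = []                      # in a real program: write to the SQLite file

    @app.route("/records", methods=["GET", "POST"])
    def handle_records():
        if request.method == "POST":
            records.append(request.get_json())
            return jsonify({"status": "ok"}), 201
        return jsonify(records)

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)

The PyQt client side then only needs requests:

    # client side, e.g. in a save button's slot
    import requests
    requests.post("http://your-server-address:5000/records", json={"name": "Alice"})
    rows = requests.get("http://your-server-address:5000/records").json()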
I suggest that you have a look at XMLRPC if that fits. For number 2 I can say that many APIs use such an HTTP interface (Twitter, GitHub, Facebook, Google). It is also easy for other people to use.
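XMLRPC is in the standard library, so a sketch of it is short (module paths shown are Python 3; on Python 2 they are SimpleXMLRPCServer and xmlrpclib; the address and method name are placeholders):

    # server.py
    from xmlrpc.server import SimpleXMLRPCServer

    def save_record(record):
        # write the dict to the SQLite database here
        return True

    server = SimpleXMLRPCServer(("0.0.0.0", 8000))
    server.register_function(save_record)
    server.serve_forever()

    # client.py
    import xmlrpc.client
    proxy = xmlrpc.client.ServerProxy("http://your-server-address:8000/")
    proxy.save_record({"name": "Alice"})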
Security is important. I am not an expert. If you send a username and password, do not send them in plain text; use SSL to encrypt the connection. If you cannot get SSL to work with Python, you can use stunnel.
I'm interested in something based on Jabber, but I didn't find a free/open-source one, so I'm thinking of writing one.
I've installed a Jabber server and now thinking about the ways in which I can write the client. I'm thinking of one of either these two methods.
1) An Ajax call made to a Jabber script running on the web server that takes care of connecting to the Jabber server. But then I thought that, because of the dependencies involved in the Jabber client, it might end up consuming too much memory when a few clients connect.
2) The other method is to run a client as a daemon that takes care of all the heavy lifting. This way I need only one instance of the client, which sends a spoofed message (the sender's name being whatever the user entered on the site). A simple script running on the web server talks to this daemon over some sort of API (XMLRPC or msgpack maybe?)
I think #2 is better but I'm not sure. Are there other ways I can implement this? I'm considering using Perl or Python for this.
Jabber is usually called XMPP nowadays, and there are dozens of clients and servers, something for every language. If you are using JavaScript (you mention Ajax), you probably want Strophe. Most servers are modular, so you only load the features you need (consider Tigase or ejabberd); on the Python side, xmpppy is a client library you could build your daemon with. Writing your own is an even worse idea than it sounds.
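For approach #2, the daemon end could be as small as this sketch, assuming xmpppy (the JIDs, password, and function name are placeholders):

    import xmpp

    jid = xmpp.protocol.JID("relay@example.com")       # the daemon's own account
    client = xmpp.Client(jid.getDomain(), debug=[])
    client.connect()
    client.auth(jid.getNode(), "secret", resource="daemon")

    def relay(sender_name, to_jid, text):
        """Send one chat message on behalf of a site visitor."""
        client.send(xmpp.protocol.Message(to_jid, "%s: %s" % (sender_name, text)))

    # expose relay() to the web server via XMLRPC, a msgpack-based RPC, a socket, etc.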
BOSH
Install Prosody because it is really easily installed and has BOSH support built in. You could skip this, but then you need to find out how to use BOSH via ejabberd.
Use strophe.js to implement this (using BOSH). New browsers support cross-domain requests (CORS -> read the proxy-less BOSH part). For old browsers you could use a proxy, or Flash in the middle as a proxy.
Read Professional XMPP Programming with JavaScript and jQuery to learn Strophe. It even has chapters explaining how to create a chat.
Node.js
Or you could consider installing Node.js and creating your chat system using socket.io.
Here is what I would like to do, and I want to know how some people with experience in this field do this:
With three POST requests I get from the HTTP server:
widgets and layout
app logic (minimal)
data
Or maybe it's better to combine the first two, or all three. I'm thinking of using PyQt. I think I can load .ui files. I can parse JSON data. I just think it would be rather dangerous to pass code over a network to be executed on the client. If someone can hijack the connection, or change the app's settings so it talks to a bogus server, that is nasty.
I want to do it this way because it keeps all the clients up to date. It's sort of like a web app, but simpler because of Qt. Essentially the "thin" app is just a minimal compiled Python file that loads data from a server.
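The UI-and-data part of that (without any remote code) is straightforward; a sketch assuming PyQt5 and requests, with placeholder URLs and widget names:

    import io
    import sys

    import requests
    from PyQt5 import QtWidgets, uic

    SERVER = "https://example.com"                      # placeholder

    app = QtWidgets.QApplication(sys.argv)

    # 1) widgets and layout: a Qt Designer .ui file served over HTTPS
    ui_xml = requests.get(SERVER + "/layout.ui").text   # certificate verified by default
    window = uic.loadUi(io.StringIO(ui_xml))

    # 2) data: plain JSON, parsed locally; nothing gets executed
    data = requests.post(SERVER + "/data", json={"view": "main"}).json()
    window.nameEdit.setText(data.get("name", ""))       # assumes the .ui defines nameEdit

    window.show()
    sys.exit(app.exec_())

The "app logic" request is the part that raises the questions below.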
How can I do this without introducing security issues on the client? Is HTTPS good enough? Is there a way to get PyQt to run in a sandbox of sorts?
PS. I'm not stuck on Qt or Python. I do like the concept though. I don't really want to use Java, server- or client-side.
Your desire to send "app logic" from the server to the client without sending "code" is inherently self-contradictory, though you may not realize that yet -- even if the "logic" you're sending is in some simplified ad-hoc "language" (which you don't even think of as a language;-), to all intents and purposes your Python code will be interpreting that language and thereby execute that code. You may "sandbox" things to some extent, but in the end, that's what you're doing.
To avoid hijackings and other tricks, instead, use HTTPS and validate the server's cert in your client: that will protect you from all the problems you're worrying about (if somebody can edit the app enough to defeat the HTTPS cert validation, they can edit it enough to make it run whatever code they want, without any need to send that code from a server;-).
Once you're using HTTPS, having the server send Python modules (in source form if you need to support multiple Python versions on the clients, else bytecode is fine) and having the client save them to disk and import/reload them will be just fine. You'll basically be doing a variant of the classic "plugin architecture" where the plugins happen to be sent from the server (instead of being found on disk in a given location).
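A sketch of that variant, with a placeholder URL, directory, and module name (requests verifies the server's certificate by default for https:// URLs, which is the validation step mentioned above):

    import importlib.util
    import pathlib

    import requests

    PLUGIN_DIR = pathlib.Path.home() / ".myapp" / "plugins"   # made-up location

    def fetch_and_load(url, name):
        """Download one module over HTTPS, save it, and (re)import it."""
        PLUGIN_DIR.mkdir(parents=True, exist_ok=True)
        resp = requests.get(url)            # raises on an invalid TLS certificate
        resp.raise_for_status()
        path = PLUGIN_DIR / (name + ".py")
        path.write_text(resp.text)
        spec = importlib.util.spec_from_file_location(name, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        return module

    reports = fetch_and_load("https://example.com/plugins/reports.py", "reports")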
Use a web browser: it is a well-documented system that does everything you want. It is also relatively fast to create simple graphical applications in a browser. Examples for my reasoning:
The Sage math environment has built its graphical client as an application that runs in a browser, together with a local web server.
There is the Pyjamas project that compiles Python to JavaScript. This is IMHO worth a try.
Edit:
You could try PyPy's sandboxed interpreter as a secure Python interpreter for code that was transferred over the network.
And then there is the simplest solution: simply send Python modules over the network, but sign and/or encrypt them. This is the way Linux distributions distribute packages. You store a cryptographic key on the local computer. The server signs/encrypts the code before it sends it, with the matching key. GPG should be able to do it.
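For the signing variant, a sketch of the client-side check using the python-gnupg package (an assumption; the file names and keyring path are placeholders), verifying a detached signature before the module is imported:

    import gnupg

    gpg = gnupg.GPG(gnupghome="/path/to/client/keyring")   # holds the server's public key

    with open("reports.py.sig", "rb") as sig:
        verified = gpg.verify_file(sig, "reports.py")       # detached-signature check

    if not verified.valid:
        raise RuntimeError("module signature did not verify; refusing to import it")

    # only import the module after the signature checks out
    # (assumes reports.py is somewhere on sys.path, e.g. the current directory)
    import importlib
    reports = importlib.import_module("reports")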