SAP RFC server with Python, is it possible? - python

I have a service that can make SAP RFC requests to some server. Assume that I cannot modify this service, but I need to handle such requests and process their data.
So I want to develop my own server that will process RFC requests; I prefer Python but can do it in C++ too.
I read that this should be possible with PyRFC:
https://sap.github.io/PyRFC/server.html#server-server
but it requires "gateway parameters" and I don't know what to use here; in my design I don't need SAP Gateway at all, I just want to process requests in my standalone server.
Is it possible to develop my own standalone server for processing RFC requests with Python or C++?
Or can it only be used with SAP Gateway? In that case, what do I need to do on the SAP Gateway side?

You will need an SAP Gateway server/service anyway - and by that, I don't mean the SAP Gateway product that is used to provide OData services, but the sapgw process that is part of the SAP NetWeaver Application Server ABAP installation. This process is required because your custom RFC server registers itself with a sapgw server (specifying an identifier in the process), and the sapgw instance will route outgoing (!) calls to your implementation based on the identifier that is specified in the RFC destination as well.
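To make the registration concrete, here is a rough sketch of the gateway parameters for a registered server in sapnwrfc.ini format (all values are placeholders, and the exact keys depend on your NW RFC SDK version):

```ini
DEST=MY_SERVER
PROGRAM_ID=MY_PROGRAM_ID
GWHOST=sapserver.example.com
GWSERV=sapgw00
```

On the ABAP side, an RFC destination of type T (TCP/IP) configured as "Registered Server Program" with the same program ID is what routes calls out to your server.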
As for direct communication between non-SAP systems (so non-outgoing RFC calls) - that should be possible as well, but I strongly believe that the calling service will have to be adapted to the fact that it's not talking to a "real" ABAP back-end. You'll also have to mock the DDIC Repository access, the user authentication process, ... - realistically speaking, it might be easier to simply install an ABAP system and do the processing there.

Related

Simple way for message passing in distributed system

I am implementing a small distributed system (in Python) with nodes behind firewalls. What is the easiest way to pass messages between the nodes under the following restrictions:
I don't want to open any ports or punch holes in the firewall
Also, I don't want to export/forward any internal ports outside my network
A time delay of less than, say, 5 minutes is acceptable, but closer to real time would be nice, if possible.
Restrictions 1 and 2 mean I need to use a third party accessible by all my nodes. From this it follows that I probably also want to use encryption
Solutions considered:
Email - by setting up separate or shared free email accounts (e.g. Gmail) which each client connects to using IMAP/SMTP
Google docs - using a shared online spreadsheet (e.g. Google docs) and some python library for accessing/changing cells using a polling mechanism
XMPP using connections to a third party server
IRC
Renting a cheap $5 VPS and setting up a ZeroMQ publish-subscribe node (or any other protocol) forwarded over SSH and having all nodes connect to it
Are there any other public (free) accessible message queues available (or platforms that can be misused as a message queue)?
I am aware of the solution of setting up my own message broker (RabbitMQ, Mosquitto, etc.) and making it accessible to my nodes somehow (SSH forwarding to a third host, etc.). But my question is primarily about solutions that don't require me to do that, i.e. solutions that utilize already available/accessible third-party infrastructure. (Are there any public message brokers I can use?)
How about Mosquitto: a message broker that implements the MQ Telemetry Transport protocol versions 3.1 and 3.1.1. MQTT provides a lightweight method of carrying out messaging using a publish/subscribe model, which makes it suitable for "machine to machine" messaging. It supports encryption. Time to set up: in approximately 15 minutes you should be up and running. Since it is a message broker, you can write your own code to ensure you can communicate with third-party solutions. It also achieves soft real time, and depending on your setup you can achieve hard real time. After you look into Mosquitto, have a look at Paho, the Eclipse Foundation project that the Mosquitto client libraries were contributed to.
Paho also provides a Python Client, which offers support for both MQTT v3.1 and v3.1.1 on Python 2.7 or 3.x. It also provides some helper functions to make publishing one off messages to an MQTT server very straightforward. Plenty of documentation and examples to get you up and running.
I would recommend RabbitMQ or Redis (RabbitMQ preferred, because it is a very mature technology and insanely reliable). ZMQ is an option if you want a single-hop messaging system instead of a brokered messaging system such as RabbitMQ, but ZMQ is harder to use than RabbitMQ. It also depends on how you want to utilize the message passing: if it is task dispatch, you can use Celery; if you need slightly more low-level access, use Kombu with the librabbitmq transport.
Found https://www.cloudamqp.com/ which offers a free plan with a cloud-based installation of RabbitMQ. I will try that and see if it fulfills my needs.

Using Python Twisted Web HTTPChannel vs Resource

I have a specific protocol specification I am writing to that utilizes a Station to Station Protocol for authentication. Thus far I've been using Twisted web and the Resource class along with render_GET, render_POST, but once the authentication step is complete with my client the communication is encrypted, even at the HTTP protocol level. That is I lose visibility to GET /resource and only receive encrypted bytes on the TCP session. I tried looking at intercepting the render method but it is not called if there is no HTTP method parsed. Is it possible to use Resource still or do I need to "downshift" to using HTTPChannel and its ability to hand me raw data?

How to communicate between client device and webserver?

I've built a little device based on the Raspberry Pi. Now I want to configure it using my web server. The idea is that I enter all the details on my Django web page and then the device just pulls them off the server.
But there are two problems I'm not sure how to solve:
I have multiple devices for multiple users, so some kind of login must be provided.
The device also sends pictures from time to time. Right now it's using FTP with a general login, but I want to personalize that too for every device. The uploads will need a resume function, so HTTP is out!
So the basic question is: Should I get started with sockets or is there a better and safer way to do it? Maybe there is some kind of open source library that's been tested a lot?
Instead of hand coding sockets, I would suggest using HTTP with BASIC authentication to communicate between the device and the web server. You can uniquely assign an id/pwd to each device, and BASIC authentication is well supported by all web servers and client side libraries.
There are some security concerns with BASIC authentication even if you use HTTPS, but it may be acceptable in your particular case here.
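For illustration, this is how the BASIC Authorization header is built from per-device credentials (device01/s3cret are placeholder values; in practice your HTTP client library will usually do this for you):

```python
import base64

def build_basic_auth_header(device_id: str, password: str) -> str:
    """Build an HTTP BASIC Authorization header value (RFC 7617):
    base64 of "id:password", prefixed with "Basic "."""
    token = base64.b64encode(f"{device_id}:{password}".encode()).decode()
    return f"Basic {token}"

# Each device gets its own credentials; send this header over HTTPS only,
# since base64 is an encoding, not encryption.
headers = {"Authorization": build_basic_auth_header("device01", "s3cret")}
```

Because the credential is just a header, the same scheme works unchanged for the picture-upload endpoint as well.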
Maybe you could use SSH, with Fabric for instance. Here's an example.

Python JSON-RPC_2.0 TCP Server Client Explained

I'm having a difficult time fully understanding the nature of a TCP server/client relationship when a JSON string is sent to the server. The information I need may be out there, but I'm perhaps not using the correct search parameters as I'm looking.
I've built a Python TCP, JSON-RPC Server from the following examples:
https://github.com/joshmarshall/jsonrpclib
http://code.activestate.com/recipes/552751-json-rpc-server-and-client/
In both cases, I can communicate with the Python server from a Python console on a different computer, sending commands from one (the client) to the other (server). In all of the examples, I've had to install the libraries mentioned above on both the client and the server machines in order to facilitate the TCP communication.
So the background to my situation and question is, when does JSON enter the mix? This is what I want to do:
Setup a Python TCP server that accepts a JSON string from a remote client inside (or outside) the network. The server parses the JSON string, fetches the method and parameters from the objectified string, and executes the method. The server then sends a JSON string result to the calling client. In this case, the client is a mobile application (iPad, Android, etc) with a JavaScript library that I'll use to send the requests to the server.
Why would I need a Python client? From what I can gather, the client just needs to open a connection to the server and then send the JSON string, right? Why do all the code samples include Python client examples? Are they assuming a server machine is going to talk to a server machine, so they have included client code to help generate the JSON string that will be sent to the server?
If I assume that a Python client isn't really needed for anything, I've been sending JSON strings to the Python server from the iPad, but in each case the server is reporting a "Bad request syntax" error. I'll pen a new question on that issue if I'm understanding the current question correctly.
Insight is appreciated.
The JSON encoding is the lingua franca of your RPC protocol, so you can indeed use any client you like. The implementations you found for JSON-RPC use the HTTP protocol, a very specific communication protocol built on top of TCP/IP, but you can implement the same protocol over raw TCP/IP sockets if so required.
The examples include both the Python client and the server because they illustrate how to implement the JSON-RPC standard in Python, not in JavaScript or C or Lisp. They focus on the implementation in one language. The JSON-RPC standard however, is language agnostic. It doesn't matter what language you write either the server or the client in, as long as they use the same standard.
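To show that no Python client is required, here is a minimal sketch of a JSON-RPC 2.0 server over a raw TCP socket; any client that can open a socket and send JSON can call it. The "add" method and the newline-delimited framing are assumptions for illustration, not part of the JSON-RPC spec:

```python
import json
import socket
import socketserver
import threading

# Hypothetical method table -- replace with your real handlers.
METHODS = {"add": lambda a, b: a + b}

class JsonRpcHandler(socketserver.StreamRequestHandler):
    """Reads one newline-delimited JSON-RPC 2.0 request, writes the response."""
    def handle(self):
        req = json.loads(self.rfile.readline())
        result = METHODS[req["method"]](*req["params"])
        resp = {"jsonrpc": "2.0", "result": result, "id": req["id"]}
        self.wfile.write((json.dumps(resp) + "\n").encode())

def serve():
    # Bind to an ephemeral localhost port and serve in a background thread.
    server = socketserver.TCPServer(("127.0.0.1", 0), JsonRpcHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def call(port, method, params):
    # This "client" is just a socket plus a JSON string -- an iPad app
    # sending the same bytes would get the same answer.
    with socket.create_connection(("127.0.0.1", port)) as sock:
        payload = {"jsonrpc": "2.0", "method": method, "params": params, "id": 1}
        sock.sendall((json.dumps(payload) + "\n").encode())
        return json.loads(sock.makefile().readline())
```

Note that the "Bad request syntax" error you saw suggests the server examples you used speak JSON-RPC over HTTP, so they expect an HTTP request line before the JSON body, not raw JSON bytes.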

I need a message/queuing solution for my web-based system

I am looking for a message/queuing solution for my web based system running on Ubuntu.
The system was built on the following technologies:
Javascript (Extjs framework) - Frontend
PHP
Python (Daemon service which interacts with the encryption device)
Python pyserial - (Serial port interactions)
MySQL
Linux - Custom bash scripts (to update DB/mail reports)
The system serves the following purpose:
Capture client information on a distributed platform
Encrypt/decrypt sensitive transactions using a Hardware device
System breakdown:
The user gains access to the system using a web browser
The user captures client information and on pressing "submit" button
The data is sent to the encryption device and the system enters a wait state
The data is then encrypted on the device and sent back to the browser
The encrypted data is saved to the DB
System exits wait state and displays DONE message
Please note: I have already taken care of waiting/progress messages so lets omit that.
What I have done so far:
I created a Python daemon which monitors a DB view for any new requests
The daemon service executes new requests on the device using pyserial and updates the requests table with a "response", i.e. the encrypted content
I created a polling service in PHP which frequently checks if there is a "response" in the requests table for the specific request
Created the Extjs frontend with appropriate wait/done status messages
The problem with the current setup:
Concurrency - We expect > 20 users at any time submitting encryption/decryption requests
Using a database as a message/queuing solution is not scalable due to table locking and only one listening process monitoring for requests
Daemon service - Relying on a daemon service is a bit risky, and the DB overhead seems a bit high polling the view for new requests every second
Development - It would simplify my development tasks to just send requests to an encrypt/decrypt service instead of doing this whole process of inserting a request in the DB, polling for the response and processing the request in the daemon service.
My Question:
What would be the ideal message/queuing solution in this situation? Please take into account that my system runs exclusively on Ubuntu.
I have done a few Google searches and came across something called a "STOMP" server, but it proved somewhat difficult to set up and lacked documentation. Also, I prefer advice from individuals who have some experience setting up something like this over a "how to" guide :)
Thank you for your time
I believe the popular RabbitMQ implementation of AMQP offers a PHP extension (here) and you can definitely access AMQP in Python, e.g. via Qpid. RabbitMQ is also easy to install on Ubuntu (or Debian), see e.g. here.
Whether via RabbitMQ or otherwise, adopting an open messaging and queueing protocol such as AMQP has obvious and definite advantages in comparison to more "closed" solutions (even if technically open source, such solutions just won't offer as many implementations, and therefore flexibility, as a widely adopted open, standard protocol).
I would do:
The web component connects to the encryption daemon/service, sends the data and waits for the answer
The encryption daemon/service would:
On startup, start a thread (SerialThread) for each of the available serial devices
All 'serial threads' would then do a SerialQueue.get (blocking waiting for messages)
A multi threaded TCP server, check ThreadingMixIn from http://docs.python.org/library/socketserver.html
The TCP Server threads would receive the plain data and put it on the SerialQueue
A random SerialThread (Python's Queue class manages the multi thread required locking for you) would receive the request, encrypt and return the encrypted data to the TCP Server thread
The TCP Server thread would write the data back to the web component
I am using this logic on a project, you can check the source at http://bazaar.launchpad.net/~mirror-selector-devs/mirror-selector/devel/files/head:/mirrorselector/, on my case the input is an URL, the processing is to scan for an available mirror, the output is a mirror url.
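The steps above can be sketched as runnable code. The byte-reversing "encryption" and the thread count are placeholders; the real workers would each own a pyserial port and do the device round-trip:

```python
import queue
import socket
import socketserver
import threading

serial_queue = queue.Queue()  # requests waiting for any free serial device

def serial_worker():
    """Stands in for one SerialThread; a real worker would own a pyserial
    port and send the data to the encryption device."""
    while True:
        data, reply_q = serial_queue.get()  # blocks until a request arrives
        reply_q.put(data[::-1])  # placeholder for the real encryption step

class Handler(socketserver.StreamRequestHandler):
    def handle(self):
        data = self.rfile.readline().strip()
        reply_q = queue.Queue(maxsize=1)      # per-request return channel
        serial_queue.put((data, reply_q))     # hand off to a free serial thread
        self.wfile.write(reply_q.get() + b"\n")  # block until the device answers

class ThreadedServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    daemon_threads = True  # one thread per connected web-component request

def start(n_devices=2):
    for _ in range(n_devices):  # one worker per available serial device
        threading.Thread(target=serial_worker, daemon=True).start()
    server = ThreadedServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

The per-request reply queue is what lets a random worker pick up the job while the answer still finds its way back to the TCP thread that is holding the web component's connection open.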
