What's the difference between hyperledger indy-sdk and Libvcx? - python

I've been looking into the hyperledger indy framework and wanted to start building an app, but I noticed that there's the sdk that uses Libindy and also Libvcx, which sits on top of Libindy. I don't know which one to use since they both seem to do the same thing.

As you've said, LibVCX is built on top of LibIndy.
LibIndy
Provides a low-level API to work with credentials and proofs. It provides operations to create credential requests, credentials, and proofs. It also exposes operations for communicating with a Hyperledger Indy ledger.
What LibIndy doesn't handle is the credential exchange itself. If you write a backend which issues credentials and a mobile app which can request and receive credentials using LibIndy, you'll have to come up with a communication protocol to do so. Is it going to be HTTP? ZMQ? How are you going to format messages? This is what LibVCX does for you. You would also have to come up with a solution for securely delivering messages and credentials from the server to the client when the client is offline.
LibVCX
LibVCX is one of several implementations of Hyperledger Aries specification. LibVCX is built on top of LibIndy and provides consumer with OOP-style API to manage connections, credentials, proofs, etc. It's written in Rust and has API Wrappers available for Python, Javascript, Java, iOS.
LibVCX was designed with asynchronicity in mind. It assumes the existence of a so-called "Agency" between the two communicating parties - a proxy which implements a certain Indy communication protocol and receives and forwards messages. Your backend server can therefore issue and send a credential to someone it talked to days ago. The credential will be securely stored in the agency, and the receiver can check whether there are any new messages or credentials addressed to them at the agency.
You can think of agency as a sort of mail server. The message is stored there and the client can pull its messages/credentials and decrypt them locally.
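The mail-server analogy can be sketched as a toy in-memory simulation. This is not the real LibVCX/agency API - the class and method names here are invented purely to illustrate the store-and-forward model:

```python
from collections import defaultdict

class Agency:
    """Toy stand-in for a VCX-style agency: it stores (encrypted) messages
    until the possibly-offline recipient comes online and polls for them."""

    def __init__(self):
        self._mailboxes = defaultdict(list)  # recipient id -> pending messages

    def forward(self, recipient, encrypted_message):
        # The recipient may be offline; just queue the message durably.
        self._mailboxes[recipient].append(encrypted_message)

    def poll(self, recipient):
        # The client comes online and pulls everything addressed to it.
        messages, self._mailboxes[recipient] = self._mailboxes[recipient], []
        return messages

agency = Agency()
agency.forward("alice", "<credential encrypted for alice>")
# ...days later, Alice's app comes online:
print(agency.poll("alice"))  # ['<credential encrypted for alice>']
print(agency.poll("alice"))  # [] - already delivered
```

In the real setup the payloads are encrypted for the recipient, so the agency (like a mail server) can store and route messages without being able to read them.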
What to use?
If you want to leverage tech in IndySDK perhaps for a specific use case and don't care about Aries, you can use vanilla libindy.
If you want to interoperably exchange credentials with other apps and agents, you should comply with the Aries protocol. LibVCX is one way to achieve that.

The indy-sdk repository is the Indy software that enables building components (called agents) that can interact with an Indy ledger and with each other.
In 2019, at a "Connect-a-thon" in Utah, USA, developers from a variety of organizations gathered to demonstrate interoperability across a set of independently developed agent implementations. At that time, a further idea developed that led to the creation of Hyperledger Aries. What if we had agents that could use DIDs and verifiable credentials from multiple ecosystems? Aries is a toolkit designed for initiatives and solutions focused on creating, transmitting, storing and using verifiable digital credentials. At its core are protocols enabling connectivity between agents using secure messaging to exchange information.
Libvcx is a c-callable library built on top of libindy that provides a high-level credential exchange protocol. It simplifies creation of agent applications and provides better agent-2-agent interoperability for Hyperledger Indy infrastructure.
You need LibVCX if you want to interoperably exchange credentials with other apps and agents - in other words, if you want to comply with the Aries protocol.
In this case a LibVCX Agency can be used as a mediator, which enables asynchronous communication between the two parties.

Backend APIs can only be called from frontend and secure APIs [duplicate]

After reading around it appears that trying to protect a publically accessible API (API used by an app/site that does not need a user to log in) appears to be fruitless, e.g. store key in app, user can reverse engineer app.
My question relates to how one can protect as much as possible and slow down abuse of a public accessible API...
Rate-limiting? Check request origin (although can be spoofed).... anything else?
Also if the site is SSR, could it just be protected by the server's IP?
YOUR QUESTIONS
After reading around it appears that trying to protect a publically accessible API (API used by an app/site that does not need a user to log in) appears to be fruitless, e.g. store key in app, user can reverse engineer app.
Security is all about defense in depth: adding as many layers as you can afford (and as are required by law) in order to mitigate risk. Every defense you add is one more layer that will stop simple/dumb automated scripts, while at the same time increasing the level of skill and effort an attacker poking around needs to overcome all your defenses.
Rate-limiting?
This is more or less mandatory for any API to employ, otherwise an automated script can easily extract a huge amount of data in seconds. The stricter the rate limit, the better the chances that other layers of defense detect unauthorized access to the API and try to mitigate or block it. Bear in mind that rate limits can be bypassed by adapting the attack so that requests don't trigger them, and this is easily automated against software that returns the throttling values being applied in its response headers.
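As a rough illustration, a token bucket is one common way to implement such a limit. This is a minimal single-process sketch (real deployments usually enforce limits per client key/IP and in shared storage such as Redis):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter (illustrative sketch only)."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)    # ~5 requests/second, bursts of 10
allowed = [bucket.allow() for _ in range(12)]
print(allowed.count(True))  # typically 10: the initial burst, then denials
```

A server would keep one bucket per consumer and reject the request (HTTP 429) whenever `allow()` returns False.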
Check request origin (although can be spoofed)....
Yes, it is easily bypassed, but why not? It will be one more layer of defense that filters out some dumb automated scripts/bots.
Also if the site is SSR, could it just be protected by the server's IP?
No matter whether it's an SSR site or any other type of app, when used from a mobile phone the IP address can change during the load of a page or app screen, because the IP changes when the phone switches between cell towers or networks. Also bear in mind that in an office or on public wifi all the users share the same IP.
The use of it as a blocking measure on its own needs to be carefully evaluated, and normally requires fingerprinting the request in order to reduce the risk of blocking other valid users sharing the same network.
I would use it very carefully, to avoid/block/throttle requests only when I could establish that they come from known bad IPs, which you can collect from your own request history and/or from public datasets.
WHO IS IN THE REQUEST VS WHAT IS MAKING THE REQUEST
A common misconception among developers of any seniority is not being aware that who is in the request is not the same as what is making the request, so let's clear that up first...
The Difference Between WHO and WHAT is Accessing the API Server
Even though your API does not use user authentication, it is important to be aware of this distinction in order to make informed decisions about the security measures to adopt in the API server.
I wrote a series of articles about API and Mobile security, and in the article Why Does Your Mobile App Need An Api Key? you can read in more detail the difference between who and what is accessing your API server, but I will quote here the main takes from it:
The what is the thing making the request to the API server. Is it really a genuine instance of your mobile app, or is it a bot, an automated script or an attacker manually poking around your API server with a tool like Postman?
The who is the user of the mobile app that we can authenticate, authorize and identify in several ways, like using OpenID Connect or OAUTH2 flows.
The best way to remember the difference is to think of the who as the user your API server will be able to Authenticate and Authorize access to the data, and of the what as the software making that request on behalf of the user.
DEFENDING THE API SERVER
My question relates to how one can protect as much as possible and slow down abuse of a public accessible API...
For Mobile APIs
For an API serving only mobile apps you can use the Mobile App Attestation concept as I describe in my answer to the question How to secure an API REST for mobile app?.
For Web APPs
For an API that only serves a Web app I would recommend you to read my answer to the question secure api data from calls out of the app?.
DO YOU WANT TO GO THE EXTRA MILE?
anything else?
It seems you already have done some research but you may not know yet the OWASP guides and top risks.
For Web Apps
The Web Security Testing Guide:
The OWASP Web Security Testing Guide includes a "best practice" penetration testing framework which users can implement in their own organizations and a "low level" penetration testing guide that describes techniques for testing most common web application and web service security issues.
For Mobile Apps
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.
OWASP - Mobile Security Testing Guide:
The Mobile Security Testing Guide (MSTG) is a comprehensive manual for mobile app security development, testing and reverse engineering.
For APIS
OWASP API Security Top 10
The OWASP API Security Project seeks to provide value to software developers and security assessors by underscoring the potential risks in insecure APIs, and illustrating how these risks may be mitigated. In order to facilitate this goal, the OWASP API Security Project will create and maintain a Top 10 API Security Risks document, as well as a documentation portal for best practices when creating or assessing APIs.
Just for a quick answer
I can suggest at least one via solution which is to use an API Gateway (like Kong, Express Gateway or AWS API Gateway, etc.).
The API Gateway allows you to create API consumers (e.g. buyer mobile app, seller mobile app, buyer TV app). For each API consumer, an auth key (or even OAuth 2.0 credentials) will be generated and assigned respectively.
You then can use the auth key or OAuth 2.0 credentials (ID and secret) to access the APIs securely (even for the public ones, as API Gateways will only allow access from valid API consumers).
How about request from web?
You can configure the API Gateway to detect requests from the web and, instead of using the auth key or OAuth mechanism, use a domain-whitelisting CORS protection mechanism.
How about user access token (generated from successful login)? Any conflict here?
IMHO (emphasized), after getting the user access token (e.g. a JWT), for every authenticated-user-only request you will send both the API Gateway auth key and the user access token. Referencing Exadra's answer above, we can consider the API Gateway key to verify the "what" (mobile app) while the user access token verifies the "who" (logged-in user).
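Concretely, sending both credentials usually just means two headers on each request. A sketch of what that could look like - the header name `apikey` and all the values below are assumptions (Kong's key-auth plugin uses `apikey` by default, but your gateway may differ):

```python
import urllib.request

# Hypothetical values - header names depend on your gateway configuration.
GATEWAY_API_KEY = "consumer-key-issued-by-the-gateway"  # verifies the "what"
USER_ACCESS_TOKEN = "eyJhbGciOi-example-jwt"            # verifies the "who"

req = urllib.request.Request(
    "https://api.example.com/orders",
    headers={
        "apikey": GATEWAY_API_KEY,                       # checked by the gateway
        "Authorization": "Bearer " + USER_ACCESS_TOKEN,  # checked by your backend
    },
)
# The request is only built here, not sent; urllib.request.urlopen(req)
# would actually perform it.
print(req.get_header("Authorization"))
```

The gateway strips or validates its own key before proxying, while the backend only has to care about the user token.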

Fine-grained authorisation with ZODB

I have been looking into using ZODB as a persistence layer for a multiplayer video game. I quite like how seamlessly it integrates with arbitrary object-oriented data structures. However, I am stumbling over one issue, where I can't figure out, whether ZODB can resolve this for me.
Apparently, one can use the ClientStorage from ZEO to access a remote data storage used for persistence. While this is great in a trusted local network, one can't do this without proper authorization and authentication in an open network.
So I was wondering, if there is any chance to realize the following concept with ZODB:
On the server-side I would like to have a ZEO server running plus a simulation of the game world that might operate as a fully authorized client on the ZEO server (or use the same file storage as the ZEO server).
On the client side I'd need very restricted read/write access to the ZEO server, so that a client can only view the information its user is supposed to know about (e.g. the surrounding area of their character) and can only modify information related to the actions that their character can perform.
These restrictions would have to be imposed by the server using some sort of fine-grained authorisation scheme. So I would need to be able to tell the server whether user A has permissions to read/write object B.
Now, is there a way to do this in ZODB, or are there third-party solutions for this kind of problem? Or is there a way to extend ZEO in this direction?
No, ZEO was never designed for such use.
It is designed for scaling ZODB access across multiple processes instead, with authentication and authorisation left to the application on top of the data.
I would not use ZEO for anything beyond a local network anyway. Use a different protocol to handle communication between game clients and the game server, and keep ZODB server-side only.

Identity management and authentication for REST APIs

We have built a Python-based REST API. We are planning to open it up to other developers as well. Is there a Python library which could manage authentication, keep track of API calls made by each client, etc.?
Well, ideally you'll be using public-key cryptography, supplying the developers with both an API key and a secret. If the service is accessible via HTTPS to a limited number of consumers, you might be tempted to issue a simple API key alone, but then you're committing yourself to remaining a small, closed and insecure service forever.
As for managing the API calls themselves, since you have the RESTful interfaces developed already, I would suggest that you begin decorating the functions or methods to extract the service consumer and keep track of API calls in MongoDB - it's simple and well suited to such a requirement. It would also allow you to start throttling consumer connections at the application level where, in time, you can develop the system to encompass some low-level solutions for managing service connections, such as iptables.
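The decorator approach can be sketched as below. To keep the example self-contained, an in-memory class stands in for the MongoDB collection - it deliberately exposes the same `insert_one()` call as a pymongo collection, so swapping in `pymongo.MongoClient()["api"]["calls"]` would be a one-line change:

```python
import functools
import time

class InMemoryCollection:
    """Stand-in with the same insert_one() interface as a pymongo collection."""
    def __init__(self):
        self.docs = []
    def insert_one(self, doc):
        self.docs.append(doc)

calls = InMemoryCollection()

def track_calls(endpoint_name):
    """Decorator that records which consumer called which endpoint, and when."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(consumer_key, *args, **kwargs):
            calls.insert_one({
                "consumer": consumer_key,
                "endpoint": endpoint_name,
                "ts": time.time(),
            })
            return func(consumer_key, *args, **kwargs)
        return wrapper
    return decorator

@track_calls("get_widgets")
def get_widgets(consumer_key):
    return ["widget-1", "widget-2"]

get_widgets("developer-abc")
print(len(calls.docs))  # 1
```

With the calls logged per consumer, per-client throttling becomes a simple count over recent documents.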

Quickbooks integration: IPP/IDS: can these be used for actual data exchange?

Poking around options for integrating an online app with Quickbooks, I've made a lot of headway with QBWC, but it's fairly ugly. From an end user perspective the usability of QBWC is pretty low.
Intuit is now pushing Intuit Partner Platform (IPP) and Intuit Data Services (IDS). I can't quite figure out what these are about:
Is IPP limited to using Flex, or can it work with existing web apps?
Are there APIs for actual data exchange? Is it possible to interact with desktop Quickbooks using IPP or IDS?
If there is sample code, particularly in Python, some pointers would be great.
Is IPP limited to using Flex, or can it work with existing web apps?
It is not limited to Flex. You can use IPP/IDS from any web application, as long as you federate your application (allow logins using SAML via workplace.intuit.com).
There are two "types" of IPP applications:
Native apps - Native applications are written in Flex and use the Flex bindings for IPP. These applications run on Intuit's servers.
Federated apps - Federated applications are written in your language of choice, run on your servers, and use the language bindings of your choice to talk to IPP. All communication with IPP is via HTTP XML requests, so pretty much any language can talk to IPP without problems. You'll need to implement a SAML gateway which allows your users to log in via workplace.intuit.com.
Are there APIs for actual data exchange?
Yes. IPP is actually made up of two parts that provide different sorts of data exchange.
IPP core stuff - This involves user management, roles/permissions, access to QuickBase data stores, etc.
IDS (Intuit Data Services) - This involves actually exchanging data with QuickBooks. Right now a subset of QuickBooks data is supported, but Intuit is rapidly adding support for accessing more data within QuickBooks. You can add/modify/delete/query QuickBooks data, and the data is automatically synced back to the end-user's QuickBooks file.
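Since IDS requests are plain HTTP XML, building one is straightforward from any language. The sketch below is illustrative only: the element names and the query shown are simplified assumptions, not the exact IDS schema, so consult the docs on developer.intuit.com for the real message format:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified query body - real IDS element names differ.
query = ET.Element("CustomerQuery")
ET.SubElement(query, "MaxResults").text = "25"
body = ET.tostring(query, encoding="unicode")
print(body)  # <CustomerQuery><MaxResults>25</MaxResults></CustomerQuery>

# This body would be POSTed over HTTPS, with your SAML-derived auth ticket,
# to the company's IDS endpoint; QuickBooks data comes back as XML.
```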
Is it possible to interact with desktop Quickbooks using IPP or IDS?
That depends on what you mean by "interact". Yes, you can exchange data with their QuickBooks data file. No, you can't do things like automatically open up a particular window within QuickBooks or something like that.
If there is sample code, particularly in Python, some pointers would be great.
There are many open-source IPP DevKits on code.intuit.com that should be helpful. In particular, you'll probably want to check out this one:
Python DevKit
You'll also need to implement a SAML gateway for authentication, and there is sample code for that as well:
SAML Gateways
I'm the project admin for the QuickBooks PHP DevKit: QuickBooks PHP DevKit
There's a ton of additional information on the code.intuit.com website and tons of additional technical documentation on IPP/IDS with Federated applications on developer.intuit.com.

Abstraction and client/server architecture questions for Python game program

Here is where I am at presently. I am designing a card game with the aim of utilizing major components for future work. The part that is hanging me up is creating a layer of abstraction between the server and the client(s). A server is started, and then one or more clients can connect (locally or remotely). I am designing a thick client but my friend is looking at doing a web-based client. I would like to design the server in a manner that allows a variety of different clients to call a common set of server commands.
So, for a start, I would like to create a 'server' which manages the game rules and player interactions, and a 'client' on the local CLI (I'm running Ubuntu Linux for convenience). I'm attempting to flesh out how the two pieces are supposed to interact, without mandating that future clients be CLI-based or on the local machine.
I've found the following two questions which are beneficial, but don't quite answer the above.
Client Server programming in python?
Evaluate my Python server structure
I don't require anything full-featured right away; I just want to establish the basic mechanisms for abstraction so that the resulting mock-up code reflects the relationship appropriately: there are different assumptions at play with a client/server relationship than with an all-in-one application.
Where do I start? What resources do you recommend?
Disclaimers:
I am familiar with code in a variety of languages and general programming/logic concepts, but have little real experience writing substantial amounts of code. This pet project is an attempt at rectifying this.
Also, I know the information is out there already, but I have the strong impression that I am missing the forest for the trees.
Read up on RESTful architectures.
Your fat client can use REST. It will use urllib2 to make RESTful requests of a server. It can exchange data in JSON notation.
A web client can use REST. It can make simple browser HTTP requests or a Javascript component can make more sophisticated REST requests using JSON.
Your server can be built as a simple WSGI application using any simple WSGI components. You have nice ones in the standard library, or you can use Werkzeug. Your server simply accepts REST requests and makes REST responses. Your server can work in HTML (for a browser) or JSON (for a fat client or Javascript client.)
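A minimal sketch of that idea: a WSGI application is just a callable taking `environ` and `start_response`, so it can be exercised directly without any network. The `/hand` route and its card payload are invented for illustration:

```python
import json

def app(environ, start_response):
    """Minimal WSGI application returning a JSON REST response."""
    if environ["PATH_INFO"] == "/hand" and environ["REQUEST_METHOD"] == "GET":
        payload = json.dumps({"cards": ["9H", "QS"]}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [payload]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]

# Call the app directly - no server needed for a quick check:
status_holder = {}
def fake_start_response(status, headers):
    status_holder["status"] = status

body = b"".join(app({"PATH_INFO": "/hand", "REQUEST_METHOD": "GET"},
                    fake_start_response))
print(status_holder["status"], json.loads(body))  # 200 OK {'cards': ['9H', 'QS']}
```

To actually serve it, `wsgiref.simple_server.make_server("", 8000, app).serve_forever()` from the standard library is enough for development.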
I would consider basing all server / client interactions on HTTP -- probably with JSON payloads. This doesn't directly allow server-initiated interactions ("server push"), but the (newish but already traditional;-) workaround for that is AJAX-y (even though the X makes little sense as I suggest JSON payloads, not XML ones;-) -- the client initiates an async request (via a separate thread or otherwise) to a special URL on the server, and the server responds to those requests to (in practice) do "pushes". From what you say it looks like the limitations of this approach might not be a problem.
The key advantage of specifying the interactions in such terms is that they're entirely independent from the programming language -- so the web-based client in Javascript will be just as doable as your CLI one in Python, etc etc. Of course, the server can live on localhost as a special case, but there is no constraint for that as the HTTP URLs can specify whatever host is running the server; etc, etc.
First of all, regardless of the locality or type of the client, you will be communicating through an established message-based interface. All clients will be operating based on a common set of requests and responses, and the server will handle and reject these based on their validity according to game state. Whether you are dealing with local clients on the same machine or remote clients via HTTP does not matter whatsoever from an abstraction standpoint, as they will all be communicating through the same set of requests/responses.
What this comes down to is your protocol. Your protocol should be a well-defined and technically sound language between client and server that will allow clients to a) participate effectively, and b) participate fairly. This protocol should define what messages ('moves') a client can do, and when, and how the server will react.
Your protocol should be fully fleshed out and documented before you even start on game logic - the two are intrinsically connected, and you will save a lot of wasted time and effort by completely defining your protocol first.
Your protocol is the abstraction between client and server, and it will also serve as the design document and programming guide for both.
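One lightweight way to make the protocol concrete is to define each message type and its required fields in one place, and have the server validate every incoming message against that table. The message names and fields below are invented examples:

```python
import json

# One table documents the whole protocol: message type -> required fields.
PROTOCOL = {
    "join_game": {"required": {"player_id", "name"}},
    "play_card": {"required": {"player_id", "card"}},
}

def validate(raw):
    """Reject anything that is not a well-formed, known protocol message."""
    msg = json.loads(raw)
    spec = PROTOCOL.get(msg.get("type"))
    if spec is None:
        return False, "unknown message type"
    missing = spec["required"] - msg.keys()
    if missing:
        return False, "missing fields: %s" % sorted(missing)
    return True, msg

print(validate('{"type": "play_card", "player_id": 1, "card": "QS"}')[0])  # True
print(validate('{"type": "cheat"}'))  # (False, 'unknown message type')
```

Because the table doubles as documentation, client authors in any language can implement against it directly.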
Protocol design is all about state, state transitions, and validation. Game servers usually have a set of fairly common, generic states for each game instance e.g. initialization, lobby, gameplay, pause, recap, close game, etc...
Each of these states has important state data associated with it. For example, a 'lobby' state on the server side might contain the known state of each player: how long since the last message or ping, what the player is doing (selecting an avatar, changing settings, going to the fridge, etc.). Organizing and managing state and substate data in code is important.
Managing these states, and the associated data requirements for each, should be exquisitely planned out, as they directly determine the volume of work and project complexity - this is very important, and also great practice if you are using this project to step up into larger things.
Also, you must keep in mind that if you have a game, and you let people play, people will cheat. It's a fact of life. In order to minimize this, you must carefully design your protocol and state management to only ever allow valid state transitions. Never trust a single client packet.
For every permutation of client/server state, you must enforce a limited set of valid game messages, and you must be very careful in what you allow players to do, and when you allow them to do it.
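The points above - only legal state transitions, and a limited message set per state - can be sketched as a small table-driven state machine. The state and message names are invented examples matching the generic states mentioned earlier:

```python
# Each server state declares its legal successor states and which client
# messages it will accept; everything else is rejected outright.
TRANSITIONS = {
    "init":     {"lobby"},
    "lobby":    {"gameplay", "close"},
    "gameplay": {"pause", "recap"},
    "pause":    {"gameplay", "close"},
    "recap":    {"lobby", "close"},
    "close":    set(),
}
ALLOWED_MESSAGES = {
    "lobby":    {"join_game", "select_avatar", "start"},
    "gameplay": {"play_card", "chat"},
}

class GameInstance:
    def __init__(self):
        self.state = "init"

    def transition(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

    def handle(self, message_type):
        # Never trust the client: drop messages that are invalid *right now*.
        if message_type not in ALLOWED_MESSAGES.get(self.state, set()):
            return "rejected"
        return "accepted"

game = GameInstance()
game.transition("lobby")
print(game.handle("join_game"))  # accepted
print(game.handle("play_card"))  # rejected - not legal while in the lobby
```

Keeping the tables declarative makes the protocol auditable: you can see at a glance every state in which a given message is honored.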
Project complexity is generally exponential and not linear - client/server game programming is usually a good/painful way to learn this. Great question. Hope this helps, and good luck!
