I am toying around with home automation, and I am planning to use Azure Service Bus as the "core" of my message handling. With the .NET SDKs everything works perfectly and is fast enough (milliseconds for send + receive). However, I am now using the "azure.servicebus" module with Python (Debian on a Raspberry Pi), and the receive_subscription_message call is far from fast. It varies between being near instant and lagging up to a minute behind.
My code is as follows:
from azure.servicebus import ServiceBusService, Message, Queue

bus_service = ServiceBusService(
    service_namespace='mynamespace',
    shared_access_key_name='Listener1',
    shared_access_key_value='...')

# receive_subscription_message long-polls until a message arrives or the call times out
msg = bus_service.receive_subscription_message('messages', 'ListenerTest.py', peek_lock=True)
if msg is not None and msg.body is not None:
    # only delete a message that was actually received
    msg.delete()
I have toyed around with peek_lock True and False, but the behaviour is the same.
Has anyone else been able to get this stable / near instant?
Please make sure there are indeed messages in the subscription. Also, please be aware that the .NET SDK by default uses a Service Bus specific protocol instead of HTTP, whereas the Python SDK uses HTTP polling (basically checking once in a while whether there are messages in the subscription). We can find brief info at https://github.com/Azure/azure-sdk-for-python/blob/master/doc/servicebus.rst:
ServiceBus Queues are an alternative to Storage Queues that might be useful in scenarios where more advanced messaging features are needed (larger message sizes, message ordering, single-operation destructive reads, scheduled delivery) using push-style delivery (using long polling).
Per my understanding, this might explain why you see the message received either instantly or up to a minute later. Based on the behaviour you described, you might want to use AMQP, which is based on bi-directional TCP and thus does not require polling. To use AMQP, you may want to leverage the standard Proton-Python library; I'd suggest checking https://msdn.microsoft.com/en-us/library/azure/jj841070.aspx for a sample. But please note the tips from that article:
Note that at the time of this writing, the SSL support in Proton-C is only available for Linux operating systems. Because Microsoft Azure Service Bus requires the use of SSL, Proton-C (and the language bindings) can only be used to access Microsoft Azure Service Bus from Linux at this time. Work to enable Proton-C with SSL on Windows is underway so check back frequently for updates.
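For what it's worth, a minimal receive loop with the Qpid Proton Python bindings might look roughly like the sketch below. This is an assumption on my part (the linked MSDN sample uses the older Messenger API); the namespace, key and entity names are placeholders, and the subscription is addressed as topic/Subscriptions/subscription, which is how Service Bus exposes it over AMQP.

# Hedged sketch: Qpid Proton (python-qpid-proton) receiver for a Service Bus subscription.
# Namespace, key name/value and entity names below are placeholders.
from proton.handlers import MessagingHandler
from proton.reactor import Container

class SubscriptionReceiver(MessagingHandler):
    def __init__(self, url):
        super(SubscriptionReceiver, self).__init__()
        self.url = url

    def on_start(self, event):
        # open an AMQP receiver link to the subscription
        event.container.create_receiver(self.url)

    def on_message(self, event):
        print(event.message.body)

# amqps://<key-name>:<key-value>@<namespace>.servicebus.windows.net/<topic>/Subscriptions/<subscription>
url = "amqps://Listener1:...@mynamespace.servicebus.windows.net/messages/Subscriptions/ListenerTest.py"
Container(SubscriptionReceiver(url)).run()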
This is somehow intended to (maybe) become a guideline post on MetaTrader 4/5 and its corresponding language MQL4, both set in the context of sending data to external servers.
In my special case I am building a Django/Python based web application that will process FOREX trading data for further use.
Thus, I am searching for a proper solution to send data from the MetaTrader 4/5 Terminal to an external server periodically (e.g. every 60 seconds), formatted as JSON or CSV (if possible). In particular, the data to be sent is the account's trade history and the set of running and pending trades.
After doing the research I basically found the following approaches:
1.) Using WebRequest() in MQL4, wrapped in an expert advisor
As the official MQL4 documentation suggests, the WebRequest() function sends an HTTP request to a specified server. This is a related SO thread: How to post from MetaTrader Terminal 5 MQL 5 a request to my nodejs server, which is running locally on my MT5 host?
and the official documentation:
https://docs.mql4.com/common/webrequest
This could be wrapped into an expert advisor to execute the request periodically on the defined events.
Which kind of data from the MT4/5 terminal can be populated into the data array?
How can that data be formatted? Is it possible to format it as JSON straight away, or should that be done on the server side?
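On the receiving end (the Django/Python application mentioned above), the server side of this approach could be little more than an endpoint that accepts the periodic JSON POST from WebRequest(). The sketch below is only an assumption about how that endpoint might look; the view name and payload shape are made up for illustration.

# Hedged sketch of a Django view that could receive the WebRequest() POSTs.
# View name, payload shape and response format are illustrative assumptions.
import json

from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt


@csrf_exempt  # the MT4/5 terminal will not send a CSRF token
def mt_ingest(request):
    if request.method != "POST":
        return JsonResponse({"error": "POST only"}, status=405)
    payload = json.loads(request.body)  # e.g. {"history": [...], "open_trades": [...]}
    # persist or queue the trade history / open + pending trades here
    return JsonResponse({"status": "ok", "items": len(payload)})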
2.) Using ZeroMQ
This is a setup I found in this thread: How to send a message in MQL4/5 from MetaTrader Terminal to python using ZeroMQ?
How would this be accomplished within the MetaTrader environment? Will this still be an expert advisor or some kind of DLL solution? What is ZeroMQ's role within the setup?
What are the pros and cons compared to the WebRequest() function?
3.) Others?
Are there any other possible approaches to achieving this, e.g. with APIs or MQL4 scripts?
Since this is a rare topic, I am looking forward to any idea and input, however small.
Welcome to the club - my above-cited answer has received zero votes within 1.5 years so far.
Nevertheless, I have been using ZeroMQ since v2.11+, thanks to the R&D work published by Austen CONRAD - thanks and deep respect for his persistence.
Q : "How would this be accomplished within the MetaTrader environment?"
One simply #import-s the DLL and starts to use the ZeroMQ-API-wrapper calls. A few details became a bit more complex after MetaTrader silently changed the internal representation of string so that it ceased to be a string (becoming a struct in "New"-MQL4.56789), but you will learn how to live with this in an "Always On Watch" style, so as to survive in production.
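On the Python side of such a setup, the receiving peer is plain pyzmq. A minimal sketch, assuming a PUSH/PULL pattern and a tcp:// endpoint that the MQL4 wrapper would connect and send to (the socket type, port and message format are my assumptions, not prescribed here):

# Hedged sketch: Python-side peer that collects messages sent from MQL4 via the ZeroMQ DLL wrapper.
# PUSH/PULL pattern, port number and JSON payload are illustrative assumptions.
import json
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.PULL)
sock.bind("tcp://*:32768")      # the MQL4 side would connect() a PUSH socket here

while True:
    raw = sock.recv_string()    # blocks until the EA / Script / Custom Indicator sends something
    data = json.loads(raw)
    print("received:", data)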
Q : "Will this still be an expert advisor or some kind of DLL solution?"
ZeroMQ can be used in either and/or all of:
Expert Advisor-type of MQL4-code
Custom Indicator-type of MQL4-code
Script-type of MQL4-code
and can even provide a proxy-signaling/messaging-layer, so as to communicate among these otherwise separate and not-cooperating processes inside the MT4-Terminal Ecosystem.
Example:
I have MT4-Terminal processes cooperating with an external AI/ML-based market analyser, which auto-detects windows of opportunity, plus an external CLI console acting as a remote keyboard for an MT4-Terminal-hosted Control Panel, showing system health-state and listening to that remote keyboard for remote CLI command control (for both configuration and maintenance tasks of the whole multi-party distributed trading system).
Q : "What is the ZeroMQ's role within the setup?"
ZeroMQ provides an independent, industry-standard, smart and low-latency signaling/messaging layer among any kind of nodes needed (grid-computing, GPU-computing, CLI-terminal, AI/ML decision making, system-wide consolidated central logging, anything one may need).
Try to set up and use a remote tipc:// transport-class for a cross-cluster computing paradigm with any other approach.
Try to set up and use an M:N-redundant Strategy Trading, operated across a mix of tcp:// + tipc:// + norm:// + vmci:// transport-classes, used so as to interconnect ( A x M + N x B )-nodes' exo-systems.
Try to set up a system that asks MetaTrader to do some work from outside, without this technology ( WebRequest() is not ready for any "questions from outside", is it? )
Q : "What are the pros and cons compared to the webrequest() function?"
Feel free to read about this in Stack Overflow answers.
Integration with Python, support for Market and Signals services in Wine (Linux/MacOS) and highly optimized strategy tester in MetaTrader 5 build 2085
MetaQuotes Software Corp, 14 June 2019
In the new MetaTrader 5 version, we have added an API which enables request of MetaTrader 5 terminal data through applications, using the Python high-level programming language. The API contains multiple libraries for machine learning, process automation, as well as data analysis and visualization.
MetaTrader 5 integration with Python
The MetaTrader package for Python is designed for efficient and fast retrieval of exchange data via interprocess communication, directly from MetaTrader 5. The data received via this pathway can be used for statistical calculations and machine learning.
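A hedged sketch of pulling the account's open positions and recent deal history with that package (this assumes the MetaTrader5 pip package and a running MT5 terminal on the same Windows machine; the 30-day window and JSON dump are illustrative choices, not part of the announcement):

# Hedged sketch: export open positions and recent deal history as JSON via the MetaTrader5 package.
import json
from datetime import datetime, timedelta

import MetaTrader5 as mt5

if not mt5.initialize():
    raise RuntimeError("initialize() failed, error: %s" % str(mt5.last_error()))

# open positions and the last 30 days of deal history
positions = mt5.positions_get() or ()
deals = mt5.history_deals_get(datetime.now() - timedelta(days=30), datetime.now()) or ()

payload = {
    "positions": [p._asdict() for p in positions],
    "deals": [d._asdict() for d in deals],
}
print(json.dumps(payload, default=str))  # ready to POST to the external server

mt5.shutdown()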
Thus, I am in search for a proper solution to send data from the MetaTrader 4/5 Terminal to an external server periodically (e.g. every 60 seconds) formatted as json or csv (if possible).
For this, metatrader.live may help. The idea is simply to attach an (open-sourced) expert advisor and have the data available online through JSON or WebSockets or similar. Or you can use it as a transport layer only, for your own logic. Easy enough. And yes, I'm the author :)
I would welcome comments from developers about it :) thank you
I am building an application that needs an exact timestamp for multiple devices at the same time. Device time cannot be used, because the devices are not set to the same time.
Asking the server for the time is OK, but it is slow and depends on connection speed, and if this is made serverless/region-less, can the server-side timestamp still be used given the time zones?
In the demo application this works fine with one backend in a given region, but it still needs to respond faster to clients.
Here is a simple image to see this in another way:
In the image there are no IDs etc., but the main idea is there.
I am planning to build this on a Node.js server and later, when there are more timing calculations, port it to a Python/Django stack...
Thank you
Using server time is perfectly fine, and timezone differences due to different regions can easily be adjusted in code (e.g. by always issuing timestamps in UTC). If you are using a cloud provider, they typically group their services by region instead of distributing them worldwide, so adjusting the timezone shouldn't be a problem. In any case, check the docs for the cloud service you are planning to use; they usually document the geographic distribution clearly.
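For instance, a server-side timestamp that is immune to the region/timezone concern can simply be issued in UTC. A trivial sketch (the answer does not prescribe a language, so Python is used here for illustration):

# Hedged sketch: issue server timestamps in UTC so the serving region does not matter.
from datetime import datetime, timezone

def server_timestamp():
    # ISO-8601 UTC timestamp, e.g. "2024-01-01T12:00:00.000000+00:00"
    return datetime.now(timezone.utc).isoformat()

print(server_timestamp())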
Alternatively, you could consider implementing (or using) the Network Time Protocol (NTP) on your devices. It's a fairly simple protocol for synchronizing clocks; if you need a source code example, take a look at the source for node-ntp-client, a JavaScript implementation.
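For completeness, querying an NTP server from Python is equally small; a hedged sketch using the third-party ntplib package (the package choice and pool server address are my assumptions, not part of the answer):

# Hedged sketch: query an NTP server and get the offset of the local clock.
import ntplib

client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)

print("server time (unix):", response.tx_time)   # transmit timestamp from the server
print("local clock offset:", response.offset)    # seconds the local clock is off by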
I'm adopting Kafka and trying to understand how to monitor it (e.g. is it running out of memory for log storage). I see that it uses Yammer Metrics and exposes them via JMX - this apparently makes sense to people in Java land.
Is there an HTTP API I can build on? Or really any sort of relatively structured output at all?
You can use Yahoo's Kafka Manager to inspect cluster state. Download Kafka Manager from https://github.com/yahoo/kafka-manager.
Hope it helps.
You can use the Ankush Kafka monitoring tool for this:
https://github.com/impetus-opensource/ankush
You can get the latest release from the following link: https://github.com/impetus-opensource/ankush/releases.
You can create as well as monitor your clusters using the above tool.
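Neither answer addresses the "HTTP API" part of the question directly, so here is one common pattern as a hedged sketch: run the broker JVM with the Jolokia agent, which bridges JMX to a small HTTP/JSON endpoint, and poll it from any language. The agent port and the exact MBean queried below are assumptions to illustrate the shape of the calls, not something the answers above recommend.

# Hedged sketch: poll Kafka JMX metrics over HTTP via a Jolokia agent attached to the broker JVM.
# Assumes the broker JVM runs the Jolokia JVM agent (default port 8778); MBean name is illustrative.
import requests

JOLOKIA = "http://kafka-broker:8778/jolokia"
MBEAN = "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec"

resp = requests.get("%s/read/%s" % (JOLOKIA, MBEAN))
resp.raise_for_status()
print(resp.json()["value"])   # JSON attributes of the MBean, e.g. OneMinuteRate, Count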
Wondering about durable architectures for distributed Python applications.
This question I asked before should provide a little guidance about the sort of application it is. We would like to have the ability to have several code servers and several database servers, and ideally some method of deployment that is manageable and not too much of a pain.
The question I mentioned provides an answer that I like, but I wonder how it could be made more durable, or if doing so requires using other technologies. In particular:
I would have my frontend endpoints be the WSGI (because you already have that written) and write the backend to be distributed via messages. Then you would have a pool of backend nodes that would pull messages off of the Celery queue and complete the required work. It would look sort of like:
Apache -> WSGI Containers -> Celery Message Queue -> Celery Workers.
The apache nodes would be behind a load balancer of some kind. This would be a fairly simple architecture to scale and is, if done correctly, fairly reliable. Code for failure in a system like this and you will be fine.
What is the best way to make durable applications? Any suggestions on how to either "code for failure" or design it differently so that we don't necessarily have to? If you think Python might not be suited for this, that is also a valid solution.
Well, to continue the previous answer I gave:
In my projects I code for failure, because I use AWS for a lot of my hosting needs.
I have implemented database backends that make sure the database region is accessible and, if not, choose another region from a specified list. This happens transparently to the rest of the system on that node. So, if the east-1a region goes down, it fails over into one of the other regions I also host in, such as the west coast. I keep track of in-flight database transactions, send them over to the west coast, and dump them to a file so I can import them into the old database region once it becomes available again.
My front-end servers sit behind an Elastic Load Balancer that is distributed across multiple regions, which allows for durable recovery if a region fails. But it cannot be relied upon, so I am looking into solutions such as running HAProxy and switching my DNS in case my ELB goes down. This is a work in progress and I cannot give specifics on my own solutions.
To make your data processing durable, look into Celery and store the data in a distributed MongoDB setup to keep your results safe. Using a durable data store for your results allows you to get them back in the event of a node crash. It comes at the cost of some performance, but it shouldn't be too terrible if you only rely on soft real-time constraints.
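A hedged sketch of what "code for failure" can look like at the task level with Celery (the broker/backend URLs and retry policy are assumptions for illustration; late acknowledgement means a task whose worker dies mid-run is re-delivered rather than lost):

# Hedged sketch: a Celery task configured to survive worker crashes and transient failures.
# Broker/backend URLs and retry numbers are illustrative assumptions.
from celery import Celery

app = Celery(
    "tasks",
    broker="amqp://rabbit-host//",
    backend="mongodb://mongo-host/celery_results",  # durable result store
)
app.conf.task_acks_late = True  # re-deliver tasks whose worker died mid-run

@app.task(bind=True, max_retries=5)
def process_record(self, record):
    try:
        # ... the actual work ...
        return {"processed": record}
    except Exception as exc:
        # transient failure: retry with exponential backoff instead of losing the work
        raise self.retry(exc=exc, countdown=2 ** self.request.retries)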
http://www.mnxsolutions.com/amazon/designing-for-failure-with-amazon-web-services.html
The above article talks mostly about AWS, but the ideas apply to any system where you need high availability and durability. Just remember that downtime is OK as long as you minimize it for a subset of users.
Does anybody know how GAE limits the Python interpreter? For example, how do they block IO operations or URL operations?
Does shared hosting also do it in some way?
The sandbox "internally works" by them having a special version of the Python interpreter. You aren't running the standard Python executable, but one especially modified to run on Google App engine.
Update:
And no, it's not a virtual machine in the ordinary sense. Each application does not have a complete virtual PC. There may be some virtualization going on, but Google isn't saying exactly how much or what.
A process in an operating system normally already has limited access to the rest of the OS and the hardware. Google has limited this even more, and you get an environment where you are only allowed to read very specific parts of the file system and not write to it at all, you are not allowed to open sockets, not allowed to make system calls, etc.
I don't know at which level OS/Filesystem/Interpreter each limitation is implemented, though.
From Google's site:
An application can only access other computers on the Internet through the provided URL fetch and email services. Other computers can only connect to the application by making HTTP (or HTTPS) requests on the standard ports.
An application cannot write to the file system. An app can read files, but only files uploaded with the application code. The app must use the App Engine datastore, memcache or other services for all data that persists between requests.
Application code only runs in response to a web request, a queued task, or a scheduled task, and must return response data within 30 seconds in any case. A request handler cannot spawn a sub-process or execute code after the response has been sent.
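For instance, outbound HTTP has to go through the provided URL Fetch service rather than raw sockets; a minimal sketch of that documented API (the target URL is just an example):

# Hedged sketch: outbound HTTP on (old, Python 2.5-era) App Engine goes through the urlfetch service.
from google.appengine.api import urlfetch

result = urlfetch.fetch("http://example.com/")
if result.status_code == 200:
    print result.content  # Python 2.5 runtime, hence the print statement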
Beyond that, you're stuck with Python 2.5, you can't use any C-based extensions, more up-to-date versions of web frameworks won't work in some cases (Python 2.5 again).
You can read the whole article What is Google App Engine?.
I found this site that has some pretty decent information. What exactly are you trying to do?
Look here: http://code.google.com/appengine/docs/python/runtime.html
Your IO Operations are limited as follows (beyond disabled modules):
App Engine records how much of each resource an application uses in a calendar day, and considers the resource depleted when this amount reaches the app's quota for the resource. A calendar day is a period of 24 hours beginning at midnight, Pacific Time. App Engine resets all resource measurements at the beginning of each day, except for Stored Data which always represents the amount of datastore storage in use.
When an app consumes all of an allocated resource, the resource becomes unavailable until the quota is replenished. This may mean that your app will not work until the quota is replenished.
An application can determine how much CPU time the current request has taken so far by calling the Quota API. This is useful for profiling CPU-intensive code, and finding places where CPU efficiency can be improved for greater cost savings. You can measure the CPU used for the entire request, or call the API before and after a section of code then subtract to determine the CPU used between those two points.
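The Quota API mentioned above could be used roughly like this (a hedged sketch against the old Python runtime; I'm assuming the google.appengine.api.quota module and its get_request_cpu_usage() call, which reports megacycles consumed so far by the current request):

# Hedged sketch: measure CPU (megacycles) used by a section of a request handler.
import logging
from google.appengine.api import quota

start = quota.get_request_cpu_usage()
# ... CPU-intensive section to profile ...
end = quota.get_request_cpu_usage()
logging.info("section consumed %d megacycles", end - start)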
Resource              | Free Default Quota | Billing Enabled Default Quota
Blobstore Stored Data | 1 GB               | 1 GB free; no maximum

Resource            | Billing Enabled Daily Limit | Maximum Rate
Blobstore API Calls | 140,000,000 calls           | 72,000 calls/minute
EDIT: OK, I understand. But sir, you did not have to use the "f" word. :) And you know, it's kinda like the whole 'teach a man to fish' scenario. Google is who I always ask and that's why I'm answering questions here for fun.
EDIT AGAIN: OK, that made more sense before the comment was taken down. So I went and answered the question a little more. I hope it helps.
IMO it's not a standard Python, but a version specifically patched for App Engine. In other words, you can think of it more or less as a "higher-level" VM that is, however, not emulating x86 instructions but Python opcodes (if you don't know what those are, try writing a small function named "foo" and then doing "import dis; dis.dis(foo)"; you will see the Python opcodes that the compiler produced).
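A minimal, concrete version of that dis experiment (the function body is arbitrary):

# Quick illustration of the dis.dis() suggestion above.
import dis

def foo(x):
    return x + 1

dis.dis(foo)  # prints the CPython bytecode, e.g. LOAD_FAST ... RETURN_VALUE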
By patching Python you can impose on it whatever limitations you like. Of course, you then also have to forbid user-supplied C/C++ extension modules, as a C/C++ module would have access to everything the process can access.
Using such a virtual environment, you're able to run Python code safely without needing a separate x86 VM for every instance.