FIX protocol simple test logon in Python

I just started trying to connect to my broker through the FIX protocol.
The broker gave me:
an IP:port address to connect to
a "sendercompid"
a "targetcompid"
a password
As a first test, I would like to simply send a logon message to the broker and hopefully receive a message back from it. I would have thought this should be possible with a simple, small Python script?
(i.e. I'm not interested in installing a fully fledged Python engine or using a wrapper around a C++ library such as QuickFIX.)
edit:
to be more precise:
I found an example on SO of doing (or trying to do) such a thing in PHP, for instance:
$fp = fsockopen($host, $port, $errno, $errstr, 3.0);
if ($fp)
{
    // FIX fields are delimited by the SOH byte (\x01)
    $request = "8=FIX.4.4\x019=112\x0135=A\x0149=SENDER\x0156=RECEIVER\x0134=1\x0152=20130921-18:52:48\x0198=0\x01108=30\x01141=Y\x01553=user\x01554=pass\x0110=124\x01";
    echo $request;
    fwrite($fp, "GET / HTTP/1.0\r\n" .
        "Host: $host\r\n" .
        "Connection: close\r\n" .
        "Content-Length: " . strlen($request) . "\r\n" .
        "\r\n" .
        $request);
    stream_set_timeout($fp, 2, 0);
    $response = '';
    while (!feof($fp))
    {
        $response .= fread($fp, 1024);
    }
    print "Response: " . $response . "<BR>\n";
    fclose($fp);
}
Do you know which library I can use to simply communicate (i.e. send/receive messages) with the FIX server in the same fashion in Python?

Well, there's no standard Python library for that.
You mentioned QuickFIX, which is a big project that seems maintained and has documentation.
Looking at other third-party libraries, there is a smaller one, though only for Python 2.6 or 2.7, named fixlib and currently hosted on GitHub (the PyPI and Bitbucket versions seem to be abandoned; the GitHub version was last active 6 months ago). Major inconvenience: there is no documentation.
Looking at the code of these two libraries shows they are not exactly "small", and if you don't want to use either of them you will have to rewrite similar code from scratch, so you'd better forget about a "simple and small Python script".
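That said, if the only goal is to see whether the counterparty answers a single Logon (35=A) at all, a bare-socket attempt is possible in principle. Here is a rough, untested sketch; the host, port, CompIDs and credentials are placeholders, fields are separated by the SOH byte \x01, and BodyLength (tag 9) and CheckSum (tag 10) have to be computed over the actual message rather than hard-coded:
import socket
import time

SOH = "\x01"

def build_logon(sender, target, username, password, seq_num=1, heartbeat=30):
    # Body = every field after BodyLength (9), up to but not including CheckSum (10)
    body = SOH.join([
        "35=A",                                                   # MsgType = Logon
        "49=" + sender,                                           # SenderCompID
        "56=" + target,                                           # TargetCompID
        "34=" + str(seq_num),                                     # MsgSeqNum
        "52=" + time.strftime("%Y%m%d-%H:%M:%S", time.gmtime()),  # SendingTime (UTC)
        "98=0",                                                   # EncryptMethod = none
        "108=" + str(heartbeat),                                  # HeartBtInt
        "141=Y",                                                  # ResetSeqNumFlag
        "553=" + username,                                        # Username
        "554=" + password,                                        # Password
    ]) + SOH
    header = "8=FIX.4.4" + SOH + "9=" + str(len(body)) + SOH
    checksum = sum(ord(c) for c in header + body) % 256           # byte sum mod 256
    return header + body + "10=%03d" % checksum + SOH

# Placeholders: use the IP:port, CompIDs and credentials your broker gave you
conn = socket.create_connection(("broker.example.com", 1234), timeout=5)
conn.sendall(build_logon("SENDER", "RECEIVER", "user", "pass").encode("ascii"))
print(repr(conn.recv(4096)))  # hopefully a Logon (35=A) or a Logout (35=5) comes back
conn.close()
Even with a sketch like this, sequence numbers, heartbeats and resend requests show up almost immediately, which is exactly why none of the libraries above stay small.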

If you want to test the FIX protocol over a FIX connection, you can try FIXRobot, which lets you easily write the tests in Python.

Related

Is it possible when using pysnmp to allow any community string on return

I am using Python 3.6.8 with pysnmp 4.4.12, and the device I am polling works fine with easysnmp or snmpwalk/snmpget from the command line. I have figured out the problem: while I am sending community string xxxxxx, the return packets show the community string as public. I changed it temporarily to public to see if that would work, and it did. My question: is there some way to tell pysnmp to ignore the community string on incoming packets?
The original developer Ilya made it clear in many places that PySNMP aims to be standards compliant, so any violation can trigger such issues.
I took a glance at the related files and didn't see any option to skip the community name check.
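For reference, this is roughly what a v2c GET looks like with the pysnmp 4.x hlapi; the host and OID below are placeholders, and the community string is simply set to whatever the device actually uses in its responses ('public' in your case), since there does not appear to be a knob to relax the check:
from pysnmp.hlapi import (getCmd, SnmpEngine, CommunityData,
                          UdpTransportTarget, ContextData,
                          ObjectType, ObjectIdentity)

# sysDescr.0 on a placeholder host; mpModel=1 selects SNMPv2c
error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData('public', mpModel=1),
           UdpTransportTarget(('192.0.2.10', 161)),
           ContextData(),
           ObjectType(ObjectIdentity('1.3.6.1.2.1.1.1.0'))))

if error_indication:
    print(error_indication)
else:
    for name, value in var_binds:
        print('%s = %s' % (name.prettyPrint(), value.prettyPrint()))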

Using an API / WebService in Python instead of C#

I have to use a web service, for which a script on my own web server should make GET requests regularly. There is documentation with multiple C# examples. The following is what should work (I could not get it running on my Windows PC):
https://integration.questback.com/integration.svc
You have created a service.
To test this service, you will need to create a client and use it to call the service. You can do this using the svcutil.exe tool from the command line with the following syntax:
svcutil.exe https://integration.questback.com/Integration.svc?wsdl
This will generate a configuration file and a code file that contains the client class. Add the two files to your client application and use the generated client class to call the Service. For example:
C#
class Test
{
    static void Main()
    {
        QuestBackIntegrationLibraryClient client = new QuestBackIntegrationLibraryClient();
        // Use the 'client' variable to call operations on the service.
        // Always close the client.
        client.Close();
    }
}
Since the server is Linux-based and I don't know any C# or XML, I wanted to ask if there is a way to make this work on a Linux server, preferably with Python (I know this question is quite vague, I'm sorry).
Thank you!
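One option on Linux would be a generic Python SOAP client such as zeep, pointed at the same WSDL that svcutil.exe consumes. A minimal sketch under that assumption; the operation name and its parameter are placeholders, and the real ones come from the WSDL or the C# documentation:
# pip install zeep
from zeep import Client

# zeep reads the WSDL and builds Python proxies for every operation;
# running `python -m zeep <wsdl-url>` on the command line dumps them all.
client = Client("https://integration.questback.com/Integration.svc?wsdl")

# Hypothetical call -- 'GetSomething' and its arguments are placeholders.
# result = client.service.GetSomething(SomeParameter="value")
# print(result)
Depending on the bindings the service uses (e.g. WS-Security), extra zeep configuration may be needed.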

Tableau and R Server for Python

I have recently discovered you can use R within Tableau to return bool, int, long, etc. This works as follows:
install.packages("Rserve")
library(Rserve)
Rserve()
# Should say "Starting Rserve..."
Then in Tableau:
# In Tableau, under 'Help' > 'Settings and Performance' > 'Manage R Connections'
# Server: 127.0.0.1 and Port: 6311
# Make sure that RStudio with Rserve is installed and running prior to connecting from Tableau
However, I would like to do the same thing with Python, so that Python can be used as a script in Tableau (not using Tableau's API in Python). Does anyone know if this is possible? The snippet above was taken from here.
There isn't a Script() call for languages other than R as of Tableau 8.2.
You could try using R as a middleman to invoke Python functions via the rPython or RSPython packages. No idea how performant it would be, but might be worth the hassle if you have a significant Python library that isn't available in R.
As of Tableau 10.1, there is a new package/library called TabPy, which acts similarly to Rserve for R integration with Tableau.
Worth checking this article: https://www.tableau.com/about/blog/2017/1/building-advanced-analytics-applications-tabpy-64916
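As a rough sketch of the Python side (the function name, endpoint name and values are placeholders, assuming the tabpy_client package that ships with TabPy): you deploy a function to the TabPy server, which listens on port 9004 by default, and Tableau then calls it from a calculated field.
# pip install tabpy-client  (the client package that ships with TabPy)
import tabpy_client

# TabPy listens on port 9004 by default
client = tabpy_client.Client('http://localhost:9004/')

def double_values(values):
    # Tableau passes each argument as a list of values for the partition
    return [v * 2 for v in values]

# Publish the function so Tableau can reach it; the name and description are arbitrary
client.deploy('DoubleValues', double_values,
              'Doubles each value in the input list', override=True)
In Tableau, the deployed endpoint can then be queried from a calculated field with something like SCRIPT_REAL("return tabpy.query('DoubleValues', _arg1)['response']", SUM([Sales])).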

Sending messages with Telegram - APIs or CLI?

I would like to be able to send a message to a group chat in Telegram. I want to run a Python script (which performs some operations that already work) and then, if some parameters have certain values, the script should send a message to a group chat through Telegram. I am using Ubuntu and Python 2.7.
I think, if I am not wrong, that I have two ways to do that:
Way One: make the Python script connect to the Telegram APIs directly and send the message (https://core.telegram.org/api).
Way Two: make the Python script call the Telegram's CLI (https://github.com/vysheng/tg), pass some values to this and then the message is sent by the Telegram's CLI.
I think that the first way is longer, so a good idea might be to use Way Two.
In this case I really don't know how to proceed.
I don't know much about shell scripting on Linux, but I tried this:
#!/bin/bash
cd /home/username/tg
echo "msg user#******** messagehere" | ./telegram
sleep 10
echo "quit" | ./telegram
This half-works: it sends the message correctly, but then the process remains open. The second problem is that I have no clue how to call this from Python and how to pass a value to the script. The value I would like to pass is the "messagehere" variable: this would be a 100-200 character message, defined inside the Python script.
Does anyone have any clue about this?
Thanks for the replies; I hope this might be useful for someone else too.
Telegram recently released their new Bot API, which makes sending/receiving messages trivial. I suggest you also take a look at that and see if it fits your needs; it beats wrapping the client library or integrating with their MTProto API.
import urllib
import urllib2
# Generate a bot ID here: https://core.telegram.org/bots#botfather
bot_id = "{YOUR_BOT_ID}"
# Request latest messages
result = urllib2.urlopen("https://api.telegram.org/bot" + bot_id + "/getUpdates").read()
print result
# Send a message to a chat room (chat room ID retrieved from getUpdates)
result = urllib2.urlopen("https://api.telegram.org/bot" + bot_id + "/sendMessage", urllib.urlencode({ "chat_id": 0, "text": 'my message' })).read()
print result
Unfortunately I haven't seen any Python libraries you can interact with directly, but here is a NodeJS equivalent I worked on for reference.
Since version 1.05 you can use the -P option to accept messages from a socket, which is a third option to solve your problem. Sorry that this is not really an answer to your question, but I am not able to comment on your question because I do not have enough reputation.
First create a bash script for telegram called tg.sh:
#!/bin/bash
now=$(date)
to=$1
subject=$2
body=$3
tgpath=/home/youruser/tg
LOGFILE="/home/youruser/tg.log"
cd ${tgpath}
${tgpath}/telegram -k ${tgpath}/tg-server.pub -W <<EOF
msg $to $subject
safe_quit
EOF
echo "$now Recipient=$to Message=$subject" >> ${LOGFILE}
echo "Finished" >> ${LOGFILE}
Then put the script in the same folder as your Python script, and give it execute permission with chmod +x tg.sh
And finally from python, you can do:
import subprocess
subprocess.call(["./tg.sh", "user#****", "message here"])
I'm working with pytg, which can be found here:
A Python package that wraps around Telegram messenger CLI
It works pretty well; I already have a Python bot based on that project.
You can use safe_quit to terminate the connection instead, since it waits until everything is done before closing the connection and terminating the application:
#!/bin/bash
cd /home/username/tg
echo "msg user#******** messagehere\nsafe_quit\n" | ./telegram
use this as a simple script and call it from python code as the other answer suggested.
I would recommend the first option.
Once you are comfortable with generating an AuthKey, you should start to get a handle on the documentation.
To help, I have written a detailed step-by-step guide of how I wrote the AuthKey generation code from scratch here.
It's in vb.net, but the steps should help you do same in python.

How can a CGI server based on CGIHTTPRequestHandler require that a script start its response with headers that include a `content-type`?

Later note: the issues in the original posting below have been largely resolved.
Here's the background: For an introductory comp sci course, students develop html and server-side Python 2.7 scripts using a server provided by the instructors. That server is based on CGIHTTPRequestHandler, like the one at pointlessprogramming. When the students' html and scripts seem correct, they port those files to a remote, slow Apache server. Why support two servers? Well, the initial development using a local server has the benefit of reducing network issues and dependency on the remote, weak machine that is running Apache. Eventually porting to the Apache-running machine has the benefit of publishing their results for others to see.
For the local development to be most useful, the local server should closely resemble the Apache server. Currently there is an important difference: Apache requires that a script start its response with headers that include a content-type; if the script fails to provide such a header, Apache sends the client a 500 error ("Internal Server Error"), which is too generic to help the students, who cannot use the server logs. CGIHTTPRequestHandler imposes no similar requirement. So it is common for a student to write header-free scripts that work with the local server, but get the baffling 500 error after copying the files to the Apache server. It would be helpful to have a version of the local server that checks for a content-type header and gives a good error if there is none.
I seek advice about creating such a server. I am new to Python and to writing servers. Here are the issues that occur to me, but any helpful advice would be appreciated.
Is a content-type header required by the CGI standard? If so, other people might benefit from an answer to the main question here. Also, if so, I despair of finding a way to disable Apache's requirement. Maybe the relevant part of the CGI RFC is section 6.3.1 (CGI Response, Content-Type): "If an entity body is returned, the script MUST supply a Content-Type field in the response."
To make a local server that checks for the content-type header, perhaps I should sub-class CGIHTTPServer.CGIHTTPRequestHandler, to override run_cgi() with a version that issues an error for a missing header. I am looking at CGIHTTPServer.py __version__ = "0.4", which was installed with Python 2.7.3. But run_cgi() does a lot of processing, so it is a little unappealing to copy all its code, just to add a couple calls to a header-checking routine. Is there a better way?
If the answer to (2) is something like "No, overriding run_cgi() is recommended," I anticipate writing a version that invokes the desired script, then checks the script's output for headers before that output is sent to the client. There are apparently two places in the existing run_cgi() where the script is invoked:
3a. When run_cgi() is executed on a non-Unix system, the script is executed using Python's subprocess module. As a result, the standard output from the script will be available as an in-memory string, which I can presumably check for headers before the call to self.wfile.write. Does this sound right?
3b. But when run_cgi() is executed on a *nix system, the script is executed by a forked process. I think the child's stdout will write directly to self.wfile (I'm a little hazy on this), so I see no opportunity for the code in run_cgi() to check the output. Ugh. Any suggestions?
If analyzing the script's output is recommended, is email.parser the standard way to recognize whether there is a content-type header? Is another standard module recommended instead?
Is there a more appropriate forum for asking the main question ("How can a CGI server based on CGIHTTPRequestHandler require...")? It seems odd to ask if there is a better forum for asking programming questions than Stack Overflow, but I guess anything is possible.
Thanks for any help.
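Regarding (3a) and (4): once the script's output has been captured as a string, the header check itself can be small. Below is a rough sketch under that assumption; has_content_type is a hypothetical helper, not part of CGIHTTPServer, and email.parser (available in Python 2.7) does the header parsing:
import email.parser

def has_content_type(cgi_output):
    """Return True if cgi_output begins with a CGI header block that
    contains a Content-Type field (the block ends at the first blank line)."""
    # CGI scripts may terminate header lines with CRLF or bare LF
    for separator in ("\r\n\r\n", "\n\n"):
        if separator in cgi_output:
            header_text, _body = cgi_output.split(separator, 1)
            headers = email.parser.Parser().parsestr(header_text, headersonly=True)
            return headers.get("Content-Type") is not None
    return False  # no blank line at all, so no valid header block

# A header-free student script would fail the check:
print(has_content_type("<html><body>oops</body></html>"))                   # False
print(has_content_type("Content-Type: text/html\r\n\r\n<html>ok</html>"))   # True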
