Sending Data using Python

I have a Python application which analyses data from multiple sources in real time. Once the data is analyzed, the result of the analysis is stored in a database along with a timestamp of when it was analyzed.
I would like to access the most recent result of this program remotely from another computer.
I was thinking about using Python sockets: a server script would run on the main computer which runs the application, and a client script on another computer could then access the data.
Is there a better way of doing this? Or are there any other solutions out there that can address this need?

Your question is very broad.
Most DB servers will provide a method/API to access the data remotely. You can use Python as a client if there is a DBAPI module for your DB that supports remote access over the network. For example, if you are using Postgres you could use the psycopg2 module.
If you are using a simple DB such as SQLite then you might be able to use an ODBC driver. Some alternatives are here.
Edit
MongoDB provides a Python driver, pymongo.
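For illustration, a hypothetical psycopg2 snippet for reading the most recent result remotely; the host, credentials, and table/column names are placeholders, not anything from the question:

import psycopg2

# Connect over the network to the machine that runs the analysis application.
conn = psycopg2.connect(
    host="analysis-server.example.com",   # placeholder host
    dbname="analysis",
    user="reader",
    password="secret",
)
with conn, conn.cursor() as cur:
    # Fetch the latest analysis result by its timestamp.
    cur.execute(
        "SELECT result, analyzed_at FROM results "
        "ORDER BY analyzed_at DESC LIMIT 1"
    )
    print(cur.fetchone())
conn.close()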

In the end Redis was the best solution. Considering the original question, the goal was to be able to send data in real time from one computer to another, and solutions such as Redis or RabbitMQ accomplish this.
With Redis, a server can be set up to publish messages to the network; clients can then subscribe to data channels and receive the messages in a queue.
This library was used as the Python Redis client:
https://pypi.python.org/pypi/redis
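A minimal sketch of that publish/subscribe pattern with the redis package; the host, channel name, and payload below are placeholders:

import json
import redis

# Publisher, running on the machine that does the analysis.
pub = redis.Redis(host="analysis-server.example.com", port=6379)
pub.publish("results", json.dumps({"result": 42, "analyzed_at": "2015-01-01T12:00:00"}))

# Subscriber, running on the remote machine (typically a separate script).
sub = redis.Redis(host="analysis-server.example.com", port=6379).pubsub()
sub.subscribe("results")
for message in sub.listen():               # blocks, yielding messages as they arrive
    if message["type"] == "message":
        print(message["data"])

Note that plain pub/sub drops messages when no subscriber is connected; if every result must be delivered, a Redis list used as a queue (LPUSH/BRPOP) is a common alternative.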

Related

What is the best approach to getting data into S3 for Elasticsearch and RabbitMQ?

In my company we developed a few games; for some of them the events are sent to Elasticsearch and for others to RabbitMQ. We have a local CLI which grabs the data from both, compiles the messages into compressed (gzip) JSON files, after which another CLI converts them to SQL statements and loads them into a local SQL Server. We now want to scale up, but the current setup is painful and nowhere near real time for analysis.
I've recently built an application in Python which I was planning to publish to a Docker container in AWS. The script grabs data from Elasticsearch, compiles it into small compressed JSONs, and publishes them to an S3 bucket. From there the data is ingested into Snowflake for analysis. So far I have been able to get the data in quite quickly, and it looks promising as an alternative.
I was planning to do something similar with RabbitMQ, but I wanted to find an even better alternative which would allow this ingestion process to happen seamlessly and help me avoid having to implement all sorts of exception handling within the Python code.
I've researched a bit and found there might be a way to link RabbitMQ to Amazon Kinesis Firehose. My question would be: How would I send the stream from RabbitMQ to Kinesis?
For Elasticsearch, what is the best way to achieve this? I've read about the logstash plugin for S3 (https://www.elastic.co/guide/en/logstash/current/plugins-outputs-s3.html) and about logstash plugin for kinesis (https://www.elastic.co/guide/en/logstash/current/plugins-inputs-kinesis.html). Which approach would be ideal for real-time ingestion?
My answer will be mostly theoretical; it needs to be tested in the real world and adapted to your use case.
For near-real-time behaviour, I would use Logstash with:
an elasticsearch input and a short cron schedule (this post can help: https://serverfault.com/questions/946237/logstashs-elasticsearch-input-plugin-should-be-used-to-output-to-elasticsearch)
an S3 output (it supports gzip)
maybe a jdbc output to your DB
the rabbitmq output plugin
You can create a more scalable architecture by outputting to RabbitMQ and using other pipelines that listen to the queue and execute other tasks.
Logstash: ES -> RabbitMQ
Logstash: RabbitMQ -> SQL
Logstash: RabbitMQ -> Kinesis
Logstash: RabbitMQ -> AWS
etc.
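If you end up bridging RabbitMQ to Kinesis Firehose directly from Python rather than through a Logstash pipeline, a rough sketch with pika and boto3 could look like this; the queue name, delivery stream name, region, and host are all assumptions:

import boto3
import pika

QUEUE = "game-events"                 # hypothetical queue name
STREAM = "game-events-firehose"       # hypothetical Firehose delivery stream

firehose = boto3.client("firehose", region_name="eu-west-1")

def on_message(channel, method, properties, body):
    # Forward each RabbitMQ message to Firehose as one newline-delimited record.
    firehose.put_record(
        DeliveryStreamName=STREAM,
        Record={"Data": body + b"\n"},
    )
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue=QUEUE, durable=True)
channel.basic_consume(queue=QUEUE, on_message_callback=on_message)
channel.start_consuming()

For real volume you would batch records with put_record_batch instead of calling put_record per message, but the shape of the bridge stays the same.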

How to create multiple connections to Advantage Database Server at the same time?

I am trying to integrate alongside an existing application that uses ADS as its database.
When I connect my integration app using the code below, it connects fine until I try to run the original application at the same time. It seems to only allow one connection; my application seems to hold the connection and block all others. Yet I can have multiple instances of the original application running concurrently with no issue, which leads me to believe it is the way in which I am trying to connect from my C# app. The error I'm getting when the original app is open and I then try to connect with my integration app is:
Error 7077: The Advantage Data Dictionary cannot be opened. axServerConnect
Does anyone have any suggestions? How can I create multiple connections at the same time?
Python code:
import adsdb

conn = adsdb.connect(DataSource=str(dbpath[0]), ServerType='local',
                     UserID=config.ADS_USERNAME, password=config.ADS_PASS)
According to this page in the ADS documentation, you can use connection pooling by providing pooling=True in your client connection arguments.
I think that using this approach you will be able to open multiple connections at the same time.
Edit
After checking the adsdb Python script, I think it does not support connection pooling. You should be able to set up connection pooling in your C# application instead.

SQLite over a network

I am creating a Python application that uses embedded SQLite databases. The programme creates the db files and they are on a shared network drive. At this point there will be no more than 5 computers on the network running the programme.
My initial thought was to ask the user on startup if they are the server or a client. If they are the server, they create the database. If they are a client, they must find a server instance on the network. One way, I suppose, is to send all DB commands from the client to the server and have the server apply them to the database. Will that solve the shared-DB issue?
Alternatively, is there some way to create a SQLite "server". I presume this would be the quicker option if available?
Note: I can't use a server engine such as MySQL or PostgreSQL at this point, but I am implementing this using an ORM, so when that becomes viable it should be easy to change over.
Here's a "SQLite server", http://sqliteserver.xhost.ro/, but it looks like it has not been maintained for years.
SQLite supports concurrency itself: multiple processes can read data at the same time, but only one can write to it. Also, when a process is writing it locks the whole database file, possibly for a few seconds, and the others have to wait in the meantime, according to the official documentation.
I guess this is sufficient for the 5 processes in your scenario; you just need to write code to handle the waiting.
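A very rough sketch of the command-forwarding idea from the question, assuming a trivial line-based protocol with one SQL statement per connection; the path, port, and wire format are all made up, and a real version would need authentication, parameter binding, and proper error handling:

import socket
import socketserver
import sqlite3

DB_PATH = "shared.db"      # hypothetical: path to the database on the server machine
PORT = 5050                # hypothetical port

class SQLHandler(socketserver.StreamRequestHandler):
    """Executes one SQL statement per connection against the local SQLite file."""
    def handle(self):
        statement = self.rfile.readline().decode("utf-8").strip()
        conn = sqlite3.connect(DB_PATH)
        try:
            rows = conn.execute(statement).fetchall()
            conn.commit()
            self.wfile.write(repr(rows).encode("utf-8") + b"\n")
        finally:
            conn.close()

def run_sql(host, statement):
    """Client helper: send a statement to the server and return the raw reply."""
    with socket.create_connection((host, PORT)) as sock:
        sock.sendall(statement.encode("utf-8") + b"\n")
        return sock.makefile().readline()

if __name__ == "__main__":
    # Run this on the machine that owns the database file.
    with socketserver.TCPServer(("0.0.0.0", PORT), SQLHandler) as server:
        server.serve_forever()

Only the server process touches the SQLite file, so the network share and its locking quirks stay out of the picture.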

Common File on Server

I created an app. A copy of it will run on two different computers, but a SQLite database file needs to be shared by these two apps; that is, both computers will be able to read and write this database file. For this purpose I will put the file in a folder on our server, which both computers are connected to. How can I get the full path of this file in Python? Or can you suggest another way, as easy as possible, of doing this task?
Sqlite over a network share [stackoverflow.com]
I'd recommend against putting database files on a network drive. The network filesystem usually isn't robust enough to handle the random updates a DB makes.
As a previous answer suggested, you'd be better off creating a simple client/server model. A server process has sole access to the sqlite db, clients send requests to the server. Don't pass the sqlite db file back and forth.
You might want to use a full network DB such as MySQL or PostgreSQL.
I would have a Python server program running on the server that has the database file (using the socket library). Then have the two clients connect to the server program (again, using the socket library) and receive the database file. You can find some examples of socket programming at http://www.prasannatech.net/2008/07/socket-programming-tutorial.html
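As for the path question itself, a UNC path to the share (or the drive letter it is mapped to) can be handed straight to sqlite3; the server, share, and file names below are hypothetical:

import sqlite3
from pathlib import Path

# Hypothetical share: \\fileserver\shared (could also be a mapped drive such as Z:\shared)
db_path = Path(r"\\fileserver\shared") / "app.db"

conn = sqlite3.connect(str(db_path))
conn.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)")
conn.commit()
conn.close()

That said, as the answers above point out, two writers on a network share is exactly the situation where SQLite file locking gets unreliable, so the client/server approach is the safer route.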

Writing to a stream on a PostgreSQL trigger using Python

I would like to write a trigger for a PostgreSQL database which, on insertions, would notify a node.js server which would send some data to connected clients.
Currently, my thought is to write a Python row insert trigger for the database which would write data to some file which would then be read by the node.js server.
However, this would be slow, as disk access would be involved. What would be a better way to connect these two applications?
Have you looked at the LISTEN/NOTIFY functionality?
http://www.postgresql.org/docs/9.0/interactive/sql-notify.html
Also, you will want to test the different options against your needs instead of assuming one is not fast enough. Maybe your Python approach will work just fine.
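A small sketch of the LISTEN/NOTIFY route from Python with psycopg2; the DSN, the channel name, and the assumption that an insert trigger calls pg_notify('new_rows', ...) are all placeholders, and the node.js server could listen the same way through its own Postgres driver:

import select
import psycopg2
import psycopg2.extensions

conn = psycopg2.connect("dbname=mydb user=me")   # hypothetical DSN
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

cur = conn.cursor()
cur.execute("LISTEN new_rows;")   # the channel the insert trigger would notify

while True:
    # Wait up to five seconds for the server to push a notification.
    if select.select([conn], [], [], 5) == ([], [], []):
        continue
    conn.poll()
    while conn.notifies:
        notify = conn.notifies.pop(0)
        print("new row:", notify.channel, notify.payload)

No table polling and no intermediate file is involved; the payload passed to pg_notify arrives directly in notify.payload.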
