Sharing a connection between scripts for a Node-RED application - Python

I've been using Node-RED to trigger communication to a Philips Hue gateway, and I have succeeded in triggering it the way I want. The issue is that I need the action to take place more quickly than in my current implementation, and the only reason there is a delay is that the script needs to establish a connection first. I've looked online, but there doesn't seem to be a simple way to send this sort of connection descriptor across Python scripts. I want to share the descriptor because then I could have one script that connects to the gateway and runs an empty while loop, and a second script that takes over the connection any time I run it and performs its actions. Apologies if this was answered before, but I'm not well versed in Python and a lot of the solutions were not making sense to me. For example, it doesn't seem that Redis would be able to solve my issue.
Thanks

As per @hardillb's comment, the easiest way to control the Philips Hue is to use one of the existing Node-RED Hue nodes:
https://flows.nodered.org/node/node-red-contrib-node-hue
https://flows.nodered.org/node/node-red-contrib-huemagic
If you have special requirements that call for the Hue Python SDK, it is possible to use the node-red-contrib-pythonshell node to run a Python script that stays alive (using the node's "Continuous" option) and have Node-RED send messages to the script (using the "Stdin" option). There are some simple examples in the node's test directory: https://github.com/namgk/node-red-contrib-pythonshell/tree/master/test.
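A minimal sketch of such a long-running script, assuming the third-party phue library and a hypothetical bridge address; the connection is made once at startup, so each message arriving on stdin is handled without the connection delay:

    #!/usr/bin/env python
    # Long-running script for node-red-contrib-pythonshell
    # ("Continuous" + "Stdin" options).
    import sys
    from phue import Bridge

    bridge = Bridge('192.168.1.2')   # hypothetical bridge IP; connect once
    bridge.connect()

    for line in sys.stdin:           # one message per line from Node-RED
        cmd = line.strip()
        if cmd == 'on':
            bridge.set_light(1, 'on', True)
        elif cmd == 'off':
            bridge.set_light(1, 'on', False)
        sys.stdout.flush()           # make any output visible to Node-RED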


SaltStack: manage and query a tally/threshold via events and salt-call?

I have over 100 web server instances running a PHP application using APC, and we occasionally (on the order of once per week across the entire fleet) see a corruption of one of the caches which results in a distinctive error log message.
Once this occurs, the application is dead on that node and any transactions routed to it will fail.
I've written a simple wrapper around tail -F which can spot the pattern any time it appears in the log file and evaluate a shell command (using bash eval) to react. I have this using the salt-call command from SaltStack to trigger processing of a custom module which shuts down the nginx server, warms (refreshes) the cache, and, of course, restarts the web server. (Actually I have two forms of this wrapper, bash and Python.)
This is fine, and the frequency of events is such that it's unlikely to be an issue. However my boss is, quite reasonably, concerned about a common-mode failure pattern ... that the regular expression might appear in too many of these logs at once and take down the entire site.
My first thought would be to wrap my salt-call in a Redis check (we already have a Redis infrastructure used for caching and certain other data structures). That would be implemented as an integer, with an expiration. The check would call INCR, check the result, and sleep if more than N were returned (or if the Redis server were unreachable). If the result were below the threshold, then salt-call would be dispatched and a decrement would be called after the server is back up and running. (Expiration of the Redis key would kill off any stale increments after perhaps a day or even a few hours ... our alerting system will already have notified us of down servers and our response time is more than adequate for such time frames.)
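A minimal sketch of that guard, assuming the redis-py client; the key name, Redis host, threshold, and the salt-call target module are all hypothetical:

    import time
    import subprocess
    import redis  # third-party redis-py client

    KEY = 'apc:restart:tally'    # hypothetical key name
    THRESHOLD = 3                # max concurrent restarts fleet-wide

    r = redis.Redis(host='redis.internal', port=6379)

    def guarded_restart():
        try:
            count = r.incr(KEY)
            r.expire(KEY, 3600)  # stale increments die off after an hour
        except redis.ConnectionError:
            time.sleep(60)       # Redis unreachable: back off, don't restart
            return
        if count > THRESHOLD:
            r.decr(KEY)
            time.sleep(60)       # too many nodes restarting at once; wait
            return
        try:
            subprocess.check_call(['salt-call', 'mymodule.recover'])
        finally:
            r.decr(KEY)          # release the slot once the node is back up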
However, I was reading about the SaltStack event handling features and wondering if it would be better to use that instead. (Advantage: the nodes have neither the redis-cli command-line tool nor the Python Redis libraries, but, obviously, salt-call is already there with its requisite support.) So using something in Salt would minimize the need to add additional packages and dependencies to these systems. (Alternatively, I could just write all the Redis handling as a separate PHP command-line utility and have my shell script call that.)
Is there a HOWTO for writing simple SaltStack modules? The docs seem to plunge deeply into reference details without any orientation. Even some suggestions about which terms to search for would be helpful (because their use of terms like pillars, grains, minions, and so on seems somewhat opaque).
The main doc for writing a Salt module is here: http://docs.saltstack.com/en/latest/ref/modules/index.html
There are many modules shipped with Salt that might be helpful for inspiration. You can find them here: https://github.com/saltstack/salt/tree/develop/salt/modules
One thing to keep in mind is that the Salt Minion doesn't do anything unless you tell it to do something. So you could create a module that checks for the error pattern you mention, but you'd need to add it to the Salt Scheduler or cron to make sure it gets run frequently.
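For orientation, an execution module is just a Python file of plain functions placed in the master's _modules directory and synced to the minions. A minimal sketch of a module along the lines described in the question; the module name, log path, pattern, and service names are hypothetical:

    # _modules/apc_watch.py -- sync with: salt '*' saltutil.sync_modules
    # then run it periodically via the Salt Scheduler or cron.

    LOG_FILE = '/var/log/app/error.log'   # hypothetical log path
    PATTERN = 'apc corruption'            # hypothetical error signature

    def check_and_recover():
        '''Restart the stack if the corruption pattern appears in the log.'''
        with open(LOG_FILE) as f:
            if not any(PATTERN in line for line in f):
                return 'ok'
        # __salt__ is injected by the loader; service.* are built-in modules
        __salt__['service.stop']('nginx')
        __salt__['service.restart']('php-fpm')
        __salt__['service.start']('nginx')
        return 'recovered'

You would then invoke it on a minion with salt-call apc_watch.check_and_recover.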
If you need more help you'll find helpful people on IRC in #salt on freenode.

Telnet Connection Pooling

Background: I'm currently trying to develop a monitoring system at my job. All the nodes that need to be monitored are accessible via Telnet. Once a Telnet connection has been made, the system needs to execute a couple of commands on the node and process the output.
My problem is that both creating a new connection and running the commands take time. It takes approximately 10 s to get a connection up (the TCP connection is established instantly, but some commands need to be run to prepare the connection for use), and an almost equal amount of time to run each required command.
So, I need to come up with a solution that allows me to execute 10-20 of these 10s long commands on the nodes, without collectively taking more than 1min. I was thinking of creating a sort of connection pooler, which I could send the commands to and then it could execute them in parallel, dividing them over available Telnet sessions. I tried to find something similar that I could use (or even just look at to gain some understanding), but I am unable to find anything.
I'm developing on Ubuntu with Python. Any help would be appreciated!
Edit (update info):
@Aya, @Thomas: A bit more info. I already have a solution in Python that is working; however, it is getting difficult to manage the code. Currently I'm using the same approach that you advised, with one thread per connection. The problem is that there is a 10 s delay each time a connection is made to a node, and I need to make at least 10 connections per node per iteration. The time limit for each iteration is 60 s, so making a new connection each time is not feasible. The system needs to open 10 connections per node at startup and maintain those connections.
What I'm looking for is examples of a good architecture for something like this.
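A minimal sketch of the pooling approach described above: open N Telnet sessions per node at startup, run the slow preparation once per session, then dispatch commands to idle sessions through a queue. It assumes the stdlib telnetlib (removed in Python 3.13) and a hypothetical host, prompt, and command:

    import telnetlib
    import threading
    import queue

    class TelnetPool:
        def __init__(self, host, size=10):
            self.tasks = queue.Queue()
            for _ in range(size):
                conn = telnetlib.Telnet(host)
                # ... run the ~10 s preparation commands here, once ...
                threading.Thread(target=self._worker, args=(conn,),
                                 daemon=True).start()

        def _worker(self, conn):
            while True:
                command, reply = self.tasks.get()
                conn.write(command.encode('ascii') + b'\n')
                reply.put(conn.read_until(b'> '))   # hypothetical prompt
                self.tasks.task_done()

        def run(self, command):
            reply = queue.Queue()
            self.tasks.put((command, reply))
            return reply.get()      # blocks until a worker finishes

    pool = TelnetPool('node1.example.com')
    print(pool.run('show status'))   # hypothetical command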

How to build Twisted servers which are able to do hot code swap in Python?

I've developed a set of audio streaming servers, all of them using Twisted and written in Python, of course. They work, but one problem keeps troubling me: when I find bugs in a running server, or want to add something to it, I need to stop it and start it again. Unlike HTTP servers, which are okay to restart at any time, this is not okay for audio streaming servers: once I restart a streaming server, my users experience a disconnection.
I did try setting up a manhole (an SSH service for Twisted servers; you can log in and type Python code in the console), connecting to the console, and reloading Python modules on the fly. It works sometimes, but it is hard to control. You never know how many instances of the old class are still alive in the server, some of them might be hard to reach, and the relationships between classes can be very complex. Also, while it works in some situations, sometimes you really need to restart the server: for example, if you are running it with the selector reactor and want to switch to the epoll reactor, you have to restart it. Likewise, when memory usage grows too high, you have to restart.
To build such a system, an idea came to mind: is it possible to hand over connections and data from one process to another? For example:
We have a server named Broadcasting; the running instance is rev. 123, and we want to replace it with rev. 124:
Broadcasting rev.123 is running....
Startup Broadcasting rev.124 ....
Broadcasting rev.124 is stand by
Hand over connections from instance of rev.123 to instance of rev.124
Stop Broadcasting rev. 123 instance
Is this possible? I don't know whether the lifetime of a socket handle is bound to the process that created it; I assumed sockets created by a process are closed when the creator process is killed, but I'm not sure. If it is possible, are there any guidelines or articles on designing this kind of hot code swapping mechanism? And has anything that achieves this already been done for Twisted?
Thanks.
I gave a talk about this at PyCon 2004. There's also some effort underway to add more functionality to Twisted itself to help with this.
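For reference, the underlying mechanism for such a handover is file descriptor passing over a Unix domain socket (SCM_RIGHTS). A minimal sketch using the socket.send_fds/recv_fds helpers available since Python 3.9; the socket path is hypothetical:

    import socket

    # Old process: hand a connected client socket to the new process.
    def send_connection(client_sock):
        ctrl = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        ctrl.connect('/tmp/handover.sock')
        socket.send_fds(ctrl, [b'fd'], [client_sock.fileno()])
        ctrl.close()

    # New process: receive the descriptor and keep serving the client.
    def recv_connection(listener):
        ctrl, _ = listener.accept()
        _, fds, _, _ = socket.recv_fds(ctrl, 1024, 1)
        ctrl.close()
        # Same TCP connection, now owned by the new process.
        return socket.socket(fileno=fds[0])

The descriptor stays valid as long as some process holds it open, so the rev. 124 instance can adopt the connections before the rev. 123 instance exits.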

Parallel SSH in Python

I wonder what the best way is to handle parallel SSH connections in Python.
I need to open several SSH connections, keep them in the background, and feed them commands interactively or in timed batches.
Is it possible to do this with the paramiko library? It would be nice not to spawn a separate SSH process for each connection.
Thanks.
It might be worth checking out what options are available in Twisted. For example, the Twisted Conch page reports:
http://twistedmatrix.com/users/z3p/files/conch-talk.html
Unlike OpenSSH, the Conch server does not fork a process for each incoming connection. Instead, it uses the Twisted reactor to multiplex the connections.
Yes, you can do this with paramiko.
If you're connecting to one server, you can run multiple channels through a single connection. If you're connecting to multiple servers, you can start multiple connections in separate threads. No need to manage multiple processes, although you could substitute the multiprocessing module for the threading module and have the same effect.
I haven't looked into Twisted Conch in a while, but it looks like it's getting updates again, which is nice. I couldn't give you a good feature comparison between the two, but I find paramiko easier to get going with. It takes a little more effort to get into Twisted, but it could be well worth it if you're doing other network programming.
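A minimal sketch of the threaded, one-connection-per-server approach; the host names, user, and command are hypothetical, and key-based authentication is assumed:

    import threading
    import paramiko

    def run(host, command):
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username='admin')   # assumes key-based auth
        stdin, stdout, stderr = client.exec_command(command)
        print(host, stdout.read().decode())
        client.close()

    threads = [threading.Thread(target=run, args=(h, 'uptime'))
               for h in ['host1', 'host2', 'host3']]
    for t in threads:
        t.start()
    for t in threads:
        t.join()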
You can simply use subprocess.Popen for that purpose, without any problems.
However, you might want to simply install cronjobs on the remote machines. :-)
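A sketch of the subprocess approach, spawning one ssh process per host and waiting for all of them; hosts and command are hypothetical:

    import subprocess

    hosts = ['host1', 'host2', 'host3']
    procs = [subprocess.Popen(['ssh', h, 'uptime'],
                              stdout=subprocess.PIPE) for h in hosts]
    for h, p in zip(hosts, procs):
        out, _ = p.communicate()
        print(h, out.decode())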
Reading the paramiko API docs, it looks like it is possible to open one SSH connection and multiplex as many channels on top of it as desired. Common SSH clients (OpenSSH) often do this automatically behind the scenes if a connection to that host is already open.
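A sketch of that multiplexing with paramiko: several exec sessions share one Transport, i.e. one TCP connection. The hostname, user, and commands are hypothetical:

    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect('server.example.com', username='admin')

    transport = client.get_transport()
    channels = []
    for cmd in ['uptime', 'df -h', 'free -m']:
        chan = transport.open_session()   # new channel, same connection
        chan.exec_command(cmd)
        channels.append(chan)

    for chan in channels:
        print(chan.recv(4096).decode())
        chan.close()
    client.close()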
I've tried clusterssh, and I don't like the multiwindow model. Too confusing in the common case when everything works.
I've tried pssh, and it has a few problems with quotation escaping and password prompting.
The best I've used is dsh:
Description: dancer's shell, or distributed shell
Executes the specified command on a group of computers using remote shell methods such as rsh or ssh.
dsh can parallelise job submission using several algorithms, such as a fan-out method, opening as many connections as possible, or using a window of connections at one time.
It also supports an "interactive mode" for interactive maintenance of remote hosts.
This tool is handy for administration of PC clusters and multiple hosts.
It's very flexible in scheduling and topology: you can request something close to a calling tree if need be, but the default is a simple topology of one command node to many leaf nodes.
http://www.netfort.gr.jp/~dancer/software/dsh.html
This might not be relevant to your question, but there are tools like pssh, clusterssh, etc. that can spawn connections in parallel. You can couple Expect with pssh to control them, too.
