Anyone tried Cluster automation with WLST? - python

Most production environments eventually need an automation script for clusters. Whenever a site is enhanced, a new cluster has to be added to the existing domain; when a physical site is decommissioned, its cluster has to be removed from the domain. There is also the possibility of "growth" of a cluster (adding managed servers to it). Finally, decommissioning machines requires removing servers from a cluster.
Menu
1. Add a Cluster
2. Del a Cluster
3. Add a server to Cluster
4. Del a server from Cluster
Please share your thoughts and suggestions...
Thanks in advance.

I'm actually doing exactly that and it works fine.
You'll have to add the initial
edit()
startEdit()
and to save
save()
activate(block='true')
as well as exception handling, but the functions are pretty simple:
Add a server to a cluster:
managedServer = create(ServerName, 'Server')   # register a new managed server
managedServer.setListenPort(ListenPort)
managedServer.setCluster(Clustername)          # Clustername/Machinename hold MBean references,
managedServer.setMachine(Machinename)          # e.g. obtained via getMBean('/Clusters/MyCluster')
Delete a server from a cluster (and, optionally, the server itself):
serverMBean = getMBean("Servers/"+ServerName)
serverMBean.setCluster(None)    # detach the server from its cluster
serverMBean.setMachine(None)    # detach it from its machine
delete(ServerName,'Server')     # optional: remove the server entirely
Add a cluster (you can also use the same method as creating a server -> create(name, 'Cluster')):
cd('/')
cmo.createCluster('Cluster-0')           # create the cluster at the domain root
cd('/Clusters/Cluster-0')                # navigate into the new cluster
cmo.setClusterMessagingMode('unicast')   # unicast avoids multicast network setup
...
Deleting a cluster works the same way as deleting a server; you should shut its servers down first, though.
In general, you can use the admin console to perform the desired actions, and the Record button at the top generates a WLST script that does exactly what you did in the console.
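To tie the snippets above together, here is a minimal sketch of the "add a server to a cluster" menu option, wrapped in an edit session with basic exception handling. The function name and the getMBean paths are placeholders for illustration, and it assumes you are already connected to the admin server via connect():
def add_server_to_cluster(serverName, listenPort, clusterName, machineName):
    edit()
    startEdit()
    try:
        managedServer = create(serverName, 'Server')
        managedServer.setListenPort(listenPort)
        # setCluster/setMachine expect MBean references, not bare names
        managedServer.setCluster(getMBean('/Clusters/' + clusterName))
        managedServer.setMachine(getMBean('/Machines/' + machineName))
        save()
        activate(block='true')
    except Exception, e:
        # WLST runs on Jython 2.x, hence the old except/print syntax
        print 'Failed to add server, reverting changes:', e
        cancelEdit('y')
The other menu options follow the same pattern: open an edit session, make the change, then save and activate (or cancel on error).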

Related

Use of ELK with Python

The project that I am working on is a bit confidential, but I will try to explain my issues and be as clear as possible because I need your opinion.
Project:
They asked me to set up a local ELK environment and to use Python scripts to communicate with this stack (ELK): to store data, retrieve it, analyse it, and visualise it thanks to Kibana; finally, there is decision making based on that data (AI). So as you can see, it is a Data Engineering project with some AI for the decision-making process. The issues that I am facing are:
I don't know how to use Python to communicate with the stack, I didn't find resources about it
Since the data is confidential, how can I ensure a high level of security?
How many instances to use?
I am lost because I am new to ELK and my team is not Dev oriented
I am new to ELK, so please any advice would be really helpful!
I don't know how to use Python to communicate with the stack, I didn't find resources about it
To learn how to interact with your stack, use the official Python client library.
You can install it with pip3 install elasticsearch, and the following links contain a wealth of tutorials on almost anything you would need to do:
https://kb.objectrocket.com/category/elasticsearch?filter=python
Suggest you start with these two:
https://kb.objectrocket.com/elasticsearch/how-to-parse-lines-in-a-text-file-and-index-as-elasticsearch-documents-using-python-641
https://kb.objectrocket.com/elasticsearch/how-to-query-elasticsearch-documents-in-python-268
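For a first feel of the client, here is a minimal sketch. The host, index name, and document fields are made up for illustration, and it assumes the 8.x client against an unsecured local node:
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # hypothetical local node

# Store a document.
es.index(index="demo-index", id=1, document={"sensor": "s1", "value": 42})

# Fetch it back by id.
print(es.get(index="demo-index", id=1)["_source"])

# Run a simple match query.
hits = es.search(index="demo-index", query={"match": {"sensor": "s1"}})
print(hits["hits"]["hits"])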
Since the data is confidential, how can I ensure a high level of security?
You can mask the data or restrict index access.
https://www.elastic.co/guide/en/elasticsearch/reference/current/authorization.html
https://nl.devoteam.com/expert-view/field-level-security-and-data-masking-in-elasticsearch/
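As a rough illustration of restricting index access, a role that only exposes selected fields could be created through the security API like this. The role name, index pattern, field names, and credentials are all hypothetical, and this assumes security is enabled on the cluster:
import requests

ES_URL = "https://localhost:9200"
AUTH = ("elastic", "changeme")  # hypothetical credentials

role = {
    "indices": [{
        "names": ["confidential-*"],       # index pattern the role applies to
        "privileges": ["read"],            # read-only access
        # field-level security: all fields other than these stay hidden
        "field_security": {"grant": ["timestamp", "category"]},
    }]
}

resp = requests.put(ES_URL + "/_security/role/analyst_readonly",
                    json=role, auth=AUTH,
                    verify=False)  # verify=False only for a local test certificate
print(resp.json())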
How many instances to use?
I am lost because I am new to ELK and my team is not Dev oriented
I suggest you start with 1 Elasticsearch node; if you're on AWS, use a t3a.large or equivalent and run Elasticsearch, Kibana, and Logstash all on the same machine.
For setting it up: https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-stack-docker.html#run-docker-secure
If you want to use Python as your integration tool for Elasticsearch, you can use the elasticsearch Python client.
The other option is to use Python to produce the results and save them to a log file, or insert them into a database, and then let Logstash pick your data up from there.
As for security, ELK has good coverage, from API authorization and user authentication to cluster security; see Secure the Elastic Stack.
I just use 1 instance, but feel free to separate Elasticsearch, Kibana, and Logstash (if you use it), or use Docker to separate them.
Based on my experience, if you are going to load a lot of data in a short time, it is wise to separate them so the processes don't interfere with each other.
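If you take the log-file route mentioned above, the Python side can stay very small: append one JSON object per line and point a Logstash file input (with a json codec) at it. The path and fields below are placeholders:
import json

LOG_PATH = "/var/log/myapp/results.log"  # hypothetical path watched by Logstash

def log_result(result):
    # One JSON object per line ("JSON Lines") is trivial for Logstash to parse.
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(result) + "\n")

log_result({"sensor": "s1", "value": 42})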

What's the optimal way to store image data temporarily in a containerized website?

I'm currently working on a website where I want the user to upload one or more images; my Flask backend will make some changes to these pictures and then return them to the front end.
Where do I optimally save these images temporarily, especially if there is more than one user on my website at the same time (I'm planning on containerizing the website)? Is it safe for me to save the images in the folder of the website, or do I need e.g. a database for that?
You should use a database, or external object storage like Amazon S3.
I say this for a couple of reasons:
Accidents do happen. Say the client does an HTTP POST, gets a URL back, and does an HTTP GET to retrieve the result. But in the meantime, the container restarts (because the system crashed; your cloud instance got terminated; you restarted the container to upgrade its image; the application failed); the container-temporary filesystem will get lost.
A worker can run in a separate container. It's very reasonable to structure this application as a front-end Web server, that pushes messages into a job queue, and then a back-end worker picks up messages out of that queue to process the images. The main server and the worker will have separate container-local filesystems.
You might want to scale up the parts of this. You can easily run multiple containers from the same image; they'll each have separate container-local filesystems, and you won't directly control which replica a request goes to, so every container needs access to the same underlying storage.
...and it might not be on the same host. In particular, cluster technologies like Kubernetes or Docker Swarm make it reasonably straightforward to run container-based applications spread across multiple systems; sharing files between hosts isn't straightforward, even in these environments. (Most of the Kubernetes Volume types that are easy to get aren't usable across multiple hosts, unless you set up a separate NFS server.)
That set of constraints would imply trying to avoid even named volumes as much as you can. It makes sense to use volumes for the underlying storage for your database, and it can make sense to use Docker bind mounts to inject configuration files or get log files out, but ideally your container doesn't really use its local filesystem at all and doesn't care how many copies of itself are running.
(Do not rely on Docker's behavior of populating a named volume on first use. There are three big problems with it: it is on first use only, so if you update the underlying image, the volume won't get updated; it only works with Docker named volumes and not other options like bind-mounts; and it only works in Docker proper and not in Kubernetes.)
Other decisions are possible given other sets of constraints. If you're absolutely sure you will never ever want to run this application spread across multiple nodes, Docker volumes or bind mounts might make sense. I'd still avoid the container-temporary filesystem.
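For the object-storage option, a rough boto3 sketch might look like this. The bucket name and key scheme are made up, and credentials are assumed to come from the environment or an instance role:
import io
import uuid
import boto3

s3 = boto3.client("s3")
BUCKET = "my-image-scratch-bucket"  # hypothetical bucket

def store_processed_image(image_bytes):
    # Random keys so concurrent users can never collide.
    key = "processed/%s.png" % uuid.uuid4()
    s3.upload_fileobj(io.BytesIO(image_bytes), BUCKET, key)
    # Hand back a time-limited download URL instead of a local file path.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=3600,  # link valid for one hour
    )
An S3 lifecycle rule on the bucket can then expire the objects automatically, so the "temporary" storage cleans itself up.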

Memcached AWS Cluster not keeping the key in the same node

I have created a Memcached cluster in the ElastiCache tool of AWS.
My program sets keys with some data in the cache on every call, and every time I call the server it updates the data. However, while testing with the cluster I found that it seems to change the node where the key is located, or to erase it; the moment it changes the node or erases the key, I lose my previous information. As I'm only calling one endpoint for the whole cluster, shouldn't it keep the key consistent across the cluster, and not delete the key's content or recreate the key somewhere else?
Is there any configuration parameter of memcached cluster to force it not to change the reference node for a key?
Right now I'm using the default configuration parameters of the AWS file default.memcached1.4, and I took a look at the configuration parameters at http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/ParameterGroups.Memcached.html, but I don't find any information with tips on how to solve this issue.
(PS: when I point my program directly at a specific node, everything works fine.)
That is the way it's supposed to be.
The following diagram illustrates a typical Memcached and a typical Redis cluster. Memcached clusters contain from 1 to 20 nodes across which you can horizontally partition your data.
From http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Clusters.html
The Django documentation says something similar:
One excellent feature of Memcached is its ability to share a cache over multiple servers. This means you can run Memcached daemons on multiple machines, and the program will treat the group of machines as a single cache, without the need to duplicate cache values on each machine.
In other words, you cannot directly request data from any given node in the cluster. You have to let Django's cache API figure out for you how to retrieve the data.
With Redis the behaviour is the opposite: once you write to the cluster, you can query any node in the cluster for the data, because it will be replicated to them all. Whereas in Memcached, it's sharded.
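Concretely, on the Django side you list the same set of nodes everywhere and let the client shard keys across them. The endpoints below are placeholders; with ElastiCache auto discovery you would instead point AWS's cluster client at the configuration endpoint:
# settings.py (node endpoints are hypothetical)
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.PyMemcacheCache",
        # Every application server must list the same nodes in the same order,
        # so a given key always hashes to the same shard.
        "LOCATION": [
            "node-0001.xxxxxx.use1.cache.amazonaws.com:11211",
            "node-0002.xxxxxx.use1.cache.amazonaws.com:11211",
        ],
    }
}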

Unable to scale up Cassandra using latest DataStax Python driver

We are using multiple (6) DataStax Cluster instances to connect to Cassandra from Python. We pool these connections to perform operations; each operation is independent of the others.
It works fine for a small number of operations, but once I try to scale up I get the following errors:
NoHostAvailable: ('Unable to complete the operation against any hosts', {<Host: 127.0.0.1>: ConnectionException('Pool is shutdown',)})
and sometimes the following warning:
WARNING Heartbeat failed for connection (140414695068880) to 127.0.0.1
I tried changing some cluster object parameters but it did not help.
The following is the keyspace configuration I am using in Cassandra:
'class': 'SimpleStrategy',
'replication_factor': '1'
I am using the latest versions of Cassandra and of the DataStax driver for Python. There is only one Cassandra node.
EDIT: More details:
The multiple cluster instances are in different processes (processes are created using the Python multiprocessing module) - one cluster instance per process. Let's call these processes Cassandra-Processes (CP). There are a bunch of other processes that do some computation and need to look up a Cassandra DB, and write to it, occasionally. The current design is that each of these processes is mapped to one CP, and all DB reads/writes the process does are done via this mapped CP. 'What' exactly is to be read/written is passed into a queue (again from the multiprocessing library), which the mapped CP reads.
We observe that this setup runs for quite some time - and then suddenly Cassandra begins erroring out.
It's unclear why you're using six cluster instances against a single Cassandra node. Generally, you should use one Cluster instance per application (per remote cluster). You can read about general design considerations for Cassandra drivers here
If you're looking to "scale" with regards to throughput, you might consider using multiprocessing. I discuss this in a blog post here.
Follow-on:
Two things can be inferred from the information we have so far:
The application is pushing more concurrent requests than your connection pool is configured to handle. I say this because the "Pool is shutdown" only occurs when a request is waiting for a connection/stream to become available. You can tune connection pooling to make more available initially using cluster settings. However, if your "cluster" (server node) is overwhelmed, you won't gain much there.
Your connection is being shutdown. This exception only happens when the node is suddenly marked down. In a single node setup this is most likely because of a connection error. Look for clues in the server log, or driver debug log if you're capturing that.
We probably need to know more about your execution model to help more. Is it possible you're running unfettered async requests without occasionally waiting for them to complete?
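As a starting point, a single shared Cluster/Session per process with explicitly bounded concurrency might look like this. The contact point, keyspace, and query are placeholders:
from cassandra.cluster import Cluster
from cassandra.concurrent import execute_concurrent_with_args

# One Cluster/Session per process, created once and reused everywhere.
cluster = Cluster(["127.0.0.1"])
session = cluster.connect("my_keyspace")  # hypothetical keyspace

insert = session.prepare("INSERT INTO events (id, payload) VALUES (?, ?)")

# execute_concurrent_with_args caps the number of in-flight requests,
# so the connection pool is never flooded by unfettered async writes.
args = [(i, "payload-%d" % i) for i in range(10000)]
results = execute_concurrent_with_args(session, insert, args, concurrency=50)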
Remote diagnosis is hard to do without knowing anything about your specific topology, setup, and system configuration. This, however, looks much like a configuration problem, or even an issue with the Python driver. If you google your error message you will find multiple topics on DataStax's Jira describing this or similar problems; I would check that the Python driver is up to date.
What would help in the first place is to see in detail what you are trying to do, how your cluster is configured, and so on.

Amazon EC2 file structure / web app with separate Python backend?

I'm currently running a t2.micro instance on EC2. I have the HTML/web-interface side of it working, along with a MySQL database.
The site allows users to register and stores them in the DB via a PHP script.
I want there to be an actual Python application that queries the MySQL database and returns user data, to then be consumed by a Python script.
What I cannot find out is whether I should host this Python application as a totally separate instance or whether it can exist on the same instance, in a different directory. I ultimately just need to query the database, which makes me think it must exist on the same instance.
Could someone please provide some guidance?
Let me just be clear: this is not a Python web app. This Python backend is entirely separate except making queries against the database.
Either approach is possible, but there are pros & cons to each.
Running separate Python app on the same server:
Pros:
Setting up local access to the database is fairly simple
Only need to handle backups or making snapshots, etc. for a single instance
Cons:
Harder to scale up individual pieces if you need more memory, processing power, etc. in the future
Running the Python app on a separate server:
Pros:
Separate pieces means you can scale up & down the hardware each piece is running on, according to their individual needs
If you're using all micro instances, you get more resources to work with, without any extra costs (assuming you're still meeting all the other 'free tier eligible' criteria)
Cons:
In general, more pieces == more time spent on configuration, administration tasks, etc.
You have to open up the database to non-local access
Simplest: open up the database to access from anywhere (e.g. all remote IP addresses), and have the Python app log in via the internet
Somewhat safer, more complex: set the Python app server up with an elastic IP, open up the database to access only from that address
Much safer, more complex: set up your own virtual private cloud (VPC), and allow connections to the database only from within the VPC. You'd have to configure public access for each of the servers for whatever public traffic you'll have, presumably ports 80 and/or 443.
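Either way, the Python application only needs the database host: localhost if it shares the instance, or the DB server's address/elastic IP if it runs separately. Here is a minimal sketch with PyMySQL, using made-up credentials and table names:
import pymysql

conn = pymysql.connect(
    host="localhost",   # or the DB instance's address / elastic IP
    user="appuser",     # hypothetical credentials
    password="secret",
    database="site",
)
try:
    with conn.cursor() as cur:
        # Query the users table that the PHP registration script writes to.
        cur.execute("SELECT id, email FROM users WHERE id = %s", (42,))
        print(cur.fetchone())
finally:
    conn.close()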
