I'm writing a script in Python to set up a replica set for MongoDB. The first part of the script starts the mongod processes and the second part should configure the replica set.
From the command line I usually do:
config={_id:"aaa",members:[{_id:0,host:"localhost:27017"},{_id:1,host:"localhost:27018"},{_id:2,host:"localhost:27019",arbiterOnly:true}]}
rs.initiate(config)
rs.status();
And then I check rs.status() to see that all members are initialized.
I want to do the same in a Python script.
In general, I'm looking for a good reference for MongoDB setup scripts (including sharding). I saw the Python script on their site; it is a good starting point, but it only covers a single machine and a single node in the replica set. I need to set everything up on different machines.
Thanks
If you run rs.initiate (without the (config)) the shell tells you which command it would run. In this case, it would be:
function (c) {
return db._adminCommand({replSetInitiate:c});
}
In Python this should be something like:
>>> from pymongo import Connection
>>> conn = Connection("morton.local:27017", slave_okay=True)
>>> conn.admin.command("replSetInitiate", config)
With config being your replica set configuration. http://api.mongodb.org/python/current/api/pymongo/database.html#pymongo.database.Database.command has some more information on calling commands.
Thanks Derick. Here are some remarks on your answer. 'replSetInitiate' is a DBA command, so run it against the 'admin' database, like this:
conn = Connection("localhost:27017", slave_okay=True)
conn.admin.command("replSetInitiate")
To get the output of rs.status() in pymongo, we can do something like this:
def __init__(self):
    '''Constructor'''
    self.mdb = ReplicaSetConnection('localhost:27017', replicaSet='rs0')

def statusofcluster(self):
    '''Check the status of the cluster and give the output'''
    print "We are inside status of cluster"
    output = self.mdb.admin.command('replSetGetStatus')
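Putting the answers above together, it helps to separate the pure logic from the driver calls so the "watch rs.status() until all members are initialized" step is scriptable. A minimal sketch (the set name and hosts mirror the question; treating states 1, 2 and 7 — PRIMARY, SECONDARY, ARBITER — as "ready" is an assumption about what you consider initialized):

```python
def build_replset_config(name, hosts, arbiter_index=None):
    """Build a replSetInitiate config document from a list of host:port strings."""
    members = []
    for i, host in enumerate(hosts):
        member = {"_id": i, "host": host}
        if i == arbiter_index:
            member["arbiterOnly"] = True
        members.append(member)
    return {"_id": name, "members": members}


def all_members_ready(status):
    """True when every member in a replSetGetStatus document is in a
    healthy state: PRIMARY (1), SECONDARY (2) or ARBITER (7)."""
    return all(m["state"] in (1, 2, 7) for m in status["members"])


config = build_replset_config(
    "aaa",
    ["localhost:27017", "localhost:27018", "localhost:27019"],
    arbiter_index=2,
)
```

You would then call conn.admin.command("replSetInitiate", config) once and poll conn.admin.command("replSetGetStatus") with all_members_ready in a loop until it returns True.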
I’m using Python 3.6 and Fabric 2.4. I’m using Fabric to SSH into a server and run some commands. I need to set an environment variable for the commands being run on the remote server. The documentation indicates that something like this should work:
from fabric import task

@task(hosts=["servername"])
def do_things(c):
    c.run("command_to_execute", env={"KEY": "VALUE"})
But that doesn’t work. Something like this should also be possible:
from fabric import task

@task(hosts=["servername"])
def do_things(c):
    c.config.run.env = {"KEY": "VALUE"}
    c.run("command_to_execute")
But that doesn’t work either. I feel like I’m missing something. Can anyone help?
I was able to do it by setting inline_ssh_env=True and then explicitly setting the env variable, for example:
with Connection(host=hostname, user=username, inline_ssh_env=True) as c:
    c.config.run.env = {"MY_VAR": "this worked"}
    c.run('echo $MY_VAR')
As stated on the site of Fabric:
The root cause of this is typically because the SSH server runs non-interactive commands via a very limited shell call: /path/to/shell -c "command" (for example, OpenSSH). Most shells, when run this way, are not considered to be either interactive or login shells; and this then impacts which startup files get loaded.
You can read more on that page of the Fabric documentation.
So what you are trying to do won't work, and one workaround is to set the environment variable explicitly on the remote side:
from fabric import task

@task(hosts=["servername"])
def do_things(c):
    c.run('echo export ENV_VAR=VALUE >> ~/.bashrc')
    c.run('source ~/.bashrc')
    c.run('echo $ENV_VAR')  # to verify whether it is set
    c.run("command_to_execute")
You can try this:
@task
def qa(ctx):
    ctx.config.run.env['counter'] = 22
    ctx.config.run.env['conn'] = Connection('qa_host')

@task
def sign(ctx):
    print(ctx.config.run.env['counter'])
    conn = ctx.config.run.env['conn']
    conn.run('touch mike_was_here.txt')
And run:
fab2 qa sign
When creating the Connection object, try adding inline_ssh_env=True.
Quoting the documentation:
Whether to send environment variables “inline” as prefixes in front of command strings (export VARNAME=value && mycommand here), instead of trying to submit them through the SSH protocol itself (which is the default behavior). This is necessary if the remote server has a restricted AcceptEnv setting (which is the common default).
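The "inline" behaviour that quote describes is easy to emulate by hand, which also shows why it sidesteps a restricted AcceptEnv: the variables travel inside the command string itself rather than through the SSH protocol. A rough illustration of the prefixing (not Fabric's actual implementation):

```python
import shlex

def inline_env_command(command, env):
    """Prefix a shell command with export statements, the way
    inline_ssh_env sends environment variables inside the command string."""
    exports = " && ".join(
        "export %s=%s" % (key, shlex.quote(value)) for key, value in env.items()
    )
    return "%s && %s" % (exports, command) if exports else command

print(inline_env_command("mycommand here", {"VARNAME": "value"}))
# export VARNAME=value && mycommand here
```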
According to that part of the official doc, the connect_kwargs attribute of the Connection object is intended to replace the env dict. I use it, and it works as expected.
My problem is that when I enter the value of p, nothing happens; the script does not continue executing. Is there a way to fix this?
import sys
from pyspark import SparkContext

sc = SparkContext("local", "Simple App")
p = input("Enter the word: ")
rdd1 = sc.textFile("monfichier")
rdd2 = rdd1.map(lambda l: l.split("\t"))
rdd3 = rdd2.map(lambda l: l[1])
print rdd3.take(6)
rdd5 = rdd3.filter(lambda l: p in l)
sc.stop()
You can use py4j to get input via Java
from py4j.java_gateway import JavaGateway
scanner = sc._gateway.jvm.java.util.Scanner
sys_in = getattr(sc._gateway.jvm.java.lang.System, 'in')
result = scanner(sys_in).nextLine()
print(result)
Depending on your environment/spark version you might need to replace sc with spark.sparkContext
You have to distinguish between two different cases:
Script submitted with $SPARK_HOME/bin/spark-submit script.py
In this case you execute a Scala application which in turn starts a Python interpreter. Since the Scala application doesn't expect any interaction on standard input, let alone pass it on to the Python interpreter, your Python script will simply hang waiting for data that won't come.
Script executed directly using Python interpreter (python script.py).
You should be able to use input directly but at the cost of handling all the configuration details, normally handled by spark-submit / org.apache.spark.deploy.SparkSubmit, manually in your code.
In general, all required arguments for your script can be passed on the command line
$SPARK_HOME/bin/spark-submit script.py some_app_arg another_app_arg
and accessed using standard methods like sys.argv or argparse, so using input is neither necessary nor useful.
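For example, with argparse (the argument names and the input-file default are placeholders matching the question's script):

```python
import argparse

def parse_args(argv=None):
    """Parse application arguments passed after script.py on the
    spark-submit command line."""
    parser = argparse.ArgumentParser(description="Simple App")
    parser.add_argument("word", help="word to filter lines by")
    parser.add_argument("--input", default="monfichier", help="input file path")
    return parser.parse_args(argv)

# Equivalent to: $SPARK_HOME/bin/spark-submit script.py mot --input data.tsv
args = parse_args(["mot", "--input", "data.tsv"])
print(args.word, args.input)
# mot data.tsv
```

The filter line then becomes rdd3.filter(lambda l: args.word in l), with no input() call needed.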
I had the same problem in Azure Databricks; I used widgets to get input from the user.
I have a single-file script for operations automation (log file downloads, stopping/starting several containers; the user chooses what to do via command-line arguments) and want to have the Fabric functions in the same script, along with an argument-parsing class and possibly some others. How do I call Fabric functions from within the same Python script? I do not want to use "fab" as it is.
As a side note, I'd like these calls to run in parallel as well.
This is a model class that would ideally contain all necessary fabric functions:
class fabricFuncs:
    def ihsstate(self):
        env.hosts = hosts
        run('sudo /home/user/XXX.sh state')
This is launcher section:
if __name__ == "__main__":
    argParser().argParse()
    fabricFuncs().ihsstate()
argParser sets variables globally using the command-line arguments specified (just to clarify what that part does).
This sadly results in a failure where no hosts are defined (env.hosts should be set inside the function... or is it too late to declare them there?).
EDIT1:
I have tried launching the fabric function using this:
for h in env.hosts:
    with settings(host_string=user + "@" + h):
        fabricFuncs().ihsstate()
It kind of works. I had hoped, though, that I would be able to parallelize the whole process using the fabric module as it is (via decorators), without wrapping the whole thing in threading code.
EDIT2:
I have tried this as well:
execute(fabricFuncs().ihsstate())
Which fails with:
Fatal error: Needed to prompt for the target host connection string (host: None)
Can I put the whole env.hosts list into "settings" above without iterating over it with a "for" statement?
EDIT3:
I have tried editing the fab function like this to see if env.hosts is set properly:
class fabricFuncs:
    def ihsstate(self):
        env.hosts = hosts
        print env.hosts
        run('sudo /home/user/XXX.sh state')
And it prints out correctly, but the "run" command still fails with:
Fatal error: Needed to prompt for the target host connection string (host: None)
Use the execute function, and pass it the task itself rather than the result of calling it:
from fabric.api import execute

argParser().argParse()
execute(fabricFuncs().ihsstate)
If you run the script without the fab command, env.host_string will be None, so if you want to use 'execute' you also have to pass the 'hosts' parameter.
Try this:
from fabric.api import execute, run

def ihsstate():
    run('sudo /home/user/XXX.sh state')

if __name__ == "__main__":
    hosts = ["host1", "host2"]
    execute(ihsstate, hosts=hosts)
I have some years of solid experience working with Asterisk but am new to Python.
I want to connect from a Python script and receive some events. I have created a manager user with AMIUSERNAME and AMIPASSWORD as credentials and tested that it works. I have also installed StarPy.
Then I run the following script with the command python ami.py USERNAME PASSWORD:
import sys
from starpy import manager
f = manager.AMIFactory(sys.argv[1], sys.argv[2])
df = f.login('127.0.0.1',5038)
I am monitoring the Asterisk console, and nothing happens.
Does anyone know what I am missing?
I would like to send a Ping action and wait for a Pong response.
I suppose that f.login() returns you an AMIProtocol instance that has a ping() method.
I don't know anything about starpy, so here is some vague advice:
Start Python as an interactive shell. Execute code and examine results on the spot. The help function is your friend; try help(df) after the last line of your script.
Look at the examples directory in the starpy distribution. Maybe 90% of the code you need is already there.
The following is pulled from the ami module (and a few other places) in the Asterisk Test Suite. We use starpy extensively throughout the Test Suite, so you may want to check it out for some examples. Assume that the following code resides in some class with member method login.
def login(self):
    def on_login_success(ami):
        self.ami_factory.ping().addCallback(ping_response)
        return ami

    def on_login_error(reason):
        print "Failed to log into AMI"
        return reason

    def ping_response(ami):
        print "Got a ping response!"
        return ami

    self.ami_factory = manager.AMIFactory("user", "mysecret")
    self.ami_factory.login("127.0.0.1", 5038).addCallbacks(on_login_success, on_login_error)
Make sure as well that your manager.conf is configured properly. For the Asterisk Test Suite, we use the following:
[general]
enabled = yes
webenabled = yes
port = 5038
bindaddr = 127.0.0.1
[user]
secret = mysecret
read = system,call,log,verbose,agent,user,config,dtmf,reporting,cdr,dialplan,test
write = system,call,agent,user,config,command,reporting,originate
Is it possible to dispatch an external (python) script from Trace32 using its PRACTICE II scripting language?
For future googlers like me, here is how to use the Lauterbach C API to execute PRACTICE commands from Python. The TRACE32 application has to be running before you execute your script. You also have to add five lines (including two blank lines) to your config.t32 file:
#You must have an empty line before
RCL=NETASSIST
PACKLEN=1024
PORT=20010
#and after these three parameters
At least the PORT parameter value is arbitrary, but it has to match in your config and your script. It defines the UDP port on which the API will be available.
This code demonstrates how you can use the API from Python (note the byte strings; under Python 3 the C API expects encoded strings):
from ctypes import c_char_p, cdll

node = (c_char_p(b'NODE='), c_char_p(b'localhost'))
port = (c_char_p(b'PORT='), c_char_p(b'20010'))
plen = (c_char_p(b'PACKLEN='), c_char_p(b'1024'))

mydll = cdll.LoadLibrary(r'C:\T32\demo\api\capi\dll\T32api.dll')
error = mydll.T32_Config(*node)
error = mydll.T32_Config(*port)
error = mydll.T32_Config(*plen)
error = mydll.T32_Init()
error = mydll.T32_Attach(1)

# Try a PRACTICE command
cmd = c_char_p(b'DATA.DUMP 0xFF800000')
mydll.T32_Cmd(cmd)
Check that the T32api.dll is in the directory specified in the script.
Lauterbach provides more documentation for this API. Take a look in the demo\api\capi folder and at this document: http://www2.lauterbach.com/pdf/api_remote.pdf
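Checking the return codes of those calls makes failures much easier to diagnose; per the remote API documentation, the T32 functions return 0 on success. A small helper sketch (the operation names are just labels for the error message):

```python
def check(error, operation):
    """Raise if a T32 API call returned a non-zero error code.

    Typical usage: check(mydll.T32_Init(), "T32_Init")
    """
    if error != 0:
        raise RuntimeError("%s failed with error code %d" % (operation, error))
    return error

check(0, "T32_Init")  # a zero return code passes through unchanged
```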
Use OS.Screen to make a command prompt session.