How to pass user_data script to Python OpenStack Heat-API client - python

How to pass user_data script to Python Heat-API client.
I have the following script in a file that I want to pass into an instance as user_data during creation, but I am not sure how to go about doing it. I am using the Heat API to create the instance. The code below creates the stack from the Heat template file, but with no user_data.
Any pointers would be appreciated.
env.yml
user_data: |
  #!/bin/bash
  yum install -y git vim
template_file = 'heattemplate.yaml'
template = open(template_file, 'r')
stack = heat.stacks.create(stack_name='Tutorial', template=template.read(), parameters={})

In your YAML Heat template, you should add:
parameters:
  install_command:
    type: string
    description: Command to run from user_data
    default: |
      #!/bin/bash
      yum install -y git vim

resources:
  ...
  myserver:
    type: OS::Nova::Server
    properties:
      ...
      user_data_format: RAW
      user_data: { get_param: install_command }
And pass the new parameter through parameters={} in your create call in Python:
heat.stacks.create(stack_name='Tutorial', template=template.read(),
                   parameters={'install_command': '...'})
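Putting it together, here is a minimal sketch of building the user-data script and the parameters dict in Python. The authenticated heatclient `heat` object and the `heattemplate.yaml` file are assumed (not shown), and `install_command` is the parameter name from the template above.

```python
# Sketch: build the user-data script and the parameters dict that
# heat.stacks.create() expects. The authenticated `heat` client and the
# heattemplate.yaml file are assumed to exist.
user_data_script = "\n".join([
    "#!/bin/bash",
    "yum install -y git vim",
])

parameters = {"install_command": user_data_script}

# With an authenticated heatclient `heat`, the call would look like:
# with open("heattemplate.yaml") as f:
#     heat.stacks.create(stack_name="Tutorial",
#                        template=f.read(),
#                        parameters=parameters)
```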


Specify zmq file ports when starting jupyter or connect to existing kernel using connection file

I would like to use the Docker image jupyter/datascience-notebook to start a Jupyter notebook frontend, but I need to be able to control which ports it chooses to use for its communication. I understand that the server is designed to potentially provision many kernels, not just one, but what I want is for the first kernel to use the ports I specify. I have tried supplying arguments like:
docker run --rm -it jupyter/datascience-notebook:latest start-notebook.sh --existing /my/connection/file.json
docker run --rm -it jupyter/datascience-notebook:latest start-notebook.sh --KernelManager.control_port=60018
It does not seem to care, and instead creates the connection file in the usual location under /home/jovyan/.local/share/jupyter/.
Any assistance is appreciated.
I ended up doing as suggested in a similar question (IPython notebook: How to connect to existing kernel?); I could not find a better way.
Subclass LocalProvisioner to override its pre_launch method
from typing import Any, Dict

from jupyter_client import LocalProvisioner, LocalPortCache, KernelProvisionerBase
from jupyter_client.localinterfaces import is_local_ip, local_ips


class PickPortsProvisioner(LocalProvisioner):

    async def pre_launch(self, **kwargs: Any) -> Dict[str, Any]:
        """Perform any steps in preparation for kernel process launch.

        This includes applying additional substitutions to the kernel launch
        command and env. It also includes preparation of launch parameters.

        Returns the updated kwargs.
        """
        # This should be considered temporary until a better division of labor can be defined.
        km = self.parent
        if km:
            if km.transport == 'tcp' and not is_local_ip(km.ip):
                raise RuntimeError(
                    "Can only launch a kernel on a local interface. "
                    "This one is not: %s. "
                    "Make sure that the '*_address' attributes are "
                    "configured properly. "
                    "Currently valid addresses are: %s" % (km.ip, local_ips())
                )
            # build the Popen cmd
            extra_arguments = kwargs.pop('extra_arguments', [])
            # write connection file / assign ports
            # TODO - change when handshake pattern is adopted
            if km.cache_ports and not self.ports_cached:
                lpc = LocalPortCache.instance()  # unused here: ports are fixed below
                # Instead of asking the port cache for free ports, pin the
                # five ports we want the first kernel to use.
                km.shell_port = 60000
                km.iopub_port = 60001
                km.stdin_port = 60002
                km.hb_port = 60003
                km.control_port = 60004
                self.ports_cached = True
            km.write_connection_file()
            self.connection_info = km.get_connection_info()
            kernel_cmd = km.format_kernel_cmd(
                extra_arguments=extra_arguments
            )  # This needs to remain here for b/c
        else:
            extra_arguments = kwargs.pop('extra_arguments', [])
            kernel_cmd = self.kernel_spec.argv + extra_arguments

        return await KernelProvisionerBase.pre_launch(self, cmd=kernel_cmd, **kwargs)
Specify entry point in setup.py
entry_points={
    'jupyter_client.kernel_provisioners': [
        'pickports-provisioner = mycompany.pickports_provisioner:PickPortsProvisioner',
    ],
},
Create kernel.json to overwrite the default one
{
  "argv": [
    "/opt/conda/bin/python",
    "-m",
    "ipykernel_launcher",
    "-f",
    "{connection_file}"
  ],
  "display_name": "Python 3 (ipykernel)",
  "language": "python",
  "metadata": {
    "debugger": true,
    "kernel_provisioner": { "provisioner_name": "pickports-provisioner" }
  }
}
Dockerfile
# Start from a core stack version
FROM jupyter/datascience-notebook:latest
# Install from requirements.txt file
COPY --chown=${NB_UID}:${NB_GID} requirements.txt .
COPY --chown=${NB_UID}:${NB_GID} setup.py .
RUN pip install --quiet --no-cache-dir --requirement requirements.txt
# Copy kernel.json to default location
COPY kernel.json /opt/conda/share/jupyter/kernels/python3/
# Install from sources
COPY --chown=${NB_UID}:${NB_GID} src .
RUN pip install --quiet --no-cache-dir .
Profit???
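To sanity-check that the provisioner took effect, you can inspect the connection file of a running kernel (under /home/jovyan/.local/share/jupyter/runtime/ in this image) and compare it against the hard-coded ports. A small helper sketch, with an illustrative payload in place of a real connection file:

```python
import json

# Given the text of a kernel's connection file, check that the five ports
# match the ones PickPortsProvisioner hard-codes.
def uses_picked_ports(connection_file_text: str) -> bool:
    info = json.loads(connection_file_text)
    picked = {"shell_port": 60000, "iopub_port": 60001, "stdin_port": 60002,
              "hb_port": 60003, "control_port": 60004}
    return all(info.get(key) == port for key, port in picked.items())

# Illustrative payload; in practice, read the real file from
# ~/.local/share/jupyter/runtime/
sample = json.dumps({"shell_port": 60000, "iopub_port": 60001,
                     "stdin_port": 60002, "hb_port": 60003,
                     "control_port": 60004, "transport": "tcp"})
```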

Python string formatting weirdness

I have a custom config class in an application, and I'd like to override the defaults by reading from environment variables. I'm facing some strange behaviour with Python's str.format() and I'd like to understand why. This code runs successfully, but its output depends on the env vars that are passed in. Here it is:
import os


class Config(object):
    SQS_QUEUE = '{client}-{env}'


class ClientConfig(Config):
    ENV = os.environ.get('ENV', default='dev')
    CLIENT = os.environ.get('CLIENT', default='v')
    SQS_QUEUE = Config.SQS_QUEUE.format(client=CLIENT, env=ENV)


config = ClientConfig()
print(config.ENV)
print(config.CLIENT)
print(config.SQS_QUEUE)
This is my env var file:
export ENV="prod"
export CLIENT="r"
It is being loaded like so: source .env and I can see that the vars are set by running the env command:
$ env
ENV=prod
CLIENT=r
[...]
When I run the Python code above, I would expect the SQS queue variable to be the string "r-prod". Instead, I'm getting "-prod", which is strange considering both ENV and CLIENT are set (as I can see from the print statements).
EDIT: here's the full output
$ python3 test.py
prod
r
-prod

Build Gradle assemble APK using Laravel function

I'm currently calling the main Python file from a Laravel function, and inside that main Python file I'm calling another two files (a PowerShell script and a sub Python file). The problem is that when the Laravel function is triggered, it only calls the main Python file; however, when I call the main Python file from the terminal, all the files are executed, as below:
Laravel function:
public function initialize(Request $request)
{
    $store_name = $request->get('store_name', 1);
    if (empty($store_name)) {
        return 'Missing store name';
    } else {
        $processes = File::get("/root/flask/android_api/processes.txt");
        File::put('/root/flask/android_api/url.txt', $store_name);
        $process = new Process(['python3.6', '/root/flask/android_api/buildAPK.py']);
        $process->run();
        if (!$process->isSuccessful()) {
            throw new ProcessFailedException($process);
        } else {
            return 'Starting the processes to build';
        }
    }
}
and within the main python file I have:
try:
    p = subprocess.Popen(["/usr/bin/pwsh",
                          "/root/flask/android_api/set_apk_builder.ps1",
                          '-ExecutionPolicy', 'Unrestricted',
                          './buildxml.ps1'],
                         stdout=sys.stdout)
    p.communicate()
except Exception:
    file = open("/root/flask/android_api/log.txt", "w")
    file.write("fail")
    file.close()

import slack

# call(["python", "/root/flask/flask.py"])
os.system('python3.7 /root/flask/flask.py')
Edit:
I have now changed the build to run directly from the Laravel function to generate the APK, using this command:
public function initialize(Request $request)
{
    $store_name = $request->get('store_name', 1);
    if (empty($store_name)) {
        return 'Missing store name';
    } else {
        return shell_exec('cd /var/www/html/androidProject && chmod +x gradlew && ./gradlew assembledemoDebug');
    }
}
However, the command line reports that the Gradle build is starting, but it doesn't create a folder or generate the APK.
The current folder structure is /var/www/html, and inside html are the Android project folder and the Laravel project.
Note: before calling the Gradle build command inside the Laravel function, I used to call a Python file, and that Python file called the Gradle command, but I had the same issue: the APK was not created. Yet when I run the same Python file from bash, it works fine.
There are two ways you can accomplish this.
The first is to put your commands into a .sh file and make it executable using this command:
chmod +x file.sh
Then call that file from Laravel using a Symfony Process, so you can get details of the process log and errors:
use Symfony\Component\Process\Process;
use Symfony\Component\Process\Exception\ProcessFailedException;

$process = new Process(['sh', '/folder_name/file_name.sh']);
$process->run();

if (!$process->isSuccessful()) {
    throw new ProcessFailedException($process);
}

echo $process->getOutput();
Then include all the commands you wish to run in that file.
Second, you can run those commands using shell_exec:
$output = shell_exec('ls -lart');
echo "<pre>$output</pre>";
You'll need to move all your writable files into the public directory in Laravel, because that's where everything should be editable.
I actually suggest the first one, as you don't need to change the owner of some folders to www-data to give write permissions.

How to pass parameters to ARM template using azure python sdk?

I run the following commands as a bash script:
az group create -n $1-rg -l eastus2
az deployment group create -g $1-rg -n $1-deploy \
-f ./azure/sensor/trafficmirrorstack.json \
-p @./azure/sensor/trafficmirrorstack.parameters.json \
-p CustomerName=$1
az deployment group show -g $1-rg -n $1-deploy
The following seems like it should work:
import json

rg_name = f"{name}-rg"
deploy_name = f"{name}-deploy"
region = list(region_params.keys())[0]

# add resource group
rg_result = resource_client.resource_groups.create_or_update(
    rg_name,
    {
        "location": region
    }
)
print(f"Provisioned resource group {rg_result.name} in the {rg_result.location} region")

# load the template as JSON (reading the file once)
with open("./sensor/trafficmirrorstack.json") as template_file:
    template = json.load(template_file)

parameters = {"CustomerName": {"value": name}}

deployment_params = {
    "mode": "Incremental",
    "template": template,
    "parameters": parameters
}

# Create deployment
deployment_create_result = resource_client.deployments.begin_create_or_update(
    rg_name,
    deploy_name,
    {"properties": deployment_params},
)
deployment_create_result = deployment_create_result.result()
But how do you do the equivalent of "-p @./azure/sensor/trafficmirrorstack.parameters.json -p CustomerName=$1"?
Thanks in advance.
When we use the Azure Python SDK to deploy an ARM template with the method resource_client.deployments.begin_create_or_update, we can either provide a parameters file URL or define the parameters inline as JSON; we cannot use both methods at the same time. For more details, please refer to here.
Besides, in the Azure CLI, each "-p" value you provide is read and merged into one JSON object of parameters before the ARM template is deployed. For more details, please refer to here.
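To mimic both -p flags from the bash script, one approach (a sketch, not the only way) is to load the parameters file yourself and merge CustomerName into it before passing the result to begin_create_or_update. The file layout assumed here is the standard az-style parameters file with a top-level "parameters" key.

```python
import json

def build_parameters(param_file_text: str, name: str) -> dict:
    """Merge CustomerName into parameters loaded from an az-style file."""
    loaded = json.loads(param_file_text)
    # az parameter files usually nest values under a top-level "parameters" key
    parameters = loaded.get("parameters", loaded)
    parameters["CustomerName"] = {"value": name}
    return parameters

# Illustrative file contents; the real ones live in
# ./azure/sensor/trafficmirrorstack.parameters.json
example = '{"parameters": {"Region": {"value": "eastus2"}}}'
params = build_parameters(example, "acme")
```

The resulting dict goes into the "parameters" field of deployment_params, exactly where the inline CustomerName-only dict was above.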

Append data to config file on linux

I would like to append data to my file.config. I'm on Linux, so modifying the file requires elevated permissions; in the terminal I can run sudo nano file.config and make the changes.
Expectation: my file.config looks like this
#
#info ...
#
#
[Section]
#info
I want to append data right here to the end of the file
I tried using the configparser module:
import configparser

config = configparser.ConfigParser()
config['Section'] = {'data': '123'}

configFilePath = '/etc/file.conf'
with open(configFilePath, 'a') as file_conf:
    config.write(file_conf)
This appends a second copy of the section header instead of adding the key under the existing [Section]:
#
#info ...
#
#
[Section]
#info
[Section]
data = 123
As requested in the comments: sudo python3 file.py was required (Visual Studio Code was not running as superuser).
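A way to get the key under the existing [Section], rather than appending a duplicate header, is to read the file, modify it, and write the whole thing back. Here is a sketch using an in-memory stand-in for /etc/file.conf; note that configparser drops comment lines on write, so the # banner at the top of the file would not survive this round trip.

```python
import configparser
import io

# Read the existing config (stand-in text for /etc/file.conf), add the key
# under the existing section, then rewrite the file in one go.
config = configparser.ConfigParser()
config.read_string("[Section]\n")
config["Section"]["data"] = "123"

buf = io.StringIO()  # in a real run: open('/etc/file.conf', 'w'), run with sudo
config.write(buf)
```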
