Nomad: unable to run a Python script

I am trying to submit the following job to my Nomad server. The job uses a payload, which is a Python file from my local machine.
job "agent-collector-bot" {
  datacenters = ["staging"]
  type        = "batch"

  periodic {
    cron             = "*/10 * * * *"
    prohibit_overlap = true
  }

  group "python-bot" {
    count = 1

    task "slack-bot" {
      driver = "raw_exec"

      config {
        command = "python"
        args    = ["local/agent-collector-slackbot.py"]
      }

      dispatch_payload {
        file = "agent-collector-slackbot.py"
      }
    }
  }
}
Now when I check the job status in Nomad, it says:
$ nomad status agent-collector-bot/
ID = agent-collector-bot/periodic-1512465000
Name = agent-collector-bot/periodic-1512465000
Submit Date = 12/05/17 14:40:00 IST
Type = batch
Priority = 50
Datacenters = staging
Status = pending
Periodic = false
Parameterized = false
Summary
Task Group  Queued  Starting  Running  Failed  Complete  Lost
python-bot  1       0         0        0       0         0
Placement Failure
Task Group "python-bot":
* Constraint "missing drivers" filtered 5 nodes
I checked that all 5 of my Nomad clients have Python installed. Can someone help me out?

The driver named in the failed constraint is raw_exec, not python. The raw_exec driver is disabled by default for security reasons, so you need to enable it in each client's config (see the Nomad raw_exec docs):
client {
  options = {
    "driver.raw_exec.enable" = "1"
  }
}


Why is my Android app crashing every time I run a python script with chaquopy?

I am building an Android app that will allow the user to get a picture either by taking it in real time or uploading it from their saved images. The picture then goes through a machine learning script in Python to determine their location. Before I fully connect the algorithm, I am testing with a program that just returns a double.
from os.path import dirname, join
import csv
import random

filename = join(dirname(__file__), "new.csv")

def testlat():
    return 30.0

def testlong():
    return 30.0
These returned values are used in a Kotlin file that will then send those values to the Google Maps activity on the app for the location to be plotted.
class MainActivity : AppCompatActivity() {
    var lat = 0.0
    var long = 0.0
    var dynamic = false
    private val cameraRequest = 1888
    lateinit var imageView: ImageView
    lateinit var button: Button
    private val pickImage = 100
    private var imageUri: Uri? = null
    var active = false

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        // Accesses the info image button
        val clickMe = findViewById<ImageButton>(R.id.imageButton)
        // Runs this function when the info icon is pressed by the user
        // It will display the text in the variable infoText
        clickMe.setOnClickListener {
            Toast.makeText(this, infoText, Toast.LENGTH_LONG).show()
        }
        if (ContextCompat.checkSelfPermission(applicationContext, Manifest.permission.CAMERA)
            == PackageManager.PERMISSION_DENIED) {
            ActivityCompat.requestPermissions(
                this,
                arrayOf(Manifest.permission.CAMERA),
                cameraRequest
            )
        }
        imageView = findViewById(R.id.imageView)
        val photoButton: Button = findViewById(R.id.button2)
        photoButton.setOnClickListener {
            val cameraIntent = Intent(MediaStore.ACTION_IMAGE_CAPTURE)
            startActivityForResult(cameraIntent, cameraRequest)
            dynamic = true
        }
        /*
        The below will move to external photo storage once button2 is clicked
        */
        button = findViewById(R.id.button)
        button.setOnClickListener {
            val gallery = Intent(Intent.ACTION_PICK, MediaStore.Images.Media.INTERNAL_CONTENT_URI)
            startActivityForResult(gallery, pickImage)
        }
        // PYTHON HERE
        if (!Python.isStarted()) {
            Python.start(AndroidPlatform(this))
        }
    }

    override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
        super.onActivityResult(requestCode, resultCode, data)
        if (resultCode == RESULT_OK && requestCode == pickImage) {
            imageUri = data?.data
            imageView.setImageURI(imageUri)
            // PYTHON HERE
            val py = Python.getInstance()
            val pyobj = py.getModule("main")
            this.lat = pyobj.callAttr("testlat").toDouble()
            this.long = pyobj.callAttr("testlong").toDouble()
            /* Open the map after image has been received from user
               This will be changed later to instead call the external object recognition/pathfinding
               scripts and then pull up the map after those finish running
            */
            val mapsIntent = Intent(this, MapsActivity::class.java)
            startActivity(mapsIntent)
        }
    }
}
I set up Chaquopy and Gradle builds successfully, but every time I get to the Python part while emulating the app, it crashes. I'm not sure why; I thought maybe the program was too much for the phone to handle, but it is a very basic Python script, so I doubt that's the issue.
If your app crashes, you can find the stack trace in the Logcat.
In this case, it's probably caused by the line return = 30.0. The correct syntax is return 30.0.
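The difference is easy to confirm in plain Python, independent of Chaquopy: a function body containing return = 30.0 fails at compile time, which on Android surfaces as a crash when the module is first imported. A minimal sketch:

```python
# "return" is a statement keyword, not an assignable name, so
# "return = 30.0" fails at compile time, before the code ever runs.
bad = "def testlat():\n    return = 30.0\n"
good = "def testlat():\n    return 30.0\n"

try:
    compile(bad, "main.py", "exec")
except SyntaxError as e:
    print("SyntaxError on line", e.lineno)  # → SyntaxError on line 2

compile(good, "main.py", "exec")  # compiles without error
print("ok")
```

Because the error happens at import time, getModule("main") itself would throw, before callAttr is ever reached.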

Cross-program invocation with unauthorized signer or writable account

I am trying to create a transaction in Python for the program that I wrote, and I am running into an issue that I can't really pinpoint how to solve.
program_id = PublicKey("9qm7AEJFHQ8SqJrmfofWK6maWRwKvQwK8uy8w3PVZLQw")

program_id_account_meta = AccountMeta(program_id, False, False)
payer_account_meta = AccountMeta(payer_keypair.public_key, True, False)
vault_account_meta = AccountMeta(PublicKey("G473EkeR5gowVn8CRwTSDop3zPwaNixwp62qi7nyVf4z"), False, False)

accounts = [
    program_id_account_meta,
    payer_account_meta,
    vault_account_meta,
]

transaction = Transaction()
transaction.add(TransactionInstruction(
    accounts,
    program_id,
    bytes([0])
))
client.send_transaction(transaction, payer_keypair)
When I execute it, I get the error shown in the attached image.
I tried setting payer_account_meta's writable value to True, with no luck.
I am attaching my Solana program's code here, in case it could be the source of the error, although it seems the error occurs before the transaction even reaches my program for execution.
entrypoint!(process_instructions);

pub enum Instructions {
    CreateAccount {},
}

impl Instructions {
    fn unpackinst(input: &[u8]) -> Result<Self, ProgramError> {
        let (&instr, _) = input.split_first().ok_or(ProgramError::InvalidArgument)?;
        Ok(match instr {
            0 => Self::CreateAccount {},
            _ => return Err(ProgramError::InvalidInstructionData.into()),
        })
    }
}

pub fn process_instructions(program_id: &Pubkey, accounts: &[AccountInfo], instruction_data: &[u8]) -> ProgramResult {
    let instruction = Instructions::unpackinst(instruction_data)?;
    let account_info_iter = &mut accounts.iter();
    match instruction {
        Instructions::CreateAccount {} => {
            let payer_account_info = next_account_info(account_info_iter)?;
            let vault = next_account_info(account_info_iter)?;
            let temp_key = Pubkey::from_str("G473EkeR5gowVn8CRwTSDop3zPwaNixwp62qi7nyVf4z").unwrap();
            if vault.key != &temp_key && program_id != program_id {
                Err(ProgramError::InvalidAccountData)?
            }
            let price: u64 = (0.5 * (i32::pow(10, 9)) as f64) as u64;
            invoke(
                &system_instruction::transfer(
                    &payer_account_info.key,
                    &temp_key,
                    price,
                ),
                &[
                    payer_account_info.clone(),
                    vault.clone(),
                ],
            )?;
        }
    }
    Ok(())
}
You're attempting to transfer between accounts that aren't declared as writable, so the runtime is correctly saying that there has been privilege escalation during your invocation of system_instruction::transfer.
Try changing your Python AccountMeta declarations to:
payer_account_meta = AccountMeta(payer_keypair.public_key, True, True)
vault_account_meta = AccountMeta(PublicKey("G473EkeR5gowVn8CRwTSDop3zPwaNixwp62qi7nyVf4z"), False, True)
More info at https://docs.solana.com/developing/programming-model/runtime#policy, specifically:
Only the owner may change account data.
And if the account is writable.
Here, the owner is the system program during transfer, so it can only change the amounts if the accounts are writable.
Also, you need to pass in the system program, since that is what you invoke during your program, rather than passing in your own program. So you'll want instead:
from solana.system_program import SYS_PROGRAM_ID
program_id_account_meta = AccountMeta(SYS_PROGRAM_ID, False, False)
and add it last in your list of accounts, which becomes
accounts = [
payer_account_meta,
vault_account_meta,
program_id_account_meta,
]
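The runtime check behind this error can be illustrated with a small, self-contained Python model (an illustrative sketch, not the actual runtime code): a cross-program invocation may not demand signer or writable privileges on an account beyond what the outer transaction's AccountMetas granted.

```python
from collections import namedtuple

# Toy model of Solana's privilege-escalation check during CPI.
Meta = namedtuple("Meta", ["pubkey", "is_signer", "is_writable"])

def privilege_violations(granted_metas, invoked_metas):
    """Return pubkeys where the inner instruction escalates privileges."""
    granted = {m.pubkey: m for m in granted_metas}
    bad = []
    for m in invoked_metas:
        g = granted.get(m.pubkey)
        if g is None or (m.is_signer and not g.is_signer) \
                     or (m.is_writable and not g.is_writable):
            bad.append(m.pubkey)
    return bad

# system_instruction::transfer marks both parties writable and the payer a signer.
inner = [Meta("payer", True, True), Meta("vault", False, True)]

# The original metas (writable=False) escalate privileges on both accounts...
print(privilege_violations(
    [Meta("payer", True, False), Meta("vault", False, False)], inner))  # ['payer', 'vault']

# ...while writable metas pass the check.
print(privilege_violations(
    [Meta("payer", True, True), Meta("vault", False, True)], inner))    # []
```

This is why flipping only the payer to writable was not enough: the vault's lamports also change during the transfer, so both metas must be writable.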

Terraform: how to stop an instance after running a script (inline)?

I create 30 instances to run a crawl script, and I want to stop the AWS instances after this command, python3 aws_crawl.py, finishes running, but I cannot find how to stop the current AWS instance from the command line.
Please see my main.tf code below:
resource "aws_instance" "crawl_worker" {
  ami             = "${var.ami_id}"
  instance_type   = "t3a.nano"
  security_groups = ["${var.aws_security_group}"]
  key_name        = "secret"
  count           = 10

  tags = {
    Name = "crawl_worker_${count.index}"
  }

  connection {
    type        = "ssh"
    host        = "${self.public_ip}"
    user        = "ubuntu"
    private_key = "${file("~/secret.pem")}"
    timeout     = "50m"
  }

  provisioner "remote-exec" {
    inline = [
      "python3 aws_crawl.py"
    ]
  }
}
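One common pattern (an assumption on my part, not from this thread) is to let each instance stop itself when the crawl finishes: for EBS-backed instances, whose instance_initiated_shutdown_behavior defaults to "stop", an OS-level shutdown stops the instance, so no external command is needed. A hedged sketch in Python, wrapping the crawl entry point so the shutdown runs even if the crawl fails:

```python
import subprocess

def stop_command():
    # Assumes the ubuntu user has passwordless sudo for shutdown.
    # "-h now" halts the OS; EC2 then stops an EBS-backed instance.
    return ["sudo", "shutdown", "-h", "now"]

def run_crawl_then_stop(crawl, runner=subprocess.run):
    """Run the crawl, then power the machine off even if the crawl raised."""
    try:
        crawl()
    finally:
        runner(stop_command(), check=False)
```

With this, the remote-exec inline command stays python3 aws_crawl.py, and the script itself issues the shutdown as its last step.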

SAS IOM Bridge with Python

I'm making a connection to a remote SAS Workspace Server using IOM Bridge:
import win32com.client
objFactory = win32com.client.Dispatch("SASObjectManager.ObjectFactoryMulti2")
objServerDef = win32com.client.Dispatch("SASObjectManager.ServerDef")
objServerDef.MachineDNSName = "servername"
objServerDef.Port = 8591 # workspace server port
objServerDef.Protocol = 2 # 2 = IOM protocol
objServerDef.BridgeSecurityPackage = "Username/Password"
objServerDef.ClassIdentifier = "workspace server id"
objSAS = objFactory.CreateObjectByServer("SASApp", True, objServerDef, "uid", "pw")
program = "ods listing;proc means data=sashelp.cars mean mode min max; run;"
objSAS.LanguageService.Submit(program)
_list = objSAS.LanguageService.FlushList(999999)
print(_list)
log = objSAS.LanguageService.FlushLog(999999)
print(log)
objSAS.Close()
It works fine. But I can't seem to find the right attribute for CreateObjectByServer when trying with a Stored Process Server (changing 'Port' and 'ClassIdentifier'):
import win32com.client
objFactory = win32com.client.Dispatch("SASObjectManager.ObjectFactoryMulti2")
objServerDef = win32com.client.Dispatch("SASObjectManager.ServerDef")
objServerDef.MachineDNSName = "servername"
objServerDef.Port = 8601 # stored process server port
objServerDef.Protocol = 2 # 2 = IOM protocol
objServerDef.BridgeSecurityPackage = "Username/Password"
objServerDef.ClassIdentifier = "stp server id"
objSAS = objFactory.CreateObjectByServer("SASApp", True, objServerDef, "uid", "pw")
objSAS.StoredProcessService.Repository("path", "stp", "params")
_list = objSAS.LanguageService.FlushList(999999)
print(_list)
log = objSAS.LanguageService.FlushLog(999999)
print(log)
objSAS.Close()
When trying the above I get:
AttributeError: CreateObjectByServer.StoredProcessService
I can't seem to find much documentation on IOM connections to Stored Process Servers. Does anyone have any suggestions?

Websocket timeout in Django Channels / Daphne

Short question version: what am I doing wrong in my Daphne config, or my Consumer code, or my client code?
channels==1.1.8
daphne==1.3.0
Django==1.11.7
Details below:
I am trying to keep a persistent Websocket connection open using Django Channels and the Daphne interface server. I am launching Daphne with mostly default arguments: daphne -b 0.0.0.0 -p 8000 my_app.asgi:channel_layer.
I am seeing the connections closing after some idle time in the browser, shortly over 20 seconds. The CloseEvent sent with the disconnect has a code value of 1006 (Abnormal Closure), no reason set, and wasClean set to false. This should be the server closing the connection without sending an explicit close frame.
The Daphne CLI has --ping-interval and --ping-timeout flags with default values of 20 and 30 seconds, respectively. This is documented as "The number of seconds a WebSocket must be idle before a keepalive ping is sent," for the former, and "The number of seconds before a WebSocket is closed if no response to a keepalive ping," for the latter. I read this as Daphne will wait until a WebSocket has been idle for 20 seconds to send a ping, and will close the Websocket if no response is received 30 seconds later. What I am seeing instead is connections getting closed after being 20 seconds idle. (Across three attempts with defaults, closed after 20081ms, 20026ms, and 20032ms)
If I change the server to launch with daphne -b 0.0.0.0 -p 8000 --ping-interval 10 --ping-timeout 60 my_app.asgi:channel_layer, the connections still close, around 20 seconds idle time. (After three attempts with updated pings, closed after 19892ms, 20011ms, 19956ms)
Code below:
consumer.py:
import logging

from channels import Group
from channels.generic.websockets import JsonWebsocketConsumer

from my_app import utilities

logger = logging.getLogger(__name__)


class DemoConsumer(JsonWebsocketConsumer):
    """
    Consumer echoes the incoming message to all connected Websockets,
    and attaches the username to the outgoing message.
    """
    channel_session = True
    http_user_and_session = True

    @classmethod
    def decode_json(cls, text):
        return utilities.JSONDecoder.loads(text)

    @classmethod
    def encode_json(cls, content):
        return utilities.JSONEncoder.dumps(content)

    def connection_groups(self, **kwargs):
        return ['demo']

    def connect(self, message, **kwargs):
        super(DemoConsumer, self).connect(message, **kwargs)
        logger.info('Connected to DemoConsumer')

    def disconnect(self, message, **kwargs):
        super(DemoConsumer, self).disconnect(message, **kwargs)
        logger.info('Disconnected from DemoConsumer')

    def receive(self, content, **kwargs):
        super(DemoConsumer, self).receive(content, **kwargs)
        content['user'] = self.message.user.username
        # echo back content to all groups
        for group in self.connection_groups():
            self.group_send(group, content)
routing.py:
from channels.routing import route

from . import consumers

channel_routing = [
    consumers.DemoConsumer.as_route(path=r'^/demo/'),
]
demo.js:
// Tracks the cursor and sends position via a Websocket
// Listens for updated cursor positions and moves an icon to that location
$(function () {
    var socket = new WebSocket('ws://' + window.location.host + '/demo/');
    var icon;
    var moveTimer = null;
    var position = {x: null, y: null};
    var openTime = null;
    var lastTime = null;

    function sendPosition() {
        if (socket.readyState === socket.OPEN) {
            console.log('Sending ' + position.x + ', ' + position.y);
            socket.send(JSON.stringify(position));
            lastTime = Date.now();
        } else {
            console.log('Socket is closed');
        }
        // sending at-most 20Hz
        setTimeout(function () { moveTimer = null; }, 50);
    }

    socket.onopen = function (e) {
        var box = $('#websocket_box');
        icon = $('<div class="pointer_icon"></div>').insertAfter(box);
        box.on('mousemove', function (me) {
            // some browsers will generate these events much closer together
            // rather than overwhelm the server, batch them up and send at a reasonable rate
            if (moveTimer === null) {
                moveTimer = setTimeout(sendPosition, 0);
            }
            position.x = me.offsetX;
            position.y = me.offsetY;
        });
        openTime = lastTime = Date.now();
    };

    socket.onclose = function (e) {
        console.log("!!! CLOSING !!! " + e.code + " " + e.reason + " --" + e.wasClean);
        console.log('Time since open: ' + (Date.now() - openTime) + 'ms');
        console.log('Time since last: ' + (Date.now() - lastTime) + 'ms');
        icon.remove();
    };

    socket.onmessage = function (e) {
        var msg, box_offset;
        console.log(e);
        msg = JSON.parse(e.data);
        box_offset = $('#websocket_box').offset();
        if (msg && Number.isFinite(msg.x) && Number.isFinite(msg.y)) {
            console.log((msg.x + box_offset.left) + ', ' + (msg.y + box_offset.top));
            icon.offset({
                left: msg.x + box_offset.left,
                top: msg.y + box_offset.top
            }).text(msg.user || '');
        }
    };
});
asgi.py:
import os
from channels.asgi import get_channel_layer
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "my_project.settings")
channel_layer = get_channel_layer()
settings.py:
CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'asgi_redis.RedisChannelLayer',
        'ROUTING': 'main.routing.channel_routing',
        'CONFIG': {
            'hosts': [
                'redis://redis:6379/2',
            ],
            'symmetric_encryption_keys': [
                SECRET_KEY,
            ],
        },
    },
}
The underlying problem turned out to be the nginx proxy in front of the interface server: it was configured with proxy_read_timeout 20s;. Any keepalive pings generated by the server were not counted toward the upstream read timeout, so idle connections were cut at 20 seconds regardless of Daphne's ping settings. Increasing this timeout to a larger value allows the Websocket to stay open longer. I kept proxy_connect_timeout and proxy_send_timeout at 20s.
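For reference, a minimal sketch of the relevant nginx location block; the upstream name daphne and the 3600s value are assumptions, not taken from the original setup:

```nginx
location /demo/ {
    proxy_pass http://daphne;                    # assumed upstream pointing at Daphne
    proxy_http_version 1.1;                      # required for WebSocket upgrade
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 3600s;                    # raised from the 20s that cut idle sockets
}
```

proxy_read_timeout applies between two successive reads from the upstream, so any value comfortably above the expected idle period works; it does not cap total connection lifetime.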
