I'm trying to write a script using boto3 to start an instance and wait until it has started. As per the documentation of wait_until_running, it should wait until the instance is fully started (I'm assuming the status checks should be OK), but unfortunately it only works for wait_until_stopped; in the case of wait_until_running it just starts the instance and doesn't wait until it is completely started. Not sure if I'm doing something wrong here or this is a bug in boto3.
Here is the code:
import boto3
ec2 = boto3.resource('ec2',region_name="ap-southeast-2")
ec2_id = 'i-xxxxxxxx'
instance = ec2.Instance(id=ec2_id)
print("starting instance " + ec2_id)
instance.start()
instance.wait_until_running()
print("instance started")
Thanks to @Mark B and @Madhurya Gandi, here is the solution that worked in my case:
import socket
import time

import boto3
from botocore.exceptions import ClientError

retries = 10
retry_delay = 10
retry_count = 0

ec2 = boto3.resource('ec2', region_name="ap-southeast-2")
ec2_id = 'i-xxxxxxxx'
instance = ec2.Instance(id=ec2_id)

print("starting instance " + ec2_id)
try:
    instance.start()
    instance.wait_until_running()
    # The API reports "running" before the OS is up, so also poll SSH (port 22).
    while retry_count <= retries:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        result = sock.connect_ex((instance.public_ip_address, 22))
        sock.close()
        if result == 0:
            print("Instance is UP & accessible on port 22, the IP address is:", instance.public_ip_address)
            break
        else:
            print("instance is still down, retrying . . .")
            time.sleep(retry_delay)
        retry_count += 1
except ClientError as e:
    print('Error', e)
I believe this way it waits until the instance status checks are 2/2 passed:
Boto3 Doc:
instance-status.reachability - Filters on instance status where the name is reachability (passed | failed | initializing | insufficient-data).
import boto3

client = boto3.client('ec2')

waiter = client.get_waiter('instance_status_ok')
waiter.wait(
    InstanceIds=["instanceID"],
    Filters=[
        {
            "Name": "instance-status.reachability",
            "Values": ["passed"]
        }
    ]
)
[1]: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2.html#EC2.Waiter.InstanceStatusOk
I believe the right way to do it is as follows:
instance.wait_until_running(
    Filters=[
        {
            'Name': 'instance-state-name',
            'Values': ['running'],
        },
    ]
)
Tested and worked for me.
I've tried instance.wait_until_running(). It took time to update the instance to the running state. As per the Amazon docs link, instances take a minimum of 60 seconds to spin up. Here's a sample code that worked for me. Hope it helps!
# to create 5 instances
instances = ec2.create_instances(ImageId='<ami-image-id>', MinCount=1, MaxCount=5)
time.sleep(60)
# print your instances (create_instances returns a list of Instance resources)
for instance in instances:
    print(instance.id, instance.state)
The goal is to send multiple pings via a Cisco device (L2) to populate the ARP table.
The script is done and working, but it is ultra slow since I need to ping 254 addresses or more, depending on whether the subnet is a /24, /23, etc.
Where I am confused: I have tested some threading with some basic scripts to understand how it works, and everything works by using a function and calling it, so far so good.
My problem is that I don't want to create 200+ SSH connections if I use a function for the whole code.
I would like to use threading only on the send_command part from netmiko (a rough sketch of one approach follows the code below).
My code:
from netmiko import ConnectHandler
from pprint import pprint
from time import perf_counter
from threading import Thread

mng_ip = input("please enter the subnet to be scan: ")

cisco_Router = {
    "device_type": "cisco_ios",
    "host": mng_ip,
    "username": "XXXX",
    "password": "XXXX",
    "fast_cli": False,
    "secret": "XXXX"}

print(mng_ip)
subnet1 = mng_ip.split(".")[0]
subnet2 = mng_ip.split(".")[1]
subnet3 = mng_ip.split(".")[2]
#subnet4 = mng_ip.split(".")[3]

active_list = []
start_time = perf_counter()

for x in range(1, 254):
    ip = (subnet1 + "." + subnet2 + "." + subnet3 + "." + str(x))
    print("Pinging:", ip)
    with ConnectHandler(**cisco_Router) as net_connect:
        net_connect.enable()
        result = net_connect.send_command(f"ping {ip} ", delay_factor=2)  # <----- this is the part i would like to perform the threading
        print(result)
        if "0/5" in result:
            print("FAILED")
        else:
            print("ACTIVE")
            active_list.append(ip)
        net_connect.disconnect()

print("Done")
pprint(active_list)

end_time = perf_counter()
print(f'It took {end_time - start_time: 0.2f} second(s) to complete.')
I'm not sure if it is possible or how it could be done.
Thank you in advance.
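A minimal sketch of one way this might be done, assuming a small fixed pool of SSH sessions is acceptable (the POOL_SIZE value and the ping_chunk/scan helpers are illustrative, not from netmiko; each worker owns its own connection because a single netmiko session is not thread-safe):

from concurrent.futures import ThreadPoolExecutor
from netmiko import ConnectHandler

POOL_SIZE = 4  # number of concurrent SSH sessions (illustrative value)

def ping_chunk(device_params, ips):
    # One SSH session per worker thread; all pings in this chunk reuse it.
    active = []
    with ConnectHandler(**device_params) as net_connect:
        net_connect.enable()
        for ip in ips:
            result = net_connect.send_command(f"ping {ip}", delay_factor=2)
            if "0/5" not in result:
                active.append(ip)
    return active

def scan(device_params, all_ips):
    # Split the address list into POOL_SIZE interleaved chunks, one per worker.
    chunks = [all_ips[i::POOL_SIZE] for i in range(POOL_SIZE)]
    active_list = []
    with ThreadPoolExecutor(max_workers=POOL_SIZE) as executor:
        for chunk_result in executor.map(lambda c: ping_chunk(device_params, c), chunks):
            active_list.extend(chunk_result)
    return active_list

# usage: active = scan(cisco_Router, [f"{subnet1}.{subnet2}.{subnet3}.{x}" for x in range(1, 254)])

With four workers this opens only four SSH sessions instead of 200+, while the pings still run in parallel across the pool.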
I am currently trying to send BLE data from the app to my PC using sockets. As of now, I am able to send data to the server, but the server only receives one message and then stops receiving. I suspect that there is something wrong with my client-side code. Any help would be appreciated.
My SocketClass code:
inner class SocketClass {
    fun doInBackground(message: String): String {
        try {
            socket = Socket(SERVER_IP, SERVERPORT)
            val outputStream = OutputStreamWriter(socket?.getOutputStream())
            val printwriter = BufferedWriter(outputStream)
            printwriter.write(message)
            printwriter.flush()
            printwriter.close()
            socket?.close()
        } catch (e: Exception) {
            e.printStackTrace()
        } catch (e: UnknownHostException) {
        }
        return ""
    }
}
and in the BLE read-characteristic callback I call the class in a thread like this:
private fun readDeviceCharacteristic(characteristic: BluetoothGattCharacteristic) {
    var hashval: MutableMap<String, String> = mutableMapOf()
    var mainThreadList: MutableList<String> = mutableListOf()
    var currentTime: String = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS").withZone(
        ZoneId.of("Asia/Seoul")).format(Instant.now()) // Get current Timestamp
    if (idlist.size != 0) {
        readval = characteristic.getStringValue(0)
        valdata = readval.subSequence(3, readval.length).toString()
        idNo = readval.subSequence(0, 2).toString()
        parseList.add(readval)
        mainThreadList = parseList.toMutableList().distinctBy { it.subSequence(0, 2) } as MutableList<String>
        if (mainThreadList.size == maxConnLimit) {
            bleDataList = mainThreadList
            bleDataList.forEach {
                hashval[it.subSequence(0, 2).toString()] = it.subSequence(3, it.length).toString()
            }
            loghash = hashval.toSortedMap()
            txtRead = bleDataList.toString()
            parseList.clear()
            if (isChecked) {
                savehash["Time"]?.add("$sec.$millis")
                bleDataList.forEach {
                    savehash[it.subSequence(0, 2).toString()]?.add(
                        it.subSequence(
                            3,
                            it.length
                        ).toString()
                    )
                }
            } else {
                time = 0
                sec = 0
                millis = ""
            }
            if (isInsert) {
                Thread {
                    SocketClass().doInBackground("$loghash\r\n\r\n")
                }.start()
            }
        }
    }
    isTxtRead = true
    bleDataList.clear()
}
My server socket Python code:
import socket
import sys

host = '19.201.12.12'
port = 30001
address = (host, port)

server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.bind(address)
server_socket.listen(5)

print("Listening for client . . .")
conn, address = server_socket.accept()
print("Connected to client at ", address)

while True:
    output = conn.recv(2048)
    if output.strip() == "disconnect":
        conn.close()
        sys.exit("Received disconnect message. Shutting down.")
    elif output:
        print("Message received from client:")
        print(output)
I am able to get the first data sent, but do not receive anything after. Any help would be greatly appreciated, thank you!
Your server code only accepts one client connection. So, here's the timeline.
Python waits at accept.
Kotlin connects
Python moves into while True loop.
Kotlin writes, then closes the socket.
At that point, nothing further will be received by the Python code, but because you never call accept again, you can't accept another connection.
You need to have the accept in a loop. You handle that connection until it closes, then you loop to another accept.
You might consider using the socketserver module, which automates some of this.
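A minimal sketch of that accept-in-a-loop pattern, reusing the host/port placeholders from the question (the decode call is an addition so the bytes returned by recv compare cleanly against the string "disconnect"):

import socket
import sys

host = '19.201.12.12'
port = 30001

server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.bind((host, port))
server_socket.listen(5)

while True:
    print("Listening for client . . .")
    conn, address = server_socket.accept()
    print("Connected to client at", address)
    with conn:
        while True:
            output = conn.recv(2048)
            if not output:
                break  # client closed its end; go back to accept()
            message = output.decode().strip()
            if message == "disconnect":
                sys.exit("Received disconnect message. Shutting down.")
            print("Message received from client:")
            print(message)

Each time the Kotlin client writes and closes its socket, the inner loop ends and the server goes back to accept() for the next connection instead of hanging on a dead one.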
I am trying to run a command on an ECS container managed by Fargate. I can establish a connection and execute successfully, but I cannot get the response from said command inside my Python script.
import boto3
import pprint as pp

client = boto3.client("ecs")

cluster = "my-mundane-cluster-name"


def main():
    task_arns = client.list_tasks(cluster=cluster, launchType="FARGATE")
    for task_arn in task_arns.get("taskArns", []):
        cmd_out = client.execute_command(
            cluster=cluster,
            command="ls",
            interactive=True,
            task=task_arn,
        )
        pp.pprint(f"{cmd_out}")


if __name__ == "__main__":
    main()
I replaced the command with ls, but for all intents and purposes the flow is the same. Here is what I get as a response:
{
    'clusterArn': 'arn:aws:ecs:■■■■■■■■■■■■:■■■■■■:cluster/■■■■■■',
    'containerArn': 'arn:aws:ecs:■■■■■■■■■■■■:■■■■■■:container/■■■■■■/■■■■■■/■■■■■■■■■■■■■■■■■■',
    'containerName': '■■■■■■',
    'interactive': True,
    'session': {
        'sessionId': 'ecs-execute-command-■■■■■■■■■',
        'streamUrl': '■■■■■■■■■■■■■■■■■■',
        'tokenValue': '■■■■■■■■■■■■■■■■■■'
    },
    'taskArn': 'arn:aws:ecs:■■■■■■■■■■■■:■■■■■■■■■:task/■■■■■■■■■/■■■■■■■■■■■■■■■■■■',
    'ResponseMetadata': {
        'RequestId': '■■■■■■■■■■■■■■■■■■',
        'HTTPStatusCode': 200,
        'HTTPHeaders': {
            'x-amzn-requestid': '■■■■■■■■■■■■■■■■■■',
            'content-type': 'application/x-amz-json-1.1',
            'content-length': '■■■',
            'date': 'Thu, 29 Jul 2021 02:39:24 GMT'
        },
        'RetryAttempts': 0
    }
}
I've tried running the command as non-interactive to see if it returns a response, but the SDK says interactive is the only mode currently supported. I've also tried searching online for clues as to how to do this, but no luck.
Any help is greatly appreciated.
The value of the command output is located within the document stream located at streamId. You must initialize a new session and pass it the sessionId to retrieve its contents.
crude example:
import boto3
import pprint as pp

client = boto3.client("ecs")
ssm_client = boto3.client("ssm")

cluster = "my-mundane-cluster-name"


def main():
    task_arns = client.list_tasks(cluster=cluster, launchType="FARGATE")
    for task_arn in task_arns.get("taskArns", []):
        cmd_out = client.execute_command(
            cluster=cluster,
            command="ls",
            interactive=True,
            task=task_arn,
        )
        session_response = ssm_client.describe_sessions(
            # State must be either 'Active' or 'History'; a finished exec session shows up under 'History'
            State='History',
            Filters=[
                {
                    'key': 'SessionId',
                    'value': cmd_out["session"]["sessionId"]
                },
            ]
        )
        document_response = ssm_client.get_document(
            Name=session_response["Sessions"][0]["DocumentName"],
            DocumentFormat='JSON'
        )
        pp.pprint(document_response)
References
SSM: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ssm.html
SSM #get_document: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ssm.html#SSM.Client.get_document
A quick solution is to use logging instead of pprint:
import logging
boto3.set_stream_logger('boto3.resources', logging.INFO)
I needed to accomplish a similar task, and it turns out it doesn't work as answered here as far as I can tell. Let me know if that worked for you, and how you implemented it, if so.
For me the solution was to open a websocket connection to the stream URL given back in the session, and read the output, like this:
import boto3
import json
import uuid

import construct as c
import websocket


def session_reader(session: dict) -> str:
    AgentMessageHeader = c.Struct(
        "HeaderLength" / c.Int32ub,
        "MessageType" / c.PaddedString(32, "ascii"),
    )

    AgentMessagePayload = c.Struct(
        "PayloadLength" / c.Int32ub,
        "Payload" / c.PaddedString(c.this.PayloadLength, "ascii"),
    )

    connection = websocket.create_connection(session["streamUrl"])
    try:
        init_payload = {
            "MessageSchemaVersion": "1.0",
            "RequestId": str(uuid.uuid4()),
            "TokenValue": session["tokenValue"],
        }
        connection.send(json.dumps(init_payload))
        while True:
            resp = connection.recv()
            message = AgentMessageHeader.parse(resp)
            if "channel_closed" in message.MessageType:
                raise Exception("Channel closed before command output was received")
            if "output_stream_data" in message.MessageType:
                break
    finally:
        connection.close()

    payload_message = AgentMessagePayload.parse(resp[message.HeaderLength:])
    return payload_message.Payload


exec_resp = boto3.client("ecs").execute_command(
    cluster=cluster,
    task=task,
    container=container,
    interactive=True,
    command=cmd,
)

print(session_reader(exec_resp["session"]))
This is all thanks to Andrey's excellent answer on my similar question.
For anybody arriving seeking a similar solution, I have created a tool for making this task simple. It is called interloper.
I'm making a connection to a remote SAS Workspace Server using IOM Bridge:
import win32com.client
objFactory = win32com.client.Dispatch("SASObjectManager.ObjectFactoryMulti2")
objServerDef = win32com.client.Dispatch("SASObjectManager.ServerDef")
objServerDef.MachineDNSName = "servername"
objServerDef.Port = 8591 # workspace server port
objServerDef.Protocol = 2 # 2 = IOM protocol
objServerDef.BridgeSecurityPackage = "Username/Password"
objServerDef.ClassIdentifier = "workspace server id"
objSAS = objFactory.CreateObjectByServer("SASApp", True, objServerDef, "uid", "pw")
program = "ods listing;proc means data=sashelp.cars mean mode min max; run;"
objSAS.LanguageService.Submit(program)
_list = objSAS.LanguageService.FlushList(999999)
print(_list)
log = objSAS.LanguageService.FlushLog(999999)
print(log)
objSAS.Close()
It works fine. But I can't seem to find the right attribute for CreateObjectByServer when trying with a Stored Process Server (when I change 'Port' and 'ClassIdentifier'):
import win32com.client
objFactory = win32com.client.Dispatch("SASObjectManager.ObjectFactoryMulti2")
objServerDef = win32com.client.Dispatch("SASObjectManager.ServerDef")
objServerDef.MachineDNSName = "servername"
objServerDef.Port = 8601 # stored process server port
objServerDef.Protocol = 2 # 2 = IOM protocol
objServerDef.BridgeSecurityPackage = "Username/Password"
objServerDef.ClassIdentifier = "stp server id"
objSAS = objFactory.CreateObjectByServer("SASApp", True, objServerDef, "uid", "pw")
objSAS.StoredProcessService.Repository("path", "stp", "params")
_list = objSAS.LanguageService.FlushList(999999)
print(_list)
log = objSAS.LanguageService.FlushLog(999999)
print(log)
objSAS.Close()
When trying the above I get:
AttributeError: CreateObjectByServer.StoredProcessService
I can't seem to find much documentation on IOM to Stored Process Servers. Does anyone have any suggestions?
I have the following python boto3 code with a potentially infinite while-loop. Generally, after a few minutes the while-loop succeeds. However, if something fails on the AWS side the program could hang for an indefinite period.
I am sure that this is not the most appropriate way to do this.
# credentials stored in ../.aws/credentials
# region stored in ../.aws/config

# builtins
from time import sleep

# plugins
import boto3

# Assign server instance IDs.
cye_production_web_server_2 = 'i-FAKE-ID'

# Setup EC2 client
ec2 = boto3.client('ec2')

# Start the second web server.
start_response = ec2.start_instances(
    InstanceIds=[cye_production_web_server_2, ],
    DryRun=False
)

print(
    'instance id:',
    start_response['StartingInstances'][0]['InstanceId'],
    'is',
    start_response['StartingInstances'][0]['CurrentState']['Name']
)

# Wait until status is 'ok'
status = None
while status != 'ok':
    status_response = ec2.describe_instance_status(
        DryRun=False,
        InstanceIds=[cye_production_web_server_2, ],
    )
    status = status_response['InstanceStatuses'][0]['SystemStatus']['Status']
    sleep(5)  # 5 second throttle

print(status_response)
print('status is', status.capitalize())
Implement a counter in the loop and fail after so many attempts:
status = None
counter = 5
while (status != 'ok' and counter > 0):
    status_response = ec2.describe_instance_status(
        DryRun=False,
        InstanceIds=[cye_production_web_server_2, ],
    )
    status = status_response['InstanceStatuses'][0]['SystemStatus']['Status']
    sleep(5)  # 5 second throttle
    counter = counter - 1

print(status_response)
print('status is', status.capitalize())
You could try doing it in a for loop instead with a fixed number of tries.
For example:
MAX_RETRIES = 5

# Try until status is 'ok'
for x in range(MAX_RETRIES):
    status_response = ec2.describe_instance_status(
        DryRun=False,
        InstanceIds=[cye_production_web_server_2, ],
    )
    status = status_response['InstanceStatuses'][0]['SystemStatus']['Status']
    if status != 'ok':
        sleep(5)  # 5 second throttle
    else:
        break
Using a timeout might be a better idea:
import time

systemstatus = False
minutes = 5  # how long to keep polling before giving up
timeout = time.time() + 60 * minutes

while systemstatus is not True:
    status = ec2.describe_instance_status(
        DryRun=False,
        InstanceIds=[instance_id]
    )
    if status['InstanceStatuses'][0]['SystemStatus']['Status'] == 'ok':
        systemstatus = True
    if time.time() > timeout:
        break
    else:
        time.sleep(10)
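For completeness, a sketch of the same bounded wait using boto3's built-in instance_status_ok waiter instead of a hand-rolled loop; the Delay and MaxAttempts values below are arbitrary, and WaiterError is raised once the attempts run out, so the script cannot hang indefinitely:

import boto3
from botocore.exceptions import WaiterError

ec2 = boto3.client('ec2')
waiter = ec2.get_waiter('instance_status_ok')

try:
    # Poll every 15 seconds, at most 40 times (roughly 10 minutes), then give up.
    waiter.wait(
        InstanceIds=['i-FAKE-ID'],
        WaiterConfig={'Delay': 15, 'MaxAttempts': 40},
    )
    print('status is Ok')
except WaiterError as e:
    print('gave up waiting for the instance:', e)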