I have this Python script that I use to collect info on the EC2 instances for my Beanstalk applications.
It worked perfectly fine for a long time, and then it simply stopped producing results, but it throws no errors and there are no authentication problems.
What am I missing? Was there a change to the API?
The script is below:
import boto3

regions = ['us-east-1', 'us-west-2']

for region in regions:
    ebs_client = boto3.client('elasticbeanstalk', region_name=region)
    ec2_client = boto3.client('ec2', region_name=region)

    apps = ebs_client.describe_applications()
    print(apps)
    for app in apps['Applications']:
        appname = app['ApplicationName']
        print(appname)

        envs = ebs_client.describe_environments(ApplicationName=appname)
        for env in envs['Environments']:
            envname = env['EnvironmentName']
            envid = env['EnvironmentId']

            [... some more code ...]
Right on the first call to describe_applications, it returns a 200 OK status code but with zero results, and I have a lot of Beanstalk apps running in those regions.
Problem solved. It had to do with the credentials used.
I removed them and used the Instance Role instead and it worked.
Still weird, though, because there should be an error somewhere instead of zero results.
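For anyone hitting the same thing, this is roughly the difference; the explicit-credentials setup is an assumption on my part, since it isn't shown in the script above:

import boto3

# Before (assumed): explicit credentials pointing at the wrong account or a stale key,
# which authenticate fine but simply see no Beanstalk applications.
ebs_client = boto3.client(
    'elasticbeanstalk',
    region_name='us-east-1',
    aws_access_key_id='AKIA...',        # hypothetical
    aws_secret_access_key='...',        # hypothetical
)

# After: no explicit credentials, so boto3 falls back to its default credential chain,
# which on an EC2 instance resolves to the attached Instance Role.
ebs_client = boto3.client('elasticbeanstalk', region_name='us-east-1')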
So I have created a security group with an inbound traffic rule and a key pair, and I am trying to create an instance using this code:
instances = ec2.create_instances(
    ImageId="ami-d38a4ab1",
    MinCount=1,
    MaxCount=1,
    InstanceType="t2.micro",
    KeyName="my-key",
    SecurityGroupIds=['sg.##############']
)
But I keep getting an error saying:
An error occurred (InvalidParameterValue) when calling the RunInstances operation: Value () for parameter groupId is invalid. The value cannot be empty.
I am unsure of what I am doing wrong.
I ran your code (without the Key and Security Group) and it worked perfectly fine for me:
import boto3

ec2_resource = boto3.resource('ec2')

instances = ec2_resource.create_instances(
    ImageId="ami-d38a4ab1",
    MinCount=1,
    MaxCount=1,
    InstanceType="t2.micro"
)
The error message is saying that the Group ID parameter is invalid, which suggests that you did not provide us with the actual command you are running.
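One thing worth double-checking (an assumption on my part, since the real value is masked in the question): security group IDs use the sg- prefix followed by the identifier, and an empty or malformed entry in SecurityGroupIds produces exactly this error. A minimal sketch with placeholder values:

import boto3

ec2_resource = boto3.resource('ec2')

# All values below are placeholders -- substitute your own AMI, key pair name,
# and security group ID (note the 'sg-' prefix with a hyphen).
instances = ec2_resource.create_instances(
    ImageId="ami-d38a4ab1",
    MinCount=1,
    MaxCount=1,
    InstanceType="t2.micro",
    KeyName="my-key",
    SecurityGroupIds=["sg-0123456789abcdef0"]
)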
I've made a simple pipeline in Python to read from Kafka. The thing is that the Kafka cluster is on Confluent Cloud and I am having some trouble connecting to it.
I'm getting the following log on the Dataflow job:
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:820)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:631)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:612)
at org.apache.beam.sdk.io.kafka.KafkaIO$Read$GenerateKafkaSourceDescriptor.processElement(KafkaIO.java:1495)
Caused by: java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
So I think I'm missing something while passing the config, since the log mentions something related to it. I'm really new to all of this and I know nothing about Java, so I don't know how to proceed even after reading the JAAS documentation.
The code of the pipeline is the following:
import apache_beam as beam
from apache_beam.io.kafka import ReadFromKafka
from apache_beam.options.pipeline_options import PipelineOptions
import os
import json
import logging

os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'credentialsOld.json'

with open('cluster.configuration.json') as cluster:
    data = json.load(cluster)

def logger(element):
    logging.info('Something was found')

def main():
    config = {
        "bootstrap.servers": data["bootstrap.servers"],
        "security.protocol": data["security.protocol"],
        "sasl.mechanisms": data["sasl.mechanisms"],
        "sasl.username": data["sasl.username"],
        "sasl.password": data["sasl.password"],
        "session.timeout.ms": data["session.timeout.ms"],
        "auto.offset.reset": "earliest"
    }

    print('======================================================')

    beam_options = PipelineOptions(
        runner='DataflowRunner',
        project='project',
        experiments=['use_runner_v2'],
        streaming=True,
        save_main_session=True,
        job_name='kafka-stream-test'
    )

    with beam.Pipeline(options=beam_options) as p:
        msgs = p | 'ReadKafka' >> ReadFromKafka(
            consumer_config=config,
            topics=['users'],
            expansion_service="localhost:8088"
        )
        msgs | beam.FlatMap(logger)

if __name__ == '__main__':
    main()
I read something about passing a java.security.auth.login.config property in the config dictionary, but since that example is in Java and I'm using Python, I'm really lost as to what I have to pass, or even whether that's the property I have to pass.
Btw, I'm getting the API key and secret from here, and that is what I am passing to sasl.username and sasl.password.
I faced the same error the first time I tried Beam's expansion service. The key sasl.mechanisms that you are supplying is incorrect; try sasl.mechanism instead. You also do not need to supply sasl.username and sasl.password separately, since the connection is authenticated via the JAAS config. Basically, a consumer_config like the one below worked for me:
config = {
    "bootstrap.servers": data["bootstrap.servers"],
    "security.protocol": data["security.protocol"],
    "sasl.mechanism": data["sasl.mechanisms"],
    "session.timeout.ms": data["session.timeout.ms"],
    "group.id": "tto",
    "sasl.jaas.config": f'org.apache.kafka.common.security.plain.PlainLoginModule required serviceName="Kafka" username=\"{data["sasl.username"]}\" password=\"{data["sasl.password"]}\";',
    "auto.offset.reset": "earliest"
}
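For reference, that config then slots into the same ReadFromKafka call from the question; nothing else in the pipeline needs to change (the topic and expansion service address below are the question's own):

msgs = p | 'ReadKafka' >> ReadFromKafka(
    consumer_config=config,
    topics=['users'],
    expansion_service="localhost:8088"
)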
I got a partial answer to this question, since I fixed this problem but ran into another one:
config = {
    "bootstrap.servers": data["bootstrap.servers"],
    "security.protocol": data["security.protocol"],
    "sasl.mechanisms": data["sasl.mechanisms"],
    "sasl.username": data["sasl.username"],
    "sasl.password": data["sasl.password"],
    "session.timeout.ms": data["session.timeout.ms"],
    "group.id": "tto",
    "sasl.jaas.config": f'org.apache.kafka.common.security.plain.PlainLoginModule required serviceName="Kafka" username=\"{data["sasl.username"]}\" password=\"{data["sasl.password"]}\";',
    "auto.offset.reset": "earliest"
}
I needed to provide the sasl.jaas.config property with the API key and secret of my cluster and also the service name. However, now I'm facing a different error when running the pipeline on Dataflow:
Caused by: org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
This error shows up after 4-5 minutes of trying to run the job on Dataflow. I actually have no idea how to fix this, but I think it is related to my broker on Confluent rejecting the connection; it could be related to the execution zone, since the cluster is in a different zone than the job region.
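If that theory is right, one thing worth trying (purely an assumption on my part; the region name below is a placeholder) is pinning the Dataflow region explicitly so the workers run as close to the cluster as possible:

beam_options = PipelineOptions(
    runner='DataflowRunner',
    project='project',
    region='us-central1',  # placeholder -- pick the region closest to the Confluent cluster
    experiments=['use_runner_v2'],
    streaming=True,
    save_main_session=True,
    job_name='kafka-stream-test'
)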
UPDATE:
I tested the code on Linux/Ubuntu and I don't know why, but the expansion service gets downloaded automatically, so you won't get the unsupported signal error. I'm still having some issues trying to authenticate to Confluent Kafka, though.
I maintain a Python tool that runs automation against a Perforce server. For obvious reasons, parts of my test suite (which are unittest.TestCase classes run with Pytest) require a live server. Until now I've been using a remote testing server, but I'd like to move that into my local environment, and make server initialization part of my pre-test setup.
I'm experimenting with dockerization as a solution, but I get strange connection errors when trying to run Perforce commands against the server in my test code. Here's my test server code (using a custom Docker image, a Singleton metaclass based on https://stackoverflow.com/a/6798042, and with the P4Python library installed):
class P4TestServer(metaclass=Singleton):
    def __init__(self, conf_file='conf/p4testserver.conf'):
        self.docker_client = docker.from_env()
        self.config = P4TestServerConfig.load_config(conf_file)

        self.server_container = None
        try:
            self.server_container = self.docker_client.containers.get('perforce')
        except docker.errors.NotFound:
            self.server_container = self.docker_client.containers.run(
                'perforce-server',
                detach=True,
                environment={
                    'P4USER': self.config.p4superuser,
                    'P4PORT': self.config.p4port,
                    'P4PASSWD': self.config.p4superpasswd,
                    'NAME': self.config.p4name
                },
                name='perforce',
                ports={
                    '1667/tcp': 1667
                },
                remove=True
            )

        self.p4 = P4()
        self.p4.port = self.config.p4port
        self.p4.user = self.config.p4superuser
        self.p4.password = self.config.p4superpasswd
And here's my test code:
class TestSystemP4TestServer(unittest.TestCase):
    def test_server_connection(self):
        testserver = P4TestServer()
        with testserver.p4.connect():
            info = testserver.p4.run_info()
            self.assertIsNotNone(info)
So this is the part that's getting to me: the first time I run that test (i.e. when it has to start the container), it fails with the following error:
E P4.P4Exception: [P4#run] Errors during command execution( "p4 info" )
E
E [Error]: 'TCP receive failed.\nread: socket: Connection reset by peer'
But on subsequent runs, when the container is already running, it passes. What's frustrating is that I can't otherwise reproduce this error. If I run that test code in any other context, including:
- in a Python interpreter
- in a debugger stopped just before the testserver.p4.run_info() invocation
the code completes as expected regardless of whether the container was already running.
All I can think at this point is that there's something unique about the pytest environment that's tripping me up, but I'm at a loss for even how to begin diagnosing. Any thoughts?
I had a similar issue recently where I would start a postgres container and then immediately run a Python script to set up the database as per my app's requirements.
I had to introduce a sleep command in between the two steps, and that resolved the issue.
Ideally you should check that the start sequence of the Docker container is done before trying to use it, but for my local development use case, sleeping for 5 seconds was a good enough workaround.
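A minimal sketch of that readiness check, using the same P4Python API as in the question (the helper name, timeout, and poll interval are just illustrative):

import time
from P4 import P4, P4Exception

def wait_for_p4(port, user, timeout=30.0, interval=0.5):
    """Poll the server until it answers 'p4 info', or give up after `timeout` seconds."""
    p4 = P4()
    p4.port = port
    p4.user = user
    deadline = time.time() + timeout
    while True:
        try:
            with p4.connect():
                p4.run_info()
            return
        except P4Exception:
            if time.time() > deadline:
                raise
            time.sleep(interval)

Called right after containers.run(...) returns, this turns the race into an explicit wait instead of a fixed sleep.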
I'd like to copy a blob from one private container to another private container, within the same storage account.
I've written code that uses BlockBlobService, initialised with the storage account name and account key.
I found this worked fine for a few days, but suddenly it is having trouble with the requires_sync option.
from azure.storage.blob import BlockBlobService
blos = BlockBlobService("some-storage-account", "some-storage-key")
blos.copy_blob("some-target-container", "some-target-key", blos.make_blob_url("source-container", "source_key"), requires_sync=True)
This fails with
AzureMissingResourceHttpError: The specified resource does not exist. ErrorCode: CannotVerifyCopySource
blos.copy_blob("some-target-container", "some-target-key", blos.make_blob_url("source-container", "source_key"))
This succeeds fine.
I'm using Python 2.7.
In Python 3, it says requires_sync is an unexpected keyword argument. I only need it to work in 2.7 for now.
EDIT: I've worked around the problem with -
wait = blos.copy_blob("some-target-container", "some-target-key", blos.make_blob_url("source-container", "source_key"))
while wait.status == 'pending':
    time.sleep(0.5)
but I'm not sure if this is the best way to go about it.
EDIT: changed while wait.status != 'success' to while wait.status == 'pending'
Please try to install the latest azure-storage-blob 2.0.1 package.
For this error ("AzureMissingResourceHttpError: The specified resource does not exist. ErrorCode: CannotVerifyCopySource"), you should either use a public source container or add a SAS token to the source blob URL (like https://xxx.blob.core.windows.net/f22/gen2.JPG?sasToken).
I tested in Python 3.7, and it works fine with the parameter requires_sync=True.
The code:
from azure.storage.blob import BlockBlobService

accountName = "yy3"
accountKey = "xxxx"

blobs = BlockBlobService(account_name=accountName, account_key=accountKey)

copySource = "https://yy3.blob.core.windows.net/f22/gen2.JPG?sasToken"

blobs.copy_blob("aa1", "copy_gen2.jpg", copySource, requires_sync=True)
print("completed")
Note that both the destination / source containers are private.
The result: the blob file is copied to the destination container.
And here is the source code of the copy_blob method; requires_sync is a valid parameter.
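For the case in the question where both containers must stay private, a sketch of appending a read-only SAS token to the source URL with the same BlockBlobService API; the container and blob names and the one-hour expiry are placeholders:

from datetime import datetime, timedelta
from azure.storage.blob import BlockBlobService, BlobPermissions

blos = BlockBlobService("some-storage-account", "some-storage-key")

# Give the copy operation read access to the private source blob via a short-lived SAS token.
sas = blos.generate_blob_shared_access_signature(
    "source-container",
    "source_key",
    permission=BlobPermissions.READ,
    expiry=datetime.utcnow() + timedelta(hours=1)
)
source_url = blos.make_blob_url("source-container", "source_key", sas_token=sas)

blos.copy_blob("some-target-container", "some-target-key", source_url, requires_sync=True)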
I'm using Pyramid with Cornice to create an API for a Backbone.js application to consume. My current code is working perfectly for GET and POST requests, but it is returning 404 errors when it receives PUT requests. I believe that this is because Backbone sends them as http://example.com/api/clients/ID, where ID is the id number of the object in question.
My Cornice setup code is:
clients = Service(name='clients', path='/api/clients', description="Clients")

@clients.get()
def get_clients(request):
    ...

@clients.post()
def create_client(request):
    ...

@clients.put()
def update_client(request):
    ...
It seems that Cornice only registers the path /api/clients and not /api/clients/{id}. How can I make it match both?
The documentation gives an example of a service that has both an object path (/users/{id}) and a collection path (/users). Would this work for you?
@resource(collection_path='/users', path='/users/{id}')
A quick glance at the code for the resource decorator shows that it mainly creates two Services: one for the object and one for the collection. Your problem can probably be solved by adding another Service:
client = Service(name='client', path='/api/clients/{id}', description="Client")
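Building on that, a sketch of how the PUT handler could move onto the new per-object service (the handler bodies are the question's own stubs; reading the id from request.matchdict is standard Pyramid routing):

from cornice import Service

clients = Service(name='clients', path='/api/clients', description="Clients")
client = Service(name='client', path='/api/clients/{id}', description="Client")

@clients.get()
def get_clients(request):
    ...

@clients.post()
def create_client(request):
    ...

@client.put()
def update_client(request):
    # The id segment Backbone appends to the URL is available here.
    client_id = request.matchdict['id']
    ...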