I have a VM with disks 1, 2, 3 and 4, and I want to do some image operations:
Q1: How can I capture an image that contains only the system disk and disk 3?
Q2: If I can produce the image described in Q1, can I use this
image to install or reload a VM? How does the SoftLayer API handle disk 3 in the
image?
Q3: Can I make a snapshot image of disk 3 only?
Q4: If I can produce the image described in Q3, how can I use this
snapshot to initialize a disk?
At the moment, when you create an image template you can specify which block devices you want included in it; you can do that using either the API or the portal.
Here is an example using the API:
"""
Create image template.
The script creates a standard image template by making
a call to the SoftLayer_Virtual_Guest::createArchiveTransaction method,
sending the IDs of the disks in the request.
For more information please see below.
Important manual pages:
https://sldn.softlayer.com/reference/services/SoftLayer_Virtual_Guest
https://sldn.softlayer.com/reference/services/SoftLayer_Virtual_Guest/createArchiveTransaction
https://sldn.softlayer.com/reference/datatypes/SoftLayer_Virtual_Guest_Block_Device
License: http://sldn.softlayer.com/article/License
Author: SoftLayer Technologies, Inc. <sldn@softlayer.com>
"""
import SoftLayer
# Your SoftLayer API username and key.
USERNAME = 'set me'
API_KEY = 'set me'
# The ID of the virtual guest you want to create a template from
virtualGuestId = 4058502
# The name of the image template
groupName = 'my image name'
# An optional note for the image template
note = 'an optional note'
"""
Build a list of skeleton SoftLayer_Virtual_Guest_Block_Device objects
containing the disks you want in the image.
In this case we are going to take an image template of 2 disks
from the virtual machine.
"""
blockDevices = [
{
"id": 4667098,
"complexType": "SoftLayer_Virtual_Guest_Block_Device"
},
{
"id": 4667094,
"complexType": "SoftLayer_Virtual_Guest_Block_Device"
}
]
# Declare a new API service object
client = SoftLayer.Client(username=USERNAME, api_key=API_KEY)
try:
# Creating the transaction for the image template
response = client['SoftLayer_Virtual_Guest'].createArchiveTransaction(groupName, blockDevices, note, id=virtualGuestId)
print(response)
except SoftLayer.SoftLayerAPIError as e:
"""
# If there was an error returned from the SoftLayer API then bomb out with the
# error message.
"""
print("Unable to create the image template. faultCode=%s, faultString=%s" % (e.faultCode, e.faultString))
You only need the IDs of the block devices (i.e. the disks); to get them you can call this method:
http://sldn.softlayer.com/reference/services/SoftLayer_Virtual_Guest/getBlockDevices
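For reference, here is a minimal sketch (reusing USERNAME, API_KEY and virtualGuestId from the example above; the object mask properties are just my choice) that lists the block device IDs so you know which ones to pass to createArchiveTransaction:
import SoftLayer
client = SoftLayer.Client(username=USERNAME, api_key=API_KEY)
# Ask only for the fields we care about: the block device id, its device number
# and the disk image behind it (capacity/units), so the mapping to "disk 0..4" is clear.
objectMask = 'mask[id, device, diskImage[capacity, units]]'
blockDevices = client['SoftLayer_Virtual_Guest'].getBlockDevices(mask=objectMask, id=virtualGuestId)
for device in blockDevices:
    print(device['id'], 'device number:', device['device'], device.get('diskImage'))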
There are some rules for the block devices:
Only block devices of type disk can be captured.
The swap block device cannot be included in the list of block devices to capture (this is disk number 1).
The block device which contains the OS must be included (this is disk number 0).
Block devices which contain metadata cannot be included in the image.
When you are ordering a new device using this image template you need to keep the following in mind:
If you are using the placeOrder method, you need to make sure that you are adding the prices for the extra disks.
If you are using the createObject method, the number of disks will be taken from the image template, so it is not necessary to specify the extra disks (see the sketch below).
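As an illustration, here is a hedged sketch of ordering with createObject from an image template; the hostname, domain, datacenter and the image's global identifier below are placeholders you would replace with your own values:
import SoftLayer
client = SoftLayer.Client(username=USERNAME, api_key=API_KEY)
guest_template = {
    'hostname': 'myhost',                      # placeholder
    'domain': 'example.com',                   # placeholder
    'startCpus': 1,
    'maxMemory': 1024,
    'hourlyBillingFlag': True,
    'localDiskFlag': False,
    'datacenter': {'name': 'dal05'},           # placeholder
    # The image template to provision from; the extra disks come from the template itself.
    'blockDeviceTemplateGroup': {'globalIdentifier': 'set-me-to-the-image-global-identifier'}
}
newGuest = client['SoftLayer_Virtual_Guest'].createObject(guest_template)
print(newGuest)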
You can also use image templates in OS reloads, but the reload only affects the disk which contains the OS. So if you have a virtual machine with 3 disks and perform a reload, only the disk containing the OS is affected, even if the image template has 3 disks.
If there are problems with your order due to lack of disk capacity or other issues, the errors will appear at provisioning time and the VSI will not be provisioned; most likely a ticket will be opened and a SoftLayer employee will inform you about it.
Regards
I am using Python 3 to transcribe an audio file with Google Speech-to-Text via the provided Python packages (google-speech).
There is an option to define custom phrases which should be used for transcription as stated in the docs: https://cloud.google.com/speech-to-text/docs/speech-adaptation
For testing purposes I am using a small audio file with the contained text:
[..] in this lecture we'll talk about the Burrows wheeler transform and the FM index [..]
And I am supplying the following phrases to see the effect if, for example, I want a specific name to be recognized with the correct spelling. In this example I want to change "burrows" to "barrows":
config = speech.RecognitionConfig(dict(
encoding=speech.RecognitionConfig.AudioEncoding.ENCODING_UNSPECIFIED,
sample_rate_hertz=24000,
language_code="en-US",
enable_word_time_offsets=True,
speech_contexts=[
speech.SpeechContext(dict(
phrases=["barrows", "barrows wheeler", "barrows wheeler transform"]
))
]
))
Unfortunately this does not seem to have any effect as the output is still the same as without the context phrases.
Am I using the phrases wrong, or is the confidence that the word it hears is indeed "burrows" so high that my phrases are ignored?
PS: I also tried using the speech_v1p1beta1.AdaptationClient and speech_v1p1beta1.SpeechAdaptation instead of putting the phrases into the config but this only gives me an internal server error with no additional information on what is going wrong. https://cloud.google.com/speech-to-text/docs/adaptation
I created an audio file to recreate your scenario and I was able to improve the recognition using model adaptation. To achieve this, I would suggest taking a look at this example and this post to better understand the adaptation model.
Now, to improve the recognition of your phrase, I performed the following:
I created a new audio file using the following page with the mentioned phrase.
in this lecture we'll talk about the Burrows wheeler transform and the FM index
My tests were based on this code sample. This code creates a PhraseSet and CustomClass that include the word you would like to improve, in this case the word "barrows". You can also create/update/delete the phrase set and custom class using the Speech-to-Text GUI. Below is the code I used for the improvement.
from google.cloud import speech_v1p1beta1 as speech
import argparse
def transcribe_with_model_adaptation(
project_id="[PROJECT-ID]", location="global", speech_file=None, custom_class_id="[CUSTOM-CLASS-ID]", phrase_set_id="[PHRASE-SET-ID]"
):
"""
Create`PhraseSet` and `CustomClasses` to create custom lists of similar
items that are likely to occur in your input data.
"""
import io
# Create the adaptation client
adaptation_client = speech.AdaptationClient()
# The parent resource where the custom class and phrase set will be created.
parent = f"projects/{project_id}/locations/{location}"
# Create the custom class resource
adaptation_client.create_custom_class(
{
"parent": parent,
"custom_class_id": custom_class_id,
"custom_class": {
"items": [
{"value": "barrows"}
]
},
}
)
custom_class_name = (
f"projects/{project_id}/locations/{location}/customClasses/{custom_class_id}"
)
# Create the phrase set resource
phrase_set_response = adaptation_client.create_phrase_set(
{
"parent": parent,
"phrase_set_id": phrase_set_id,
"phrase_set": {
"boost": 0,
"phrases": [
{"value": f"${{{custom_class_name}}}", "boost": 10},
{"value": f"talk about the ${{{custom_class_name}}} wheeler transform", "boost": 15}
],
},
}
)
phrase_set_name = phrase_set_response.name
# print(u"Phrase set name: {}".format(phrase_set_name))
# The next section shows how to use the newly created custom
# class and phrase set to send a transcription request with speech adaptation
# Speech adaptation configuration
speech_adaptation = speech.SpeechAdaptation(
phrase_set_references=[phrase_set_name])
# speech configuration object
config = speech.RecognitionConfig(
encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
sample_rate_hertz=24000,
language_code="en-US",
adaptation=speech_adaptation,
enable_word_time_offsets=True,
model="phone_call",
use_enhanced=True
)
# The name of the audio file to transcribe
# storage_uri URI for audio file in Cloud Storage, e.g. gs://[BUCKET]/[FILE]
with io.open(speech_file, "rb") as audio_file:
content = audio_file.read()
audio = speech.RecognitionAudio(content=content)
# audio = speech.RecognitionAudio(uri="gs://biasing-resources-test-audio/call_me_fionity_and_ionity.wav")
# Create the speech client
speech_client = speech.SpeechClient()
response = speech_client.recognize(config=config, audio=audio)
for result in response.results:
# The first alternative is the most likely one for this portion.
print(u"Transcript: {}".format(result.alternatives[0].transcript))
# [END speech_transcribe_with_model_adaptation]
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter
)
parser.add_argument("path", help="Path for audio file to be recognized")
args = parser.parse_args()
transcribe_with_model_adaptation(speech_file=args.path)
Once it runs, you will receive improved recognition like the output below; however, keep in mind that the code tries to create a new custom class and a new phrase set each time it runs, so it may throw an "element already exists" error if you try to re-create the custom class or the phrase set.
Using the recognition without the adaptation
(python_speech2text) user@penguin:~/replication/python_speech2text$ python speech_model_adaptation_beta.py audio.flac
Transcript: in this lecture will talk about the Burrows wheeler transform and the FM index
Using the recognition with the adaptation
(python_speech2text) user@penguin:~/replication/python_speech2text$ python speech_model_adaptation_beta.py audio.flac
Transcript: in this lecture will talk about the barrows wheeler transform and the FM index
Finally, I would like to add some notes about the improvement and the code I used:
I have used a flac audio file as it is recommended for optimal results.
I used model="phone_call" and use_enhanced=True, as this was the model recognized by Cloud Speech-To-Text for my own audio file. The enhanced model can also provide better results; see the documentation for more details. Note that this configuration might vary for your audio file.
Consider enabling data logging so that Google can collect data from your audio transcription requests. Google then uses this data to improve its machine learning models for speech recognition.
Once you have created the custom class and the phrase set, you can use the Speech-to-Text UI to update them and run your tests quickly.
I used the boost parameter in the phrase set. When you use boost, you assign a weighted value to phrase items in a PhraseSet resource; Speech-to-Text refers to this weighted value when selecting a possible transcription for words in your audio data. The higher the value, the higher the likelihood that Speech-to-Text chooses that word or phrase from the possible alternatives.
I hope this information helps you to improve your recognitions.
I recently started using SimpleITK to modify some DICOM images. However, I am unable to modify the metadata; as a matter of fact I can't even access it.
I know, thanks to a script I found here: https://github.com/SimpleITK/SimpleITK/pull/262/files?diff=split, that metadata is not loaded by default because it slows the process down. I also know that to load the metadata I should use the following method of the reader: ".LoadPrivateTagsOn()".
However, whenever I use the '.GetMetaDataKeys()' method on my image object it returns an empty tuple. I expected the code below to give me some keys, but it didn't.
#=========================================================================
#
# Copyright Insight Software Consortium
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0.txt
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#=========================================================================
from __future__ import print_function
import SimpleITK as sitk
import sys, time, os
import numpy as np
# if len( sys.argv ) < 2:
# print( "Usage: python " + __file__ + "<output_directory>" )
# sys.exit ( 1 )
# Create a new series from a numpy array
new_arr = np.random.uniform(-10, 10, size = (3,4,5)).astype(np.int16)
new_img = sitk.GetImageFromArray(new_arr)
new_img.SetSpacing([2.5,3.5,4.5])
directory = r"C:\Users\jeroen\Documents\2eMaster\Reconstruction3D\Projet Femur\Dicom\test"
# Write the 3D image as a series
# IMPORTANT: There are many DICOM tags that need to be updated when you modify an
# original image. This is a delicate operation and requires knowledge of
# the DICOM standard. This example only modifies some. For a more complete
# list of tags that need to be modified see:
# http://gdcm.sourceforge.net/wiki/index.php/Writing_DICOM
writer = sitk.ImageFileWriter()
# Use the study/series/frame of reference information given in the meta-data
# dictionary and not the automatically generated information from the file IO
writer.KeepOriginalImageUIDOn()
# Copy relevant tags from the original meta-data dictionary (private tags are also
# accessible).
tags_to_copy = ["0010|0010", # Patient Name
"0010|0020", # Patient ID
"0010|0030", # Patient Birth Date
"0020|000D", # Study Instance UID, for machine consumption
"0020|0010", # Study ID, for human consumption
"0008|0020", # Study Date
"0008|0030", # Study Time
"0008|0050", # Accession Number
"0008|0060" # Modality
]
modification_time = time.strftime("%H%M%S")
modification_date = time.strftime("%Y%m%d")
# Copy some of the tags and add the relevant tags indicating the change.
# For the series instance UID (0020|000e), each of the components is a number, cannot start
# with zero, and separated by a '.' We create a unique series ID using the date and time.
# tags of interest:
direction = new_img.GetDirection()
print(new_img.HasMetaDataKey("0008|0021"))
series_tag_values = [(k, new_img.GetMetaData(k)) for k in tags_to_copy if new_img.HasMetaDataKey(k)] + \
[("0008|0031",modification_time), # Series Time
("0008|0021",modification_date), # Series Date
("0008|0008","DERIVED\\SECONDARY"), # Image Type
("0020|000e", "1.2.826.0.1.3680043.2.1125."+modification_date+".1"+modification_time), # Series Instance UID
("0020|0037", '\\'.join(map(str, (direction[0], direction[3], direction[6],# Image Orientation (Patient)
direction[1],direction[4],direction[7])))),
("0008|103e", "Created-SimpleITK")] # Series Description
print(new_img.GetMetaDataKeys())
for i in range(new_img.GetDepth()):
image_slice = new_img[:,:,i]
# Tags shared by the series.
for tag, value in series_tag_values:
image_slice.SetMetaData(tag, value)
# Slice specific tags.
image_slice.SetMetaData("0008|0012", time.strftime("%Y%m%d")) # Instance Creation Date
image_slice.SetMetaData("0008|0013", time.strftime("%H%M%S")) # Instance Creation Time
image_slice.SetMetaData("0008|0060", "CT") # set the type to CT so the thickness is carried over
image_slice.SetMetaData("0020|0032", '\\'.join(map(str,new_img.TransformIndexToPhysicalPoint((0,0,i))))) # Image Position (Patient)
image_slice.SetMetaData("0020,0013", str(i)) # Instance Number
# Write to the output directory and add the extension dcm, to force writing in DICOM format.
writer.SetFileName(os.path.join(directory,str(i)+'.dcm'))
writer.Execute(image_slice)
print(new_img.GetMetaDataKeys())
# Re-read the series
# Read the original series. First obtain the series file names using the
# image series reader.
data_directory = directory
series_IDs = sitk.ImageSeriesReader.GetGDCMSeriesIDs(data_directory)
if not series_IDs:
print("ERROR: given directory \""+data_directory+"\" does not contain a DICOM series.")
sys.exit(1)
series_file_names = sitk.ImageSeriesReader.GetGDCMSeriesFileNames(data_directory, series_IDs[0])
series_reader = sitk.ImageSeriesReader()
series_reader.SetFileNames(series_file_names)
# Configure the reader to load all of the DICOM tags (public and private):
# By default tags are not loaded (saves time).
# By default if tags are loaded, the private tags are not loaded.
# We explicitly configure the reader to load tags, including the
# private ones.
series_reader.LoadPrivateTagsOn()
image3D = series_reader.Execute()
print(image3D.GetMetaDataKeys())
sys.exit( 0 )
Any help is greatly appreciated!
EDIT: It seems that I also need to call the '.MetaDataDictionaryArrayUpdateOn()' method on my reader. However, if I try to do that it always tells me that there is no such method on the 'ImageSeriesReader' class, even though it is mentioned in the documentation. Any suggestions?
I'm going to answer my own question here. Thanks to a post I made on GitHub I found the answer: the method '.MetaDataDictionaryArrayUpdateOn()' is simply not implemented in this build (1.0.1).
There are two workarounds; credit goes to the SimpleITK GitHub community.
You can find the post here: https://github.com/SimpleITK/SimpleITK/issues/331
At the next release (somewhere in January) this problem will be resolved.
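In the meantime, one possible workaround (not necessarily one of the two from the GitHub issue) is to read a single DICOM file directly; single-file reads do populate the metadata dictionary, so per-slice tags can still be inspected. A rough sketch, assuming the same data_directory and series_IDs as in the code above:
import SimpleITK as sitk
# Get the ordered file names of the series and read the first slice on its own;
# reading one file (instead of the whole series) keeps its DICOM tags accessible.
file_names = sitk.ImageSeriesReader.GetGDCMSeriesFileNames(data_directory, series_IDs[0])
first_slice = sitk.ReadImage(file_names[0])
for key in first_slice.GetMetaDataKeys():
    print(key, '=', first_slice.GetMetaData(key))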
I am trying to compare faces using AWS Rekognition through Python boto3, as instructed in the AWS documentation.
My API call is:
client = boto3.client('rekognition', aws_access_key_id=key, aws_secret_access_key=secret, region_name=region )
source_bytes = open('source.jpg', 'rb')
target_bytes = open('target.jpg', 'rb')
response = client.compare_faces(
SourceImage = {
'Bytes':bytearray(source_bytes.read())
},
TargetImage = {
'Bytes':bytearray(target_bytes.read())
},
SimilarityThreshold = SIMILARITY_THRESHOLD
)
source_bytes.close()
target_bytes.close()
But every time I run this program, I get the following error:
botocore.errorfactory.InvalidParameterException: An error occurred (InvalidParameterException) when calling the CompareFaces operation: Request has Invalid Parameters
I have specified the secret, key, region, and threshold properly. How can I clear off this error and make the request call work?
Your code is perfectly fine; image dimensions matter when it comes to AWS Rekognition (a quick pre-flight check sketch follows the limits below).
Limits in Amazon Rekognition
The following is a list of limits in Amazon Rekognition:
Maximum image size stored as an Amazon S3 object is limited to 15 MB.
The minimum pixel resolution for height and width is 80 pixels.
The maximum image size as raw bytes passed in as a parameter to an API is 5 MB.
Amazon Rekognition supports the PNG and JPEG image formats. That is, the images you provide as input to various API operations, such as DetectLabels and IndexFaces must be in one of the supported formats.
Maximum number of faces you can store in a single face collection is 1 million.
The maximum matching faces the search API returns is 4096.
source: AWS Docs
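Based on those limits, a quick pre-flight check along these lines can save a round trip to the API (just a sketch, assuming Pillow is installed and local JPEG/PNG files; the helper name is mine):
import os
from PIL import Image

def looks_acceptable(path, max_bytes=5 * 1024 * 1024, min_px=80):
    # Raw bytes passed to the API must be at most 5 MB
    size_ok = os.path.getsize(path) <= max_bytes
    with Image.open(path) as img:
        width, height = img.size
        fmt_ok = img.format in ('JPEG', 'PNG')
    # Both dimensions must be at least 80 pixels
    dims_ok = width >= min_px and height >= min_px
    return size_ok and dims_ok and fmt_ok

print(looks_acceptable('source.jpg'), looks_acceptable('target.jpg'))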
For those still looking for an answer:
I had the same problem. While @mohanbabu pointed to the official docs for what should go into compare_faces, what I realised is that compare_faces looks for faces in both SourceImage and TargetImage. I confirmed this by first detecting faces using AWS's detect_faces and passing the detected faces to compare_faces (a sketch of that pre-check is at the end of this answer).
compare_faces failed almost all the time when the face detected by detect_faces was a little obscure.
So, to summarize: if either your SourceImage or TargetImage is tightly cropped to the face AND that face is not instantly obvious, compare_faces will fail.
There can be other reasons, but this observation worked for me.
For example, in the first of the two images originally posted here you could fairly confidently say there is a face in the middle; in the second, it is not so obvious.
This was the reason for me at least; check both your images and you should know.
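Here is a sketch of that pre-check, reusing the client from the question; if either image comes back with an empty FaceDetails list, compare_faces is likely to fail:
for name in ('source.jpg', 'target.jpg'):
    with open(name, 'rb') as f:
        image_bytes = f.read()
    # DetectFaces returns one entry per face found in the image
    result = client.detect_faces(Image={'Bytes': image_bytes}, Attributes=['DEFAULT'])
    print(name, 'faces found:', len(result['FaceDetails']))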
Given the way you are opening the files (in binary mode), you don't need to cast to bytearray.
Try this:
client = boto3.client('rekognition', aws_access_key_id=key, aws_secret_access_key=secret, region_name=region )
source_bytes = open('source.jpg', 'rb')
target_bytes = open('target.jpg', 'rb')
response = client.compare_faces(
SourceImage = {
'Bytes':source_bytes.read()
},
TargetImage = {
'Bytes':target_bytes.read()
},
SimilarityThreshold = SIMILARITY_THRESHOLD
)
source_bytes.close()
target_bytes.close()
I'm using PloneFormGen.
[edited] I'm using a custom script adapter from PloneFormGen to create a page with the information from submitted forms. The problem is that the page generated when I submit doesn't show the image. I would like to create a page/document with text AND one image. How can I do this?
I've tried this code:
# The target folder in the Zope instance; I gave it the name 'conteudo'.
targetdir = context.conteudo
# Get the submitted form so we can access its contents
form = request.form
# Creating a unique ID
from DateTime import DateTime
uid= str(DateTime().millis())
# Creating a new Page (Document)
targetdir.invokeFactory('Document', id=uid, title='Minha Página',
    image=form['foto-de-perfil_file'],
    text='<b>Nome Completo:</b><br>' + form['nome-completo'] + '<br><br>' +
         '<b>Formação:</b><br>' + form['formacao'] + '<br><br>' +
         '<b>Áreas de Atuação:</b><br>' + form['areas-de-atuacao'] + '<br><br>' +
         '<b>Links:</b><br>' + form['links'])
# Set the reference for our new Page
doc = targetdir.get(uid, targetdir)
# Reindexed the Page in the site
doc.reindexObject()
It only shows the text, but the picture isn't there.
I tried using "setImage", but it raises an attribute error. I also tried a second invokeFactory call with 'Image' instead of 'Document', but then it only shows the image.
What should I modify in my code to display both the text and the picture?
Thanks in advance.
The standard Plone Document content type does not have an image field.
If you are on Plone 4 using old ATContentTypes you must install and configure collective.contentleadimage.
On Plone 5, or if you are using Dexterity-based content types, you must add a new image field to your schema (see the sketch below).
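For illustration, a minimal Dexterity schema sketch (the interface and field names are hypothetical) that adds an image field next to the text:
from plone.namedfile.field import NamedBlobImage
from plone.supermodel import model
from zope import schema

class IProfilePage(model.Schema):
    """Hypothetical content type schema with a text and an image field."""
    text = schema.Text(title=u"Body text", required=False)
    image = NamedBlobImage(title=u"Profile image", required=False)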
The only thing that worked well for me was to stop creating 'pages' (Documents) and switch to 'news items', since News Items already come with an image field; see the sketch below.
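Roughly, that workaround just means changing the invokeFactory call from the question to create a News Item instead of a Document; the form field names below are the ones from the question:
targetdir.invokeFactory('News Item', id=uid, title='Minha Página',
    image=form['foto-de-perfil_file'],
    text='<b>Nome Completo:</b><br>' + form['nome-completo'])
doc = targetdir.get(uid, targetdir)
doc.reindexObject()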
(cross posted to boto-users)
Given an image ID, how can I delete it using boto?
You use the deregister() API.
There are a few ways of getting the image ID (e.g. you can list all images and search their properties, etc.).
Here is a code fragment which will delete one of your existing AMIs (assuming it's in the EU region)
import boto.ec2
connection = boto.ec2.connect_to_region('eu-west-1', \
aws_access_key_id='yourkey', \
aws_secret_access_key='yoursecret', \
proxy=yourProxy, \
proxy_port=yourProxyPort)
# This is a way of fetching the image object for an AMI, when you know the AMI id
# Since we specify a single image (using the AMI id) we get a list containing a single image
# You could add error checking and so forth ... but you get the idea
images = connection.get_all_images(image_ids=['ami-cf86xxxx'])
images[0].deregister()
(edit): In fact, having looked at the online documentation for 2.0, there is another way.
Having determined the image ID, you can use the deregister_image(image_id) method of boto.ec2.connection, which amounts to the same thing, I guess.
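For instance, reusing the connection object from the snippet above (same placeholder AMI id):
connection.deregister_image('ami-cf86xxxx')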
With newer boto (Tested with 2.38.0), you can run:
import boto.ec2
ec2_conn = boto.ec2.connect_to_region('xx-xxxx-x')
ec2_conn.deregister_image('ami-xxxxxxx')
or
ec2_conn.deregister_image('ami-xxxxxxx', delete_snapshot=True)
The first call will deregister the AMI; the second will also delete the attached EBS snapshot.
For Boto2, see katriel's answer. Here, I am assuming you are using Boto3.
If you have the AMI (an object of class boto3.resources.factory.ec2.Image), you can call its deregister function. For example, to delete an AMI with a given ID, you can use:
import boto3
ec2 = boto3.resource('ec2')
ami_id = 'ami-1b932174'
ami = list(ec2.images.filter(ImageIds=[ami_id]).all())[0]
ami.deregister(DryRun=True)
If you have the necessary permissions, you should see a "Request would have succeeded, but DryRun flag is set" exception. To actually deregister the AMI, leave out DryRun and use:
ami.deregister() # WARNING: This will really delete the AMI
This blog post elaborates on how to delete AMIs and snapshots with Boto3.
This script deletes the AMI and the snapshots associated with it. Make sure you have the right privileges to run it. Note that the deregister_image and delete_snapshot calls below pass DryRun=True, which only validates the request; change it to DryRun=False to actually delete.
Inputs: please pass the region and the AMI ID(s) as command-line arguments.
import boto3
import sys
def main(region, images):
    # 'region' and 'images' come from the command-line arguments (see the bottom of the script)
ec2 = boto3.client('ec2', region_name=region)
snapshots = ec2.describe_snapshots(MaxResults=1000,OwnerIds=['self'])['Snapshots']
# loop through list of image IDs
for image in images:
print("====================\nderegistering {image}\n====================".format(image=image))
amiResponse = ec2.deregister_image(DryRun=True,ImageId=image)
for snapshot in snapshots:
if snapshot['Description'].find(image) > 0:
snap = ec2.delete_snapshot(SnapshotId=snapshot['SnapshotId'],DryRun=True)
print("Deleting snapshot {snapshot} \n".format(snapshot=snapshot['SnapshotId']))
if __name__ == "__main__":
    # usage: python <script>.py <region> <ami-id-1>,<ami-id-2>,...
    main(sys.argv[1], sys.argv[2].split(','))
using the EC2.Image resource you can simply call deregister():
Example:
import boto3

ec2res = boto3.resource('ec2')  # the resource interface; 'ec2res' was not defined in the original snippet
for i in ec2res.images.filter(Owners=['self']):
    print("Name: {}\t Id: {}\tState: {}\n".format(i.name, i.id, i.state))
    i.deregister()  # WARNING: this deregisters every image you own that matches the filter
See this for using different filters:
What are valid values documented for ec2.images.filter command?
See also: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2.html#EC2.Image.deregister