Unable to attach Data disk to Azure VM - python

I created a shared disk following Create Disk, and when I try to attach it to a VM via Update VM I get an error saying createOption cannot be changed. Below is the full error:
Disk attachment failed, request response is - {
  "error": {
    "code": "PropertyChangeNotAllowed",
    "message": "Changing property 'dataDisk.createOption' is not allowed.",
    "target": "dataDisk.createOption"
  }
}
Request body for data disk creation (please note this is a shared disk):
{
  "location": LOCATION,
  "sku": {
    "name": "Premium_LRS"
  },
  "properties": {
    "creationData": {
      "createOption": "empty"
    },
    "osType": "linux",
    "diskSizeGB": SIZE,
    "maxShares": 5,
    "networkAccessPolicy": "AllowAll"
  }
}
Request body for the VM PATCH request:
{
  "properties": {
    "storageProfile": {
      "dataDisks": [
        {
          "caching": "ReadOnly",
          "createOption": "Attach",
          "lun": 0,
          "managedDisk": {
            "id": disk_id, // this disk_id is the id of the disk created above
            "storageAccountType": "Premium_LRS"
          }
        }
      ]
    }
  }
}
Can someone please point out where I'm going wrong? I haven't found much documentation about attaching shared disks through the API.

As far as I can see, there is no problem with your request body that updates the VM. I tried it just now with the same request body as yours and it works fine. So check the disk side again, for example whether LUN 0 is already in use on the VM.
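If you would rather not hand-craft the PATCH request, here is a minimal sketch of the same attach using the azure-mgmt-compute SDK; the subscription, resource group, VM name, and disk_id values are placeholders you would fill in:
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import DataDisk, ManagedDiskParameters

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "<resource-group>"     # placeholder
VM_NAME = "<vm-name>"                   # placeholder
disk_id = "<resource id of the shared disk created above>"  # placeholder

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Fetch the current VM model, then append the shared disk with
# createOption=Attach on a LUN that is not already in use.
vm = client.virtual_machines.get(RESOURCE_GROUP, VM_NAME)
vm.storage_profile.data_disks.append(
    DataDisk(
        lun=0,  # must not collide with an existing data disk's LUN
        caching="ReadOnly",
        create_option="Attach",
        managed_disk=ManagedDiskParameters(
            id=disk_id,
            storage_account_type="Premium_LRS",
        ),
    )
)

client.virtual_machines.begin_create_or_update(RESOURCE_GROUP, VM_NAME, vm).result()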

Microsoft Graph - SendMail at a scheduled time using PidTagDeferredSendTime

I am new to working with the Microsoft Graph API, and am currently working through the Python sample code provided on their website. I have a simple function that connects to Outlook mail and sends an email to a specified user. I would like to schedule the email to be sent at some time in the future rather than immediately. I found a previous post that recommended using the PidTagDeferredSendTime attribute within the extended properties, but I can't seem to get it to work. Everything in the code works and sends fine until I add the "singleValueExtendedProperties" lines; then it never delivers but says it sent. Has anyone got this to work? Attached is the send mail function where I am having the issue.
def send_mail(subject: str, body: str, recipient: str):
    print(recipient)
    request_body = {
        'message': {
            'subject': subject,
            'body': {
                'contentType': 'text',
                'content': body
            },
            'toRecipients': [
                {
                    'emailAddress': {
                        'address': recipient
                    }
                }
            ],
            "singleValueExtendedProperties": [
                {
                    "id": "PtypTime 0x3FEF",
                    "value": "2022-08-01T13:48:00"
                }
            ]
        }
    }
Your property definition for PidTagDeferredSendTime isn't correct: for that datatype the id should use SystemTime, e.g.
{
  "message": {
    "subject": "Meet for lunch?",
    "body": {
      "contentType": "Text",
      "content": "The new cafeteria is open."
    },
    "toRecipients": [
      {
        "emailAddress": {
          "address": "blah@blah.com"
        }
      }
    ],
    "singleValueExtendedProperties": [
      {
        "id": "SystemTime 0x3FEF",
        "value": "2022-08-01T23:39:00Z"
      }
    ]
  }
}
Also make sure the datetime you want it sent at is converted to UTC. What you should see is the message saved into the Drafts folder (as a draft), and then it will be sent at the time you specify. The above request works for me in the Graph Explorer.
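For reference, a minimal sketch of the corrected call from Python using the requests library (token acquisition is omitted; ACCESS_TOKEN is assumed to already hold a valid Graph token):
import requests

ACCESS_TOKEN = "<valid Graph access token>"  # assumed: obtained via MSAL or similar

payload = {
    "message": {
        "subject": "Meet for lunch?",
        "body": {"contentType": "Text", "content": "The new cafeteria is open."},
        "toRecipients": [{"emailAddress": {"address": "blah@blah.com"}}],
        "singleValueExtendedProperties": [
            # SystemTime (not PtypTime), with the value converted to UTC
            {"id": "SystemTime 0x3FEF", "value": "2022-08-01T23:39:00Z"}
        ],
    }
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/me/sendMail",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=payload,
)
resp.raise_for_status()  # Graph returns 202 Accepted on success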

Firebase Hosting REST API: Page not found after successful deploy

I was trying to use Python to deploy sites to Firebase Hosting. I followed this guide.
My code seems to be working fine, I'm not getting any errors and I'm getting 200 status codes in the API responses. I'm getting all the same responses as they show in the guide:
# versions.create
200, {
  "name": "sites/xxxxx/versions/bd94931c702c6150",
  "status": "CREATED",
  "config": {
    "headers": [
      {
        "headers": {
          "Cache-Control": "max-age=1800"
        },
        "glob": "**"
      }
    ]
  }
}
# versions.populateFiles
200, {
  "uploadRequiredHashes": [
    "13f7dc725fc6c937322b1614479fdb916f5d27f027fef1bee83c7bc61fc393c6",
    "8529e2e12706f35232fce346d3fe23166b72a8fa029c153533e1139a8cc7b08d",
    "30e3a300bf4c8ab3fc5e3906772c9ccabfcbe18447143edf7ab6c9cb22a18d73"
  ],
  "uploadUrl": "https://upload-firebasehosting.googleapis.com/upload/sites/xxxxx/versions/bd94931c702c6150/files"
}
200 # file1 upload
200 # file2 upload
200 # file3 upload
# versions.patch
200, {
  "name": "sites/xxxxx/versions/bd94931c702c6150",
  "status": "FINALIZED",
  "config": {
    "headers": [
      {
        "headers": {
          "Cache-Control": "max-age=1800"
        },
        "glob": "**"
      }
    ]
  },
  "createTime": "2021-10-01T11:38:24.345049Z",
  "createUser": {
    "email": "firebase-adminsdk-xj8ro@xxxxx.iam.gserviceaccount.com"
  },
  "finalizeTime": "2021-10-01T11:38:37.780419Z",
  "finalizeUser": {
    "email": "firebase-adminsdk-xj8ro@xxxxx.iam.gserviceaccount.com"
  }
}
# releases.create
200, {
  "name": "sites/xxxxx/releases/1633088318665339",
  "version": {
    "name": "sites/xxxxx/versions/bd94931c702c6150",
    "status": "FINALIZED",
    "config": {
      "headers": [
        {
          "headers": {
            "Cache-Control": "max-age=1800"
          },
          "glob": "**"
        }
      ]
    },
    "createTime": "2021-10-01T11:38:24.345049Z",
    "createUser": {
      "email": "firebase-adminsdk-xj8ro@xxxxx.iam.gserviceaccount.com"
    },
    "finalizeTime": "2021-10-01T11:38:37.780419Z",
    "finalizeUser": {
      "email": "firebase-adminsdk-xj8ro@xxxxx.iam.gserviceaccount.com"
    }
  },
  "type": "DEPLOY",
  "releaseTime": "2021-10-01T11:38:38.665339693Z",
  "releaseUser": {
    "email": "firebase-adminsdk-xj8ro@xxxxx.iam.gserviceaccount.com"
  }
}
(I replaced my site ID with xxxxx)
I don't know what the problem is.
Maybe it's due to the way I gzip my files? I do it using the gzip module in Python.
for file_name in file_names:
    with open(f"{folder_path}/{file_name}", 'rb') as f_in, gzip.open(f"{OUTPUT_DIR}/{file_name}.gz", 'wb') as f_out:
        f_out.writelines(f_in)
And then I read and upload them like this:
headers = {
    "Authorization": f"Bearer {access_token}",
    "Content-Type": "application/octet-stream",
    "Content-Length": "500"
}
f = open(file_path, "rb")
bytes = f.read()
r = requests.post(API_ENDPOINT, headers=headers, data=bytes)
However, I did notice that my response to the versions.patch call is missing the following part that is present in the tutorial:
"fileCount": "5",
"versionBytes": "114951"
The tutorial seems to be from 2018, so it could be an API change.
After doing everything as shown in the tutorial, I still get a Page Not Found error when I go to the URL of my site.
I can add more code if needed. Please help me. Thanks in advance.
There can be multiple reasons for the error you are facing. I have tried to put together possible fixes/workarounds in this single answer for you to analyse and try:
The steps to deploy are as follows:
STEP 1:
ng build --prod
STEP 2:
firebase init
Are you ready to proceed? Yes
What do you want to use as your public directory? dist/{your-application-name}
Configure as a single-page app (rewrite all urls to /index.html)? (y/N) Yes
File dist/{your-application-name}/index.html already exists. Overwrite? (y/N) No
STEP 3:
firebase deploy --only hosting
If you are still getting the same page, press Ctrl+F5 to clear the cache.
Add a dot before /dist in the "public" entry: "./dist/my-app-name". Example firebase.json:
{
  "hosting": {
    "public": "./dist/my-app-name",
    "ignore": [
      "firebase.json",
      "**/.*",
      "**/node_modules/**"
    ],
    "rewrites": [
      {
        "source": "**",
        "destination": "/index.html"
      }
    ]
  }
}
It may happen that the hosting -> public entry in firebase.json is not pointing to the directory you built to. If you're building a single-page app with React, double-check that you have a rewrites entry redirecting all requests to index.html:
"hosting": {
"public": "build",
"ignore": [ "firebase.json", "**/.*", "**/node_modules/**" ],
"rewrites": [
{ "source": "**",
"destination": "/index.html"
} ] }
Add a site property to firebase.json to fix this:
{
  "hosting": {
    "site": "my-app-id",
    "public": "app",
    ...
  }
}
It can also be because index.html was overwritten when you selected "Y" while initializing Firebase; it basically replaced your own index file with the default one. Check and restore your index file, and next time do not overwrite index.html. That should work.
Activate Firestore for the project and specify the Resource Location Id by following Get started with Cloud Firestore.
You might have deleted the project in the console while the source code was still referencing it, causing a 404 resource-not-found error. Try the following:
Delete the .firebaserc file (it contains your project alias) located in the root of the project
Run firebase init and link to your project
Run firebase deploy again
Make sure you choose a default storage location for your Firebase project, then deploy again: in Firebase project > project overview > gear icon > project settings > Default GCP resource location. You can also simply go to Firebase Console > Storage > enable Firebase Storage, and it may resolve the issue.
Update your installation of firebase-tools to the latest via npm i -g firebase-tools. Also note that node versions greater than 10 are not actively tested or supported; if you continue having issues, downgrade your version of node to 10 and see if the issue remains. You may have the latest version installed locally but not globally. See Get started: write and deploy your first functions for more details.
A small suggestion: go through the fixes/workarounds above, try them, and follow the guide as-is without skipping anything; if you still get Page Not Found after a successful deployment, please open a public issue here.
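Finally, since the question uses the REST API directly, one thing worth double-checking in the upload step: the guide uploads each gzipped file to uploadUrl + "/" + its SHA256 hash, and a hardcoded Content-Length like "500" will not match the real payload size. A minimal sketch of that step (function and variable names are illustrative):
import hashlib
import requests

def upload_file(upload_url: str, gz_path: str, access_token: str) -> None:
    # Read the already-gzipped file; its SHA256 must match one of the
    # uploadRequiredHashes returned by versions.populateFiles.
    with open(gz_path, "rb") as f:
        payload = f.read()
    file_hash = hashlib.sha256(payload).hexdigest()
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/octet-stream",
        # no manual Content-Length: requests computes it from the body
    }
    r = requests.post(f"{upload_url}/{file_hash}", headers=headers, data=payload)
    r.raise_for_status()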

ElasticSearch updating documents in an automated way

I don't know how to phrase questions about my current problem, which I think is why I couldn't find the right answer. So let me describe it.
I am trying to build a simple internet port scanner using Zmap and Zgrab, like Shodan.io, censys.io, etc.
I need to store the data in Elasticsearch (because I want to learn how to use it).
I have created a JSON structure to use in Elasticsearch, such as:
{
  "value": "192.168.0.1",
  "port": [
    {
      "value": 80,
      "HTMLbody": "BODY OF THE WEB PAGE",
      "status": 200,
      "headers": {
        "content_type": "html/png",
        "content_length": 23123,
        "...": "..."
      },
      "certificate": {
        "company_names": [
          "example.com",
          "acme.io"
        ]
      }
    }
  ]
}
There will be approximately 4 billion IP addresses inside Elasticsearch, with different ports open. Here my problem begins: after the initial scan, I need to update the existing IP addresses.
For example:
IP: 192.168.0.1
port: 80 open
In the second scan I scan port 443, and it will probably be open too. Then I need to update my Elasticsearch document according to the open ports.
What I found so far
There is an endpoint I found, POST /<index>/_update/<_id>, but it updates a single document. I need to update more than 100,000 documents in one scan, and it should happen automatically. How do I know an IP address's document id so I can update it?
Secondly, I also found:
POST <index>/_update_by_query
I thought about searching for the IP address with a query, getting its document id, and then updating the document as follows:
{
  "value": "192.168.0.1",
  "port": [
    {
      "value": 80,
      "HTMLbody": "BODY OF THE WEB PAGE",
      "status": 200,
      "headers": {
        "content_type": "html/png",
        "content_length": 23123,
        "...": "..."
      },
      "certificate": {
        "company_names": [
          "example.com",
          "acme.io"
        ]
      }
    },
    {
      "value": 443,
      "HTMLbody": "BODY OF THE SSL WEB PAGE",
      "status": 200
    }
  ]
}
In theory I could do this, but in practice I couldn't write the code efficiently: one scan produces a JSON file of at least 6 GB, and processing the whole file and updating Elasticsearch takes too long.
Is there any way to solve this problem efficiently?
To answer the question fully: first, use the IP address as the _id to make sure you update the appropriate document. You then need to add an entry to the port array, so use the _update API with a script, like this:
POST <index>/_update/192.168.0.1
{
  "script": {
    "source": "ctx._source.port.add(params.tag)",
    "lang": "painless",
    "params": {
      "tag": {
        "value": 443,
        "HTMLbody": "BODY OF THE SSL WEB PAGE",
        "status": 200
      }
    }
  }
}
This is an update query that adds an entry to an array with a small script. To make it efficient at scale, send these updates through the _bulk API, batching your requests as explained here.
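As an illustration, a minimal sketch (endpoint, index name, and the shape of the scan results are assumptions) using the official elasticsearch Python client to stream such scripted updates in bulk, with the IP address as _id:
from elasticsearch import Elasticsearch
from elasticsearch.helpers import streaming_bulk

es = Elasticsearch("http://localhost:9200")  # assumed endpoint

def actions(scan_results):
    # scan_results: assumed iterable of (ip, port_entry) pairs from one scan
    for ip, port_entry in scan_results:
        yield {
            "_op_type": "update",
            "_index": "scans",  # assumed index name
            "_id": ip,          # IP address as the document id
            "script": {
                "source": "ctx._source.port.add(params.tag)",
                "lang": "painless",
                "params": {"tag": port_entry},
            },
            # create the document on first sight of this IP
            "upsert": {"value": ip, "port": [port_entry]},
        }

# my_scan_results: assumed to be produced incrementally by the scan parser,
# so the whole 6 GB result set never has to sit in memory at once
for ok, item in streaming_bulk(es, actions(my_scan_results), chunk_size=1000):
    if not ok:
        print("failed:", item)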

Localhost request from angular

Hello, I have created a mini endpoint in Flask (Python), and it's working well. When I access it from Postman:
http://127.0.0.1:5000/retrieve_data
it retrieves the data as JSON.
But when I call the endpoint from Angular:
export class HomeComponent implements OnInit {
  title = 'angular-demo';
  data = this.getAnnounce();

  getAnnounce() {
    return this.http.get('http://127.0.0.1:5000/retrieve_data');
  }

  constructor(private http: HttpClient) {
  }
}
The actual result is:
{
  "_isScalar": false,
  "source": {
    "_isScalar": false,
    "source": {
      "_isScalar": false,
      "source": {
        "_isScalar": true,
        "value": {
          "url": "http://127.0.0.1:5000/retrieve_data",
          "body": null,
          "reportProgress": false,
          "withCredentials": false,
          "responseType": "json",
          "method": "GET",
          "headers": {
            "normalizedNames": {},
            "lazyUpdate": null,
            "headers": {}
          },
          "params": {
            "updates": null,
            "cloneFrom": null,
            "encoder": {},
            "map": null
          },
          "urlWithParams": "http://127.0.0.1:5000/retrieve_data"
        }
      },
      "operator": {
        "concurrent": 1
      }
    },
    "operator": {}
  },
  "operator": {}
}
PS: I am very new to Angular.
From the Angular HttpClient documentation, the return type of http.get() is Observable. So if you want the data your server responds with, you must subscribe, like this:
data = {};

getAnnounce() {
  this.http.get('http://127.0.0.1:5000/retrieve_data')
    .subscribe(result => this.data = result);
}
Great question, and as others have suggested, it seems you're missing a .subscribe() on your getAnnounce() method. The Angular docs have a guide on observables that can help you get a better understanding.
I have a few more suggestions for improving your experience with Angular. Take advantage of services to group similar functionality: for example, you could move getAnnounce() into a new service you create, something like AnnouncementService, and then use it in many places in your code. It also helps with testing and finding bugs later on, since your code is more separated.
Another thing you can do is move your server API address into the environment variables. If you built your Angular project using the Angular CLI, you'll find in your src folder an environments folder with two files by default, environment.ts and environment.prod.ts. You can access this JSON object anywhere within your .ts files, and when you build for production versus locally it switches values to whatever you set. In your case you would put your local API address in there: http://127.0.0.1:5000.
export const environment = {
  production: false,
  api: 'http://127.0.0.1:5000'
};
Now you can access this easily and have one spot to change if you ever change your port number or move your API to a real server.
import { environment } from './environments/environment';
/* remember to import the non-prod version of the environment file; Angular switches between them automatically when building for production or not. */
Hopefully this helps you with your coding, good luck!
You have to subscribe to your getAnnounce function.
For instance, you can try:
data = this.getAnnounce().subscribe();
Or directly in your constructor / ngOnInit:
ngOnInit() {
  this.getAnnounce().subscribe(
    data => this.data = data
  );
}

kubernetes set value of service/status/loadBalancer/ingress ip

I'm looking for a way to set service/status/loadBalancer/ingress ip after creating a k8s service of type=LoadBalancer (as described in the 'Type LoadBalancer' section of https://kubernetes.io/docs/concepts/services-networking/service/).
My problem is similar to the issue described in the following link (Is it possible to update a kubernetes service 'External IP' while watching for the service?), but I couldn't find the answer.
Thanks in advance
There are two ways to do this: with a JSON patch or with a merge patch. Here's how you do the latter:
[centos@ost-controller ~]$ cat patch.json
{
  "status": {
    "loadBalancer": {
      "ingress": [
        {"ip": "8.3.2.1"}
      ]
    }
  }
}
Here you can see that for merge patches you have to build a dictionary containing the whole object tree (beginning at status) that needs to be merged in. If you wanted to replace something instead, you'd have to use the JSON patch strategy.
Once we have this file, we send the request, and if all goes well we'll receive a response consisting of the object with the merge already applied:
[centos@ost-controller ~]$ curl --request PATCH --data "$(cat patch.json)" -H "Content-Type:application/merge-patch+json" http://localhost:8080/api/v1/namespaces/default/services/kubernetes/status
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "kubernetes",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/services/kubernetes/status",
    "uid": "b8ece320-76c1-11e7-b468-fa163ea3fb09",
    "resourceVersion": "2142242",
    "creationTimestamp": "2017-08-01T14:00:06Z",
    "labels": {
      "component": "apiserver",
      "provider": "kubernetes"
    }
  },
  "spec": {
    "ports": [
      {
        "name": "https",
        "protocol": "TCP",
        "port": 443,
        "targetPort": 6443
      }
    ],
    "clusterIP": "10.0.0.129",
    "type": "ClusterIP",
    "sessionAffinity": "ClientIP"
  },
  "status": {
    "loadBalancer": {
      "ingress": [
        {
          "ip": "8.3.2.1"
        }
      ]
    }
  }
}
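For completeness, a minimal sketch of the same status update from Python using the official kubernetes client (service name and namespace taken from the example above; the client sends a merge-style patch for dict bodies):
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in a pod
v1 = client.CoreV1Api()

patch = {"status": {"loadBalancer": {"ingress": [{"ip": "8.3.2.1"}]}}}

# patch_namespaced_service_status targets the /status subresource,
# mirroring the curl request above.
v1.patch_namespaced_service_status(
    name="kubernetes", namespace="default", body=patch
)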
