Deploying/Updating Marathon Docker image using API - python

I am writing code to update a deployed image on Marathon automatically. I use the REST PATCH method as listed in the Marathon API reference http://mesosphere.github.io/marathon/api-console/index.html:
url = 'https://<my-hostname>:<my-port>/v2/apps'
h = {'Content-type': 'application/json', 'Accept': 'application/json'}
data = {'app': {'id': app,
                'container': {
                    'docker': {
                        'image': image}}}}
print('requests.patch(%s, %s)' % (url + app, json.dumps(data)))
r = requests.patch(url + app, headers=h, auth=auth, data=json.dumps(data))
if r.status_code == 200:
    print('Deployed %s' % app)
The code ran successfully and I got back a deployment ID, but nothing changed in the UI and no new deployment is happening. If I change the PATCH request into a GET request without the data, I get back the image I previously set using the code above.
According to this similar API reference https://docs.mesosphere.com/1.11/deploying-services/marathon-api/#/apps/V2AppsByAppId1, "This operation will create a deployment", but nothing happened. From the Marathon GUI, I don't see the configuration changing at all. If I restart, it is the same old deployment that gets restarted. Am I interpreting the API reference incorrectly?

If I read the API reference guide correctly, the body should be:
{ "id": app,
"container": {
"docker": {
"image": image
}
}
}
Tested this with Marathon 1.4.11 and that worked.
Not sure why you would get a deployment ID; if I do it the way you did (with the extra {"app":} layer), I get a 500 error. BTW, I am not sure how sensitive this is to single versus double quotes.
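For reference, a minimal Python sketch of a PATCH with the corrected body; the hostname, app id, image and credentials below are placeholders, not values from the question:
import json
import requests

base_url = 'https://<my-hostname>:<my-port>/v2/apps'
app_id = '/my-app'                      # placeholder app id
image = 'myregistry/my-app:v2'          # placeholder image
auth = ('user', 'password')             # placeholder credentials

headers = {'Content-type': 'application/json', 'Accept': 'application/json'}
# Body without the extra {"app": ...} wrapper, as described above.
body = {'id': app_id,
        'container': {
            'docker': {
                'image': image}}}

r = requests.patch(base_url + app_id, headers=headers, auth=auth, data=json.dumps(body))
print(r.status_code, r.text)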

There is an open bug in Marathon which causes the request body not to be merged with the current definition. As a result, a PUT/PATCH request containing only .container.docker.image will remove settings like port mappings, volumes, parameters...
The solution is to get the container object from the current application and replace the image key with the new Docker image.
Example:
export MARATHON_IMAGE="registry.foo.bar/app:v10"
export BODY=$(curl -s -H "Content-type: application/json" leader.mesos:8080/v2/apps/foo/bar | jq -cr '.app.container | .docker.image = env.MARATHON_IMAGE | {"container": .}')
curl -s -H "Content-type: application/json" -X PATCH -d "${BODY}" leader.mesos:8080/v2/apps/foo/bar

HTTP Triggering Cloud Function with Cloud Scheduler

I have a problem with a job in the Cloud Scheduler for my cloud function. I created the job with the following parameters:
Target: HTTP
URL: my trigger url for cloud function
HTTP method: POST
Body:
{
  "expertsender": {
    "apiKey": "ExpertSender API key",
    "apiAddress": "ExpertSender APIv2 address",
    "date": "YYYY-MM-DD",
    "entities": [
      {
        "entity": "Messages"
      },
      {
        "entity": "Activities",
        "types": [
          "Subscriptions"
        ]
      }
    ]
  },
  "bq": {
    "project_id": "YOUR GCP PROJECT",
    "dataset_id": "YOUR DATASET NAME",
    "location": "US"
  }
}
The real values have been changed in this body.
When I run this job I get an error. It is caused by processing the body of the POST request.
However, when I take this body and use it as the triggering event in Testing, I don't get any errors. So I think the problem is in the body representation for my job, but I haven't any idea how to fix it. I'd be very happy for any ideas.
Disclaimer:
I tried to solve the same issue using Node.js and was able to find a solution.
I understand that this is an old question, but I felt it was worth answering since I spent almost 2 hours figuring out the answer to this issue.
Scenario - 1: Trigger the Cloud Function via Cloud Scheduler
Function fails to read the message in request body.
Scenario - 2: Trigger the Cloud Function via Test tab in Cloud Function interface
Function call always executes fine with no errors.
What did I find?
When the GCF routine is executed via Cloud Scheduler, it sends the header content-type as application/octet-stream. This makes Express.js unable to parse the data in the request body when Cloud Scheduler POSTs the data.
But when the exact same request body is used to test the function via the Cloud Function interface, everything works fine because the Testing feature on the interface sends the header content-type as application/json, and Express.js is able to read the request body and parse the data as a JSON object.
Solution
I had to manually parse the request body as JSON (explicitly, using an if condition based on the content-type header) to get hold of the data in the request body.
/**
 * Responds to any HTTP request.
 *
 * @param {!express:Request} req HTTP request context.
 * @param {!express:Response} res HTTP response context.
 */
exports.helloWorld = (req, res) => {
  let message = req.query.message || req.body.message || 'Hello World!';
  console.log('Headers from request: ' + JSON.stringify(req.headers));
  let parsedBody;
  if (req.header('content-type') === 'application/json') {
    console.log('request header content-type is application/json and auto parsing the req body as json');
    parsedBody = req.body;
  } else {
    console.log('request header content-type is NOT application/json and MANUALLY parsing the req body as json');
    parsedBody = JSON.parse(req.body);
  }
  console.log('Message from parsed json body is: ' + parsedBody.message);
  res.status(200).send(message);
};
It is truly a feature issue which Google has to address and hopefully Google fixes it soon.
Cloud Scheduler - Content Type header issue
Another way to solve the problem is this:
request.get_json(force=True)
It forces the parser to treat the payload as JSON, ignoring the mimetype.
Reference to the Flask documentation is here.
I think this is a bit more concise than the other solutions proposed.
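As a sketch of how that looks inside a handler (the entry point name is illustrative, and the "bq"/"project_id" keys are taken from the scheduler body shown in the question):
def handler(request):
    # force=True parses the body as JSON even when Cloud Scheduler sends
    # Content-Type: application/octet-stream; silent=True returns None on failure.
    payload = request.get_json(force=True, silent=True) or {}
    project_id = payload.get("bq", {}).get("project_id")
    return "Got project_id: {}".format(project_id), 200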
Thank you @Dinesh for pointing towards the request headers as a solution! For all those who still wander and are lost, the code in Python 3.7.4:
import json
raw_request_data = request.data
# Luckily it's at least UTF-8 encoded...
string_request_data = raw_request_data.decode("utf-8")
request_json: dict = json.loads(string_request_data)
Totally agree, this is sub-par from a usability perspective. Having the testing utility pass JSON while Cloud Scheduler posts "application/octet-stream" is incredibly irresponsible design.
You should, however, create a request handler if you want to invoke the function in a different way:
import json

def request_handler(request):
    # This works if the request comes in from
    # requests.post("cloud-function-etc", json={"key": "value"})
    # or if the Cloud Function test was used
    request_json = request.get_json()
    if request_json:
        return request_json
    # That's the hard way, i.e. Google Cloud Scheduler sending its JSON payload as octet-stream
    if not request_json and request.headers.get("Content-Type") == "application/octet-stream":
        raw_request_data = request.data
        string_request_data = raw_request_data.decode("utf-8")
        request_json: dict = json.loads(string_request_data)
        if request_json:
            return request_json
    # Error code is obviously up to you
    else:
        return "500"
One of the workarounds that you can use is to provide a header "Content-Type" set to "application/json". You can see a setup here.

How to Translate CURL to Python Requests

I am currently trying to integrate Stripe Connect and have come across the following cURL POST request:
curl https://connect.stripe.com/oauth/token \
-d client_secret=SECRET_CODE \
-d code="{AUTHORIZATION_CODE}" \
-d grant_type=authorization_code
However, I am very new to cURL, so I have been doing some research and trying to use the requests package to do it. This is what my current code looks like:
data = '{"client_secret": "%s", "code": "%s", "grant_type": "authorization_code"}' % (SECRET_KEY, AUTHORIZATION_CODE)
response = requests.post('https://connect.stripe.com/oauth/token', json=data)
However, this always returns a response code 400. I have no idea where I am going wrong and any guidance would be thoroughly appreciated.
The error is because you are passing your data as a string, whereas the json param of the requests.post call expects it to be a dict. Your code should be:
import requests

data = {
    "client_secret": SECRET_KEY,
    "code": AUTHORIZATION_CODE,
    "grant_type": "authorization_code"
}
response = requests.post('https://connect.stripe.com/oauth/token', json=data)
Take a look at the requests library's More complicated POST requests documentation.
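As a side note, curl's -d flags send application/x-www-form-urlencoded fields rather than a JSON body, so if you want to mirror the original cURL command exactly, requests can send the same dict form-encoded via the data= parameter. This is a hedged alternative sketch, not a statement about which encoding the endpoint requires; the key and code values are placeholders:
import requests

# Placeholders standing in for the question's SECRET_KEY / AUTHORIZATION_CODE.
SECRET_KEY = "YOUR_SECRET_KEY"
AUTHORIZATION_CODE = "YOUR_AUTHORIZATION_CODE"

# Mirrors `curl -d client_secret=... -d code=... -d grant_type=authorization_code`,
# which posts application/x-www-form-urlencoded fields.
data = {
    "client_secret": SECRET_KEY,
    "code": AUTHORIZATION_CODE,
    "grant_type": "authorization_code",
}
response = requests.post('https://connect.stripe.com/oauth/token', data=data)
print(response.status_code, response.json())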

Python Pycurl POST method issue with Ambari CURL commands

I'm following this to restart Ambari components that are in the INSTALLED state. For this I have written Python code that parses the Ambari services using PycURL, and that part works.
But once the JSON is parsed, I generate a JSON file like:
{
  "RequestInfo": {
    "command": "START",
    "context": "Restart all components on HOST"
  },
  "Requests/resource_filters": [
    {
      "component_name": "NAMENODE",
      "hosts": "hadoopm",
      "service_name": "HDFS"
    },
    {
      "component_name": "RESOURCEMANAGER",
      "hosts": "hadoopm",
      "service_name": "YARN"
    }
  ]
}
This works with:
curl -u username:password -H 'X-Requested-By: ambari' http://ambariserver:8080/api/v1/clusters/CLUSTERNAME/requests -d @service-restart.json
but the same is not working with the code below, which fails with a 400 Bad Request error:
import json
import re
import pycurl
c = pycurl.Curl()
c.setopt(pycurl.URL, url_post)
c.setopt(pycurl.HTTPHEADER, ["X-Requested-By:ambari"])
data = json.dumps(json.loads(open(output_temp_file,'rb').read()), indent=1, sort_keys=True)
tabs = re.sub('\n +', lambda match: '\n' + '\t' * (len(match.group().strip('\n')) / 2), data)
tabJSON=json.dumps(json.loads(open(output_temp_file,'rb').read()), indent=1, sort_keys=True)
c.setopt(pycurl.POST, 1)
c.setopt(pycurl.USERPWD,'admin:'+admin_pass)
c.setopt(pycurl.POSTFIELDS, 'tabJSON')
c.setopt(pycurl.WRITEFUNCTION, service_buffer.write)
c.setopt(pycurl.VERBOSE, 1)
c.perform()
c.close()
and it fails with HTTP/1.1 400 Bad Request.
Is there something wrong that I'm doing here? Can someone please help me with this?
Probably the API call format is outdated in the documentation. I'd suggest going by example:
open the dev console in Chrome
use the Ambari UI to perform an action (e.g. restart all services)
go to Network, find the relevant POST/PUT request (sort by the non-200 Status column)
copy the request (right click on the request -> Copy -> Copy as cURL)
Now you have an up-to-date cURL command example and can go further playing around with the request body; a rough PycURL equivalent is sketched below.
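As a rough sketch, here is the working curl command from the question redone in PycURL; the host, cluster name, credentials, and file name are placeholders taken from the question, and the key detail is that the JSON string itself is passed to POSTFIELDS rather than a quoted variable name:
import pycurl
from io import BytesIO

url_post = "http://ambariserver:8080/api/v1/clusters/CLUSTERNAME/requests"
admin_pass = "admin"  # placeholder credential

# Read the request body generated earlier (file name from the question).
with open("service-restart.json", "rb") as f:
    body = f.read()

response_buffer = BytesIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, url_post)
c.setopt(pycurl.HTTPHEADER, ["X-Requested-By: ambari", "Content-Type: application/json"])
c.setopt(pycurl.USERPWD, "admin:" + admin_pass)
c.setopt(pycurl.POSTFIELDS, body)          # setting POSTFIELDS implies an HTTP POST
c.setopt(pycurl.WRITEFUNCTION, response_buffer.write)
c.perform()
print(c.getinfo(pycurl.RESPONSE_CODE))
print(response_buffer.getvalue().decode("utf-8"))
c.close()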

Translate this request from Bash to Python?

I was given a request in Bash and I have to translate it to Python 2.7. I have done this kind of translation several times, but now I am not able to make it work and I do not understand why.
First of all, I was given this Bash request:
curl -X POST -v -u user@domain:password --data "@file.json" -H "Content-Type:application/json" http://destination_url_a
With the file file.json, whose content is the following:
{
  "username": "user@domain",
  "password": "password",
  "shortName": "a-short-name",
  "visibility": "PRIVATE",
  "sitePreset": "site-dashboard",
  "title": "A Title",
  "description": "A description."
}
If I execute the Bash line on my computer, the result is successful.
As always, I tried to use the requests library in Python to make it work. What I did is:
import requests
from requests.auth import HTTPBasicAuth
import json

data = {
    "username": "user@domain",
    "password": "password",
    "shortName": "a-short-name",
    "visibility": "PRIVATE",
    "sitePreset": "site-dashboard",
    "title": "A Title",
    "description": "A description.",
}
headers = {'Content-Type': 'application/json'}
data_json = json.dumps(data)
r = requests.post(
    url='http://destination_url_a',
    data=data_json,
    headers=headers,
    auth=HTTPBasicAuth('user@domain', 'password'),
    verify=False,
)
Unfortunately, the response, stored in the r variable, is an error, even though the status code is 200.
What could be happening? Does anyone see a problem in my code or have any idea?
EDIT
However, here is another, very similar example which worked perfectly:
Bash:
curl -v -H "Content-Type:application/json" -X POST --data "@file.json" -u user@domain:password http://destination_url_b
My Python code
import requests
from requests.auth import HTTPBasicAuth
import json

data = {
    "userName": "user@domain",
    "password": "password",
    "firstName": "Firstname",
    "lastName": "Lastname",
    "email": "email@domain.com",
    "disableAccount": "False",
    "quota": -1,
    "groups": ["a_group",],
}
headers = {'Content-Type': 'application/json'}
data_json = json.dumps(data)
r = requests.post(
    url='http://destination_url_b',
    data=data_json,
    headers=headers,
    auth=HTTPBasicAuth('user@domain', 'password'),
    verify=False,
)
It seems to be almost the same as the other request, but this one works. Different data is sent, and to a different subdomain (both are sent to the same domain). Would these differences matter if we are talking about the User-Agent you mentioned?
Sometimes web services filter on the User-Agent. The user agents of curl and Python requests are different; that may explain it.
Try to forge a "legitimate" user agent by modifying the request header.
Finally, there was no error in the code nor in the User-Agent.
The problem was that the destination application did not accept sites with the same short-name value.
What I was doing was creating the site from Bash, which worked, then removing it from the interface of the app and trying to create it from Python with the same data. I was getting an error when doing that because, in spite of having removed the site, I also had to remove its residual data from the trashcan of the app. If not, the app returned an error since it considered that the site I was trying to create already existed.
So if I had used a different short-name in each attempt, I would not have had any error.

Facebook Messenger with Flask

I'm trying to get the FB messenger API working using Python's Flask, adapting the following instructions: https://developers.facebook.com/docs/messenger-platform/quickstart
So far, things have been going pretty well. I have verified my callback and am able to receive the messages I send using Messenger on my page, as the logs on my Heroku server indicate that the appropriate packets of data are being received. Right now I'm struggling a bit to send responses to the client messaging my app. In particular, I am not sure how to perform the following segment from the tutorial in Flask:
var token = "<page_access_token>";

function sendTextMessage(sender, text) {
  messageData = {
    text: text
  }
  request({
    url: 'https://graph.facebook.com/v2.6/me/messages',
    qs: {access_token: token},
    method: 'POST',
    json: {
      recipient: {id: sender},
      message: messageData,
    }
  }, function(error, response, body) {
    if (error) {
      console.log('Error sending message: ', error);
    } else if (response.body.error) {
      console.log('Error: ', response.body.error);
    }
  });
}
So far, I have this bit in my server-side Flask module:
@app.route('/', methods=["GET", "POST"])
def chatbot_response():
    data = json.loads(req_data)
    sender_id = data["entry"][0]["messaging"][0]["sender"]["id"]
    url = "https://graph.facebook.com/v2.6/me/messages"
    qs_value = {"access_token": TOKEN_OMITTED}
    json_response = {"recipient": {"id": sender_id}, "message": "this is a test response message"}
    response = ("my response text", 200, {"url": url, "qs": qs_value, "method": "POST", "json": json_response})
    return response
However, running this, I find that while I can process what someone sends my Page, it does not send a response back (i.e. nothing shows up in the Messenger chat box). I'm new to Flask, so any help in doing the equivalent of the JavaScript bit above in Flask would be greatly appreciated.
Thanks!
This is the code that works for me:
data = json.loads(request.data)['entry'][0]['messaging']
for m in data:
    resp_id = m['sender']['id']
    resp_mess = {
        'recipient': {
            'id': resp_id,
        },
        'message': {
            'text': m['message']['text'],
        },
    }
    fb_response = requests.post(FB_MESSAGES_ENDPOINT,
                                params={"access_token": FB_TOKEN},
                                data=json.dumps(resp_mess),
                                headers={'content-type': 'application/json'})
Key differences:
message needs a text key for the actual response message, and you need to add the application/json content-type header.
Without the content-type header you get the "The parameter recipient is required" error response, and without the text key under message you get the "param message must be non-empty" error response.
This is a Flask example using the fbmq library that works for me (an echo example):
from flask import Flask, request
from fbmq import Page

app = Flask(__name__)
page = Page(PAGE_ACCESS_TOKEN)

@app.route('/webhook', methods=['POST'])
def webhook():
    page.handle_webhook(request.get_data(as_text=True))
    return "ok"

@page.handle_message
def message_handler(event):
    page.send(event.sender_id, event.message_text)
In that scenario in your tutorial, the node.js application is sending an HTTP POST request back to Facebook's servers, which then forwards the content on to the client.
So far, it sounds like your Flask app is only receiving (AKA serving) HTTP requests. That's what the Flask library is all about, and it's the only thing that Flask does.
To send an HTTP request back to Facebook, you can use any Python HTTP client library you like. There is one called urllib in the standard library, but it's a bit clunky to use... try the Requests library.
Since your request handler is delegating to an outgoing HTTP call, you need to look at the response to this sub-request also, to make sure everything went as planned.
Your handler may end up looking something like
import json
import os

from flask import Flask, request
# confusingly similar name, keep these straight in your head
import requests

app = Flask(__name__)

FB_MESSAGES_ENDPOINT = "https://graph.facebook.com/v2.6/me/messages"

# good practice: don't keep secrets in files, one day you'll accidentally
# commit it and push it to github and then you'll be sad. in bash:
# $ export FB_ACCESS_TOKEN=my-secret-fb-token
FB_TOKEN = os.environ['FB_ACCESS_TOKEN']


@app.route('/', methods=["POST"])
def chatbot_response():
    data = request.get_json()  # flask's request object
    sender_id = data["entry"][0]["messaging"][0]["sender"]["id"]

    send_back_to_fb = {
        "recipient": {
            "id": sender_id,
        },
        "message": "this is a test response message"
    }

    # the big change: use another library to send an HTTP request back to FB
    fb_response = requests.post(FB_MESSAGES_ENDPOINT,
                                params={"access_token": FB_TOKEN},
                                data=json.dumps(send_back_to_fb))

    # handle the response to the subrequest you made
    if not fb_response.ok:
        # log some useful info for yourself, for debugging
        print('jeepers. %s: %s' % (fb_response.status_code, fb_response.text))

    # always return 200 to Facebook's original POST request so they know you
    # handled their request
    return "OK", 200
When doing responses in Flask, you have to be careful. Simply doing a return statement won't return anything to the requester.
In your case, you might want to look at jsonify(). It will take a Python dictionary and return it to your browser as a JSON object.
from flask import jsonify
return jsonify({"url": url, "qs": qs_value, "method": "POST", "json": json_response})
If you want more control over the responses, like setting codes, take a look at make_response()
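For instance, a minimal sketch of make_response for setting the status code and a header explicitly (the route, payload, and header name here are illustrative):
from flask import Flask, jsonify, make_response

app = Flask(__name__)

@app.route('/example')
def example():
    # Build a response object so the status code and headers can be set explicitly.
    resp = make_response(jsonify({"ok": True}), 200)
    resp.headers['X-Example'] = 'demo'  # illustrative header, not from the question
    return resp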
