I would like to list Docker events using the Docker API. This is the simplest form of my code:
#!/usr/bin/env python
import requests_unixsocket
import json
session = requests_unixsocket.Session()
resp = session.get("http+unix://%2Fvar%2Frun%2Fdocker.sock/events")
print(resp)
When I run the script and create a Docker network in another terminal, I am supposed to see something like this:
{"Type":"network","Action":"create","Actor":{"ID":"20f9f862aa509bdd2b147252c3cb50f035b1e7b36542c9f7fad4ccbce0206507","Attributes":{"name":"network15","type":"bridge"}},"time":1481387403,"timeNano":1481387403635383908}
But I don't see anything happening; it seems like the program is listening in an infinite loop but never prints anything.
Do you have an idea of how to stream these events and show them in my terminal?
Found something. Adding stream=True resolved the problem:
session.get(self.base + self.url, stream=True)
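For reference, a minimal sketch of how the streamed events can be consumed line by line; it reuses the socket URL from above, and iter_lines is the standard requests streaming API rather than anything Docker-specific:

#!/usr/bin/env python
import json
import requests_unixsocket

session = requests_unixsocket.Session()
# stream=True keeps the connection open and yields events as Docker emits them
resp = session.get("http+unix://%2Fvar%2Frun%2Fdocker.sock/events", stream=True)
for line in resp.iter_lines():
    if line:  # skip keep-alive newlines
        event = json.loads(line)
        print(event["Type"], event["Action"])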
I am using Flask and flask-restx to try to create a protocol for getting a specific string from another service. I am trying to figure out a way to run a function on the server in a different thread. Here's my code sample:
from flask_restx import Api, fields, Resource
from flask import Flask

app = Flask(__name__)
api = Api(app)

parent = api.model('Parent', {
    'name': fields.String(get_answer(a, b)),
    'class': fields.String(discriminator=True)
})

@api.route('/language')
class Language(Resource):
    # @api.marshal_with(data_stream_request)
    @api.marshal_with(parent)
    @api.response(403, "Unauthorized")
    def get(self):
        return {"happy": "good"}
What I expect:
On the client side, the server should run first, i.e., we should be able to make curl -i localhost:8080 work. Then, when a specific condition is true, the client should receive the parent JSON data I have on the server via a GET request. However, if that condition is not true, the GET request should not return the correct result.
What I did:
One method I tried is to wrap the decorator and the class Language(Resource) part in a separate function, run that function in a different thread, and put that thread under a condition check. I am not sure if that's the right way to do it. I have seen people say Celery might be a good choice, but I am not sure if it works with flask-restx.
I have an answer for you. To run a process in the background with Flask, schedule it using APScheduler, a very simple package that helps you schedule tasks to run functions at an interval; in your case, one time at utcnow().
Here is the link to Flask-APScheduler.
job = scheduler.add_job(myfunc, 'interval', minutes=2)
In your case, use 'date' instead of 'interval' and specify run_date:
job = scheduler.add_job(myfunc, 'date', run_date=datetime.utcnow())
You can send arguments to the function:
job = scheduler.add_job(myfunc, 'date', args=(your_args,), run_date=datetime.utcnow())
Here is the documentation:
User Guide
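To tie it together, here is a minimal sketch of what this could look like inside a Flask app, using APScheduler's BackgroundScheduler directly; myfunc and the /trigger route are placeholders of my own, not names from the question:

from datetime import datetime
from apscheduler.schedulers.background import BackgroundScheduler
from flask import Flask

app = Flask(__name__)
scheduler = BackgroundScheduler()
scheduler.start()

def myfunc(payload):
    # the long-running work goes here
    print("processing", payload)

@app.route('/trigger')
def trigger():
    # schedule a one-off background job that runs right away
    scheduler.add_job(myfunc, 'date', args=("some data",), run_date=datetime.utcnow())
    return {"status": "scheduled"}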
I am relaying HTTP requests from a C# application by sending JSON data to a localhost Flask application, sending the requests with Python, and relaying the response back to my C# application. It needs to be done this way because the server I am dealing with is third-party and fingerprints SCHANNEL requests and sends back dummy data (it does this with PowerShell as well, but not with curl, Postman, or Python).
var process = new Process();
process.StartInfo = new ProcessStartInfo()
{
    FileName = "cmd.exe",
    Arguments = @" /k python Assets\Scripts\server.py",
    UseShellExecute = true
};
process.Start();
I found this solution, which uses an endpoint (/shutdown):
def shutdown_server():
    func = request.environ.get('werkzeug.server.shutdown')
    if func is None:
        raise RuntimeError('Not running with the Werkzeug Server')
    func()
I get a warning that it is being deprecated. I can live with that, but my OCD makes me want to do this properly. The warning tells me this is a hacky solution.
I am new to Python/Flask. What would be a good way of going about this?
Sidenote: process.Kill() doesn't work. Wish it did.
process.CloseMainWindow() seems to do the trick in my initial tests. Why process.Close() or process.Kill() do not work is beyond me.
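If you want to avoid the deprecated werkzeug.server.shutdown hook entirely, one common alternative is to have the shutdown endpoint make the process signal itself. This is only a sketch that assumes the Flask development server on a platform where sending SIGINT behaves like Ctrl+C; on Windows the signal semantics differ, so treat it as a starting point rather than a drop-in fix:

import os
import signal
from flask import Flask

app = Flask(__name__)

@app.route('/shutdown', methods=['POST'])
def shutdown():
    # Ask our own process to stop, as if Ctrl+C had been pressed.
    os.kill(os.getpid(), signal.SIGINT)
    return 'Server shutting down...'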
When the load balancer in front of the tested HTTPS website fails over, this generates some HTTP 500 errors for a few seconds, and then Locust hangs:
The response time graph stops (empty graph)
The total requests per second turns into a flat green line, which is wrong
If I just stop and start the test, Locust properly resumes monitoring the response time.
We can see some HTTP 500 errors in the Failures tab
Is this a bug?
How can I make sure Locust kills and restarts users, either manually or when a timeout occurs?
My attempt to regularly raise RescheduleTaskImmediately did not help.
My locustfile.py:
#!/usr/bin/env python
import time
import random

from locust import HttpUser, task, between, TaskSet
from locust.exception import InterruptTaskSet, RescheduleTaskImmediately

URL_LIST = [
    "/url1",
    "/url2",
    "/url3",
]

class QuickstartTask(HttpUser):
    wait_time = between(0.1, 0.5)
    connection_timeout = 15.0
    network_timeout = 20.0

    def on_start(self):
        # Required to use the http_proxy & https_proxy
        self.client.trust_env = True
        print("New user started")
        self.client.timeout = 5
        self.client.get("/")
        self.client.get("/favicon.ico")
        self.getcount = 0

    def on_stop(self):
        print("User stopped")

    @task
    def track_and_trace(self):
        url = URL_LIST[random.randrange(0, len(URL_LIST))]
        self.client.get(url, name=url[:50])
        self.getcount += 1
        if self.getcount > 50 and (random.randrange(0, 1000) > 990 or self.getcount > 200):
            print(f'Reschedule after {self.getcount} requests.')
            self.client.cookies.clear()
            self.getcount = 0
            raise RescheduleTaskImmediately
Each Locust user runs in its own greenlet (a lightweight thread). If that greenlet gets blocked, it doesn't take further actions.
self.client.get(url, name=url[:50], timeout=.1)
Something like this is probably what you need, potentially with a try/except to do something different when you get an HTTP timeout exception.
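As a rough sketch of that idea (my own variation, not code from the question): give each request a short timeout, treat anything other than a 200 as a failure, and make the user start over instead of hanging. Note that depending on the Locust version, request exceptions such as timeouts may be swallowed and reported as failures with status_code 0 rather than raised, which is why both paths are handled:

import random
import requests
from locust import HttpUser, task, between
from locust.exception import RescheduleTaskImmediately

URL_LIST = ["/url1", "/url2", "/url3"]

class QuickstartTask(HttpUser):
    wait_time = between(0.1, 0.5)

    @task
    def track_and_trace(self):
        url = random.choice(URL_LIST)
        try:
            resp = self.client.get(url, name=url[:50], timeout=0.1)
        except requests.exceptions.RequestException:
            resp = None
        if resp is None or resp.status_code != 200:
            # drop this user's state and reschedule rather than blocking
            self.client.cookies.clear()
            raise RescheduleTaskImmediately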
In my experience, the problem you're describing with the charts in the Locust UI has nothing to do with the errors your Locust users are hitting. I've seen this behavior when multiple people attempt to access the Locust UI simultaneously. Locust uses Flask to create and serve the UI, and Flask by itself (at least the way Locust is using it) doesn't do well with multiple connections.
If Person A starts using the Locust UI and starts a test, they'll see stats and everything working fine until Person B loads the Locust UI. Person B will then see things working fine, but Person A will experience the issues you describe, with the test seemingly stalling and charts not updating properly. In that state, sometimes starting a new test resolves it temporarily; other times you need to refresh. Either way, A and B will be fighting each other for a working UI.
The solution in this case is to put Locust behind a reverse proxy such as Nginx. Nginx then maintains a single connection to Locust, and all users connect through Nginx. Locust's UI should then continue to work for all connected users, with correctly updating stats and charts.
I am new to Airflow and gRPC.
I use Airflow running in Docker with the default settings:
https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html
When I try to do what is described in this link:
https://airflow.apache.org/docs/apache-airflow-providers-grpc/stable/_api/airflow/providers/grpc/index.html
channel = grpc.insecure_channel('localhost:50051')
number = calculator_pb2.Number(value=25)

con = GrpcHook(grpc_conn_id='grpc_con',
               interceptors=[UnaryUnaryClientInterceptor])

run = GrpcOperator(task_id='square_root',
                   stub_class=calculator_pb2_grpc.CalculatorStub(channel),
                   call_func='SquareRoot',
                   grpc_conn_id='grpc_con',
                   data=number,
                   log_response=True,
                   interceptors=[UnaryUnaryClientInterceptor])
There is no response in the DAG log, even when the server is shut down or the server port is wrong, but it works if I call it with a simple client.
What you're looking for, I guess, is a GrpcOperator example.
In your example, the wrong parameter is data.
The data parameter should be data={'request': calculator_pb2.Number(value=25)}, if you don't modify the generated proto files.
Here is an example:
from airflow.providers.grpc.operators.grpc import GrpcOperator
from some_pb2_grpc import SomeStub
from some_pb2 import SomeRequest

GrpcOperator(task_id="task_id",
             stub_class=SomeStub,
             call_func='Function',
             data={'request': SomeRequest(var='data')})
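Applied to the calculator example from the question, that would look roughly like this; it is a sketch that assumes the generated calculator_pb2 / calculator_pb2_grpc modules, and note that the stub class itself is passed, not an instance:

from airflow.providers.grpc.operators.grpc import GrpcOperator
import calculator_pb2
import calculator_pb2_grpc

run = GrpcOperator(task_id='square_root',
                   stub_class=calculator_pb2_grpc.CalculatorStub,
                   call_func='SquareRoot',
                   grpc_conn_id='grpc_con',
                   data={'request': calculator_pb2.Number(value=25)},
                   log_response=True)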
I'd like to be able to trigger a long-running python script via a web request, in bare-bones fashion. Also, I'd like to be able to trigger other copies of the script with different parameters while initial copies are still running.
I've looked at Flask, aiohttp, and queueing possibilities. Flask and aiohttp seem to have the least setup overhead. I plan on executing the existing Python script via subprocess.run (however, I did consider refactoring the script into libraries that could be used in the web response function).
With aiohttp, I'm trying something like:
ingestion_service.py:
from aiohttp import web
from pprint import pprint
import time

routes = web.RouteTableDef()

@routes.get("/ingest_pipeline")
async def test_ingest_pipeline(request):
    '''
    Get the job_conf specified from the request and activate the script
    '''
    # subprocess.run the command with lookup of job conf file
    response = web.Response(text="Received data ingestion request")
    await response.prepare(request)
    await response.write_eof()
    # eventually this would be a subprocess.run call
    time.sleep(80)
    return response

def init_func(argv):
    app = web.Application()
    app.add_routes(routes)
    return app
But although the initial request returns immediately, subsequent requests block until the initial request is complete. I'm running the server via:
python -m aiohttp.web -H localhost -P 8080 ingestion_service:init_func
I know that multithreading and concurrency may provide better solutions than asyncio. In this case, I'm not looking for a robust solution, just something that will allow me to run multiple scripts at once via http request, ideally with minimal memory costs.
OK, there were a couple of issues with what I was doing. Namely, time.sleep() is blocking, so asyncio.sleep() should be used. However, since I'm interested in spawning a subprocess, I can use asyncio.subprocess to do that in a non-blocking fashion.
NB:
asyncio: run one function threaded with multiple requests from websocket clients
https://docs.python.org/3/library/asyncio-subprocess.html.
These help, but there's still an issue with the web handler terminating the subprocess. Luckily, there's a solution here:
https://docs.aiohttp.org/en/stable/web_advanced.html
aiojobs has a decorator "atomic" that will protect the process until it is complete. So, code along these lines will function:
from aiojobs.aiohttp import setup, atomic
import asyncio
import os
from aiohttp import web

@atomic
async def ingest_pipeline(request):
    # be careful what you pass through to shell, lest you
    # give away the keys to the kingdom
    shell_command = "[your command here]"
    response_text = f"running {shell_command}"
    response_code = 200
    response = web.Response(text=response_text, status=response_code)
    await response.prepare(request)
    await response.write_eof()
    ingestion_process = await asyncio.create_subprocess_shell(shell_command,
                                                              stdout=asyncio.subprocess.PIPE,
                                                              stderr=asyncio.subprocess.PIPE)
    stdout, stderr = await ingestion_process.communicate()
    return response

def init_func(argv):
    app = web.Application()
    setup(app)
    app.router.add_get('/ingest_pipeline', ingest_pipeline)
    return app
This is very bare bones, but might help others looking for a quick skeleton for a temporary internal solution.