I have these libraries installed:
testcontainers==2.5
clickhouse-driver==0.1.0
This code:
from testcontainers.core.generic import GenericContainer
from clickhouse_driver import Client

def test_docker_run_clickhouse():
    ch_container = GenericContainer("yandex/clickhouse-server")
    ch_container.with_bind_ports(9000, 9000)

    with ch_container as ch:
        client = Client(host='localhost')
        print(client.execute("SHOW TABLES"))

if __name__ == '__main__':
    test_docker_run_clickhouse()
I am trying to get a generic container running ClickHouse. But it gives me: EOFError: Unexpected EOF while reading bytes.
I am using Python 3.5.2. How can I fix this?
It takes some time for the container to start. Add a time delay before executing operations.
import time

with ch_container as ch:
    time.sleep(3)
    client = Client(host='localhost')
    print(client.execute("SHOW TABLES"))
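If the fixed sleep turns out to be flaky on slower machines, you can poll until the server actually answers instead of guessing the startup time. This is only a sketch using the same clickhouse-driver Client; the helper name wait_for_clickhouse is made up here:

import time
from clickhouse_driver import Client

def wait_for_clickhouse(host='localhost', port=9000, timeout=30):
    # Retry a trivial query until the server answers or the timeout expires.
    deadline = time.time() + timeout
    while True:
        try:
            Client(host=host, port=port).execute("SELECT 1")
            return
        except Exception:  # EOFError / socket errors while the server is still starting
            if time.time() > deadline:
                raise
            time.sleep(0.5)

with ch_container as ch:
    wait_for_clickhouse()
    client = Client(host='localhost')
    print(client.execute("SHOW TABLES"))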
I have consulted several topics on the subject, but I didn't see any related to launching an app on a device directly using a ppadb command.
I managed to write this code:
import ppadb
import subprocess
from ppadb.client import Client as AdbClient

# Create the connect function
def connect():
    client = AdbClient(host='localhost', port=5037)
    devices = client.devices()

    for device in devices:
        print(device.serial)

    if len(devices) == 0:
        print('no device connected')
        quit()

    phone = devices[0]
    print(f'connected to {phone.serial}')
    return phone, client

if __name__ == '__main__':
    phone, client = connect()
    import time
    time.sleep(5)

    # How to print each app on the emulator
    list = phone.list_packages()
    for truc in list:
        print(truc)

    # Launch the desired app through phone.shell using the package name
    phone.shell(????????????????)
From there, I have access to each app package (com.package.name). I would like to launch one through a phone.shell() command, but I can't find the correct syntax.
I can execute a tap or a keyevent and it works perfectly, but I want to be sure my code won't be disturbed by any change in position on screen.
From How to start an application using Android ADB tools, the shell command to launch an app is
am start -n com.package.name/com.package.name.ActivityName
Hence you would call
phone.shell("am start -n com.package.name/com.package.name.ActivityName")
A given package may have multiple activities. To find out what they are, you can use dumpsys package as follows:
def parse_activities(package, connection, retval):
    out = ""
    while True:
        data = connection.read(1024)
        if not data:
            break
        out += data.decode('utf-8')
    retval.clear()
    retval += [l.split()[-1] for l in out.splitlines() if package in l and "Activity" in l]
    connection.close()

activities = []
phone.shell("dumpsys package", handler=lambda c: parse_activities("com.package.name", c, activities))
print(activities)
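Once activities contains something, any entry can be plugged into the am start command shown above. A minimal sketch, assuming the parsed entries come back in the com.package.name/ActivityName form:

# Assuming the entries are already in "package/Activity" form; pick the one you actually want.
phone.shell("am start -n {}".format(activities[0]))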
Here is the correct and easiest answer:

phone.shell('monkey -p com.package.name 1')

This method launches the app without needing access to the ActivityName. The trailing 1 is the number of events monkey sends; constraining monkey to the package makes it bring that package's launcher activity to the foreground.
Using AndroidViewClient/culebra, you can launch the MAIN Activity of a package as follows:
#! /usr/bin/env python3
# -*- coding: utf-8 -*-
from com.dtmilano.android.viewclient import ViewClient
ViewClient.connectToDeviceOrExit()[0].startActivity(package='com.example.package')
This connects to the device (waiting if necessary) and then invokes startActivity() just using the package name.
startActivity() can also receive a component which is used when you know the package and the activity.
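For example, a minimal sketch of the component form, assuming the keyword is component as described above; the activity name .MainActivity is only a placeholder for whichever activity you actually want:

#! /usr/bin/env python3
# -*- coding: utf-8 -*-
from com.dtmilano.android.viewclient import ViewClient

# component takes the "package/activity" form; replace .MainActivity with a real activity name
ViewClient.connectToDeviceOrExit()[0].startActivity(component='com.example.package/.MainActivity')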
I tried to use Python to get docker stats, using Python's docker module.
The code is:
import docker

cli = docker.from_env()
for container in cli.containers.list():
    stream = container.stats()
    print(next(stream))
I run 6 docker containers, but when I run the code it takes a few seconds to get all the containers' stats. Is there a good way to get the stats immediately?
Docker stats inherently takes a little while; a large part of this is waiting for the next value to come through the stream:
$ time docker stats 1339f13154aa --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
...
real 0m1.556s
user 0m0.020s
sys 0m0.015s
You could reduce the time it takes to execute by running the commands in parallel, as opposed to one at a time.
To achieve this, you could use the wonderful threading or multiprocessing library.
Digital Ocean provides a good tutorial on how to accomplish this with a ThreadPoolExecutor:
import requests
import concurrent.futures

def get_wiki_page_existence(wiki_page_url, timeout=10):
    response = requests.get(url=wiki_page_url, timeout=timeout)

    page_status = "unknown"
    if response.status_code == 200:
        page_status = "exists"
    elif response.status_code == 404:
        page_status = "does not exist"

    return wiki_page_url + " - " + page_status

wiki_page_urls = [
    "https://en.wikipedia.org/wiki/Ocean",
    "https://en.wikipedia.org/wiki/Island",
    "https://en.wikipedia.org/wiki/this_page_does_not_exist",
    "https://en.wikipedia.org/wiki/Shark",
]

with concurrent.futures.ThreadPoolExecutor() as executor:
    futures = []
    for url in wiki_page_urls:
        futures.append(executor.submit(get_wiki_page_existence, wiki_page_url=url))
    for future in concurrent.futures.as_completed(futures):
        print(future.result())
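The same pattern applies directly to the stats call from the question. Here is a sketch with one worker per container, using the docker SDK shown above (stream=False asks for a single snapshot instead of a generator):

import concurrent.futures
import docker

cli = docker.from_env()

def one_shot_stats(container):
    # stream=False returns one stats snapshot instead of a blocking generator
    return container.name, container.stats(stream=False)

with concurrent.futures.ThreadPoolExecutor() as executor:
    futures = [executor.submit(one_shot_stats, c) for c in cli.containers.list()]
    for future in concurrent.futures.as_completed(futures):
        name, stats = future.result()
        print(name, stats)

Each call still takes roughly a second, but the calls now overlap instead of adding up.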
This is what I use to get the stats directly for each running container (it still takes about 1 second per container, so I don't think that can be helped). The documentation for the arguments is at https://docker-py.readthedocs.io/en/stable/containers.html. Hope it helps.
import docker

client = docker.from_env()
containers = client.containers.list()
for x in containers:
    print(x.stats(decode=None, stream=False))
As TheQueenIsDead suggested, it might require threading if you want to get it faster.
I am using the @mulefunc decorator of uWSGI to process data in the background of a Flask REST API call. Everything seems to be working fine, though I am getting the following message in the log:
MULE MSG QUEUE IS FULL: buffer size 212992 bytes (you can tune it with --mule-msg-size)
The number of hits to this function is very high. It is suggested to tune --mule-msg-size; where can I tune it?
Sample code:
import time

from flask import Flask, request
from uwsgidecorators import mulefunc

# Minimal Flask app so the route below has something to attach to
app = Flask(__name__)

@mulefunc
def mule(num):
    for i in range(num):
        print(i)
        time.sleep(1)

@app.route("/mule")
def add_mule():
    num = request.args.get("n", None)
    if num:
        mule(int(num))
    return "Mule"
How did you start your uwsgi?
You could pass --mule-msg-size on the command line.
Or you could put it in your ini file and run uwsgi with --ini=<ini file>:
mule-msg-size = 999999
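For example, a minimal sketch of both forms; the module name app:app, the port, the mule count and the size are placeholders, and only mule-msg-size is the option in question:

# command line
uwsgi --http :8080 --module app:app --mules 1 --mule-msg-size 1048576

# or in the ini file passed via --ini
[uwsgi]
http = :8080
module = app:app
mules = 1
mule-msg-size = 1048576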
I had my first glance at Python's scapy package and wanted to test my first little script. In order to test the code below, I sent myself an e-mail containing the string "10000000000000". My expectation was that the minimal example below (with my network card in monitoring mode and executed as su) would produce a command-line output for that e-mail specifically, but it doesn't. I am sure this is caused by my own misunderstanding of network traffic and TCP -- could anyone elucidate?
The virtual environment I set up uses Python 3.6.8.
(I have tested that I can sniff network traffic in general using the commented-out line; only when I try to filter the content via regular expressions is the result not as I expected.)
import optparse
import scapy.all as sca
import re
from scapy import packet
from scapy.layers.dns import DNS
from scapy.layers.dot11 import Dot11Beacon, Dot11ProbeReq
from scapy.layers.inet import TCP

class SniffDataTraffic:
    @staticmethod
    def find_expr(pack: packet) -> str:
        regex_dict = {'foo': r"1[0-9]{13}", 'bar': r"2[1-5]{14}"}
        raw_pack = pack.sprintf('%Raw.load%')
        found = {iter: re.findall(regex_dict[iter], raw_pack) for iter in regex_dict.keys()}
        for name, value in found.items():
            if value:
                return "[+] found {}: {}".format(name, value[0])

def main():
    try:
        print("[*] Starting sniffer")
        # sca.sniff(iface="xxxxxmon", prn=lambda x: x.summary(), store=False)
        sca.sniff(prn=SniffDataTraffic.find_expr, filter='tcp', iface="xxxxxmon", store=False)
    except KeyboardInterrupt:
        exit(0)

if __name__ == '__main__':
    main()
I'm currently working on a project that requires me to edit a configuration file to replace an old standard port number if the port is being used. The code I'm currently using is the following:
import os
import sys
import socket
import select
import tempfile
import subprocess
import threading
import Queue
import time
import fileinput

...

def find_open_port():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("", 0))
    s.listen(1)
    tempport = s.getsocketname()[1]
    s.close()
    return tempport
When I run it from my Ubuntu machine (Python 2.7.6), it runs fine, but on my CentOS 6 VM running in my Redhawk Component I get the following:
AttributeError: '_socketobject' object has no attribute 'getsocketname'
Not exactly sure why I'm getting this error. Python in Redhawk is running 2.6, I want to say?
Any clue as to why this would happen and how to fix it?
The method on the socket object is called getsockname, but your error (and the code you posted) says getsocketname. Are you sure you copied it correctly when writing it to Redhawk?
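For reference, a corrected version of the helper using the standard library name (nothing Redhawk-specific assumed):

import socket

def find_open_port():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("", 0))
    s.listen(1)
    tempport = s.getsockname()[1]  # getsockname, not getsocketname
    s.close()
    return tempport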