Probably a simple question. I'm a newbie
I have a local computer (computer 'A') and a remote computer (computer 'B').
I want to run a bokeh server on B and have the results show up in A's browser when I browse to localhost:8000.
First I created this file on B. It just has a simple plot with a slider. You slide the slider and the plot changes. It works when I run it on A.
import sys
import numpy as np
from tornado.ioloop import IOLoop
from bokeh.application.handlers import FunctionHandler
from bokeh.application import Application
from bokeh.layouts import column
from bokeh.models import ColumnDataSource, Slider
from bokeh.plotting import figure
from bokeh.server.server import Server
def modify_doc(doc):
    # Initialize the data
    power = 1
    x = np.arange(10)
    y = x**power

    # Initialize the plot and slider
    p = figure()
    p_source = ColumnDataSource(data=dict(x=x, y=y))
    p.line('x', 'y', source=p_source)
    s = Slider(start=0, end=10, value=1, step=.1, title="multiplier")

    # When the slider is changed, redraw the plot
    def callback(attr, old, new):
        x = p_source.data['x']
        y = x**s.value
        p_source.data = dict(x=x, y=y)
    s.on_change('value', callback)

    doc.add_root(column([p, s]))

def main(_):
    io_loop = IOLoop.current()
    bokeh_app = Application(FunctionHandler(modify_doc))
    server_kwargs = {}
    server_kwargs['port'] = 8000
    server = Server({'/': bokeh_app}, io_loop=io_loop, **server_kwargs)
    server.start()
    io_loop.add_callback(server.show, "/")
    io_loop.start()

if __name__ == '__main__':
    main(sys.argv[1:])
So I copy this file to B and I run it on B by ssh'ing in and typing:
python barebones.py
Then on A I type:
ssh root@123.123.123.123 -N -D 7000
Note I typed 7000 not 8000. I've tried both; I don't understand why some things I've read online tell me to use a different number there.
Next I open Firefox > Preferences > Network proxy > Settings > Manual proxy configuration. I set SOCKS host to 'localhost' (without quotes) and port to 7000 and choose SOCKSv5. Click OK. In Firefox's address bar, browse to http://localhost:8000. (Note I said 8000 there not 7000. Again, not sure if that is correct but I've tried various permutations.) It says "Unable to connect".
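(For reference: `-D` creates a dynamic SOCKS proxy, which is why the browser then needs proxy settings, and the SOCKS port (7000) is unrelated to the server port (8000). A plain local forward with `-L` instead maps B's port 8000 directly onto A's port 8000 and needs no browser configuration at all; a sketch using the user/IP from above:)

```shell
# On A: forward A's local port 8000 to port 8000 on B, then just browse
# to http://localhost:8000 -- no Firefox proxy settings needed.
ssh -N -L 8000:localhost:8000 root@123.123.123.123
```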
I tried inserting this in the script but it didn't seem to have any effect. Maybe I'm way off here:
from bokeh.server.util import create_hosts_whitelist
allow_websocket_origin = ['0.0.0.0:8000']
port=8000
create_hosts_whitelist(allow_websocket_origin, port)
I know there is another way to run a bokeh server using "bokeh serve ..." command at command line, but I'm trying to create a standalone script if possible.
Gah. This should probably be simple. What am I missing?
Have you tried the following?
On B, run:
bokeh serve filename.py --address 0.0.0.0
You can then access the application from another computer with the URL http://Bs_name_or_ip_address:5006/. If there is a firewall running on the B computer, you have to allow incoming traffic on the port.
To override the default port number (5006), use the --port NNNN argument.
See also:
https://docs.bokeh.org/en/latest/docs/reference/command/subcommands/serve.html
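Since the question asks for a standalone script rather than `bokeh serve`, the same options can, to the best of my knowledge, be passed straight to `Server` as keyword arguments (`address` and `allow_websocket_origin` are the corresponding parameters; the IP below is the placeholder from the question):

```python
# Kwargs mirroring 'bokeh serve --address 0.0.0.0 --port 8000
# --allow-websocket-origin <ip>:8000' for the question's standalone script.
server_kwargs = {
    'port': 8000,
    'address': '0.0.0.0',  # listen on all interfaces, not just localhost
    'allow_websocket_origin': ['123.123.123.123:8000'],  # origin as the browser sees it
}
# Then, as in the question's main():
# server = Server({'/': bokeh_app}, io_loop=io_loop, **server_kwargs)
```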
The following command worked for Bokeh 1+ versions:
bokeh serve --show filename.py --allow-websocket-origin=*:5006
I know this question is really old, but I ran into a similar problem and managed to solve it using the suggestion posted here: https://groups.google.com/a/continuum.io/forum/#!topic/bokeh/CmweAdzkWXw
Maybe this helps someone else:
On the node where Bokeh will run:
bokeh serve &
ssh -NfR 5006:localhost:5006 user@gateway
On the local machine (e.g., laptop):
ssh -NfL localhost:5006:localhost:5006 user@gateway
Now you should be able to navigate to http://localhost:5006 to access Bokeh running on a node behind the gateway.
Adding to the above answers: sometimes Bokeh says

Refusing websocket connection from Origin 'http://<bokeh_server_ip_address>:8000'; use --allow-websocket-origin=<bokeh_server_ip_address>:8000 or set BOKEH_ALLOW_WS_ORIGIN=<bokeh_server_ip_address>:8000 to permit this; currently we allow origins {'localhost:8000'}

In that case, run:

bokeh serve --show file_name --address 0.0.0.0 --port=8000 --allow-websocket-origin=<bokeh_server_ip_address>:8000

where bokeh_server_ip_address is the remote server IP where you want to host the Bokeh application.
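The `BOKEH_ALLOW_WS_ORIGIN` variable named in that error can also be set from Python before the server starts (the IP here is illustrative; multiple origins are comma-separated):

```python
import os

# Same effect as --allow-websocket-origin, via the environment variable
# mentioned in the error message; set this before the Bokeh server starts.
os.environ['BOKEH_ALLOW_WS_ORIGIN'] = '123.123.123.123:8000'
```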
Related
I am new to Airflow and gRPC.
I run Airflow in Docker with the default settings:
https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html
When I try to do what is described in this link:
https://airflow.apache.org/docs/apache-airflow-providers-grpc/stable/_api/airflow/providers/grpc/index.html
channel = grpc.insecure_channel('localhost:50051')
number = calculator_pb2.Number(value=25)

con = GrpcHook(grpc_conn_id='grpc_con',
               interceptors=[UnaryUnaryClientInterceptor])

run = GrpcOperator(task_id='square_root',
                   stub_class=calculator_pb2_grpc.CalculatorStub(channel),
                   call_func='SquareRoot',
                   grpc_conn_id='grpc_con',
                   data=number,
                   log_response=True,
                   interceptors=[UnaryUnaryClientInterceptor])
There is no response in the DAG log even when the server is shut down or the server port is wrong, but it works if I call the server with a simple client.
What you're looking for, I guess, is the GrpcOperator example.
In your example, the wrong parameter is data.
The data parameter should be data={'request': calculator_pb2.Number(value=25)} if you don't modify the generated proto files.
This is an example.
from airflow.providers.grpc.operators.grpc import GrpcOperator
from some_pb2_grpc import SomeStub
from some_pb2 import SomeRequest
GrpcOperator(task_id="task_id", stub_class=SomeStub, call_func='Function', data={'request': SomeRequest(var='data')})
I'm running a Bokeh app as shown in standalone_embed.py and want to use an authentication hook with it, as shown here. How do I set the auth_module in bokeh.settings.settings in standalone_embed.py?
I tried
from bokeh.settings import settings
settings.auth_module = "auth.py"
settings.xsrf_cookies = True
but that doesn't seem to do anything. Any help appreciated, thanks!
Found the answer:
Server can take the authentication module as a param as follows (AuthModule lives in bokeh.server.auth_provider):

from bokeh.server.auth_provider import AuthModule

auth_module_path = <path to auth.py>
if auth_module_path:
    server_kwargs['auth_provider'] = AuthModule(auth_module_path)

server = Server(
    bokeh_applications,  # list of Bokeh applications
    io_loop=loop,        # Tornado IOLoop
    **server_kwargs      # port, num_procs, etc.
)
I'm testing my own DDoS protection feature implemented in my server (this is necessary). Currently I have a terrible loop for making multiple Tor requests, each with its own identity.
os.system("taskkill /f /im tor.exe")
os.startfile("C:/Tor/Browser/TorBrowser/Tor/tor.exe")
session = requests.session()
session.proxies = {}
session.proxies['http'] = 'socks5h://localhost:9050'
session.proxies['https'] = 'socks5h://localhost:9050'
Now I want to multithread this for faster speeds, since each tor connection takes ages to load.
If I google how to run multiple Tor instances, I get info on how to do this from within the Tor browser itself, never how to do it programmatically. Is there a way to do this on Windows with Python 3 specifically?
Any help appreciated
The key point to understand about running multiple separate Tor processes is that each one will need to listen on its own ControlPort and SocksPort so that your clients can issue requests through each individual instance.
If you use Stem, stem.process.launch_tor_with_config would be the recommended way to launch multiple Tor processes. By using this method, you can pass the necessary config options dynamically to each client without having to create individual files, and you'll have better process management over the Tor instances.
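Whichever launcher you use, the key is one config per instance with unique ports and a unique DataDirectory. A minimal sketch of generating those configs (the port numbering and directory layout here are my own choices, matching the torrc files below):

```python
def tor_configs(n, base=9800):
    """Build one config dict per Tor instance, each with unique
    ControlPort/SocksPort and its own DataDirectory (Tor refuses to
    share a data directory between processes)."""
    return [
        {
            'ControlPort': str(base + 2 * i),
            'SocksPort': str(base + 2 * i + 1),
            'DataDirectory': 'tor_data/%d' % i,
        }
        for i in range(n)
    ]

# Each dict would then be passed to
# stem.process.launch_tor_with_config(config=...)
```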
If you want to use os, you will need to create one config file per instance and pass that to tor when you start it.
At minimum, create one torrc config file for each instance you want to run with the following:
torrc.1
ControlPort 9800
SocksPort 9801
torrc.2
ControlPort 9802
SocksPort 9803
Each individual client will connect on the different socks ports to issue requests.
To start them, use subprocess.Popen rather than os.system (os.system would block until the first Tor process exits, so the second instance would never start):

import subprocess
subprocess.Popen(["C:/Tor/Browser/TorBrowser/Tor/tor.exe", "-f", "C:/path/to/torrc.1"])
subprocess.Popen(["C:/Tor/Browser/TorBrowser/Tor/tor.exe", "-f", "C:/path/to/torrc.2"])
Then create one or more clients per instance:
session1 = requests.session()
session1.proxies = {}
session1.proxies['http'] = 'socks5h://localhost:9801'
session1.proxies['https'] = 'socks5h://localhost:9801'
session2 = requests.session()
session2.proxies = {}
session2.proxies['http'] = 'socks5h://localhost:9803'
session2.proxies['https'] = 'socks5h://localhost:9803'
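To get the multithreading the question asks about, a ThreadPoolExecutor can drive one session per instance. A sketch, where `fetch` only builds the proxy mapping; in real use it would return `requests.get(url, proxies=proxies_for(port)).text` instead:

```python
from concurrent.futures import ThreadPoolExecutor

def proxies_for(port):
    """Proxy settings routing one requests session through one Tor SocksPort."""
    return {'http': 'socks5h://localhost:%d' % port,
            'https': 'socks5h://localhost:%d' % port}

def fetch(port):
    # Real use: return requests.get(url, proxies=proxies_for(port)).text
    return proxies_for(port)

# One worker per Tor instance; ports match the torrc files above.
with ThreadPoolExecutor(max_workers=2) as ex:
    results = list(ex.map(fetch, [9801, 9803]))
```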
First of all, install Stem from a terminal:

pip install stem

Then save the following code in a text file named myfile.py. Import requests and stem.process at the start of the file:
import requests
import stem.process

x = 6
for i in range(1, x):
    cp = str(10000 + i)
    sp = str(11000 + i)
    tp1 = stem.process.launch_tor_with_config(
        tor_cmd='C:\\Users\\<Tor Directory>\\Browser\\TorBrowser\\Tor\\tor.exe',
        config={
            'ControlPort': cp,
            'SocksPort': sp,
            'DataDirectory': 'C:/<Any Path for data directories>/proxies/' + str(i) + '/',
            'Log': [
                'NOTICE stdout',
                'ERR file C:/<Any Path for Error file>/tor_error_log.txt',
            ],
        },
    )
    proxies = {
        'http': 'socks5h://127.0.0.1:' + str(sp),
        'https': 'socks5h://127.0.0.1:' + str(sp),
    }
    r1 = requests.get('http://ipinfo.io/json', proxies=proxies)
    print('\n')
    print(r1.content)
    print('\n')
Now go into the folder that contains myfile.py, open a command prompt (cmd) or any terminal there, and launch the file:

python myfile.py

This will launch 5 Tor processes on ports 11001, 11002, 11003, 11004, 11005. You can access the Tor proxy (SOCKS5) from any program by using the IP address 127.0.0.1 and any of the above ports. If you open Task Manager you will see 5 Tor processes running, each consuming 10-20 MB of RAM.
If you get an error like this while running myfile.py:

can not bind listening port. working with config files left us in broken state. Dying

then close all Tor processes and launch myfile.py again. This error happens because a Tor process is already running on one of the ports.
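A quick standard-library check for whether a port is already taken, which is what that "can not bind listening port" error means (a sketch; it probes by attempting a TCP connect):

```python
import socket

def port_free(port, host='127.0.0.1'):
    """Return True if nothing is listening on (host, port).
    Useful as a pre-launch check before starting a Tor instance."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        # connect_ex returns 0 when something accepted the connection,
        # i.e. the port is already in use.
        return s.connect_ex((host, port)) != 0
```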
To create more Tor processes, close all Tor instances from Task Manager and change the value of the variable x at the start of the file, e.g.

x = 10  # or 20, 30, 50, ...

Save myfile.py and run it again.
Cheers!
For example, I want to run flask-app on http://api.domain.com . However, I have no idea how to do this and the flask documentation serves no help. I am using a shared namecheap web server via SSH to run python. I have ports 8080, 8181 and 8282 open.
Server-sided code:
from flask import Flask
from flask import Blueprint
app = Flask(__name__)
app.config['SERVER_NAME'] = 'domain.com'
@app.route('/status')
def status():
    return 'Status : Online'

bp = Blueprint('subdomain', __name__, subdomain="api")
app.register_blueprint(bp)

if __name__ == '__main__':
    app.run(host=app.config["SERVER_NAME"], port=8181, debug=True)
When I visit http://www.api.domain.com/status , it returns a 404 error.
Nothing displays on the SSH console.
Any help is very much appreciated.
First things first:
http (i.e. a web server without an SSL certificate) is insecure. You should set up a certificate and always use port 443 to the outside.
Then, on namecheap, you need to define a CNAME entry to point to the subdomain.
In Namecheap, click domain -> Manage, then Advanced DNS
Create a new record, select CNAME as the Type, enter the subdomain name (just the top level) as the Host, and the IP where your server is as the Value. (TTL, time to live, is how long the record is cached before a change takes effect; 1 or 10 minutes is useful for debugging, but DNS resolvers may not honor it anyway.)
Wait a few minutes, and you should be able to reach your server at the subdomain name.
Now, if you use the same IP as a web server, for example, but a different port, the DNS record alone is not going to do what you want. DNS forwards subdomain traffic to your (same) server IP, so if your web server is on port 443, you will also reach it with https://api.domain.com. If your API uses port 8080 or 8081, you will always need to specify the port to actually reach the API server at the subdomain (i.e. api.domain.com:8080).
The DNS merely forwards the subdomain name to the IP you tell it to.
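Beyond DNS, note that in the question's code the /status route is registered on app, not on the blueprint, so once SERVER_NAME is set it will not match the api subdomain (and the tested URL also had an extra www. prefix). A sketch of the route placement fix:

```python
from flask import Flask, Blueprint

app = Flask(__name__)
app.config['SERVER_NAME'] = 'domain.com'

bp = Blueprint('subdomain', __name__, subdomain='api')

@bp.route('/status')  # on the blueprint, so it matches api.domain.com
def status():
    return 'Status : Online'

app.register_blueprint(bp)
```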
I solved this using Tornado in Python; I already answered a similar question this way and it works fine:
from tornado.wsgi import WSGIContainer
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
    return 'Hello, World!'

http_server = HTTPServer(WSGIContainer(app))
http_server.listen(5000, address="ip_address_of_your_server_machine")
IOLoop.instance().start()
Now you can access this page at www.example.com:5000.
Actually, Flask's built-in server is not meant for production; it is only for debugging. If you want to run a web app, you should run it behind Apache or Nginx with a WSGI server. Here is a simple example on Ubuntu 18.04 LTS:
https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-uswgi-and-nginx-on-ubuntu-18-04
I'm running a cherrypy based app on an openshift gear. Recently I've been getting a "503 service temporarily unavailable" error whenever I try to go to the site. Inspecting the logs, I see I'm getting an ImportError where I try to import CherryPy. This is strange - CherryPy is listed as a dependency in my requirements.txt and used to be imported just fine. I double checked to make sure I'm getting the right path to the openshift activate_this.py and it seems to be correct. I'm not quite sure where to look next; any help would be appreciated. Thanks!
The failed import is at line 14 of app.py:
import os
import files

virtenv = os.path.join(os.environ['OPENSHIFT_PYTHON_DIR'], 'virtenv')
virtualenv = os.path.join(virtenv, 'bin', 'activate_this.py')
conf = os.path.join(files.get_root(), "conf", "server.conf")

try:
    execfile(virtualenv, dict(__file__=virtualenv))
    print virtualenv
except IOError:
    pass

import cherrypy
import wsgi

def mount():
    def CORS():
        cherrypy.response.headers["Access-Control-Allow-Origin"] = os.environ['OPENSHIFT_APP_DNS']
    cherrypy.config.update({"tools.staticdir.root": files.get_root()})
    cherrypy.tools.CORS = cherrypy.Tool('before_handler', CORS)
    cherrypy.tree.mount(wsgi.application(), "/", conf)

def start():
    cherrypy.engine.start()

def end():
    cherrypy.engine.exit()

if __name__ == "__main__":
    mount()
    start()
UPDATE
I eventually saw (when pushing to the openshift repo using git bash CLI) that the dependency installation from requirements.txt was failing with some exceptions I haven't bothered to look into yet. It then goes on to try to install dependencies in setup.py, and that works just fine.
Regarding the port in use issue...I have no idea. I changed my startup from tree.mount and engine.start to quickstart, and everything worked when I pushed to openshift. Just for kicks (and because I need it to run my tests), I switched back to cherrypy.tree.mount, pushed it, and it worked just fine.
Go figure.
I use the app.py entry point for OpenShift. Here are several examples of how I start my server using the Pyramid framework on OpenShift. I use waitress as the server, but I have also used the CherryPy WSGI server. Just comment out the code you don't want.
app.py
#Openshift entry point
import os
from pyramid.paster import get_app
from pyramid.paster import get_appsettings

if __name__ == '__main__':
    here = os.path.dirname(os.path.abspath(__file__))
    if 'OPENSHIFT_APP_NAME' in os.environ:  # are we on OPENSHIFT?
        ip = os.environ['OPENSHIFT_PYTHON_IP']
        port = int(os.environ['OPENSHIFT_PYTHON_PORT'])
        config = os.path.join(here, 'production.ini')
    else:
        ip = '0.0.0.0'  # localhost
        port = 6543
        config = os.path.join(here, 'development.ini')

    app = get_app(config, 'main')  # find 'main' method in __init__.py. That is our wsgi app
    settings = get_appsettings(config, 'main')  # don't really need this but is an example of how to get settings from the '.ini' files

    # Waitress (remember to include the waitress server in "install_requires" in the setup.py)
    from waitress import serve
    print("Starting Waitress.")
    serve(app, host=ip, port=port, threads=50)

    # Cherrypy server (remember to include the cherrypy server in "install_requires" in the setup.py)
    # from cherrypy import wsgiserver
    # print("Starting Cherrypy Server on http://{0}:{1}".format(ip, port))
    # server = wsgiserver.CherryPyWSGIServer((ip, port), app, server_name='Server')
    # server.start()

    # Simple Server
    # from wsgiref.simple_server import make_server
    # print("Starting Simple Server on http://{0}:{1}".format(ip, port))
    # server = make_server(ip, port, app)
    # server.serve_forever()

    # Running 'production.ini' manually. I find this method the least compatible with Openshift since you can't
    # easily start/stop/restart your app with the 'rhc' commands. Maybe somebody can suggest a better way :)
    # Don't forget to set the Host IP in 'production.ini'. Use 8080 for the port for Openshift.
    # You will need to use the 'pre_build' action hook (pkill python) so it stops the existing running instance of the server on OS.
    # You also will have to set up another custom action hook so rhc app-restart, stop works.
    # See Openshift Origin User's Guide (I have not tried this yet)

    # Method #1
    # print('Running pserve production.ini')
    # os.system("pserve production.ini &")

    # Method #2
    # import subprocess
    # subprocess.Popen(['pserve', 'production.ini &'])