I'm trying to install JupyterHub on a Linux server.
I have sudo rights, but I'm not root.
I have already configured JupyterHub and its config, so I can run it in single-user mode (from different folders too).
But I get this error when trying to start it in multi-user mode as described here:
official documentation link.
I faced the same problem before with configurable-http-proxy and jupyterhub.sqlite - the problem was that the multi-user script tries to save these files in a system directory (/lib/systemd/system/jupyterhub.service or /etc/systemd/system/jupyterhub.service).
I changed these parameters in jupyter_config.py:
## url for the database. e.g. `sqlite:///jupyterhub.sqlite`
c.JupyterHub.db_url = 'sqlite:////data/jupyterhub/jupyterhub.sqlite'
## DEPRECATED since version 0.8. Use ConfigurableHTTPProxy.command
#c.JupyterHub.proxy_cmd = []
c.ConfigurableHTTPProxy.command = '/data/anaconda3/envs/fraud/bin/configurable-http-proxy'
So I tried the same way for jupyterhub-proxy.pid:
## File to write PID. Useful for daemonizing JupyterHub.
c.JupyterHub.pid_file = '/data/jupyterhub/jupyterhub-proxy.pid'
But it looks like JupyterHub ignores it and still tries to save the file to a system directory!
I added logging to the _write_pid_file(self) function in jupyterhub/proxy.py:
self.log.info("Writing log: %s", self.pid_file)
self.log.info("Writing log: %s", os.path.abspath(os.curdir))
Output:
[I 2019-12-19 20:23:50.289 JupyterHub proxy:562] Writing proxy pid file: jupyterhub-proxy.pid
[I 2019-12-19 20:23:50.290 JupyterHub proxy:564] Writing log: /
My idea: maybe there is another config parameter I need to change, but I can't find anything relevant.
It worked for me after setting the c.ConfigurableHTTPProxy.pid_file parameter in the JupyterHub configuration file.
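For reference, a minimal sketch of the relevant line in the JupyterHub configuration file (the path reuses the directory from the question and is only an example):
## Write the proxy PID file to a user-writable location; note the setting lives on ConfigurableHTTPProxy, not JupyterHub.
c.ConfigurableHTTPProxy.pid_file = '/data/jupyterhub/jupyterhub-proxy.pid'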
I have created a Gunicorn project, with accesslog and errorlog specified in a config file, and the server is started with only a -c flag to point to this config file.
The problem is that each time I restart the same Gunicorn process (via pkill -F <pidfile, also specified in config>), the files specified in these settings are emptied. I was told this is because Gunicorn opens these files in "write" mode rather than "append", but I haven't found anything about that in the official settings documentation.
How can I fix it? It's important because I tend to forget to manually back up these logs and have had no capacity to automate it so far.
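For context, a minimal sketch of the kind of config file described above, using setting names from the Gunicorn documentation (the paths and module name are placeholders):
# gunicorn.conf.py -- started with: gunicorn -c gunicorn.conf.py myapp:app
bind = "127.0.0.1:8000"                  # placeholder address
pidfile = "/var/run/myapp/gunicorn.pid"  # placeholder path
accesslog = "/var/log/myapp/access.log"  # placeholder path
errorlog = "/var/log/myapp/error.log"    # placeholder path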
This was my mistake, and mostly unrelated to Gunicorn itself: I had a script that pre-created any required file that didn't exist yet, since a missing file could have crashed the app server:
import os

for file in [pidfile, accesslog, errorlog]:
    os.makedirs(os.path.dirname(file), exist_ok=True)
    f = open(file, "w")
File mode "w" always emptied the files. Making a rule that uses "w" for the pidfile only and "a" for the logfiles solved the problem.
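A minimal sketch of the corrected pre-creation loop under that rule (same variable names as in the snippet above):
import os

for file in [pidfile, accesslog, errorlog]:
    os.makedirs(os.path.dirname(file), exist_ok=True)
    # truncate only the pidfile; append mode leaves existing logs intact
    mode = "w" if file == pidfile else "a"
    with open(file, mode):
        pass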
I'm deploying a flask (flask-restplus) REST API to an AWS Elastic Beanstalk instance, and I'm running into a weird failure mode.
One of my API endpoints has a dependency on OpenCV, which requires some dependencies as outlined at: ImportError: libGL.so.1: cannot open shared object file: No such file or directory while importing OCC. Per the answers there, I created an .ebextensions directory and created two files, one to install the libGL packages, which looks like this:
packages:
  yum:
    mesa-libGL: []
    mesa-libGL-devel: []
I saved that file as packages.config, if that matters.
The second file in .ebextensions downloads and installs zlib:
commands:
  00_download_zlib:
    command: |
      wget https://github.com/madler/zlib/archive/v1.2.9.tar.gz
      tar xzvf v1.2.9.tar.gz
      cd zlib-1.2.9
      ./configure
      make
      make install
      ln -fs /usr/local/lib/libz.so.1.2.9 /lib64/libz.so
      ln -fs /usr/local/lib/libz.so.1.2.9 /lib64/libz.so.1
I saved that file as zlib.config.
When I first ran eb deploy, everything worked great. Deployment was successful, my API responded to requests, and the code that depended on OpenCV worked. So far so good.
However, on subsequent deployments, I've gotten the following errors:
2020-11-18 23:47:44 ERROR Instance deployment failed. For details, see 'eb-engine.log'.
2020-11-18 23:47:45 ERROR [Instance: i-XXXXXXXXXXXXX] Command failed on instance. Return code: 1 Output: Engine execution has encountered an error..
2020-11-18 23:47:45 INFO Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
2020-11-18 23:47:45 ERROR Unsuccessful command execution on instance id(s) 'i-XXXXXXXXXXXXX'. Aborting the operation.
2020-11-18 23:47:46 ERROR Failed to deploy application.
I went in and pulled down the logs from the instance, first looking at eb-engine.log. The only error there is:
2020/11/18 23:47:44.131837 [ERROR] An error occurred during execution of command [app-deploy] - [PreBuildEbExtension]. Stop running the command. Error: EbExtension build failed. Please refer to /var/log/cfn-init.log for more details.
However, looking at cfn-init.log just indicates that everything succeeded:
2020-11-18 23:47:34,297 [INFO] -----------------------Starting build-----------------------
2020-11-18 23:47:34,306 [INFO] Running configSets: Infra-EmbeddedPreBuild
2020-11-18 23:47:34,309 [INFO] Running configSet Infra-EmbeddedPreBuild
2020-11-18 23:47:34,313 [INFO] Running config prebuild_0_newapi
2020-11-18 23:47:36,512 [INFO] Running config prebuild_1_newapi
2020-11-18 23:47:44,106 [INFO] Command 00_download_zlib succeeded
2020-11-18 23:47:44,108 [INFO] ConfigSets completed
2020-11-18 23:47:44,108 [INFO] -----------------------Build complete-----------------------
I then tried removing the entire .ebextensions directory and re-deploying, and the deployment succeeded. Then I tried adding back the .ebextensions directory and adding the files one at a time, and discovered that the deployment worked fine when I added packages.config, but failed again when I added zlib.config.
My question boils down to: why is this happening, and is there anything I can do to resolve it? My understanding is that I need both of these files deployed to my instance in case I migrate to a different environment, or AutoScaling migrates my instance, etc.
The only thing I can think of is that the instance doesn't like the fact that I keep re-installing zlib, but the cfn-init-cmd.log indicates that all the commands in zlib.config are succeeding, as does cfn-init.log. So why is eb-engine.log reporting an error? Is it telling me to look in the wrong place for logs that may be relevant? I've looked in every log file and I don't see anything else indicating any issues.
I did find one tangentially-related possible solution relating to Immutable Environment Updates, which looks like it may work but feels like a bit of unnecessary work. At the very least I'd like to understand why I need to make that change and why Elastic Beanstalk isn't playing nicer with my .ebextensions.
Just in case anyone runs across this in the future, I wanted to share the solution. I was never able to determine why the zlib install process was failing after the first deployment on an instance, so I ended up using the Immutable Environment Updates settings I linked in my original question.
Deployments take a bit longer to process, since each deployment now creates an autoscaling group and a new instance, but my deployments just work every time now.
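As a rough sketch, immutable updates can also be enabled from .ebextensions itself; the option below is taken from the Elastic Beanstalk documentation for the aws:elasticbeanstalk:command namespace (verify it against your platform version):
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: Immutable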
I've done a lot of research, and I can't find anything which actually solves my issue.
Since basically no site accepts mitmdump's certificate for HTTPS, I want to ignore those hosts. I can access a specific website like normal with "--ignore-hosts (ip)", but I need to ignore all HTTPS/SSL hosts.
Is there any way I can do this at all?
Thanks a lot!
There is a script file called tls_passthrough.py on the mitmproxy GitHub which ignores hosts that have previously failed a handshake due to the user not trusting the new certificate. However, this does not persist across sessions.
This also means that the first SSL connection from a particular host will always fail. What I suggest you do is write out all the IPs which have previously failed into a text file and ignore all hosts that appear in that file (a rough sketch of this is shown after the run command below).
tls_passthrough.py
To start it, you simply add it with the script argument "-s (tls_passthrough.py path)".
Example:
mitmproxy -s tls_passthrough.py
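As a rough sketch of the text-file idea above, you could build the --ignore-hosts pattern from the previously failed IPs before starting mitmproxy (the file name and one-IP-per-line format are hypothetical):
import re

# failed_hosts.txt is a hypothetical file with one previously failed IP per line
with open("failed_hosts.txt") as f:
    ips = [line.strip() for line in f if line.strip()]

# --ignore-hosts takes a regex, so join the escaped IPs into one pattern
pattern = "|".join(re.escape(ip) for ip in ips)
print("mitmproxy --ignore-hosts '{}'".format(pattern))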
You need a simple addon script to ignore all TLS connections:
import mitmproxy

class IgnoreAllTLS:
    def __init__(self) -> None:
        pass

    def tls_clienthello(self, data: mitmproxy.proxy.layers.tls.ClientHelloData):
        '''
        Ignore all TLS events.
        '''
        # LOGC("tls hello from "+str(data.context.server)+" ,ignore_connection="+str(data.ignore_connection))
        data.ignore_connection = True

addons = [
    IgnoreAllTLS()
]
The latest release (7.0.4 as of writing) does not support the ignore_connection feature yet, so you need to install the main-branch source version:
git clone https://github.com/mitmproxy/mitmproxy.git
cd mitmproxy
python3 -m venv venv
Activate the venv and install mitmproxy into it from the source checkout, then start the proxy:
source /path/to/mitmproxy/venv/bin/activate
pip install -e .
Start mitmproxy with the addon:
mitmproxy -s ignore_all_tls.py
You can ignore all HTTPS/SSL traffic by using a wildcard:
mitmproxy --ignore-hosts '.*'
I had to push an .exe file to Heroku to be able to create invoice PDFs. It works locally without any problems, but on Heroku I get an error:
OSError: [Errno 13] Permission denied
Probably because I am not allowed to execute .exe files. So I somehow need to create a rule that allows this file to be executed.
I pushed wkhtmltopdf.exe to Heroku and I access this file in my method to create a PDF:
import os
import pdfkit

MYDIR = os.path.dirname(__file__)
path_wkthmltopdf = os.path.join(MYDIR + "/static/executables/", "wkhtmltopdf.exe")
config = pdfkit.configuration(wkhtmltopdf=path_wkthmltopdf)
I was not able to find a solution yet.
EDIT:
I tried giving permissions with chmod through the Heroku bash and also adding a Linux executable, but I still get the same error:
~/static/executables $ chmod a+x wkhtmltopdf-linux.exe
~ $ chmod a+x static/executables/wkhtmltopdf-linux.exe
Using sudo gave me:
bash: sudo: command not found
I'm not very familiar with Heroku, but if you can somehow get access to a terminal in your application's environment (for example by ssh-ing to your server), you need to change the permissions of that file so it can be executed. To do that, run the following in that terminal:
sudo chmod a+x /path/to/file/FILENAME
Also, I'm pretty sure your app on Heroku runs on Linux, specifically on Ubuntu, since that's the default (link).
That means there might be difficulties with running Windows executables.
Okay, I managed to fix this with a buildpack. In addition, wkhtmltopdf-pack must be installed and added to requirements.txt.
Then you have to set a config var in Heroku for the wkhtmltopdf executable, which will be generated from the files provided in the buildpack. Do not search for an .exe file.
heroku config:set WKHTMLTOPDF_BINARY=wkhtmltopdf-pack
You can also see all your config vars in the Heroku dashboard under settings, and you can create the var there instead of using the CLI.
Then you have to tell the pdfkit configuration where to find the WKHTMLTOPDF_BINARY:
In my config.py:
import os
import subprocess

WKHTMLTOPDF_CMD = subprocess.Popen(
    ['which', os.environ.get('WKHTMLTOPDF_BINARY', 'wkhtmltopdf')],  # note we default to 'wkhtmltopdf' as the binary name
    stdout=subprocess.PIPE).communicate()[0].strip()
For the pdfkit configuration:
config = pdfkit.configuration(wkhtmltopdf=app.config['WKHTMLTOPDF_CMD'])
Now you should be able to create the PDF, for example:
the_pdf = pdfkit.from_string("something", False, configuration=config)
Credit to this tutorial:
https://artandlogic.com/2016/12/generating-pdfs-wkhtmltopdf-heroku/
I would like to display "Hello, World!" via MPI on different Google Cloud compute instances with the help of the following code:
from mpi4py import MPI
size = MPI.COMM_WORLD.Get_size()
rank = MPI.COMM_WORLD.Get_rank()
name = MPI.Get_processor_name()
print("Hello, World! I am process/rank {} of {} on {}.\n".format(rank, size, name))
The problem is that, even though I can ssh-connect across all of these instances without problems, I get a permission-denied error message when I try to run my script. I use the following command to invoke my script:
mpirun --host localhost,instance_1,instance_2 python hello_world.py
And I get the following error message:
Permission denied (publickey).
--------------------------------------------------------------------------
ORTE was unable to reliably start one or more daemons.
This usually is caused by:
* not finding the required libraries and/or binaries on
one or more nodes. Please check your PATH and LD_LIBRARY_PATH
settings, or configure OMPI with --enable-orterun-prefix-by-default
* lack of authority to execute on one or more specified nodes.
Please verify your allocation and authorities.
* the inability to write startup files into /tmp (--tmpdir/orte_tmpdir_base).
Please check with your sys admin to determine the correct location to use.
* compilation of the orted with dynamic libraries when static are required
(e.g., on Cray). Please check your configure cmd line and consider using
one of the contrib/platform definitions for your system type.
* an inability to create a connection back to mpirun due to a
lack of common network interfaces and/or no route found between
them. Please check network connectivity (including firewalls
and network routing requirements).
--------------------------------------------------------------------------
Additional information:
I installed Open MPI on all of my nodes
I had Google automatically set up all of my ssh-keys by using gcloud to log into each instance from each instance
instance-type: n1-standard-1
instance-OS: Linux Debian (default)
Thank you for your help :-)
New Information:
(thanks @Zulan for pointing out that I should edit my previous post instead of creating a new answer for the new information)
So I tried to do the same with MPICH instead of Open MPI. However, I ran into a similar error message.
Command:
mpirun --host localhost,instance_1,instance_2 python hello_world.py
Error message:
Host key verification failed.
I can ssh-connect between my two instances without problems, and through the gcloud commands the ssh-keys should automatically be set up properly.
So, does somebody have an idea what the problem could be? I also checked the path, the firewall rules, and my ability to write startup files in the temp folder. Can someone please try to recreate this problem? Also, should I raise this question to Google? (I've never done such a thing before, I'm quite unsure :S)
Thanks for helping :)
So I finally found a solution. Wow, this problem was driving me nuts.
It turned out that I needed to generate ssh-keys manually for the script to work. I have no idea why, because the Google services already set up the keys by using gcloud compute ssh, but well, it worked :)
Steps I did:
instance_1 $ ssh-keygen -t rsa
instance_1 $ cd .ssh
instance_1 $ cat id_rsa.pub >> authorized_keys
instance_1 $ gcloud compute copy-files id_rsa.pub
instance_1 $ gcloud compute ssh instance_2
instance_2 $ cd .ssh
instance_2 $ cat id_rsa.pub >> authorized_keys
I will open another topic and ask why I cannot use ssh instance_2, even though gcloud compute ssh instance_2 is working. See: Difference between the commands "gcloud compute ssh" and "ssh"