Converting Streamlit App to exe file with nativefier certificate issues - python

I have a Streamlit app up and running on my PC and need to convert it to an exe file to share amongst my team. I'm currently following this forum tutorial on how to do it: https://discuss.streamlit.io/t/streamlit-deployment-as-an-executable-file-exe-for-windows-macos-and-android/6812
It seems to work fine for everyone there, even when the app is just running on localhost. However, when I run the command
nativefier --name AppName.exe http://localhost:8501/ --platform windows --ignore-certificate --insecure
I get the following error: GotError [RequestError]: unable to get local issuer certificate at ClientRequest. I imagine this is some sort of security issue with my work PC, and I'm trying to hunt down any workarounds that may be out there. As you can see, I've already tried the --ignore-certificate and --insecure options, but I still get this error. Any ideas? Thanks so much for any advice.
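One thing worth noting: this error comes from Node.js itself while nativefier downloads resources, so --ignore-certificate and --insecure (which only affect the packaged app) don't help. A hedged sketch of two common Node-level workarounds, assuming a corporate proxy is re-signing TLS traffic and its root CA has been exported to a local file (the path is a placeholder):

```shell
# Option A: tell Node to trust the corporate root CA in addition to its
# built-in bundle (path is a placeholder for your exported CA file)
export NODE_EXTRA_CA_CERTS="$HOME/corp-root-ca.pem"
nativefier --name AppName http://localhost:8501/ --platform windows

# Option B (last resort - disables ALL TLS verification in Node):
export NODE_TLS_REJECT_UNAUTHORIZED=0
nativefier --name AppName http://localhost:8501/ --platform windows
```

On a Windows cmd prompt the equivalent would be `set NODE_EXTRA_CA_CERTS=...` (or `$env:NODE_EXTRA_CA_CERTS=...` in PowerShell). Option A is the safer choice; Option B turns off certificate checking for the whole nativefier process.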

Related

Running python code in a docker image in a server where I dont have root access

So, I have access via ssh to a server with some GPUs where I can run some Python code. I need to do that using a Docker container, but whenever I try to do anything with Docker on the server I get "permission denied", since I don't have root access (and I am not in the sudoers list). What am I missing here?
Btw, I am totally new to Docker (and quite new to Linux itself), so it may be that I'm missing something fundamental.
I solved my problem. Turns out I simply had to ask the server administrator to add me to a group and everything worked.
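For reference, a sketch of what the administrator most likely did: adding the user to the docker group, which grants access to the Docker daemon socket without sudo ("alice" is a placeholder username):

```shell
# run by the administrator (requires root/sudo)
sudo usermod -aG docker alice

# alice must log out and back in (or run `newgrp docker`) for the new
# group membership to take effect, then this should work without sudo:
docker run hello-world
```

Note that membership in the docker group is effectively root-equivalent on that host, which is why administrators grant it explicitly rather than by default.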

Dash App not working when deployed on Amazon EC2 instance

I am new to Linux/AWS in general and I am trying to deploy a Dash web app onto an EC2 instance. The web app is written in Python and uses an AWS database. I created an EC2 instance, set the security group to allow all traffic, and used the default VPC and internet gateway. I successfully installed all the app dependencies, but whenever I run the app.py file, the public DNS doesn't load the web page. I have tried pinging the public IP and that works. I really have a limited knowledge base here and have tried different options but can't seem to get it working. Please help :)
Public IP-https://ec2-3-8-100-74.eu-west-2.compute.amazonaws.com/
I've been smacking my head on this for a couple days and finally got it. I know it's been a while but hopefully this helps someone else. Had a hard time finding answers elsewhere. Very similar to you, I had the ec2 instance set up, the security groups and vpc set up (those steps aren't too difficult and are well-documented). I had some successful pings, but was getting a "connection refused" error through the browser.
The "app.run_server()" parameters were the missing piece for me:
if __name__ == '__main__':
    app.run_server(host='0.0.0.0', port=80)
At that point calling the .py app gave me a 'permission denied,' which I was able to get around by running as sudo ("sudo python3 my_app.py") -- and by sudo pip install-ing necessary packages. (All through ssh, fwiw).
After finally running successfully I was given an IP from the dash app corresponding to my private IPv4 on EC2, and at that point could set my browser to the PUBLIC IPv4 and get to the app. Huzzah.
Playing around with it a little, it looks like as long as you have:
host='0.0.0.0'
the app is reachable from outside. Without it, Dash binds only to the loopback interface and runs locally (you'll see the IP as 127.0.0.1). Then it's a matter of making sure whatever port you're using (:80, :443, :8050) is open through firewalls and security groups. Dash defaults to :8050 for me, and that port is fine as long as it's allowed through the security groups.
QUICK UPDATE:
I tried leaving it on port :8050 and also opened :8050 to all IPv4 in my security group. That let me run everything successfully without using "sudo python3":
if __name__ == '__main__':
    app.run_server(host='0.0.0.0', port=8050)
run with "python3 my_app.py" over ssh.
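The security-group step above can be sketched with the AWS CLI, assuming it is installed and configured and using a placeholder security-group ID:

```shell
# open Dash's default port 8050 to all IPv4 in the instance's security group
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 8050 \
    --cidr 0.0.0.0/0

# then on the instance (ports >= 1024 need no sudo):
python3 my_app.py
```

Using :8050 instead of :80 is what removes the need for sudo: binding ports below 1024 requires root on Linux, which is why the port-80 version hit "permission denied".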

Running an opencv based python script on a remote server with ssh forwarding from my macbook gives me an error

I am trying to run a python script on a remote server, which includes displaying images. The image does not get displayed and I get an error Gtk-WARNING **: cannot open display:
I have checked posts that suggest editing the flags in sshd_config and also setting the DISPLAY variable manually, but none of that seems to be working for me. My sshd_config already has:
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost no
XAuthLocation /usr/X11/bin/xauth
Running xclock or xeyes also gives me errors.
Edit: I used ssh -X and ssh -Y to ssh into the server, neither worked
Solution: Restart after installing XQuartz
After looking through multiple posts and trying to make it work, I realised that after installing XQuartz the user is required to restart the machine. The restart allows the correct environment variables (like DISPLAY) to be set. It works for me now after restarting.
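A quick sanity-check sequence after the restart (hostnames and usernames are placeholders):

```shell
# on the Mac: XQuartz should have set DISPLAY after the reboot,
# e.g. /private/tmp/com.apple.launchd.xxxx/org.xquartz:0
echo "$DISPLAY"

# connect with X11 forwarding (-Y if -X gives "untrusted forwarding" errors)
ssh -X user@remote-server

# on the server: sshd should now have set DISPLAY itself, e.g. localhost:10.0
echo "$DISPLAY"
xclock    # a clock window should appear on the Mac if forwarding works
```

If DISPLAY is empty on the server, the forwarding channel was never established, and setting DISPLAY manually will not fix it; it needs to come from sshd.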
Alternative
However, in case you face a similar problem that does not stem from the restart issue, I found an alternative way, as suggested in the following link:
https://uisapp2.iu.edu/confluence-prd/pages/viewpage.action?pageId=280461906

easy_install/curl fails because of "SSL3_GET_SERVER_CERTIFICATE:certificate verify failed"

I recently installed a new wildcard SHA256 certificate from Comodo. It's a "Premium SSL" cert.
I am now unable to fetch files from the server with curl, wget, or anything else that uses the common SSL libraries. I usually get the following message back:
SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
This is particularly an issue when I run easy_install and use our custom cheeseshop as the index, which is using the new cert.
I've tried running sudo update-ca-certificates but that didn't resolve the issue.
My Apache config has the following:
SSLEngine on
SSLCertificateFile /etc/ssl/localcerts/domain.net.crt
SSLCertificateKeyFile /etc/ssl/localcerts/domain.net.key
SSLCertificateChainFile /etc/ssl/localcerts/domain.net.ca-bundle
When I view the site in Chrome or Firefox, I get no errors. I've used online SSL Analysers as well which seem to pass fine.
If I pass the ca-bundle file given to me by Comodo directly to curl, it works, but otherwise it doesn't.
My understanding is that this is because it's not in /etc/ssl/certs/cacerts.pem. I tried adding the bundled certs there as well, but it didn't work.
What's the best way to resolve this? We're using easy_install with Chef when deploying, so I'd like to avoid having to point to the ca bundle if at all possible.
Thanks.
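On Debian/Ubuntu, the usual system-wide fix is to install the intermediate bundle where update-ca-certificates will pick it up; a sketch, reusing the bundle path from the Apache config above (the target filename is arbitrary but must end in .crt):

```shell
# update-ca-certificates only processes *.crt files in this directory
sudo cp /etc/ssl/localcerts/domain.net.ca-bundle \
        /usr/local/share/ca-certificates/comodo-bundle.crt

# rebuild /etc/ssl/certs/ca-certificates.crt; should report certs added
sudo update-ca-certificates

# verify: curl should now trust the chain without an explicit --cacert
curl -I https://your-cheeseshop.example.com/
```

If the bundle contains several concatenated certificates and this doesn't take, splitting it into one .crt file per certificate before running update-ca-certificates may be needed. The cheeseshop URL above is a placeholder.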

Openstack. "No valid host was found" for any image other than cirrOS

I'm getting the following error on my OpenStack (DevStack) every time I try to launch any image other than cirrOS. Searching the internet suggested:
OpenStack cannot allocate RAM or CPU resources.
That's not it, because I have plenty of RAM, disk space and CPU available.
Setting scheduler_default_filters=AllHostsFilter in nova.conf.
Tried it, without success.
This happens with any image, in any format, other than cirrOS.
Update: It is now clear that there is no direct answer to this question. Let's hope the OpenStack folks will provide more specific information in this error message.
Make sure the flavour you select is "small" or larger. cirrOS fits in the default "tiny" flavour, but most other images do not unless you change it.
For me, I got this same error because I mistakenly added an ubuntu image and set the metadata "hypervisor" tag to be "KVM" and not "QEMU". My host only had QEMU capability, of course. When I went to launch it, it gave that "No Valid Host was found". I'd say make sure the tags on the image aren't preventing the host from thinking "I can't run this". Simply changing the image tag back to QEMU fixed it for me.
Check that the core services are running with "netstat -ant | grep LISTEN". On the controller node it should show at least the listening ports 8778 (placement API), 8774 (compute service), 9292 (image service), 9696 (networking), 5000 (identity service), 5672 (RabbitMQ server), 11211 (memcached server) and 35357 (identity admin service), assuming you haven't modified the default config. If you installed Ocata following the official guide line by line, you must start the placement-api service manually.
On the compute node, run "virt-host-validate" to check whether your host supports hardware virtualization. If it fails, edit /etc/nova/nova.conf and set virt_type=qemu.
Ensure your host has enough CPU, memory and disk resources.
If all of these steps are OK, enable debug logging by setting debug=true in /etc/nova/nova.conf; you will find more information in the /var/log/nova/ directory.
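The checklist above, condensed into commands (ports assume a default Ocata layout; log filenames may differ by distribution):

```shell
# controller: confirm the core services are listening
netstat -lnt | egrep ':(5000|5672|8774|8778|9292|9696|11211|35357) '

# compute node: check hardware virtualization support; on FAIL,
# set virt_type = qemu in /etc/nova/nova.conf
virt-host-validate

# after setting debug = true in /etc/nova/nova.conf and restarting nova,
# the scheduler log usually names the filter that rejected every host
grep -i 'no valid host' /var/log/nova/nova-scheduler.log
```

The scheduler log is typically the fastest route to the real cause, since "No valid host was found" is only the summary of whichever filter (RAM, disk, image properties, hypervisor type) eliminated the hosts.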
I don't know WHY but after a while I can launch Ubuntu
saucy-server-cloudimg-i386-disk1.img — Ubuntu 13.10 x32
but can not
saucy-server-cloudimg-amd64-disk1.img — Ubuntu 13.10 x64
and vice versa: I can launch
precise-server-cloudimg-amd64-disk1.img — Ubuntu 12.04 x64
but cannot launch
precise-server-cloudimg-i386-disk1.img — Ubuntu 12.04 x32
The error can have many causes. Since you said it works with cirrOS, try this:
Run the command "glance index";
you will get the images you have in glance.
Now do a "glance show <your-glance-id>"
and compare the output between the cirrOS image and the rest.
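On newer OpenStack releases the legacy glance CLI above has been replaced by the unified client; the equivalent comparison would be something like this (image names are placeholders):

```shell
# list images and their IDs
openstack image list

# dump the properties of each image and diff them; pay attention to
# disk_format, min_disk, min_ram and any custom properties
openstack image show cirros-0.5.2 > /tmp/cirros.txt
openstack image show ubuntu-20.04 > /tmp/ubuntu.txt
diff /tmp/cirros.txt /tmp/ubuntu.txt
```

A mismatch between an image's min_ram/min_disk properties and the chosen flavour is a common way to trigger "No valid host was found" for one image while cirrOS still boots.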