I have a huge problem with OPC DA and OpenOPC. I must read a set of tags from a remote server, but I have no access to that machine in any way; I only know its IP and the OPC server name.
Testing OpenOPC locally, the code below works fine. However, when I change the hostname to the remote one, nothing works and I get error 0x800706BA (the RPC server is unavailable).
import OpenOPC

try:
    opc = OpenOPC.client()
    opc.servers()
    # change 'localhost' to the remote hostname/IP to reproduce the error
    opc.connect('Matrikon.OPC.Simulation.1', 'localhost')
    srvList = opc.list()
    print(srvList)
    tags = opc.read(opc.list('Simulation Items.Random.Int*'), group='myTest')
    for name, value, quality, tagTime in tags:
        print(name, value, quality, tagTime)
except Exception as e:
    print('OPC failed')
    print(str(e))
finally:
    print('END')
Does anyone have any ideas about this?
Since I have no access to the server (it is set up for anonymous logon), I have configured DCOM on my side as far as possible.
Does anyone know a procedure that might lead to a solution?
Thanks!
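A possible workaround I've seen mentioned is the OpenOPC Gateway Service, which tunnels requests over a plain TCP port instead of DCOM. If it could be installed on a Windows machine that can reach the OPC server, something like this should work (a sketch; both hostnames are placeholders, and the gateway listens on port 7766 by default):

import OpenOPC

# Connect through the OpenOPC Gateway Service instead of DCOM.
# 'gateway-host' must be a machine running the gateway service that
# can reach the OPC server; both names here are hypothetical.
opc = OpenOPC.open_client('gateway-host')
opc.connect('Matrikon.OPC.Simulation.1', 'remote-opc-host')
print(opc.read(opc.list('Simulation Items.Random.Int*'), group='myTest'))
opc.close()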
Sorry if title is unclear.
I'm trying to send an email from Python 3.10 using only the standard library. It works locally on my machine with these settings.
import os
import smtplib
import ssl

smtp_server = "smtp.office365.com"
port = 587  # for STARTTLS
sender_email = os.environ.get("EMAIL")
password = os.environ.get("EMAIL_PASSWORD")

# Create a secure SSL context
context = ssl.create_default_context()
server = smtplib.SMTP(smtp_server, port)
try:
    server.starttls(context=context)  # secure the connection
    server.login(sender_email, password)
except Exception as e:
    print(e)
However, when I run this code manually on my company's server, I get an "unable to get local issuer certificate" error.
I've been able to remedy the issue by setting the ssl context to unverified:
context = ssl._create_unverified_context() # was ssl.create_default_context()
This works when running the Python file manually. However, the script needs to run as a cron job, and when crontab runs it with this 'fix' I get a different error:
Authentication unsuccessful, the user credentials were incorrect.
This is baffling, because the same credentials worked with a different SSL context.
I'm using Python's virtualenv to run the script. I don't know anything networking- or certificate-specific about the company's Ubuntu server, but there's clearly something going on; I just don't know what.
Yes, I have looked through the multitude of similar questions, but none of them quite fits this set of circumstances.
Thanks in advance.
You are seeing two separate issues: a TLS/certificate issue, and an authentication issue that most likely has nothing to do with TLS.
There is no need to use unverified TLS. You can get the Mozilla CA trust bundle via the certifi package (I'm not sure whether Microsoft signs its certificates against the browser trust store, but it's worth a try).
>>> import certifi, ssl
>>> context = ssl.create_default_context()
>>> context.load_verify_locations(certifi.where())
That being said, a common mistake is assuming that cron jobs load the same shell profile files that a login shell does. I would add error checking/debug logging to make sure EMAIL and EMAIL_PASSWORD are set according to your expectations, since the error message is most likely coming from the server.login line, which has nothing to do with TLS.
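For example, a minimal guard at the top of the script would surface a missing cron environment immediately (a sketch; the variable names mirror the question):

import os
import sys

# Fail fast with a clear message if cron didn't inherit the variables.
sender_email = os.environ.get("EMAIL")
password = os.environ.get("EMAIL_PASSWORD")
if not sender_email or not password:
    sys.exit("EMAIL and/or EMAIL_PASSWORD are not set in this environment")

If they turn out to be missing, you can set them directly in the crontab entry or source a profile file before invoking Python.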
When I run this script I receive an "SSH connection failed: Host key is not trusted" error. Even after connecting to this host to accept the key, I keep receiving the same error.
import asyncio, asyncssh, sys

async def run_client():
    async with asyncssh.connect('172.18.17.9', username="user", password="admin", port=9321) as conn:
        result = await conn.run('display version', check=True)
        print(result.stdout, end='')

try:
    asyncio.get_event_loop().run_until_complete(run_client())
except (OSError, asyncssh.Error) as exc:
    sys.exit('SSH connection failed: ' + str(exc))
Try adding the known_hosts=None parameter to the connect method.
asyncssh.connect('172.18.17.9', username="user", password="admin", port=9321, known_hosts=None)
From asyncssh documentation here:
https://asyncssh.readthedocs.io/en/latest/api.html#asyncssh.SSHClientConnectionOptions
known_hosts (see Specifying known hosts) – (optional) The list of keys
which will be used to validate the server host key presented during
the SSH handshake. If this is not specified, the keys will be looked
up in the file .ssh/known_hosts. If this is explicitly set to None,
server host key validation will be disabled.
For me, it runs smoothly after adding known_hosts=None.
Here's my example from trying the code sample in the Ortega book.
I tried it with the hostname/IP, username, and password of a local CentOS machine; the test command is ifconfig:
import asyncio
import asyncssh
import getpass

async def execute_command(hostname, command, username, password):
    async with asyncssh.connect(hostname, username=username, password=password, known_hosts=None) as connection:
        result = await connection.run(command)
        return result.stdout

# Example usage (the host and username are placeholders):
output = asyncio.get_event_loop().run_until_complete(
    execute_command('192.168.1.100', 'ifconfig', 'root', getpass.getpass()))
print(output)
You should always validate the server's public key.
Depending on your use case you can:
Get the server's host keys, bundle them with your app, and explicitly pass them to asyncssh (e.g., as a string with a path to your known_hosts file); see the sketch after this list.
Manually connect to the server on the command line. SSH will then ask you whether you want to trust the server. The keys are then added to ~/.ssh/known_hosts, and AsyncSSH will use them.
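A minimal sketch of the first option, reusing the connection details from the question (the known_hosts file path is hypothetical):

import asyncio
import asyncssh
import sys

async def run_client():
    # Validate the server against host keys bundled with the app;
    # 'bundled_known_hosts' is a placeholder path to such a file.
    async with asyncssh.connect('172.18.17.9', username='user', password='admin',
                                port=9321, known_hosts='bundled_known_hosts') as conn:
        result = await conn.run('display version', check=True)
        print(result.stdout, end='')

try:
    asyncio.get_event_loop().run_until_complete(run_client())
except (OSError, asyncssh.Error) as exc:
    sys.exit('SSH connection failed: ' + str(exc))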
This is related but maybe not totally your salvation:
https://github.com/ronf/asyncssh/issues/132
The real question you should be asking yourself as you write this question (help us help you) is: where exactly is it failing? By analogy, known hosts are like environment variables that don't show up when you need them to.
EDIT: Questions that immediately come to mind: the host key is found, but it is not trusted? How?
EDIT2: Not trying to be harsh, but I think this is a helpful corrective. You've got a software library that can find the key, yet the key is not trusted. You're going to come across a lot of scenarios with SSH / shell / environment variable issues where things you take for granted aren't known. Think clearly to help yourself and to ask the question better.
I am trying to use the Docker Python API to log in to Docker Cloud:
https://docker-py.readthedocs.io/en/stable/client.html#creating-a-client
What is the URL? What is the Port?
I have tried to get it to work with cloud.docker.com, but I am fine with any registry server, so long as it is free to use and I can use it to upload Docker images from one computer and run them on another.
I already have everything running using my own locally hosted registry, but I can't seem to figure out how to connect to an existing one. It's kind of ridiculous that hosting my own registry is easier than using an existing registry server.
My code looks like this, but I am unsure what the args.* parameters should be:
client = docker.DockerClient(base_url=args.docker_registry)
client.login(username=args.docker_user, password=args.docker_password)
I’m not sure what the base_url needs to be so that I can log in, and the error messages are not helpful at all.
Can you give me an example that works?
The base_url parameter is the URL of the Docker server, not the Docker Registry.
Try something like:
import docker
from docker.errors import APIError, TLSParameterError

try:
    client = docker.from_env()
    client.login(username=args.docker_user, password=args.docker_password,
                 registry=args.docker_registry)
except (APIError, TLSParameterError) as err:
    # handle a failed login here
    ...
Here's how I have logged in to Docker using Python:
import docker

client = docker.from_env()
client.login(username='USERNAME', password='PASSWORD', email='EMAIL',
             registry='https://index.docker.io/v1/')
and here's what it returns:
{'IdentityToken': '', 'Status': 'Login Succeeded'}
So, that means it has been logged in successfully.
I still haven't figured out what the registry for cloud.docker.com is called, but I got things working by switching to quay.io as my registry server, which accepts the intuitive registry='quay.io'.
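Putting that together, a minimal sketch of a quay.io login (the credentials are placeholders):

import docker

client = docker.from_env()
# quay.io can be passed directly as the registry name
result = client.login(username='USERNAME', password='PASSWORD', registry='quay.io')
print(result)  # expect a status dict similar to the Docker Hub example above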
I'm using Google's health check to send requests to my Flask client to make sure my service is alive.
The same route in the Flask client sends requests to two more Flask clients to make sure the other two are also alive.
For some reason the request sometimes fails even though the service is still running.
I've tried to figure out why, but there is nothing in my services' logs indicating that anything happened, and in most cases it works fine.
This is my code:
# GET /health_check//
def get(self):
    try:
        for service in INTERNAL_SERVICES_HEALTH_CHECKS:
            client = getattr(all_clients, service + '_client')
            response = client.get('g_health_check')
    except Exception as e:
        sentry_client.captureMessage('health check failed for ' + env +
                                     ' environment. error log: ' + repr(e))
        return output_json({"I'm Not fine!": False}, requests.codes.server_error)
    return output_json({"I'm fine!": True}, requests.codes.ok)
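One variant I'm considering is moving the try/except inside the loop, so that Sentry at least records which downstream service failed (a sketch of the same handler; all names are as in the code above):

def get(self):
    for service in INTERNAL_SERVICES_HEALTH_CHECKS:
        client = getattr(all_clients, service + '_client')
        try:
            client.get('g_health_check')
        except Exception as e:
            # Record which service's health check broke, not just that one did
            sentry_client.captureMessage('health check failed for ' + service +
                                         ' in ' + env + ' environment: ' + repr(e))
            return output_json({"I'm Not fine!": False}, requests.codes.server_error)
    return output_json({"I'm fine!": True}, requests.codes.ok)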
If anyone has any suggestions I will be happy to try and fix it.
I've created some web services using pysimplesoap, following this documentation:
https://code.google.com/p/pysimplesoap/wiki/SoapServer
When I tested it, I called it like this:
from SOAPpy import SOAPProxy
from SOAPpy import Types
namespace = "http://localhost:8008"
url = "http://localhost:8008"
proxy = SOAPProxy(url, namespace)
response = proxy.dummy(times=5, name="test")
print response
It worked for all of my web services, but when I try to call them using a library that requires a WSDL, it returns "Could not connect to host".
To solve my problem, I used the .wsdl() method to generate the correct WSDL and saved it to a file. The WSDL generated by default wasn't correct: it was missing variable types and the correct server address.
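In case it helps someone, this is roughly what that looked like (a sketch; it assumes a SoapDispatcher named dispatcher as in the linked wiki example, and that its .wsdl() method returns the generated XML as a string):

# Dump the auto-generated WSDL to a file so it can be corrected by hand.
wsdl_xml = dispatcher.wsdl()
with open('service.wsdl', 'w') as f:
    f.write(wsdl_xml)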
The server name localhost is only meaningful on your own computer; from outside, other computers won't be able to resolve it.
1) Find out your external IP, e.g. with http://www.whatismyip.com/ or another such service. Note that external IPs can change over time.
2) Plug that IP into http://www.soapclient.com/soaptest.html
If your local service answers requests on that IP as well as on localhost, you're done!
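If you'd rather check the external IP from Python, a one-liner against any of the usual echo services works (ipify is used here as an example):

import urllib.request

# Ask an external echo service which public IP our requests come from
print(urllib.request.urlopen('https://api.ipify.org').read().decode())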