pyxmpp2 connect to openfire cannot resolve NXDOMAIN - python

I installed pyxmpp2 (https://github.com/Jajcus/pyxmpp2) on my Ubuntu machine. I also installed Openfire 3.8.1 on it. I would like to use pyxmpp2 to connect to my Openfire server from within the same machine.
Under Server -> Server Manager -> Server Information in my Openfire control panel, the Server Name shown under Server Properties is mymachine and the Host Name shown in the Environment section is MyMachine.
I tried the following code:
import logging
from pyxmpp2.jid import JID
from pyxmpp2.client import Client
logging.basicConfig()
client = Client(JID("admin@mymachine"), [])
client.connect()
and got the following message:
WARNING:pyxmpp2.resolver:Could not resolve '_xmpp-client._tcp.mymachine': NXDOMAIN
Did I miss configuring something?

It looks like there are no DNS SRV records for your domain, so pyxmpp2 is unable to resolve them. Have a look at http://wiki.xmpp.org/web/SRV_Records for how to create them.
Basically a DNS SRV record has the form
_service._proto.name TTL class SRV priority weight port target
which could look like this example
_xmpp-client._tcp.example.net. 86400 IN SRV 5 0 5222 example.net.
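You can check whether the record is actually resolvable before retrying (a quick sketch, assuming the dnspython package; use your own domain in place of example.net):
import dns.resolver  # pip install dnspython

# Query the client SRV record for the XMPP domain; raises NXDOMAIN if absent.
# (In dnspython 2.x, use dns.resolver.resolve instead of query.)
for rr in dns.resolver.query('_xmpp-client._tcp.example.net', 'SRV'):
    print(rr.priority, rr.weight, rr.port, rr.target)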
Maybe pyxmpp2 also provides a way to directly specify the host used for the XMPP service. This would avoid the DNS SRV lookup.

It may be using IPv6; you can force IPv4 with u"ipv4": True and specify the server with u"server": "chat.facebook.com":
from pyxmpp2.jid import JID
from pyxmpp2.client import Client
from pyxmpp2.settings import XMPPSettings

handler = MyHandler(JID(target_jid), message)  # handler class from the example
settings = XMPPSettings({
    u"ipv4": True,
    u"server": "chat.facebook.com",
    u"password": your_password,
    u"starttls": True,
    u"tls_verify_peer": False,
})
client = Client(JID(your_jid), [handler], settings)
client.connect()
client.run()
The full code is in the pyxmpp2 examples folder, in send_message_client.py.


paramiko.ssh_exception.NoValidConnectionsError: [Errno None] Unable to connect to port 22 using AWS Elastic Beanstalk

This is my first post so hopefully it is appropriate and not redundant.
I have an application deployed on AWS Elastic Beanstalk using a Flask/Dash API. Part of the API needs to connect to my Raspberry Pi remotely over SSH and parse out a file.
This code snippet works flawlessly on my local machine:
I can easily PuTTY/SSH into my RPi on port 22.
On my 1st router I have port 22 opened for TCP/UDP.
On my 2nd router I have a NAT forwarding virtual server rule that directs port 22 to my RPi's static IP address.
import paramiko
import json

client = paramiko.client.SSHClient()
hostname = 'example'
port = 22
username = 'pi'
password = 'masked'
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.load_system_host_keys()
print('loaded client')
client.connect(hostname, port, username, password)

# Fetch the file from the Pi over SFTP
sftp_client = client.open_sftp()
localFilePath = './some_file.json'
sftp_client.get('/home/pi/some_file.json', localFilePath)
sftp_client.close()
I get the error paramiko.ssh_exception.NoValidConnectionsError: [Errno None] Unable to connect to port 22 on ...
I am assuming this is some sort of network access issue on the AWS side, or possibly my router (a quick check is sketched below). I can also easily connect to my RPi from outside my own network. I tried adding some inbound/outbound rules on my EC2 load balancer, but even opening it up completely did not resolve the issue. I have raked through the web and cannot find many answers, so I am hoping someone has suggestions that could also benefit others.
Thanks!
[Screenshot: EC2 instance security group rules]
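One quick way to narrow this down is to test bare TCP reachability to port 22 from the Beanstalk instance; if this also fails, the problem is network/firewall rather than paramiko. A minimal sketch, reusing the placeholder hostname from the snippet above:
import socket

def can_reach(host, port=22, timeout=5):
    # Try a plain TCP connection; this mirrors what paramiko does before SSH starts.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(can_reach('example'))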
Xfinity has a firewall enabled that sits outside the router settings. I had to log into the xFi mobile app, go to More, then My Services, and disable the firewall. There is no whitelisting option, which is terrible.

How to connect to an MQ queue with Python and a bindings file?

I'm trying to connect to a remote MQ queue and I only have a .bindings file to do it. I'm trying the Python library pymqi, but I can't connect using bindings mode. Does someone know what I should do, or where I should place the file so the library can use it? Is there some other Python solution for connecting to the MQ queue?
This is a glimpse of my .bindings file:
JMSC/ClassName=com.ibm.mq.jms.MQQueueConnectionFactory
JMSC/FactoryName=com.ibm.mq.jms.MQQueueConnectionFactoryFactory
JMSC/RefAddr/0/Type=VER
JMSC/RefAddr/0/Encoding=String
JMSC/RefAddr/0/Content=7
JMSC/RefAddr/1/Type=TRAN
JMSC/RefAddr/1/Encoding=String
JMSC/RefAddr/1/Content=1
JMSC/RefAddr/2/Type=QMGR
JMSC/RefAddr/2/Encoding=String
JMSC/RefAddr/2/Content=MQFEND00
JMSC/RefAddr/3/Type=HOST
JMSC/RefAddr/3/Encoding=String
JMSC/RefAddr/3/Content=somehost
JMSC/RefAddr/4/Type=PORT
JMSC/RefAddr/4/Encoding=String
JMSC/RefAddr/4/Content=1414
JMSC/RefAddr/5/Type=CHAN
JMSC/RefAddr/5/Encoding=String
JMSC/RefAddr/5/Content=PORTALS.MQFEND00
The file has around 100 parameters; those are the first six.
Thanks
Update 22/05/2019:
I will add more information about what I have tried.
I tried to connect in bindings mode as shown in the pymqi documentation:
qmgr = pymqi.connect('MQFEND00')
And I get this error:
MQI Error. Comp: 2, Reason 2058: FAILED: MQRC_Q_MGR_NAME_ERROR
I'm not sure the queue manager name is right. Does anyone know how I can get the queue manager name from the .bindings file?
I've also tried to connect with host, channel and port:
qmgr = pymqi.connect(queue_manager, channel, conn_info)
And I get a not-authorized error. I think that is because this second form connects in client mode, which would need a user and password that I don't have.
If you want to use bindings mode, you should set up pymqi with the server (bindings) parameters. You cannot use bindings and client mode simultaneously:
#From pymqi folder
cd ./code
./setup.py build server
I am not sure that you can use a .bindings file with pymqi without parsing it yourself.
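If you do parse it yourself, a minimal sketch could look like this, assuming the JMSC/RefAddr/<n>/Type and /Content layout shown in the question (real files may use a different factory prefix than JMSC):
# Pull client-connection details out of a JMS .bindings file.
def parse_bindings(path, prefix='JMSC'):
    types, contents = {}, {}
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line.startswith(prefix + '/RefAddr/') or '=' not in line:
                continue
            key, _, value = line.partition('=')
            _, _, index, field = key.split('/', 3)  # e.g. JMSC/RefAddr/2/Type
            if field == 'Type':
                types[index] = value
            elif field == 'Content':
                contents[index] = value
    return {types[i]: contents[i] for i in types if i in contents}

params = parse_bindings('.bindings')
queue_manager = params['QMGR']                           # 'MQFEND00'
channel = params['CHAN']                                 # 'PORTALS.MQFEND00'
conn_info = '%s(%s)' % (params['HOST'], params['PORT'])  # 'somehost(1414)'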
Probably I'm very late to this discussion, but:
import pymqi
queue_manager = 'MQFEND00'
channel = 'PORTALS.MQFEND00'
host = 'somehost'
port = '1414'
conn_info = '%s(%s)' % (host, port)
qmgr = pymqi.connect(queue_manager, channel, conn_info)
# other operations, see https://dsuch.github.io/pymqi/examples.html for more.
qmgr.disconnect()
You must have an MQ client installed on the same machine where you run pymqi.
Are you using conn_info like in the code snippet?

Host command and ifconfig giving different IPs

I am using a server (server_name.corp.com) inside a corporate network. On the server I am running a Flask app listening on 0.0.0.0:5000.
The servers are not exposed to the outside world, but are accessible via VPN.
When I run host server_name.corp.com on the box I get some ip1 (10.*.*.*).
When I run ifconfig on the box it gives me ip2 (10.*.*.*).
If I run ping server_name.corp.com on the same box I get ip2.
I can SSH into the server with ip1 but not ip2.
I am able to access the Flask server at ip1:5000 but not at ip2:5000.
I am not into networking, so I am fully confused about why there are two different IPs and why I can access ip1:5000 from the browser but not ip2:5000.
Also, what is the equivalent of the host command in Python, i.e. how do I get ip1 from Python? I am using socket.gethostbyname('server_name.corp.com'), which gives me ip2.
As far as I can tell, you have some kind of routing configured that allows external connections to the server by hostname (or ip1), but does not allow connections to ip2. There is nothing unusual in this; the system administrator can probably explain why it is set up this way. Assuming there are no asymmetric network routes, the following function can help determine the address the server uses for outgoing connections:
import socket

def get_ip():
    try:
        # Connecting a UDP socket sends no packets, but makes the OS choose
        # the outgoing interface; its address is then read back.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.connect(("8.8.8.8", 80))
        local_address = sock.getsockname()[0]
        sock.close()
    except OSError:
        # Fall back to resolving our own hostname.
        local_address = socket.gethostbyname(socket.gethostname())
    return local_address
I am not quite clear on the network status from your statements; I can only say that if you want to get ip1 from Python, you could use the standard library's subprocess module, which is commonly used to run OS commands (see subprocess.Popen).
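For instance, a small sketch along those lines, assuming the host binary is available and prints lines of the form "name has address ip" as in the question:
import subprocess

def host_lookup(name):
    # Run `host <name>` and return the first address it reports.
    out = subprocess.run(['host', name], capture_output=True, text=True,
                         check=True).stdout
    for line in out.splitlines():
        if 'has address' in line:
            return line.rsplit(' ', 1)[-1]
    return None

print(host_lookup('server_name.corp.com'))  # should print ip1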

Accessing device on local network through server hosted webhook

I have a Python script that acts as a webhook. A part of it is as follows:
import json
import os
import urllib
import socket
import _thread

from flask import Flask
from flask import request
from flask import make_response

app = Flask(__name__)
ip = ('192.168.1.75', 9050)

@app.route('/webhook', methods=['GET', 'POST'])
def webhook():
    # Note the trailing comma: start_new_thread expects an argument tuple.
    _thread.start_new_thread(sendDataToDevice, (ip,))
    req = request.get_json(silent=True, force=True)
    print("Request:")
    print(json.dumps(req, indent=4))
    res = makeWebHookResult(req)
    res = json.dumps(res, indent=4)
    r = make_response(res)
    r.headers['Content-Type'] = 'application/json'
    return r

if __name__ == '__main__':
    app.run(port=8080, host='localhost')
The function of the script is to send some data to a device connected to the local network.
It works flawlessly when I open my web browser and enter the following in the URL bar:
http://localhost:8080/webhook
I want to host the script on a server, e.g. Heroku. How can I access the local device in that case?
Note: I know I can run the script on my local machine and expose it to the internet using ngrok, but I want it to stay accessible even when my computer is switched off. I also want a fixed link, and the links given by ngrok change on every run.
I've faced a similar issue before with IoT. Unfortunately, there is no simple way to make a device visible online. Here's a simple solution I've used; it might not be the best, but it works.
DDNS + Port Forwarding + Static IP
If you have access to your local WiFi router, you can set up something called DDNS (Dynamic Domain Name System). Your router will then register with a DDNS service provider like No-IP (www.noip.com) and become visible on the internet. You can give it a custom URL like susmit-home.noip.com.
However, susmit-home.noip.com will now point only to your WiFi router, not to a device inside your WiFi network. So if you want to access the local device_ip and device_port, such as 192.168.1.75 and 9050, you can set up port forwarding on your router for that local IP-port combination. The setup usually looks like this:
Local IP: device_ip (e.g. 192.168.1.75)
Local Port: device_port (e.g. 9050)
Outbound Port: any_port (e.g. 9050)
Make sure that device_ip is a static IP on your WiFi router so that it doesn't change.
Finally, in your code you can just replace the line ip = ('192.168.1.75', 9050) with ip = ('susmit-home.noip.com', 9050).
Other solutions:
A slightly more complicated solution is setting up a VPN, so that your local network and your remote server (e.g. Heroku) can all reach each other as if they were on the same local network.
If your device is a computer or a Raspberry Pi, you can use SSH remote port forwarding to give the remote server access to your local device, as in the sketch below.
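For example, assuming the Pi can SSH out to the server (the user and host below are placeholders), a reverse tunnel opened from the Pi makes its local port 22 reachable on the server as localhost:2222:
ssh -N -R 2222:localhost:22 user@your-server.example.com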

Errno 10060 A connection attempt failed

EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_PORT = 465
EMAIL_USE_SSL = True  # port 465 is Gmail's implicit-SSL port
EMAIL_HOST_USER = 'yogi'
EMAIL_HOST_PASSWORD = '###'
DEFAULT_FROM_EMAIL = 'yogi@gmail.com'  # the setting name Django reads is DEFAULT_FROM_EMAIL
Above are the settings for Django's core mail module. I am using its send_mail to send mail to users. When I run the program against the Gmail SMTP server, it throws the following error:
'Errno 10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.'
I am doing this at my company, which uses a proxy. I have given the proxy credentials in the .condarc settings file, but I still get the connection timeout error. Do I need to set the proxy settings somewhere else? Please let me know where I am going wrong.
As far as I know, Django does not pick up SMTP proxy settings from Anaconda configuration files. You can work around this by building the connection manually.
Notice that send_mail has an optional connection parameter. You get a connection by calling mail.get_connection; you then need to route smtplib through a SOCKS/proxy wrapper such as SocksiPy (PySocks).
See "Python smtplib proxy support" and "Python send email behind a proxy server" for further details.
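A rough sketch of that approach, assuming the PySocks package (the proxy host and port below are placeholders for your company's proxy):
import smtplib
import socks  # pip install PySocks

# Route every smtplib connection through the HTTP proxy.
socks.set_default_proxy(socks.HTTP, 'proxy.mycompany.com', 3128)
socks.wrapmodule(smtplib)

from django.core import mail

connection = mail.get_connection()  # built from the EMAIL_* settings
mail.send_mail('Subject', 'Body', 'yogi@gmail.com', ['to@example.com'],
               connection=connection)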
