I am trying to connect to an Oracle advanced queue using Python.
The basic principle of what I am trying to do is this: a queue has been set up that sends a message once every hour, and I want to dequeue this message and analyze it with some code I've written.
I have credentials (host, port, SID, user & password) but I am not sure how to set up the connection and start consuming.
From what I can understand from previous questions on the web, the cx_Oracle module should have the capabilities to do this, but I cannot figure out how to do it in practice.
If you have any links to tutorials that show how this is done, or some sample code of your own, it would be highly appreciated. I have some experience with RabbitMQ queues, but there seem to be a lot fewer examples and tutorials for Oracle AQ, hence my question here.
The cx_Oracle Advanced Queuing docs are here.
An example would be something like:
# a sketch: adjust the connection string, queue name, and payload type for your queue
import cx_Oracle

# set up the connection
connection = cx_Oracle.Connection('user/password@host:port/sid')
# get the dequeue options
options = connection.deqoptions()
# set relevant options:
options.navigation = cx_Oracle.DEQ_FIRST_MSG
options.wait = cx_Oracle.DEQ_WAIT_FOREVER
# message properties plus a payload object matching the queue's payload type
messageProperties = connection.msgproperties()
payload = connection.gettype("QUEUE_PAYLOAD_TYPE").newobject()  # placeholder type name
# continuously dequeue
while connection.deq(NAME_OF_QUEUE, options, messageProperties, payload):
    print(payload)
    connection.commit()
Anthony Tuininga (cx_Oracle author) has a much more complete example on Github.
I have a Python application which analyzes data from multiple sources in real time. Once the data is analyzed, the result of the analysis is stored in a database along with a timestamp of when it was analyzed.
I would like to access the most recent result of this program remotely from another computer.
I was thinking about using Python sockets and having a server script running on the main computer that runs the application, so that I can access the data using a client script on another computer.
Is there a better way of doing this? Or are there any other solutions out there that can address this need?
Your question is very broad.
Most DB servers will provide a method/API to access the data remotely. You can use Python as a client if there is a DBAPI module for your DB that supports remote access over the network. For example if you are using Postgres you could use the psycopg2 module.
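For example, here is a minimal sketch of reading the latest result remotely with psycopg2 (the host, credentials, and the results table/column names are placeholders for whatever your application actually uses):

import psycopg2

# connect to the remote database server (placeholder credentials)
conn = psycopg2.connect(host="analysis-server.example.com", port=5432,
                        dbname="analysis", user="reader", password="secret")
cur = conn.cursor()
# fetch the most recent result by its analysis timestamp
cur.execute("SELECT analyzed_at, result FROM results ORDER BY analyzed_at DESC LIMIT 1")
print(cur.fetchone())
cur.close()
conn.close()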
If you are using a simple DB such as SQLite then you might be able to use an ODBC driver. Some alternatives are here.
Edit
MongoDB provides an API, pymongo.
In the end Redis was the best solution. Considering the original question, the goal was to be able to send data in real time from one computer to another, and solutions such as Redis or RabbitMQ accomplish this.
With Redis, a server can be set up to publish messages to the network; clients can then subscribe to data channels and receive the messages in a queue.
This Python library was used as the Redis client:
https://pypi.python.org/pypi/redis
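As a rough illustration of the setup described above (the channel name and message format are placeholders), the publish/subscribe part with the redis package looks something like this:

import redis

r = redis.StrictRedis(host="localhost", port=6379)

# publisher side (the machine running the analysis): push each new result to a channel
r.publish("analysis-results", "2016-01-01T12:00:00 42.0")

# subscriber side (the remote client): block and consume messages as they arrive
p = r.pubsub()
p.subscribe("analysis-results")
for message in p.listen():
    if message["type"] == "message":
        print(message["data"])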
I use pywhois to determine whether a domain name is registered or not. Here is my source code (it checks all permutations from a.net to zzz.net).
#!/usr/bin/env python
import whois  # pip install python-whois
import string
import itertools

def main():
    characters = list(string.ascii_lowercase)
    # domain name generator
    for r in range(1, 4):
        for name in itertools.permutations(characters, r):  # from 'a.net' to 'zzz.net'
            url = ''.join(name) + '.net'
            # check whether the domain name is registered or not
            try:
                w = whois.whois(url)
            except whois.parser.PywhoisError:  # NOT FOUND
                print(url)  # unregistered domain names?

if __name__ == '__main__':
    main()
I got the following results:
jv.net
uli.net
vno.net
xni.net
However, all of the above domain names are already registered, so the result is not accurate. Can anyone explain this? There are also a lot of errors:
fgets: Connection reset by peer
connect: No route to host
connect: Network is unreachable
connect: Connection refused
Timeout.
There is an alternative way, reported here.
import socket

try:
    socket.gethostbyname_ex(url)
except socket.gaierror:  # the name does not resolve
    print(url)  # unregistered domain names?
Speaking of speed, I use map for parallel processing.
from multiprocessing.pool import ThreadPool

def select_unregistered_domain_names(self, domain_names):
    # parallelism using map; query_method is the whois/DNS check function
    pool = ThreadPool(16)  # sets the pool size
    results = pool.map(query_method, domain_names)
    pool.close()  # no more tasks will be submitted to the pool
    pool.join()   # wait for the work to finish
    return results
This is a tricky problem to solve, trickier than most people realize, because some parties don't want you to be able to find this out easily. Most domain registrars apply lots of black magic (i.e. lots of TLD-specific hacks) to get the nice listings they provide, and often they get it wrong. Of course, in the end they will know for sure, since they have EPP access that holds the authoritative answer (but that check is usually done only when you click "order").
Your first method (whois) used to be a good one, and I did this on a large scale back in the 90s when everything was more open. Nowadays, many TLDs protect this information behind captchas and obstructive web interfaces, and whatnot. If nothing else, there will be quotas on the number of queries per IP. (And it may be for good reason too, I used to get ridiculous amounts of spam to email addresses used for registering domains). Also note that spamming their WHOIS databases with queries is usually in breach of their terms of use and you might get rate limited, blocked, or even get an abuse report to your ISP.
Your second method (DNS) is usually a lot quicker (but don't use gethostbyname; use Twisted or some other async DNS library for efficiency). You need to figure out what the responses for taken and free domains look like for each TLD. Just because a domain doesn't resolve doesn't mean it's free (it could just be unused). Conversely, some TLDs have landing pages for all nonexistent domains. In some cases it will be impossible to determine availability using DNS alone.
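To illustrate the DNS-based check (this sketch uses dnspython rather than the question's code, and the interpretation of each outcome is only a heuristic, for the reasons above):

import dns.exception
import dns.resolver

def looks_unregistered(domain):
    try:
        # dns.resolver.resolve in dnspython 2.x; dns.resolver.query in older versions
        dns.resolver.resolve(domain, "NS")
        return False  # delegated in the parent zone, so it is taken
    except dns.resolver.NXDOMAIN:
        return True   # no such domain, a strong hint that it is unregistered
    except (dns.resolver.NoAnswer, dns.resolver.NoNameservers, dns.exception.Timeout):
        return None   # inconclusive, fall back to whois

print(looks_unregistered("example.net"))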
So, how do you solve it? Not with ease, I'm afraid. For each TLD, you need to figure out how to make clever use of DNS and whois databases, starting with DNS and resorting to other means in the tricky cases. Make sure not to flood whois databases with queries.
Another option is to get API access from one of the registrars; they might offer programmatic access to domain search.
I am trying to add authentication to an XML-RPC server (which will be running on nodes of a P2P network) without using user:password@host, as this reveals the password to all attackers. The purpose of the authentication is basically to create a private network, preventing unauthorised users from accessing it.
My solution to this was to create a challenge-response system very similar to this, but I have no clue how to add it to the XML-RPC server code.
I found a similar question (where custom authentication was needed) here.
So I tried creating a module that would be called whenever a client connected to the server. This would connect to a challenge-response server running on the client and, if the client responded correctly, would return True. The only problem was that I could only call the module once; after that I got a "reactor cannot be restarted" error. So is there some way of having a class that connects and performs this check every time its check() function is called?
Would the simplest thing be to connect using SSL? Would that protect the password? This solution would not be optimal, though, as I am trying to avoid having to generate SSL certificates for all the nodes.
Don't invent your own authentication scheme. There are plenty of great schemes already, and you don't want to become responsible for doing the security research into what vulnerabilities exist in your invention.
There are two very widely supported authentication mechanisms for HTTP (over which XML-RPC runs, therefore they apply to XML-RPC). One is "Basic" and the other is "Digest". "Basic" is fine if you decide to run over SSL. Digest is more appropriate if you really can't use SSL.
Both are supported by Twisted Web via twisted.web.guard.HTTPAuthSessionWrapper, with copious documentation.
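As a rough sketch of wiring Digest authentication in front of an XML-RPC resource with twisted.web.guard (this follows the pattern from the guard documentation; the realm string, username, password, and port are placeholders):

from zope.interface import implementer
from twisted.internet import reactor
from twisted.cred.portal import IRealm, Portal
from twisted.cred.checkers import InMemoryUsernamePasswordDatabaseDontUse
from twisted.web import guard, server, xmlrpc
from twisted.web.resource import IResource

class Echoer(xmlrpc.XMLRPC):
    # the XML-RPC resource that authenticated clients get access to
    def xmlrpc_echo(self, x):
        return x

@implementer(IRealm)
class SimpleRealm(object):
    # hands out the XML-RPC resource for any authenticated avatar
    def requestAvatar(self, avatarId, mind, *interfaces):
        if IResource in interfaces:
            return IResource, Echoer(), lambda: None
        raise NotImplementedError()

checkers = [InMemoryUsernamePasswordDatabaseDontUse(node1='secret')]
credentialFactory = guard.DigestCredentialFactory('md5', 'p2p network')
wrapper = guard.HTTPAuthSessionWrapper(Portal(SimpleRealm(), checkers),
                                       [credentialFactory])
reactor.listenTCP(7080, server.Site(wrapper))
reactor.run()

The client then authenticates with ordinary HTTP Digest credentials when calling the XML-RPC endpoint; for production you would replace the in-memory checker with a real credentials checker.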
Based on your problem description, it sounds like the Secure Remote Password Protocol might be what you're looking for. It's a password-based mechanism that provides strong, mutual authentication without the complexity of SSL certificate management. It may not be quite as flexible as SSL certificates but it's easy to use and understand (the full protocol description fits on a single page). I've often found it a useful tool for situations where a trusted third party (aka Kerberos/CA authorities) isn't appropriate.
For anyone who was looking for a full example, below is mine (thanks to Rakis for pointing me in the right direction). In this example the user and password are stored in a file called 'passwd' (see the first of the useful links below for more details and how to change that).
Server:
#!/usr/bin/env python
import bjsonrpc
from SRPSocket import SRPSocket
import SocketServer
from bjsonrpc.handlers import BaseHandler
import time

class handler(BaseHandler):
    def time(self):
        return time.time()

class SecureServer(SRPSocket.SRPHost):
    def auth_socket(self, socket):
        server = bjsonrpc.server.Server(socket, handler_factory=handler)
        server.serve()

s = SocketServer.ForkingTCPServer(('', 1337), SecureServer)
s.serve_forever()
Client:
#! /usr/bin/env python
import bjsonrpc
from bjsonrpc.handlers import BaseHandler
from SRPSocket import SRPSocket
import time

class handler(BaseHandler):
    def time(self):
        return time.time()

socket, key = SRPSocket.SRPSocket('localhost', 1337, 'dht', 'testpass')
connection = bjsonrpc.connection.Connection(socket, handler_factory=handler)
test = connection.call.time()
print test
time.sleep(1)
Some useful links:
http://members.tripod.com/professor_tom/archives/srpsocket.html
http://packages.python.org/bjsonrpc/tutorial1/index.html
I want to code a server which handles WebSocket clients while doing MySQL selects via SQLAlchemy and scraping several websites at the same time (Scrapy). The received data has to be processed, saved to the DB and then sent to the WebSocket clients.
My question is how this can be done in Python from a logical point of view. How do I need to structure the code, and which modules are the best solution for this job? At the moment I'm leaning towards using Twisted with threads in which the scraping and select work runs, but can this be done in an easier way? I can only find simple Twisted examples, and this obviously seems to be a more complex job. Are there similar examples? How do I start?
Cyclone, a Twisted-based 'network toolkit', based on/similar to facebook/friendfeed's Tornado server, contains support for WebSockets: https://github.com/fiorix/cyclone/blob/master/cyclone/web.py#L908
Here's example code:
https://github.com/fiorix/cyclone/blob/master/demos/websocket/websocket.tac
Here's an example of using txwebsocket:
http://www.saltycrane.com/blog/2010/05/quick-notes-trying-twisted-websocket-branch-example/
You may have a problem using SQLAlchemy with Twisted; from what I have read, they do not work well together (source). Are you married to SQLA, or would another, more compatible OR/M suffice?
Some twisted-friendly OR/Ms include Storm (a fork) and Twistar, and you can always fall back on Twisted's core db abstraction library twisted.enterprise.adbapi.
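For instance, a minimal adbapi sketch (assuming MySQLdb as the DB-API driver, with placeholder credentials and table names) looks something like this:

from twisted.enterprise import adbapi
from twisted.internet import reactor

# a pool of DB connections; queries run in threads but return Deferreds
dbpool = adbapi.ConnectionPool("MySQLdb", host="localhost", user="user",
                               passwd="secret", db="mydb")

def print_rows(rows):
    for row in rows:
        print(row)

d = dbpool.runQuery("SELECT id, value FROM results ORDER BY id DESC LIMIT 1")
d.addCallback(print_rows)
d.addBoth(lambda _: reactor.stop())
reactor.run()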
There are also async-friendly db libraries for other products, such as txMySQL, txMongo, and txRedis, and paisley (couchdb).
You could conceivably use both Cyclone (or txwebsockets) and Scrapy as child services of the same MultiService, running on different ports, but packaged within the same Application instance. The services may communicate, either through the parent service or some RPC mechanism (like JSONRPC, Perspective Broker, AMP, XML-RPC (2) etc), or you can just write to the db from the scrapy service and read from it using websockets. Redis would be great for this IMO.
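As an illustration of the "write to the db from the scrapy service" idea, here is a hypothetical Scrapy item pipeline that pushes scraped items onto a Redis list for the websocket side to read (the key name and JSON format are placeholders, and the pipeline would still need to be enabled in ITEM_PIPELINES):

import json
import redis

class RedisExportPipeline(object):
    def open_spider(self, spider):
        self.r = redis.StrictRedis(host="localhost", port=6379)

    def process_item(self, item, spider):
        # LPUSH each scraped item; the websocket service can BRPOP or read the list
        self.r.lpush("scraped-items", json.dumps(dict(item)))
        return item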
Ideally you'll want to avoid writing your own WebSockets server, but since you're running Twisted, you might not be able to do that: there are several WebSockets implementations (see this search on PyPI), but unfortunately none of them are Twisted-based. [Edit: see JP Calderone's comment below.]
Twisted should drive the master server, so you probably want to begin by writing something that can be run via twistd (see here if you're new to this). The WebSocket implementation mentioned by JP Calderone and Scrapy are both Twisted-based, so they should be reasonably trivial to drive from your master Twisted-based server. SQLAlchemy will be more difficult; I've commented on this before in this question.
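For reference, a minimal sketch of a .tac file that twistd can run (the application name and the placeholder web resource are assumptions, not your actual services) might look like this:

from twisted.application import internet, service
from twisted.web import server, static

# the Application object that twistd looks for
application = service.Application("scraper-websocket-master")

# a placeholder web resource; your websocket/scrapy services would be attached similarly
site = server.Site(static.Data("placeholder", "text/plain"))
web_service = internet.TCPServer(8080, site)
web_service.setServiceParent(application)

Run it with twistd -noy myapp.tac while developing, then without -n to daemonize it.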
First of all, I will admit I am a novice to web services, although I'm familiar with HTML and basic web stuff. I created a quick-and-dirty web service using Python that calls a stored procedure in a MySQL database which simply returns a BIGINT value. I want to return this value from the web service, and I want to generate a WSDL that I can give to our web developers. I might add that the stored procedure only returns one value.
Here's some example code:
#!/usr/bin/python
import SOAPpy
import MySQLdb

def getNEXTVAL():
    cursor = db.cursor()
    cursor.execute("CALL my_stored_procedure()")  # returns a number
    result = cursor.fetchall()
    for record in result:
        return record[0]

db = MySQLdb.connect(host="localhost", user="myuser", passwd="********", db="testing")
server = SOAPpy.SOAPServer(("10.1.22.29", 8080))
server.registerFunction(getNEXTVAL)
server.serve_forever()
I want to generate a WSDL that I can give to the web folks, and I'm wondering if it's possible to have SOAPpy just generate one for me. Is this possible?
When I tried to write a Python web service last year, I ended up using ZSI-2.0 (which is something like the heir of SOAPpy) and a paper available on its website.
Basically I wrote my WSDL file by hand and then used ZSI stuff to generate stubs for my client and server code. I wouldn't describe the experience as pleasant, but the application did work.
I want to generate a WSDL that I can give to the web folks, ....
You can try soaplib. It has on-demand WSDL generation.
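A rough sketch of what such a soaplib service might look like (this follows the old soaplib 0.8-style API, so treat the exact imports and decorator names as assumptions and check them against the version you install):

from soaplib.wsgi_soap import SimpleWSGISoapApp
from soaplib.service import soapmethod
from soaplib.serializers.primitive import Integer

class NextValService(SimpleWSGISoapApp):
    @soapmethod(_returns=Integer)
    def getNEXTVAL(self):
        # call the stored procedure here and return its single BIGINT value
        return 42  # placeholder value

if __name__ == '__main__':
    from wsgiref.simple_server import make_server
    server = make_server('10.1.22.29', 8080, NextValService())
    server.serve_forever()

soaplib then generates the WSDL on demand when a client requests it from the running service, which is what you can hand to the web developers.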
Sorry for the question a few days ago. Now I can invoke the server successfully. A demo is provided below:
from SOAPpy import SOAPProxy

def test_soappy():
    """Test for SOAPpy.SOAPServer."""
    # okay -- this works with SOAPpy.SOAPServer;
    # one method can even talk to more than two WS servers
    server = SOAPProxy("http://localhost:8081/")
    print server.sum(1, 2)
    print server.div(10, 2)