ipython-cypher in Python: cypher.run.Connection object parameter - python

I'm trying to use ipython-cypher to run Neo4j Cypher queries (and return a Pandas dataframe) in a Python program. I have no trouble forming a connection and running a query when using IPython Notebook, but when I try to run the same query outside of IPython, as per the documentation:
http://ipython-cypher.readthedocs.org/en/latest/introduction.html#usage-out-of-ipython
import cypher
results = cypher.run("MATCH (n)--(m) RETURN n.username, count(m) as neighbors",
                     "http://XXX.XXX.X.XXX:xxxx")
I get the following error:
neo4jrestclient.exceptions.StatusException: Code [401]: Unauthorized. No permission -- see authorization schemes.
Authorization Required
and
Format: (http|https)://username:password@hostname:port/db/name, or one of dict_keys([])
Now, I was only guessing that this was how I should pass the connection as the last parameter, because I couldn't find any additional documentation explaining how to connect to a remote host from plain Python. In IPython, I am able to do:
%load_ext cypher
results = %cypher http://XXX.XXX.X.XXX:xxxx MATCH (n)--(m) RETURN n.username, count(m) as neighbors
Any insight would be greatly appreciated. Thank you.

The documentation has a section covering the API. When using it outside of IPython and you need to connect to a different host, passing the connection string through the conn parameter should work:
import cypher
results = cypher.run("MATCH (n)--(m) RETURN n.username, count(m) as neighbors",
                     conn="http://XXX.XXX.X.XXX:xxxx")
But also consider that, with the new authentication support in Neo4j 2.2, you need to set the new password before connecting from ipython-cypher. I will fix this as soon as I implement the forced-password-change mechanism in neo4jrestclient, the library underneath.
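In the meantime, the format hint in the error message suggests that credentials can be embedded in the connection URI itself. A minimal sketch, assuming placeholder credentials and Neo4j's REST endpoint path (the username, password, and /db/data path are assumptions, not from the question):
import cypher

# Credentials embedded in the URI, following the format from the error message:
# (http|https)://username:password@hostname:port/db/name
results = cypher.run("MATCH (n)--(m) RETURN n.username, count(m) as neighbors",
                     conn="http://neo4j:mypassword@XXX.XXX.X.XXX:xxxx/db/data")
df = results.get_dataframe()  # convert the result set to a Pandas dataframe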

Related

Python package IBM_DB connection with enableAlternateServerListFirstConnect and alternateserverlist parameters

I have a requirement to enable a failover/secondary database for a DB2 database hosted on a Linux Server for a python application using the IBM_DB package.
With a JDBC driver, you can easily add the following parameters to the connection string:
clientRerouteAlternatePortNumber=port#
clientRerouteAlternateServerName=servername
enableSeamlessFailover=1
Since the IBM_DB package uses a CLI driver, these parameters aren't the same. I found the following parameters in the IBM documentation:
https://www.ibm.com/support/knowledgecenter/SSEPGG_11.5.0/com.ibm.db2.luw.apdv.embed.doc/doc/c0060428.html
enableAlternateServerListFirstConnect
alternateserverlist
maxAcrRetries
However, according to the instructions in the link below, it seems these can only be set in the db2dsdriver.cfg configuration file:
https://www.ibm.com/support/producthub/db2/docs/content/SSEPGG_11.5.0/com.ibm.db2.luw.apdv.cli.doc/doc/c0056196.html
I know a lot of these parameters are configurable in the connection string, and I wanted to know whether these particular ones can be included there as well. Is there any documentation/verification that something like this can work:
import ibm_db_dbi
connect = ibm_db_dbi.connect("DATABASE=whatever; \
HOSTNAME=whatever; \
PORT=whatever; \
PROTOCOL=TCPIP; \
UID=whatever; \
PWD=whatever; \
CURRENTSCHEMA=whatever;\
AUTHENTICATION=SERVER_ENCRYPT;\
ClientEncAlg=2;\
enableAlternateServerListFirstConnect=True;\
alternateserverlist=server1,port1,server2,port2;\
maxAcrRetries=2", "", "")
Thank you for any help.
A helpful page is this one.
Note the different names of keywords/parameters between jdbc/sqlj and CLI.
The idea is that, with the CLI driver, if the Db2-LUW instance is properly configured, the driver will get the ACR details from the instance automatically and useful defaults will apply. So you might not need to add more keywords to the connection string, unless you are tuning.
The HA-related keyword parameters for CLI are below:
acrRetryInterval
alternateserverlist
detectReadonlyTxn
enableAcr
enableAlternateGroupSeamlessACR
enableAlternateServerListFirstConnect
enableSeamlessAcr
maxAcrRetries
More details here. Note that if enableAcr=true (the default) then enableSeamlessAcr=true (also the default).
Although the docs mention db2dsdriver.cfg, most CLI parameters/keywords are also settable in the connection string, unless specifically excluded. So do your own testing to verify.
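For example, a quick test is simply to attempt a connection with the keywords included in the string. A minimal sketch, where every host name, port, and credential is a placeholder:
import ibm_db_dbi

# All values below are placeholders -- substitute your own servers and credentials.
conn_str = (
    "DATABASE=SAMPLE;"
    "HOSTNAME=primary.example.com;"
    "PORT=50000;"
    "PROTOCOL=TCPIP;"
    "UID=db2user;"
    "PWD=secret;"
    # HA/ACR keywords under test; verify your driver version accepts them here
    "enableAcr=1;"
    "alternateserverlist=standby.example.com,50000;"
    "maxAcrRetries=2;"
)

try:
    conn = ibm_db_dbi.connect(conn_str, "", "")
    print("Connected; the driver accepted the keywords in the connection string")
    conn.close()
except Exception as exc:
    print("Connection failed: %s" % exc)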

Does Psycopg2 allow udf create queries to run on redshift using Python?

I am able to connect to aws-redshift with psycopg2 using python, I can query tables and get data back, etc...
However, when I try to run a CREATE FUNCTION (UDF) statement through psycopg2, nothing happens: no error returns, but nothing gets created.
Here's my code:
def _applyFunctionToDB():
    con = psycopg2.connect(dbname=redshiftDatabase, host=redshiftHost, port='5439',
                           user=redshiftUser, password=redshiftPwd)
    cur = con.cursor()
    udf = _fileOpenWrite(udfFile)
    size = os.stat(udfFile).st_size
    udfCode = udf.read(size)
    cur.execute(udfCode)
    con.close()
I have run it through the debugger and all the pieces are there, but nothing happens when the "execute" method is invoked on the cursor.
If anyone has any advice and/or ideas on what might be going on here, please advise.
Thanks!
Found the answer just after posting, here: Copying data from S3 to AWS redshift using python and psycopg2.
I need to invoke a commit.
So, add con.commit() to the above code after the execute call.
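For reference, the corrected function (same names as above; only the commit is new):
def _applyFunctionToDB():
    con = psycopg2.connect(dbname=redshiftDatabase, host=redshiftHost, port='5439',
                           user=redshiftUser, password=redshiftPwd)
    cur = con.cursor()
    udf = _fileOpenWrite(udfFile)
    udfCode = udf.read(os.stat(udfFile).st_size)
    cur.execute(udfCode)
    con.commit()  # without the commit, the CREATE FUNCTION is discarded when the connection closes
    con.close()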

latency with group in pymongo in tests

Good Day.
I have faced the following issue using pymongo==2.1.1 with Python 2.7 and mongo 2.4.8.
I have tried to find a solution using Google and Stack Overflow but failed.
What's the issue?
I have the following function:
from bson.code import Code
from pymongo import Connection

def read(groupped_by=None):
    reducer = Code("""
        function(obj, prev){
            prev.count++;
        }
    """)
    client = Connection('localhost', 27017)
    db = client.urlstats_database
    results = db.http_requests.group(key={k: 1 for k in groupped_by},
                                     condition={},
                                     initial={"count": 0},
                                     reduce=reducer)
    groupped_by = list(groupped_by) + ['count']
    result = [tuple(res[col] for col in groupped_by) for res in results]
    return sorted(result)
Then I am trying to write a test for this function:
class UrlstatsViewsTestCase(TestCase):
    test_data = {'data%s' % i: 'data%s' % i for i in range(6)}

    def test_one_criterium(self):
        client = Connection('localhost', 27017)
        db = client.urlstats_database
        for column in self.test_data:
            db.http_requests.remove()
            db.http_requests.insert(self.test_data)
            response = read([column])
            self.assertEqual(response, [(self.test_data[column], 1)])
This test sometimes fails, as I understand it, because of latency: the response still contains data that was supposed to be removed.
If I add a delay after the remove, the test passes every time.
Is there any proper way to test such functionality?
Thanks in Advance.
A few questions regarding your environment / code:
What version of pymongo are you using?
If you are using any of the newer versions that have MongoClient, is there any specific reason you are using Connection instead of MongoClient?
The reason I ask the second question is that Connection provides fire-and-forget functionality for the operations you are doing, while MongoClient works in safe mode by default and has been the preferred approach since mongodb 2.2+.
The behaviour you see is consistent with using Connection instead of MongoClient. With Connection, your remove is sent to the server, and the moment it is sent from the client side, your program execution moves to the next step, which is to add new entries. Depending on latency and how long the remove operation takes to complete, the two operations can conflict, as you have already noticed in your test case.
Can you change to use MongoClient and see if that helps you with your test code?
Additional Ref: pymongo: MongoClient or Connection
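A minimal sketch of the suggested change (MongoClient requires pymongo 2.2+; host and database names as in the question):
from pymongo import MongoClient

# MongoClient acknowledges writes by default, so remove() does not
# return until the server has actually applied it.
client = MongoClient('localhost', 27017)
db = client.urlstats_database
db.http_requests.remove()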
Thanks all.
There is no MongoClient class in the version of pymongo I use, so I was forced to find out what exactly differs.
As soon as I upgrade to 2.2+ I will test whether everything is OK with MongoClient. But as for the Connection class, one can use a write concern to control this latency.
In older versions, one should create the connection with the corresponding arguments. I have tried these two: journal=True and safe=True (the journal write concern can't be used in non-safe mode).
j or journal: Block until write operations have been committed to the journal. Ignored if the server is running without journaling. Implies safe=True.
I think this makes performance worse, but for automated tests it should be OK.
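For example, a sketch with the options described above applied to the old Connection class:
from pymongo import Connection

# safe=True waits for server acknowledgement of each write;
# journal=True additionally blocks until the write has been
# committed to the journal (and implies safe=True).
client = Connection('localhost', 27017, safe=True, journal=True)
db = client.urlstats_database
db.http_requests.remove()  # now blocks until the remove completes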

Baffling python mysql connection issue

Hello guys, I am currently working on a Python project at my school. First I want to make clear that I'm not a Python programmer (I was just called in to put out the flames in this project because no one else would, and I was brave enough to say yes).
I have the following problem. I have to write a method that connects to an existing localhost MySQL database (I'm using connector version 1.0.12 and Python 2.6) and then does pretty basic stuff. The parameters are sent by a GTK-written GUI (I didn't write that interface). So I wrote my method like this:
def compMySQL(self, user, database, password, db_level, table_level, column_level):
    sql_page_textview = self.mainTree.get_widget('sql_text_view')
    sql_page_textview.modify_font(pango.FontDescription("courier 10"))
    sql_page_buffer = sql_page_textview.get_buffer()
    # Gonna try connecting to DB
    try:
        print("Calling conn with U:{0} P:{1} DB:{2}".format(user, password, database))
        cnxOMC = mysql.connector.connect(user, password, 'localhost', database)
    except:
        print "Error: Database connection failed. User name or Database name may be wrong"
        return
    # More code ...
But when I run my code I get this:
Calling conn with U:root P:PK17LP12r DB:TESTERS
Error: Database connection failed. User name or Database name may be wrong
And I don't know why, since the arguments sent are the same arguments that get printed (telling me that the GUI the other guy coded works fine) and they are valid login parameters. If I hardcode the login parameters directly instead of using the GUI, everything goes OK and the function executes properly; the following code runs nice and smooth:
def compMySQL(self, user, database, password, db_level, table_level, column_level):
    sql_page_textview = self.mainTree.get_widget('sql_text_view')
    sql_page_textview.modify_font(pango.FontDescription("courier 10"))
    sql_page_buffer = sql_page_textview.get_buffer()
    # Gonna try hardcoding
    try:
        #print("Calling conn with U:{0} P:{1} DB:{2}".format(user, password, database))
        cnxOMC = mysql.connector.connect(user="root", password='PK17LP12r',
                                         host='localhost', database='TESTERS')
        print 'No prob with conn'
    except:
        print "Error: Database connection failed. User name or Database name may be wrong"
        return
    # more code ...
Console output:
No prob with conn
Any ideas guys? This one is killing me. I'm just learning Python but I imagine the problem to be something very easy for a seasoned python developer so any help would be strongly appreciated.
Thanks in advance.
The difference between the two versions is not that you are hardcoding the parameters in the second one; it is that you are calling with keyword arguments rather than positional ones. The docs for MySQL Connector don't seem to give an actual positional order, and there's no reason to think it matches the order you've used, so it looks like you should always call by keyword argument:
cnxOMC = mysql.connector.connect(user=user, password=password, host=host, database=database)

Using mx:RemoteObject with web2py's @service.amfrpc decorator

I am using web2py (v1.63) and Flex 3. web2py v1.61 introduced the @service decorators, which allow you to tag a controller function with @service.amfrpc. You can then call that function remotely using http://..../app/default/call/amfrpc/[function]. See http://www.web2py.com/examples/default/tools#services. Does anybody have an example of how you would set up Flex 3 to call a function like this? Here is what I have tried so far:
<mx:RemoteObject id="myRemote" destination="amfrpc" source="amfrpc"
endpoint="http://{mysite}/{myapp}/default/call/amfrpc/">
<mx:method name="getContacts"
result="show_results(event)"
fault="on_fault(event)" />
</mx:RemoteObject>
In my scenario, what should be the value of the destination and source attributes? I have read a couple of articles on non-web2py implementations, such as http://corlan.org/2008/10/10/flex-and-php-remoting-with-amfphp/, but they use a .../gateway.php file instead of having a URI that maps directly to the function.
Alternatively, I have been able to use flash.net.NetConnection to successfully call my remote function, but most of the documentation I have found considers this to be the old, pre-Flex 3 way of doing AMF. See http://pyamf.org/wiki/HelloWorld/Flex. Here is the NetConnection code:
gateway = new NetConnection();
gateway.connect("http://{mysite}/{myapp}/default/call/amfrpc/");
resp = new Responder(show_results, on_fault);
gateway.call("getContacts", resp);
-Rob
I have not found a way to use a RemoteObject with the @service.amfrpc decorator. However, I can use the older ActionScript code using a NetConnection (similar to what I posted originally) and pair that with a @service.amfrpc function on the web2py side. This seems to work fine. The one thing that you would want to change in the NetConnection code I shared originally is adding an event listener for connection status. You can add more listeners if you feel the need, but I found that NetStatusEvent was a must. This status will be fired if the server is not responding. Your connection setup would look like:
gateway = new NetConnection();
gateway.addEventListener(NetStatusEvent.NET_STATUS, gateway_status);
gateway.connect("http://127.0.0.1:8000/robs_amf/default/call/amfrpc/");
resp = new Responder(show_results, on_fault);
gateway.call("getContacts", resp);
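For reference, the paired function on the web2py side might look something like this (a sketch; the body of getContacts is hypothetical and depends on your model):
# in controllers/default.py
from gluon.tools import Service
service = Service(globals())

@service.amfrpc
def getContacts():
    # hypothetical query -- replace with your own table and fields
    return db(db.contact.id > 0).select().as_list()

def call():
    session.forget()
    return service()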
-Rob
