We have built a series of programs that process data. In the last step, they write out a series of Neo4j Cypher commands that first create nodes and later connect those nodes. This is a sample of what is created:
CREATE (n1:ActionElement {
    nodekey: 1,
    name: 'type',
    resource: 'action',
    element: 'ActionElement',
    description: 'CompilationUnit',
    linestart: 1,
    colstart: 7,
    lineend: 454,
    colend: 70,
    content: [],
    level: 1,
    end: False
});
The issue is that the created file has ~20,000 lines. When I run it through the shell, I get an error on some of the transactions. It seems to alternately process and reject them. I can't see a pattern, but I am assuming that I am outrunning the server's processing speed.
neo4j> CREATE (n1573)-[:sibling]->(n1572);
Connection refused
neo4j> CREATE (n1574)-[:sibling]->(n1573);
Connection refused
neo4j> CREATE (n1575)-[:sibling]->(n1574);
0 rows available after 3361 ms, consumed after another 2 ms
Added 2 nodes, Created 1 relationships
neo4j> CREATE (n1579)-[:sibling]->(n1578);
0 rows available after 78 ms, consumed after another 0 ms
Interestingly enough, it recovers, fails, then recovers again.
Any thoughts? Is this just fundamentally the wrong way to do this? The LAST program to touch the file happens to be Python; should I have it update the database directly? Thank you.
In the end, it was a network issue combined with a transaction issue. To solve it, we FTP'd the file to the same server as the Neo4j instance (eliminating the network latency) and then modified the Python loader to wrap each statement in a try/except: if it failed, wait 5 seconds and retry, and only report an error when the second try also failed. That eliminated any 'transaction latency'. On a 16-core machine, it did not fail at all. On a single core, it retried 4 times across 20,000 updates, and all of those passed on the second try. Not ideal, but workable.
Thanks
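For reference, a minimal sketch of the retry approach described above, assuming the official neo4j Python driver, one Cypher statement per line in the load file, and placeholder URI, credentials, and file name:

import time
from neo4j import GraphDatabase  # assumes the official neo4j Python driver

# Placeholders: the URI, credentials, and file name are not from the original post.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def run_with_retry(statement, wait=5):
    """Run one Cypher statement; on failure, wait and retry once."""
    for attempt in (1, 2):
        try:
            with driver.session() as session:
                session.run(statement)
            return True
        except Exception:
            if attempt == 1:
                time.sleep(wait)   # give the server a moment, then retry
    return False                   # only report an error after the second failure

with open("load_statements.cypher") as f:
    for lineno, line in enumerate(f, start=1):
        statement = line.strip()
        if statement and not run_with_retry(statement):
            print(f"statement on line {lineno} failed twice")

driver.close()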
Related
I have a Python script which reads files into my tables in MySQL. This program runs automatically every now and then. However, I'm afraid of two things:
There might come a time when the program stops running because it can't connect to the MySQL server. A lot of processes depend on these tables, so if the tables are not up to date, the rest of my process will also stop working.
A file might sneak into the process which does not have the expected content. After the script finishes running, every value of column X must have 12 rows. If a value does not have 12 rows, this means the files did not have the right content in them.
My question is: is there something I can do to tackle this before it happens? For example, send an e-mail to myself so I am notified if the connection fails or if a certain value does not have exactly 12 rows, or run the program on another server?
I'm very eager to know how you handle these situations.
I have a very simple connection made like this:
import mysql.connector

mydb = mysql.connector.connect(
    host='localhost',
    user='root',
    passwd='*****.',
    database='my_database'
)
The event you are talking about is very unlikely to happen; the only situation in which I could see it happening is when your database server runs out of memory. For that, you can set aside a couple of minutes every 2 to 3 days to check the amount of memory left on your server.
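That said, for the two concerns raised in the question (a failed connection and a wrong row count), here is a minimal sketch of one way to get notified, assuming smtplib with a local SMTP server and hypothetical table, column, and address names:

import smtplib
from email.message import EmailMessage

import mysql.connector

ALERT_FROM = "monitor@example.com"   # hypothetical addresses, not from the post
ALERT_TO = "me@example.com"

def send_alert(subject, body):
    """E-mail a short alert; assumes a local SMTP server listening on port 25."""
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = ALERT_FROM
    msg["To"] = ALERT_TO
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

try:
    mydb = mysql.connector.connect(
        host="localhost", user="root", passwd="*****", database="my_database"
    )
except mysql.connector.Error as exc:
    send_alert("MySQL load failed", f"Could not connect: {exc}")
    raise

# After the load: every value of column X should have exactly 12 rows.
# 'my_table' and 'x' are placeholders for the real table/column names.
cursor = mydb.cursor()
cursor.execute("SELECT x, COUNT(*) FROM my_table GROUP BY x HAVING COUNT(*) <> 12")
bad = cursor.fetchall()
if bad:
    send_alert("Row-count check failed", f"Values without 12 rows: {bad}")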
We are calling Postgres using psycopg2 2.7.5 in the following way: we perform a query, do some operation on the data we received, then open a new connection and perform another query, and so on.
Usually a query takes between 15 s and 10 min.
Occasionally, after 2 h, we receive the error: Python Exception : connection already closed
What may be the reason for that? The data is the same and the query is the same, yet sometimes the same query gives results back in 3 min and sometimes it hits that error after 2 hrs.
I wonder if it is possible that the connection is broken earlier, but in Python we only get that information, for some reason, after 2 hrs?
I doubt that there are any locks on the DB at the moment we perform a query, but it may be under heavy load and the maximum number of connections may be reached (not confirmed, but it is a possibility).
What would be the best way to track down the problem? The firewall is set to a 30 min timeout.
We are calling Postgres using psycopg2 2.7.5 in the following way: we perform a query, do some operation on the data we received, then open a new connection and perform another query, and so on.
Why do you keep opening new connections? What do you do with the old one, and when do you do it?
I wonder if it is possible that the connection is broken earlier, but in Python we only get that information, for some reason, after 2 hrs?
In general, a broken connection won't be detected until you try to use it. If you are using a connection pooler, it is possible the pool manager checks up on the connection periodically in the background.
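To illustrate that the break only shows up when the connection is used, here is a minimal sketch of checking for, and recovering from, a dead psycopg2 connection before each query (the DSN and the reconnect-once policy are assumptions, not taken from the original setup):

import psycopg2

DSN = "dbname=mydb user=me host=dbhost"   # placeholder connection string

conn = psycopg2.connect(DSN)

def run_query(sql, params=None):
    """Re-open the connection if it is closed or broken, then run the query."""
    global conn
    try:
        if conn.closed:                      # 0 means open, non-zero means closed
            conn = psycopg2.connect(DSN)
        with conn.cursor() as cur:
            cur.execute(sql, params)
            return cur.fetchall()
    except (psycopg2.OperationalError, psycopg2.InterfaceError):
        # The break is usually only noticed here, when the connection is used.
        conn = psycopg2.connect(DSN)
        with conn.cursor() as cur:
            cur.execute(sql, params)
            return cur.fetchall()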
I am using psycopg2 (2.6.1) to connect to Amazon's Redshift.
I have a query that should take about 1 second, but about 1 time out of every 20 concurrent tries it just hangs forever (I manually kill it after 1 hour). To address this, I configured the statement_timeout setting before my query, like so:
rcur.execute("SET statement_timeout TO 60000")
rcur.execute(query)
so that after 1 minute the query will give up and I can try again (the second try does complete quickly, as expected). I confirmed the timeout works by setting it to 1 ms and seeing it raise an exception. But even with this, sometimes the Python code hangs instead of raising an exception: it never reaches the print directly after the rcur.execute(query). I can also see in the Redshift AWS dashboard that the query was "terminated" after 59 seconds, yet my code still hangs for an hour instead of raising an exception.
Does anyone know how to resolve this, or have a better method of dealing with typically short queries that occasionally take unnaturally long and simply need to be cancelled and retried?
I think you need to configure the TCP keepalive settings for the Redshift connection.
Follow the steps in this AWS doc to do that:
http://docs.aws.amazon.com/redshift/latest/mgmt/connecting-firewall-guidance.html
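As a complement to the AWS guidance, the libpq keepalive parameters can also be passed straight to psycopg2.connect(); a minimal sketch with placeholder credentials and illustrative interval values:

import psycopg2

# keepalives* are standard libpq connection parameters; the values below are
# only illustrative and should be tuned to sit under any firewall idle timeout.
conn = psycopg2.connect(
    host="my-cluster.example.redshift.amazonaws.com",  # placeholder endpoint
    port=5439,
    dbname="mydb",
    user="me",
    password="secret",
    keepalives=1,
    keepalives_idle=120,      # seconds of idleness before the first probe
    keepalives_interval=30,   # seconds between probes
    keepalives_count=3,       # probes before the connection is considered dead
)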
I have a MongoDB cluster (2.6.3) with three mongos processes and two replica sets, with no sharding enabled.
In particular, I have 7 hosts (all are Ubuntu Server 14.04):
host1: mongos + client application
host2: mongos + client application
host3: mongos + client application
host4: RS1_Primary (or RS1_Secondary) and RS2_Arbiter
host5: RS1_Secondary (or RS1_Primary)
host6: RS2_Primary (or RS2_Secondary) and RS1_Arbiter
host7: RS2_Secondary (or RS2_Primary)
The client application here is a Zato cluster with 4 gunicorn workers running on each server, and it accesses MongoDB using two PyMongo MongoClient instances per worker.
These MongoClient objects are created as follows:
MongoClient(mongo_hosts, read_preference=ReadPreference.SECONDARY_PREFERRED, w=0, max_pool_size=25)
MongoClient(mongo_hosts, read_preference=ReadPreference.SECONDARY_PREFERRED, w=0, max_pool_size=10)
where mongo_hosts is 'host1:27017,host2:27017,host3:27017' on all servers.
So, in total, I have 12 MongoClient instances with max_pool_size=25 (4 on each server) and 12 others with max_pool_size=10 (also 4 on each server).
And my problem is:
When the Zato clusters are started and begin receiving requests (up to 10 rq/sec each, balanced using a simple round robin), a bunch of new connections are created, and around 15-20 are then kept permanently open over time in each mongos.
However, at some random point and with no apparent cause, a couple of connections are suddenly dropped at the same time in all three mongos, and then the total number of connections keeps changing randomly until it stabilizes again after some minutes (from 5 to 10).
While this happens, even though I see no slow queries in the MongoDB logs (neither in mongos nor in mongod), the performance of the platform is severely reduced.
I have been isolating the problem and already tried to:
change the connection string to 'localhost:27017' in each MongoClient to see if the problem was in only one of the clients. The problem persisted, and it keeps affecting all three mongos at the same time, so it looks like something on the server side.
add log traces to make sure that the performance is lost inside MongoClient. The result is that a simple find query issued through MongoClient clearly takes more than one second on the client side, while it usually takes less than 10 ms. However, as I said before, I see no slow queries at all in the MongoDB logs (default profiling level: 100 ms).
monitor the platform activity to see if there's a load increase when this happens. There is none, and indeed it can even happen during low-load periods (a quick way to watch the connection counts from Python is sketched after this list).
monitor other variables in the servers, such as CPU usage or disk activity. I found nothing suspicious at all.
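As mentioned in the list above, one hypothetical way to watch the mongos connection counts from Python while reproducing the problem (the host name and polling interval are placeholders):

import time
from pymongo import MongoClient

client = MongoClient("host1:27017")  # repeat for host2/host3 as needed

# serverStatus reports the mongos's current/available incoming connections,
# so a sudden drop should show up here while the slowdown happens.
while True:
    conns = client.admin.command("serverStatus")["connections"]
    print(time.strftime("%H:%M:%S"), conns["current"], conns["available"])
    time.sleep(5)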
So, the questions at the end are:
Has anyone seen something similar (connections being dropped in PyMongo)?
What else can I look at to debug the problem?
Possible solution: MongoClient allows a max_pool_size to be defined, but I haven't found any reference to a min_pool_size. Is it possible to define one? Perhaps making the number of connections static would fix my performance problems (see the sketch after the note below).
Note about MongoDB version: I am currently running MongoDB 2.6.3, but I already had this problem before upgrading from 2.6.1, so it's nothing introduced in the latest version.
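Regarding the min_pool_size idea above: the PyMongo 2.x MongoClient used here has no minimum-pool option, but later PyMongo releases (3.x and newer) accept both maxPoolSize and minPoolSize. A minimal sketch with placeholder hosts, in case an upgrade is an option:

from pymongo import MongoClient

# PyMongo 3.x+ style options; the host list and pool sizes are placeholders.
client = MongoClient(
    "mongodb://host1:27017,host2:27017,host3:27017",
    readPreference="secondaryPreferred",
    w=0,
    maxPoolSize=25,
    minPoolSize=10,   # keep at least this many sockets open per server
)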
I have a year-old production site configured with the django.contrib.sessions.backends.cached_db session backend on top of a MySQL database. The reason I chose cached_db is a mix of security and read performance.
The problem is that the cleanup command, responsible for deleting all expired sessions, was never executed, resulting in a session table with 2.3 GB of data, 6 million rows, and 500 MB of index.
When I try to run the ./manage.py cleanup command (in Django 1.3), or ./manage.py clearsessions (its Django 1.5 equivalent), the process never ends (or at least it outlasts my patience after 3 hours).
The code that Django uses to do this is:
Session.objects.filter(expire_date__lt=timezone.now()).delete()
My first impression was that this is normal because the table has 6M rows, but after inspecting the system monitor I discovered that all the memory and CPU were being used by the python process, not mysqld, exhausting my machine's resources. I think something is terribly wrong with this command's code. It seems that Python iterates over all of the found expired session rows before deleting them, one by one. In that case, refactoring the code to issue a raw DELETE FROM statement would resolve my problem and help the Django community, right? But if this is the case, the QuerySet delete command is acting strangely and is not optimized, in my opinion. Am I right?
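For reference, a minimal sketch of the raw-SQL alternative described above, using Django's database connection directly and assuming the default django_session table:

from django.db import connection
from django.utils import timezone

# Deletes expired sessions inside the database itself, without pulling the
# millions of rows into Python the way the ORM-level delete() appears to here.
cursor = connection.cursor()
cursor.execute(
    "DELETE FROM django_session WHERE expire_date < %s",
    [timezone.now()],
)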