I know InfluxDB is intended for measurement-type data, but I'm also using it for annotations on certain events.
I have scripts that run every minute, and it's difficult for them to realise an event has already happened. Is there something I can do on the insert so it only inserts when the record is new, rather than every time?
So there didn't seem to be a way of doing it (and no answers here), but I solved the problem by performing a query first and, if no record was found, performing the insert.
Basically I had to make the scripts figure it out.
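For anyone hitting the same thing, here is a minimal sketch of that query-then-insert pattern. The measurement name (events), the event_id tag, and the field names are made up for illustration; the client is anything exposing query() and write_points(), such as InfluxDBClient from the influxdb 1.x Python package.

```python
def insert_event_once(client, event_id, description):
    """Write an annotation point only if no point with this event_id exists."""
    # Query first: has this event already been recorded?
    # (Illustrative InfluxQL; quote/escape event_id properly in real use.)
    result = client.query(
        "SELECT * FROM events WHERE event_id = '%s'" % event_id
    )
    if list(result.get_points()):
        return False  # already recorded; skip the insert
    # No match found, so write the annotation point.
    client.write_points([{
        "measurement": "events",
        "tags": {"event_id": event_id},
        "fields": {"description": description},
    }])
    return True
```

The scripts can then call this every minute without creating duplicate annotations.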
Short version: I need a faster/better way to update many column comments at once in Spark/Databricks. I have a PySpark notebook that can do this sequentially across many tables, but if I call it from multiple tasks, they spend so long waiting on a Hive connection that I get timeout failures.
Command used: ALTER TABLE my_db_name.my_table_name CHANGE my_column COMMENT "new comment" (docs)
Long version: I have a data dictionary notebook where I maintain column descriptions that are reused across multiple tables. If I run the notebook directly it successfully populates all my database table and column comments by issuing the above command sequentially for every column across all tables (and the corresponding table description command once).
I'm trying to move this to a per-table call. In the Databricks tasks that populate the tables I have a check to see if the output table exists; if not, it's created, and at the end I call the dictionary notebook (using dbutils.notebook.run("Data Dictionary Creation", 600, {"db": output_db, "update_table": output_table})) to populate the comments for that particular table. If this happens simultaneously for multiple tables, however, the notebook calls now time out, as most of the tasks spend a long time waiting for a client connection with Hive. This is true even though there's only one call of the notebook per table.
Solution Attempts:
I tried many variations of the above command to update all column comments in one call per table, but either it's impossible or my syntax is wrong.
It's unclear to me how to avoid the timeout issues (I've doubled the timeout to 10 minutes and it still fails, while the original notebook takes much less time than that to run across all tables!). I need to wait for completion before continuing to the next task (otherwise I'd spawn it as a separate process).
Update: I think what's happening here is that the above ALTER command is being called in a loop, and when I schedule a job the loop is being distributed and executed in parallel. What I may actually need is a way to call it, or a function containing it, without letting the loop be distributed. Is there a way to force sequential execution for a single function?
In the end I found a solution for this issue.
First, the problem seems to have been that the loop containing the ALTER command was getting parallelized by Spark, which fired multiple (conflicting) commands simultaneously on the same table.
The answer to this was two-fold:
Add a .coalesce(1) to the end of the function I was calling with the ALTER line. This limits the function to sequential execution.
Return a newly created dataframe from the function to avoid coalesce-based errors.
Part 2 seems to have been necessary because the command apparently expects a result back for aggregation. I couldn't find a way to make it work without that (.repartition(1) had the same issue), so in the end I returned spark.createDataFrame([(1, "foo")], ["id", "label"]) from the function and things then worked.
This gets me to my desired end goal of working through all the alter commands without conflict errors.
It's clunky as hell, though; I'd still love improvements or alternative approaches if anyone has one.
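For concreteness, a sketch of the shape this workaround takes. The table, column, and comment names are placeholders, and spark is assumed to be an active SparkSession; this is an illustration of the pattern described above, not a drop-in implementation.

```python
def set_column_comments(spark, table, comments):
    """Apply one ALTER ... CHANGE ... COMMENT per column, sequentially."""
    for column, comment in comments.items():
        spark.sql(
            'ALTER TABLE {} CHANGE {} COMMENT "{}"'.format(table, column, comment)
        )
    # Return a small non-empty DataFrame so the caller can tack .coalesce(1)
    # onto the result, as described above, without coalesce-based errors.
    return spark.createDataFrame([(1, "foo")], ["id", "label"])

# Caller, per the answer:
# set_column_comments(spark, "my_db.my_table", {"col_a": "first"}).coalesce(1)
```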
If you want to change multiple columns at once, why not recreate the table? (This trick will only work if table 'B' is an external table. Here table 'B' is the 'B'ad table with outdated comments, and table 'A' is the good table with good comments.)
Drop table 'B'.
Create the table with the required comments ('A').
If the table is NOT external, then you might want to create a view and start using that instead. This would let you add updated comments without altering the original table's data.
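A hedged sketch of the recreate approach via spark.sql; the table name, location, and column definitions are placeholders. Dropping an external table removes only the metadata, so the files at the location survive.

```python
def recreate_with_comments(spark, table, location, columns):
    """Drop an external table and recreate it with column comments.

    columns: list of (name, type, comment) tuples.
    """
    spark.sql("DROP TABLE IF EXISTS {}".format(table))
    cols = ", ".join(
        "{} {} COMMENT '{}'".format(name, dtype, comment)
        for name, dtype, comment in columns
    )
    # Re-point the new definition at the same external data files.
    spark.sql("CREATE TABLE {} ({}) LOCATION '{}'".format(table, cols, location))
```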
Have you considered using table properties instead of comments?
I am currently working on a Django 2+ project involving a blockchain, and I want to make copies of some of my object's states into that blockchain.
Basically, I have a model (say "contract") that has a list of several "signature" objects.
I want to make a snapshot of that contract, with the signatures. What I am basically doing is taking the contract at some point in time (when it's created for example) and building a JSON from it.
My problem is: I want to update that snapshot anytime a signature is added/updated/deleted, and each time the contract is modified.
The intuitive solution would be to override each "delete", "create", and "update" method of every model involved in that snapshot, and pray that all of them are implemented right and that I didn't forget any. But I think this is not scalable at all, and hard to debug and maintain.
I have thought of a solution that might be more centralized: using a periodical job to get the last update date of my object, compare it to the date of my snapshot, and update the snapshot if necessary.
However with that solution, I can identify changes when objects are modified or created, but not when they are deleted.
So this is my big question mark: how, with Django, can you identify deletions in relationships without any prior context, just by looking at the current database state? Is there a Django module to record deleted objects? What are your thoughts on my issue?
As I understand your problem, you need something like Django's signals framework, which lets you listen for model events such as saves and deletes and, when one occurs (and if all the desired conditions are met), run code in your application (which can even execute commands against the database).
This is the most recent documentation:
https://docs.djangoproject.com/en/3.1/topics/signals/
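A minimal sketch of what a receiver could look like for the contract/signature case in the question. Contract, Signature, the snapshot field, and the JSON-building helper are all assumptions based on the question; the receiver itself is a plain function, and the post_save/post_delete wiring (which requires a configured Django project) is shown in comments so the sketch stays self-contained.

```python
def build_snapshot(contract):
    # Placeholder for the JSON-building the question describes.
    return {
        "contract": contract.pk,
        "signatures": [s.pk for s in contract.signatures.all()],
    }

def refresh_contract_snapshot(sender, instance, **kwargs):
    """Rebuild the contract's snapshot whenever a Signature changes.

    Works for post_save and post_delete alike: both pass the instance,
    and on delete the instance is already absent from the queryset.
    """
    contract = instance.contract
    contract.snapshot = build_snapshot(contract)
    contract.save(update_fields=["snapshot"])

# Wiring (in apps.py ready() or at the bottom of models.py):
# from django.db.models.signals import post_save, post_delete
# post_save.connect(refresh_contract_snapshot, sender=Signature)
# post_delete.connect(refresh_contract_snapshot, sender=Signature)
```

This centralizes the snapshot update in one receiver instead of overriding every model method, which addresses the deletion case the periodic-job idea couldn't catch.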
I'm looking for ideas on how to improve a report that takes up to 30 minutes to process on the server. I'm currently working with Django and MySQL, but if there is a solution that requires changing the language or SQL database, I'm open to it.
The report I'm talking about reads multiple Excel files and inserts all the rows from those files into a table (the report table) of roughly 12K to 15K records; the table has around 50 columns. This part doesn't take much time.
Once I have all the records in the report table, I start applying multiple phases of business logic, so I end up with something like this:
def create_report():
    business_logic_1()
    business_logic_2()
    business_logic_3()
    business_logic_4()
Each business_logic_X function does something very similar: it starts with a ReportModel.objects.all() and then applies multiple calculations (checking dates, quantities, etc.) and updates each record. Since it's a 12K-record table, this quickly adds time to the complete report.
The reason I'm running multiple functions separately rather than doing all the processing in one pass is that the logic from the first function needs to be complete before the logic in the next functions works (e.g. the first function finds all related records and applies the same status to all of them).
The first thing I know could be optimized is somehow caching the objects.all() result instead of calling it in each function, but I'm not sure how to pass it to the next function without saving the records first.
I already optimized the report a bit by using update_fields on the save method of the functions and that saved a bit of time.
My question is, is there a better approach to this kind of problem? Is Django/MySQL the right stack for this?
What takes time is the business logic you're doing in Django: it makes several round trips between the database and the application.
It sounds like several tables are involved, so I would suggest writing your query in raw SQL, and once you have the results, pulling them into the application if you need to.
The ORM has a raw() method you can use, or you could drop down to an even lower level and interface with your database directly.
Unless I see more of what you're doing, I can't give any more specific advice.
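If you'd rather stay in the ORM, one way to cut the round trips is to fetch the rows once, run every business-logic pass over the in-memory objects, and write everything back with a single bulk_update. This is a sketch under assumptions: the pass functions and field names are placeholders, and the manager stands in for ReportModel.objects.

```python
def business_logic_1(rows):
    # Placeholder pass: e.g. propagate a shared status to related rows.
    for row in rows:
        row.status = "validated"

def business_logic_2(rows):
    # Placeholder pass that depends on the previous one having finished.
    for row in rows:
        if row.status == "validated":
            row.quantity = row.quantity or 0

def create_report(manager):
    rows = list(manager.all())  # one SELECT instead of one per pass
    for apply_pass in (business_logic_1, business_logic_2):
        apply_pass(rows)        # in-memory only: no DB traffic here
    # One batched write instead of ~12K individual save() calls.
    manager.bulk_update(rows, ["status", "quantity"], batch_size=1000)
```

With Django's real manager, QuerySet.bulk_update issues a handful of batched UPDATEs rather than one per row, which is usually where most of the time goes.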
I have a test that does the following:
create an index on ES using pyes
insert mapping
insert data
query the data to check if it fits the expected result
However, when I run this test, sometimes it finds no results, and sometimes it finds something but some data is missing. The interesting thing is this only happens when I run the test automatically; if I type in the code line-by-line, everything works as expected. I've tested it 3 times manually and it works without failure.
Sometimes I've even received this message:
NoServerAvailable: list index out of range
It seems like the index was not created at all.
I've pinged my ES address and everything looks correct. There was no other error message found anywhere.
I thought it would be because I was trying to get the data too fast after the insert, as documented here: Elastic Search No server available, list index out of range
(check first comment in the accepted answer)
However, even if I add a delay of 4 seconds or so, the problem still happens. What concerns me is that sometimes only SOME data is missing, which is really strange; I thought it would either find it all or miss it all.
Anyone got similar experience that can shine some light on this issue? Thanks.
Elasticsearch is near-realtime (NRT). It might take up to 1 second for a recently indexed document to become visible/available for search.
To make your recently indexed document available instantly for searching, you can append ?refresh=wait_for at the end of the index document command. For e.g.,
POST index/type?refresh=wait_for
{
"field1":"test",
"field2":2,
"field3":"testing"
}
?refresh=wait_for makes the request wait until a refresh has made the recently indexed document available for search. See the ?refresh documentation for details.
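The same parameter from Python, assuming the official elasticsearch-py client rather than pyes (the index name and document are illustrative); refresh=wait_for is an HTTP-level parameter, so any client that forwards it to the index call behaves the same way.

```python
def index_and_wait(es, index, doc, doc_id=None):
    """Index a document and block until it is visible to search."""
    # refresh="wait_for" means an immediate follow-up query will see the
    # document, so the test no longer needs arbitrary sleep() delays.
    return es.index(index=index, id=doc_id, body=doc, refresh="wait_for")
```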
I have a question about what sort of approach to take for a process I am trying to structure. I'm working with PostgreSQL and Python.
Scenario:
I have two databases A and B.
B is a processed version of A.
Data continually streams into A, which needs to be processed in a certain way (using multiprocessing) and is then stored in B.
Each new row in A needs to be processed only once.
So:
streamofdata ===> [database A] ----> process ----> [database B]
Database A is fairly large (40 GB) and growing. My question is about determining which data is new, i.e. not yet processed and put into B. What is the best way to determine which rows still have to be processed?
Matching primary keys each time against what has not yet been processed is not the way to go, I'm guessing.
So let's say new rows 120 to 130 come into database A over some time period, and my last processed row was 119. Is it a correct approach to look at the last processed row id (the primary key), 119, and say that anything beyond it should now be processed?
I'm also wondering whether anyone has further resources on this sort of 'realtime' data processing. I'm not exactly sure what I'm looking for, technically speaking.
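To make the high-water-mark idea concrete, here is a sketch using psycopg2-style cursor calls; the table and column names are placeholders. One caveat: with concurrent writers, rows can become visible out of id order (ids are assigned at INSERT, but commits land later), so a strict id > last_id scan can occasionally skip rows.

```python
def fetch_unprocessed(cur, last_id, batch_size=1000):
    """Return rows of table_a with id greater than last_id, oldest first."""
    cur.execute(
        "SELECT id, payload FROM table_a "
        "WHERE id > %s ORDER BY id LIMIT %s",
        (last_id, batch_size),
    )
    return cur.fetchall()

def process_new_rows(cur, last_id, handle_row):
    """Process one batch and return the new high-water mark.

    Persist last_id somewhere durable (e.g. in database B) so a crash
    neither reprocesses nor skips rows.
    """
    for row_id, payload in fetch_unprocessed(cur, last_id):
        handle_row(payload)
        last_id = row_id
    return last_id
```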
Well, there are a few ways you could handle this problem. As a reminder, the process you are describing is basically re-implementing a form of database replication, so you may want to familiarize yourself with the various popular replication options out there for Postgres and how they work, particularly Slony might be of interest to you. You didn't specify what sort of database "database B" is, so I'll assume it's a separate PostgreSQL instance, though that assumption won't change a whole lot about the decisions below other than ruling out some canned solutions like Slony.
1. Set up a FOR EACH ROW trigger on the important table(s) in database A that need to be replicated. Your trigger would take each new row INSERTed (and/or UPDATEd or DELETEd, if you need to catch those) in those tables and send it off to database B appropriately. You mentioned using Python, so as a reminder, you can certainly write these trigger functions in PL/Python if that makes life easier for you, i.e. you should hopefully be able to more-or-less easily tweak your existing code so that it runs inside the database as a PL/Python trigger function.
2. If you read up on Slony, you might have noticed that proposal #1 is very similar to how Slony works. Consider whether it would be easy or helpful to have Slony take over the replication of the necessary tables from database A to database B; if you then need to further move/transform the data into other tables inside database B, you could do that with triggers on those tables in database B.
3. Set up a trigger or RULE which sends out a NOTIFY with a payload indicating the row that has changed. Your code will LISTEN for these notifications and know immediately which rows have changed. The psycopg2 adapter has good support for LISTEN and NOTIFY. N.B. you will need to exercise some care to handle the case where your listener code has crashed, gets disconnected from the database, or otherwise misses some notifications.
4. If you have control over the code streaming data into database A, you could have that code take over the job of replicating its new data into database B.
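A small sketch of the LISTEN/NOTIFY option with psycopg2. The channel name and connection setup are placeholders, and the trigger on database A is assumed to send NOTIFY table_a_changes with the row id as payload; the helper only drains pending notifications, with the live wiring shown in comments.

```python
def drain_notifications(conn):
    """Return the payloads of all NOTIFY messages pending on conn."""
    conn.poll()  # pull any queued notifications off the socket
    payloads = []
    while conn.notifies:
        payloads.append(conn.notifies.pop(0).payload)
    return payloads

# Typical listener setup with psycopg2 (LISTEN needs autocommit):
# import select, psycopg2, psycopg2.extensions
# conn = psycopg2.connect(dsn)
# conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
# conn.cursor().execute("LISTEN table_a_changes;")
# while True:
#     if select.select([conn], [], [], 5) != ([], [], []):
#         for row_id in drain_notifications(conn):
#             ...  # fetch that row from A, process it, write to database B
```

As noted above, pair this with a catch-up scan on startup so notifications missed while the listener was down are not lost.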