How to programmatically sync the Anki flashcard database with a local file? - python

I would like to have a script run by cron, or an Anki background job, that automatically reads in a file (e.g. CSV, TSV) containing all my flashcards and updates the flashcard database in Anki, so that I don't have to manually import my flashcards 1000 times a week.
Does anyone have any ideas how this can be achieved?
Some interesting links I've come across, including from answers, that might provide a lead towards solutions:
https://github.com/langfield/ki
https://github.com/patarapolw/ankisync
https://github.com/towercity/anki-cli

The most robust approach so far is to keep your collection under git and use ki to make Anki behave like a remote repository, so it's very easy to synchronise. The only constraint is the format of your collection: each card is kept as a single file, and there is no real way around this.

I'm the maintainer of ki, one of the tools you linked! I really appreciate the shout-out, @BlackBeans.
It's hard to give you perfect advice without more details about your workflow, but it sounds to me like you've got the source-of-truth for your notes in tabular files, and you import these files into Anki when you've made edits or added new notes.
If this is the case, ki may be what you're looking for. As @BlackBeans mentioned, this tool allows you to convert Anki notes into markdown files, and more generally, handles moving your data from your collection to a git repository, and back.
Basically, if the reason why you've got stuff in tabular files is (1) because you want to version it, (2) because you want to use an external editor, or (3) because your content is generated programmatically, then you might gain some use from using ki.
Feel free to open an issue describing your use case in detail. I'd love to give you some support in figuring out a good workflow if you think it would be helpful. I am in need of more user feedback at the moment, so you'd be helping me out, too!
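For later readers: assuming the notes repository has already been set up with ki, a cron-able sync step could look roughly like the sketch below. The ki pull / ki push command names come from the project's README, and the repository path and commit message are placeholders, so treat this as an outline rather than a drop-in script.

# Rough sketch of a cron-able sync step built around ki (https://github.com/langfield/ki).
import subprocess
from pathlib import Path

REPO = Path.home() / "anki-notes"  # placeholder: a repository created earlier with `ki clone`

def run(*args):
    # Run a command inside the ki repository and fail loudly on errors.
    subprocess.run(args, cwd=REPO, check=True)

run("ki", "pull")   # bring recent changes from the Anki collection into the git repo
# ... regenerate or edit the note files here, e.g. from your CSV/TSV source of truth ...
run("git", "add", "-A")
run("git", "commit", "--allow-empty", "-m", "automated note update")
run("ki", "push")   # write the committed changes back into the Anki collection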

Related

How to programmatically add a new contact to the iOS device from a PC/Mac

I need a way to programmatically add a new contact to iPhone from a computer.
Preferably using Python.
Long story short, I have contacts extracted from my old phone, and I want to transfer them to my new iPhone.
A solution that comes to my mind is to fill an addressbook supported by iTunes and then synchronize it with iOS.
But I would like a clean, easy, direct approach, rather than mixing this and that only to find out it doesn't work because of some idiotic problem that popped up.
Is there any protocol specification for doing that, or a way to link to iTunes DLLs or dynlibs?
Or perhaps a tweak using Apple Script on Mac.
I solved my personal problem by constructing a vCard file containing all the contacts from my old phone, then transferring that file to my iPhone manually; it happily offered to import all the contacts from it into its Contacts app.
The vCard specification can be found on Wikipedia. It is a nice file format. I used version 2.1 and it worked.
I was afraid that the iPhone would accept only the XML vCard version, but it worked with 2.1.
That's what I really meant in my comment. I wasn't sure which vCard version iOS uses.
It was extremely easy as my old phone gave me all contacts as a bunch of individual vCard files. So I only needed a simple little program to combine them into one file.
Thanks to apraetor for a comment that gave me that idea.
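For anyone who wants to reproduce that "combine the vCards" step, here is a minimal sketch; directory and file names are placeholders, and since each .vcf file already contains its own BEGIN:VCARD/END:VCARD block, simple concatenation is enough for vCard 2.1:

# Sketch: merge a folder of individual .vcf files into one file that can be sent to the iPhone.
from pathlib import Path

def merge_vcards(src_dir, out_file):
    # Each .vcf is plain text with its own BEGIN/END markers, so we just join them.
    parts = [vcf.read_bytes().strip() for vcf in sorted(Path(src_dir).glob("*.vcf"))]
    Path(out_file).write_bytes(b"\r\n".join(parts) + b"\r\n")

merge_vcards("old_phone_contacts", "all_contacts.vcf")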
But the way I wanted it done (as I explained in the question) is possible by using LibIMobileDevice library available at:
http://www.libimobiledevice.org/
I didn't have time to experiment but it should work. And anyone searching for a way to do what I wanted should know about this excellent library.
I'd like someone's answer with details on how it is done, and the code (for others' benefit), as I won't be playing with this for some time; I am currently very busy.
I suspect it is something like:
Get all contacts from iPhone
Add new ones or delete old ones
Synchronize back
as I didn't see anything there hinting at direct contact manipulation.

Need advice on writing a document control software with Python and MySQL

I'm looking for open-ended advice on the best approach to rewrite a simple document control app I developed, which is really just a custom file log generator that looks for and logs files that have a certain naming format and file location. E.g., we name all our Change Orders with the format "CO#3 brief description.docx". When they're issued, they get moved to an "issued" folder under another folder that has the project name. So, by logging the file and querying its path, we can tell which project it's associated with and whether it's been issued.
I wrote it with Python 3.3. Works well, but the code's tough to support because I'm building the reports while walking the file structure, which can get pretty messy. I'm thinking it would be better to build a DB of most/all of the files first and then query the DB with SQL to build the reports.
Sorry for the open-ended question, but I'm hoping not to reinvent the wheel. Anyone have any advice as to going down this road? E.g., existing apps I should look at or bundles that might help? I have lots of C/C++ coding experience but am still new to Python and MySQL. Any advice would be greatly appreciated.
Really nice answer by @GCord. I'd add just two bits:
If it's a relatively small database, consider sqlite3 instead of MySQL (it is nicely supported out of the box, multiplatform, and has no dependency on a running RDBMS); see the sketch below.
If it's expected to grow, and/or you just want to play with some new technology, try writing automated ingestion scripts for a real document management system (e.g., http://www.alfresco.com/). I'd recommend Apache Solr (based on Apache Lucene) as a full-text indexing service, and then you could use Apache Tika to automatically extract text and metadata from your documents (see http://wiki.apache.org/solr/ExtractingRequestHandler).
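To make the sqlite3 suggestion concrete, here is a minimal sketch under stated assumptions: the table layout, root path, and the "issued"-folder test are illustrative; only the "CO#" naming convention comes from the question.

# Sketch: walk the project tree once, index files into SQLite, then answer
# report questions with SQL instead of re-walking directories for every report.
import os
import sqlite3

ROOT = "/path/to/projects"  # illustrative root folder

conn = sqlite3.connect("doclog.sqlite")
conn.execute(
    "CREATE TABLE IF NOT EXISTS files ("
    " path TEXT PRIMARY KEY, name TEXT, project TEXT, issued INTEGER, mtime REAL)"
)

for dirpath, _dirs, names in os.walk(ROOT):
    for name in names:
        full = os.path.join(dirpath, name)
        project = os.path.relpath(full, ROOT).split(os.sep)[0]  # top-level project folder
        conn.execute(
            "INSERT OR REPLACE INTO files VALUES (?, ?, ?, ?, ?)",
            (full, name, project, int("issued" in dirpath.lower()), os.path.getmtime(full)),
        )
conn.commit()

# Example report: issued change orders per project
for row in conn.execute(
    "SELECT project, COUNT(*) FROM files"
    " WHERE name LIKE 'CO#%' AND issued = 1 GROUP BY project"
):
    print(row)

The report then becomes a short SQL query rather than another directory walk, which was the maintainability problem described in the question.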
Firstly, if it works well as you suggest, then why fix it?
Secondly, before doing any changes to your code I would ask myself the following questions:
What are the improvements/new requirements I want to implement that I can't easily do with the current structure?
Do I have a test suite of the current solution, so that I can regression-test any refactoring? When re-implementing something it is easy to overlook some specific behaviors which are not very well documented but that you/users rely on.
Do those improvements warrant an SQL database? For instance:
Do you need to often run reports out of an SQL database without walking the directory structure?
Is there a problem with walking the directories?
Do you have network or performance issues?
Are you facing an increase in usage?
When implementing an SQL solution, you will need a new task to update the SQL data. If I understand correctly, the reports are currently generated on-the-fly, and therefore are always up-to-date. That won't be the case with SQL reports, so you need to make sure they are up-to-date too. How frequently will you update the SQL database:
a) In real time? That will necessitate a background service. That could be an operational hassle.
b) On-demand? Then what would be the difference with the current solution?
c) At scheduled times? Then your data may be not up-to-date between the updates.
I don't have any packages or technical approaches to recommend to you; I just thought I'd give you this general software management advice.
In any case, I also have extensive C++, Python, and SQL experience, and I would just stick to Python on this one.
On the SQL side, why stick to traditional SQL engines? Why not MongoDB, for instance, which would be well suited to storing structured data such as file information?
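If you did go the MongoDB route, the natural mapping is one document per file; a minimal pymongo sketch, assuming a local mongod and with all names illustrative:

# Sketch: upsert one MongoDB document per file, then query the collection for reports.
import os
from pymongo import MongoClient

files = MongoClient("mongodb://localhost:27017")["doclog"]["files"]

for dirpath, _dirs, names in os.walk("/path/to/projects"):
    for name in names:
        full = os.path.join(dirpath, name)
        files.replace_one(
            {"path": full},
            {"path": full, "name": name,
             "issued": "issued" in dirpath.lower(),
             "mtime": os.path.getmtime(full)},
            upsert=True,
        )

# Example report: all issued change orders
for doc in files.find({"name": {"$regex": "^CO#"}, "issued": True}):
    print(doc["path"])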

What is the best way to track / record the current programming project you work on

I have been struggling with this problem for a long time and I want to know how it's done in real / big company projects.
Suppose I have a project to build a website. I divide the project into sub-tasks and work through them.
Say I have task1 in hand, like exporting the page to PDF. I spend 3 days on it, come across various problems and many Stack Overflow questions, and in the end I solve it.
Then, 4 months later, someone tells me there is an error in the code.
By that point I have largely (maybe 60%) forgotten how I did it and why I did it that way. I document the code, but I can't write the whole story of it in the code.
So I have to spend a lot of time on the code to work out what the original problem was and why I added this line, etc.
I want to know whether there is any way to log the steps taken while completing a project,
so that I can see how I ended up with the code, what errors I got, what questions I asked on SO, and so on.
How do people do this in real projects? Which software do they use?
I know our project management software, JIRA, has tasks, but that does not cover what steps I took to solve those tasks.
What is the best way so that when I look back at my 2-year-old project, I know how I solved a particular task?
If you are already using JIRA consider integrating it with your SCM.
When committing your changes to SCM refer to your JIRA issue number in comments. Like the following:
PORTAL-778 fixed the alignment issue with PDF exports
JIRA periodically connects to your SCM and parses the comments. You can easily find out changes made for a particular issue.
Please see the following link for more information
Integrating JIRA with Subversion
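If you want the "issue key in every commit message" convention to stick, a tiny commit-msg hook can enforce it. A sketch, saved as .git/hooks/commit-msg and made executable (the PORTAL-778 key above is just the example; the regex simply looks for any JIRA-style key):

#!/usr/bin/env python
# Sketch of a commit-msg hook: reject commits whose message has no JIRA-style key.
import re
import sys

with open(sys.argv[1]) as fh:  # git passes the path of the commit message file
    message = fh.read()

if not re.search(r"\b[A-Z][A-Z0-9]+-\d+\b", message):
    sys.stderr.write("Commit message must reference a JIRA issue, e.g. PORTAL-778\n")
    sys.exit(1)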
Every time you revisit code, make a list of the information you are not finding. Then the next time you create code, make sure that information is present. It can be in comments, Wiki, bugs or even text notes in a separate file. Make the notes useful for other people, so private notebooks aren't a good idea except for personal notes.

Using Excel to work with large amounts of output data: is an Excel-database interaction the right solution for the problem?

I have a situation where various analysis programs output large amounts of data, but I may only need to manipulate or access certain parts of the data in a particular Excel workbook.
The numbers might often change as well as newer analyses are run, and I'd like these changes to be reflected in Excel in as automated a manner as possible. Another important consideration is that I'm using Python to process some of the data too, so putting the data somewhere where it's easy for Python and Excel to access would be very beneficial.
I know only a little about databases, but I'm wondering if using one would be a good solution for my needs - Excel has database interaction capability as far as I'm aware, as does Python. The devil is in the details of course, so I need some help figuring out what system I'd actually set up.
From what I've read so far (in the last hour), here's the simple plan I've come up with:
1) Set up an SQLite managed database. Why SQLite? Well, I don't need a database that can manage large volumes of concurrent accesses, but I do need something that is simple to set up, easy to maintain and good enough for use by 3-4 people at most. I can also use the SQLite Administrator to help design the database files.
2 a) Use ODBC/ADO.NET (I have yet to figure out the difference between the two) to help Excel access the database. This is going to be the trickiest part, I think.
2 b) Python already has the built in sqlite3 module, so no worries with the interface there. I can use it to set up the output data into an SQLite managed database as well!
Putting down some concrete questions:
1) Is a server-less database a good solution for managing my data given my access requirements? If not, I'd appreciate alternative suggestions. Suggested reading? Things worth looking at?
2) Excel-SQLite interaction: I could do with some help fleshing out the details there... ODBC or ADO.NET? Pointers to some good tutorials? etc.
3) Last, but not least, and definitely of concern: will it be easy enough to teach a non-programmer how to setup spreadsheets using queries to the database (assuming they're willing to put in some time with familiarization, but not very much)?
I think that about covers it for now, thank you for your time!
Although you could certainly use a database to do what you're asking, I'm not sure you really want to add that complexity; I don't see much benefit in adding a database to your mix. If you were also pulling data from a database, then it would make more sense to add some tables for this and use it.
From what I currently understand of your requirements, since you're using python anyway, you could do your preprocessing in python, then just dump out the processed/augmented values into other csv files for Excel to import. For a more automated solution, you could even write the results directly to the spreadsheets from Python using something like xlwt.
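To make that last suggestion concrete, here is a minimal xlwt sketch; the sheet and file names are placeholders, and note that xlwt writes legacy .xls workbooks (for .xlsx you would reach for something like openpyxl instead):

# Sketch: write processed results straight into a spreadsheet with xlwt.
import xlwt

results = [("case_1", 3.14), ("case_2", 2.72)]  # stand-in for your processed analysis data

book = xlwt.Workbook()
sheet = book.add_sheet("results")
sheet.write(0, 0, "case")
sheet.write(0, 1, "value")
for row, (case, value) in enumerate(results, start=1):
    sheet.write(row, 0, case)
    sheet.write(row, 1, value)
book.save("analysis_output.xls")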

How close is Python to being able to wrap it in a workbook type skin?

With my luck this question will be closed too quickly. I see a tremendous possibility for a Python application that is basically like a workbook. Imagine, if you will, that instead of writing code you select from a menu of choices. For example, the File menu would have an open command that lets the user navigate to a file, a directory of files, or a webpage, even a list of web pages, and specify those as the things that will be the base for the next actions.
Then you have a find menu. The menu would allow easy access to the various parsing tools, regular expression and string tools so you can specify the thing you want to find within the files.
Another menu item could allow you to create queries to interact with database objects.
I could go on and on. As the language becomes higher level, these types of features become easier to implement. There is a tremendous advantage to developing something like this. How much time is spent reinventing the wheel for mundane tasks? Programmers have functions that they have built to do many mundane tasks, but what about democratizing the power offered by a tool like Python?
I have people in my office all the time asking how to solve problems that seem intractable to them, but when I show them how, with a few lines of code, their problem is solvable except for the edge cases, they become amazed. I deflect their gratitude with the observation that it is not really that hard, except for being able to construct the right Google search to identify the right package or library to solve the problem. There is nothing amazing about my ability to use lxml and sets to pull all the bolded sections from a collection of, say, 12,000 documents and compare across time and across unique identifiers in the collection how those bolded sections have evolved/changed or converged. The amazing piece is that someone wrote the libraries to do these things.
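For the curious, the lxml-plus-sets trick mentioned above boils down to something like this sketch; the folder path and the XPath are illustrative, and real documents would need more care:

# Sketch: collect the text of every bolded element from a folder of HTML files.
from pathlib import Path
from lxml import html

bolded = {}
for doc in Path("documents").glob("*.html"):
    tree = html.parse(str(doc))
    bolded[doc.name] = {t.strip() for t in tree.xpath("//b//text() | //strong//text()") if t.strip()}

# Plain set operations then show how the bolded sections overlap or diverge, e.g.:
# shared = bolded["report_2010.html"] & bolded["report_2011.html"]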
What is the advantage to the community of something like this? Imagine, if you will, an interface that looks like a workbook but interacts with an app store. So if you want to pull something from an HTML file, you go to the app store and buy a plug-in that handles the work. If the workbook is built robustly enough, it could be licensed to a machine, and the 'apps' would be tied to a particular workbook.
Just imagine the creativity that could be unleashed by users if they could get over the feeling that access to this power is difficult. You guys may not see this but I see Python being so close to being able to port to something like a workbook framework. Weren't the early spreadsheet programs nothing more than a frame around some Fortran libraries that had been ported to C?
Comments? Or is there such an application and I have just not found it?
There are Python application that are based on generating code -- the most amazing one probably Resolver One, which focuses on spreadsheets (and hinges on IronPython). With that exception, however, interacting based on the UI paradigm you have in mind (pick one of this, one of that, etc) tends to be pretty limited in the gamut of choices it offers to let the user generate the exact application they need -- there's just so much more you can say by writing even a little script, than what you can say by point-and-grunt.
That being said, Python would surely be a great choice both to implement such an app and as the language to generate... if and when you have a UI sketch that looks like it can actually allow non-programmers to specify a large-enough spectrum of apps in a broad-enough domain!-). Spreadsheets have proven themselves in this sense, but I don't know of other niches or approaches that have actually done so -- do you?
Your idea kinda reminded me of something I stumbled across months ago: http://www.ailab.si/orange/
Is your concept very similar to Microsoft Access? Generally programmers tend not to write such programs because they produce such horrible code that the authors themselves would never want to use their program.
