I keep getting Err:508 or #NAME? (Err:525) when opening a spreadsheet created with odfpy and putting a formula in a cell with the following code:
tc = TableCell( valuetype="string", formula=calc, value=0 )
In the spreadsheet, the formula looks fine. If I edit it and then revert the edit, so there is no net change to the formula, LibreOffice thinks it has changed, re-evaluates it, and it works fine. But not until it is tweaked like that. What am I missing?
Here's the formula I'm using, in case that is relevant:
=TEXT(NOW()+0*LEN(CONCAT(A2:F999));"YYYY.MM.DD.HH.MM.SS")
(Its purpose: to timestamp the most recent edit to a range of cells.) I note that at the time the formula is inserted in row 1, the other rows haven't been inserted yet, though a few are added in subsequent steps. But I wouldn't think any attempt to evaluate the range would occur until the file is loaded into LibreOffice, so that doesn't seem a likely cause of the error.
I'm already using ; rather than , between the function parameters, which seems to be the most common fix for other people who encounter this error, and I'm using the English install, which is the other issue some have when copy/pasting formulas. But still no joy, and not much that is relevant shows up in searching.
Well, this is weird, but probably documented somewhere.
The most helpful thing I was able to find by searching was "Solution 4" at this link. That comment was not easy to find, because it was very generic and didn't turn up in searches for "formula", but it provided a debugging technique that let me compare the formula that was initially inserted, as in my question, with what was there after the editing "tweak". It turns out that CONCAT has to be prefixed with COM.MICROSOFT. Not at all obvious on a first formula-insertion attempt.
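A small helper along these lines can apply the prefix before the formula string is handed to odfpy (namespace_formula is a hypothetical name, not part of odfpy; only the COM.MICROSOFT. prefix itself comes from the fix above):

```python
# Hypothetical helper: LibreOffice stores the Microsoft extension function
# CONCAT as COM.MICROSOFT.CONCAT in the ODF file format, so apply the
# prefix before passing the formula string to odfpy's TableCell.
def namespace_formula(formula):
    return formula.replace("CONCAT(", "COM.MICROSOFT.CONCAT(")

calc = '=TEXT(NOW()+0*LEN(CONCAT(A2:F999));"YYYY.MM.DD.HH.MM.SS")'
fixed = namespace_formula(calc)
# fixed now contains ...LEN(COM.MICROSOFT.CONCAT(A2:F999))...
```

The corrected string would then go into the original call, e.g. TableCell(valuetype="string", formula=fixed, value=0).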
I'm very new to the whole programming thing and I've run into a pretty annoying problem.
This is a small slice of some code I had to write for school. It keeps telling me that the variables I'm trying to put into the {} brackets are not columns, when they shouldn't even be seen as columns in the first place. They are variables linked to some buttons and input fields via Tkinter; I just want their values to be placed into the columns.
cursor.execute("INSERT INTO reviews(name,messae,date,time,location,messaeg_id) VALUES ({},{},{},{},{},{})".format(naam, review, datumm, tijdd, locatie, review_id))
I've tried quite a few things, like putting '' and "" quotes basically everywhere you could think of (I think), and tried it with only one column and value, which just gave me a different error:
(psycopg2.errors.SyntaxError: syntax error at or near "VALUE")
If I kept it as VALUES instead of VALUE, it gave me the same error as the main one this question is about.
I also tried some solutions I found on this site, but since I'm still very new, I found it hard to adapt them to my code, hence why I've resorted to asking a question myself for once lol.
BTW it's pretty late here at the time of writing this and English is not my first language so please excuse any grammatical errors and what not lol
(For a better overview I will link my code, so that there is not too much space used unnecessarily)
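Not from the question itself, but the standard DB-API practice here is to let the driver bind the values instead of building the SQL with str.format; psycopg2 uses %s placeholders with a separate tuple of parameters. A minimal runnable analogue with the stdlib sqlite3 module (which uses ? placeholders; table and sample values are made up) looks like:

```python
import sqlite3

# In-memory database standing in for the real Postgres one.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE reviews(name, message, date, time, location, message_id)")

naam, review, datumm, tijdd, locatie, review_id = (
    "Piet", "great food", "2023-05-01", "18:30", "Utrecht", 1)

# Placeholders let the driver quote/escape each value; with psycopg2 the
# placeholder is %s instead of ?, but the call has the same shape:
#   cursor.execute("INSERT INTO reviews(...) VALUES (%s,%s,%s,%s,%s,%s)",
#                  (naam, review, datumm, tijdd, locatie, review_id))
cur.execute(
    "INSERT INTO reviews(name, message, date, time, location, message_id) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    (naam, review, datumm, tijdd, locatie, review_id),
)
conn.commit()
```

Passing the values as a tuple also closes the SQL-injection hole that string formatting opens.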
Summary:
I am currently facing a weird issue using a (partly modified) ANTLR4 grammar for C. It raises errors when encountering type/function definitions such as int main() or int x. I am confused why that is (partly due to my lack of experience or knowledge), since the rules don't seem to contain an issue.
Still, when running the Python-generated code, it logs an error saying:
extraneous input 'int' expecting {'__extension__', '__builtin_va_arg', '__builtin_offsetof', '_Generic', '(', Identifier, Constant, StringLiteral}
Debugging the code, I found that the entire declaration is classified as a primaryExpression, even though it should be an assignmentExpression. So there might be an issue inside the grammar file causing it to be identified incorrectly, or my file utilising the generated code contains a weird bug causing this to happen.
If anyone has a clue what it might be or what I could try to fix the issue, I would greatly appreciate that ^^
Edit: Additional Info
Here's the base version: link. The changes in my version are minimal; I only added a new type and specifier, which should not interfere with the lexing and correct identification. (The changes can be viewed here: link)
I found my issue, and it came down to my misunderstanding of how ANTLR4 deals with this and how the grammar file is structured. (I realised that after reading @sepp2k's comment.)
I was calling the wrong rule, primaryExpression. That rule is not the actual entry rule, so calling it makes ANTLR4 apply only part of the grammar, parse the entire string wrongly and recognise nothing. Once I switched to compilationUnit, the actual entry rule, everything parsed fine.
(Also I realised that my question was rather loosely formed and didn't contain enough information, so sorry about that, but luckily I found the issue rather quickly after realising what was actually going on)
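For reference, the usual driver for an ANTLR4-generated Python parser looks roughly like the sketch below; CLexer/CParser stand in for whatever class names your grammar generates, and the snippet assumes those generated files are importable:

```python
from antlr4 import InputStream, CommonTokenStream
from CLexer import CLexer      # generated by antlr4; name depends on the grammar
from CParser import CParser

tokens = CommonTokenStream(CLexer(InputStream("int main() { return 0; }")))
parser = CParser(tokens)
tree = parser.compilationUnit()  # call the entry rule, not primaryExpression
```

Calling any inner rule such as primaryExpression only applies that rule (and whatever it reaches), which is why the earlier attempt reported extraneous input.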
I have a test that does the following:
create an index on ES using pyes
insert mapping
insert data
query the data to check if it fits the expected result
However, when I run this test, sometimes it finds no result, and sometimes it finds something but some data is missing. The interesting thing is this only happens when I run the test automatically. If I type in the code line-by-line, everything works as expected. I've tested it 3 times manually and it works without failure.
Sometimes I've even received this message:
NoServerAvailable: list index out of range
It seems like the index was not created at all
I've pinged my ES address and everything looks correct. There was no other error msg found anywhere.
I thought it would be because I was trying to get the data too fast after the insert, as documented here: Elastic Search No server available, list index out of range
(check first comment in the accepted answer)
However, even if I make it delay for 4 seconds or so, this problem still happens. What concerns me is the fact that sometimes only SOME data is missing, which is really strange. I thought it would be either finding it all or missing it all.
Anyone got similar experience that can shine some light on this issue? Thanks.
Elasticsearch is near real-time (NRT): it can take up to one second for a recently indexed document to become visible/available for search.
To make a recently indexed document available for searching immediately, you can append ?refresh=wait_for to the index-document command. For example:
POST index/type?refresh=wait_for
{
"field1":"test",
"field2":2,
"field3":"testing"
}
?refresh=wait_for makes the request wait until a refresh has made the change visible before returning, so the document is searchable as soon as the call completes. See the documentation for ?refresh.
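If you are indexing from Python, the official elasticsearch-py client exposes the same option as a refresh argument; a sketch (index and field names made up, version 8-style client API and a server on localhost assumed):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
es.index(
    index="my_index",
    document={"field1": "test", "field2": 2, "field3": "testing"},
    refresh="wait_for",  # do not return until the doc is visible to search
)
# A search issued after this call returns will see the document.
```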
I often need to create two versions of an ipython notebook: One contains tasks to be carried out (usually including some python code and output), the other contains the same text plus solutions. Let's call them the assignment and the solution.
It is easy to generate the solution document first, then strip the answers to generate the assignment (or vice versa). But if I subsequently need to make changes (and I always do), I need to repeat the stripping process. Is there a reasonable workflow that will allow changes in the assignment to be propagated to the solutions document?
Partial self-answer: I have experimented with leveraging mercurial's hg copy, which will let two files with different names share history. But I can only get this to work if assignment and solution are in different directories, in two linked hg repositories. I would much prefer a simpler set-up. I've also noticed that diff gets very confused when one JSON file has more sections than another, making a VCS-based solution even less attractive. (To be clear: Ordinary use of a VCS with notebooks is fine; it's the parallel versions that stumble).
This question covers similar ground, but does not solve my problem. In fact an answer to my question would solve the OP's second remaining problem, "pulling changes" (see the Update section).
It sounds like you are maintaining an assignment and an answer key of some kind and want to be able to distribute the assignments (without solutions) to students, and still have the answers for yourself or a TA.
For something like this, I would create two branches "unsolved" and "solved". First write the questions on the "unsolved" branch. Then create the "solved" branch from there and add the solutions. If you ever need to update a question, update back to the "unsolved" branch, make the update and merge the change into "solved" and fix the solution.
You could try going the other way, but my hunch is that going "backwards" from solved to unsolved might be strange to maintain.
After some experimentation I concluded that it is best to tackle this by processing the notebook's JSON code. Version control systems are not the right approach, for the following reasons:
JSON doesn't diff very well when adding or deleting cells. A minimal change leads to mis-matched braces and a very messy diff.
In my use case, the superset version of the file (containing both the assignments and their solutions) must be the source document. This is because the assignment includes example code and output that depends on earlier parts, to be written by the students. This model does not play well with version control, as pointed out by @ChrisPhillips in his answer.
I ended up filtering the JSON structure for the notebook and stripping out the solution cells; they may be recognized via special metadata (which can be set interactively using the metadata button in the interface), or by pattern-matching on the cell contents. The following snippet shows how to filter out cells whose first line starts with # SOLUTION:
import json
import re

def stripcell(cell, pattern):
    """Check if the first line of the cell's content matches `pattern`."""
    if cell["cell_type"] == "code":
        content = cell["input"]
    else:
        content = cell["source"]
    return len(content) > 0 and re.search(pattern, content[0])

pattern = r"^# SOLUTION:"

with open("input.ipynb") as f:
    struct = json.load(f)
cells = struct["worksheets"][0]["cells"]
struct["worksheets"][0]["cells"] = [c for c in cells if not stripcell(c, pattern)]
with open("output.ipynb", "w") as f:  # text mode: json.dump writes str, not bytes
    json.dump(struct, f, indent=1)
I used the generic json library rather than the notebook API. If there's a better way to go about it, please let me know.
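As a quick sanity check, the same filtering can be exercised on an in-memory structure; the following self-contained sketch uses the v3 "worksheets" layout with made-up cells:

```python
import re

def stripcell(cell, pattern):
    """Truthy when the first line of the cell's content matches `pattern`."""
    content = cell["input"] if cell["cell_type"] == "code" else cell["source"]
    return len(content) > 0 and re.search(pattern, content[0])

pattern = r"^# SOLUTION:"
# Minimal made-up notebook: one exercise cell and one solution cell.
struct = {"worksheets": [{"cells": [
    {"cell_type": "markdown", "source": ["Exercise 1: implement f(x)."]},
    {"cell_type": "code",
     "input": ["# SOLUTION:\n", "def f(x):\n", "    return x + 1\n"]},
]}]}
cells = struct["worksheets"][0]["cells"]
struct["worksheets"][0]["cells"] = [c for c in cells if not stripcell(c, pattern)]
# Only the markdown exercise cell remains.
```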
Edit 2:
Solved, see my answer waaaaaaay below.
Edit:
After banging my head a few times, I almost did it.
Here's my (not cleaned up, you can tell I was troubleshooting a bunch of stuff) code:
http://pastebin.com/ve4Qkj2K
And here's the problem: it works sometimes and other times not so much. For example, it will work perfectly with some files; for others it leaves one of the longest codes instead of the shortest one; and for others it deletes maybe 2 out of 5 duplicates, leaving 3 behind. If it just behaved consistently, I might be able to fix it, but I don't understand the seemingly random behavior. Any ideas?
Original Post:
Just so you know, I'm just beginning with python, and I'm using python 3.3
So here's my problem:
Let's say I have a folder with about 5,000 files in it. Some of these files have very similar names but different contents and possibly different extensions. After a readable name there is a code, always with a "(" or a "[" (no quotes) before it. The name and code are separated by a space. For example:
something (TZA).blah
something [TZZ].another
hello (YTYRRFEW).extension
something (YJTR).another_ext
I'm trying to keep only one of the "something" files and delete the others. Another fact which may be important is that there is usually more than one code, such as "something (THTG) (FTGRR) [GTGEES!#!].yet_another_random_extension", all separated by spaces. Although it doesn't matter 100%, it would be best to keep the one with the fewest codes.
I made some (very very short) code to get a list of all files:
import glob
files = glob.glob("*")
but after this I'm pretty much lost. Any help would be appreciated, even if it's just pointing me in the right direction!
I would suggest creating a separate array of bare file names and checking, for each element, whether it occurs anywhere else in the array, i.e. against all indices except the one currently being checked in the loop iteration.
The
if str_fragment in name
condition simply tests whether a string fragment occurs anywhere in a string-type name. It can be useful here as well.
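A minimal sketch of the grouping approach, assuming the readable name is everything before the first space followed by "(" or "[" (function names are made up for illustration):

```python
import re
from collections import defaultdict

# One code looks like "(TZA)" or "[GTGEES!#!]".
CODE_RE = re.compile(r"[\(\[][^\)\]]*[\)\]]")

def base_name(filename):
    """Readable name: everything before the first ' (' or ' [' (sans extension)."""
    stem = filename.rsplit(".", 1)[0]
    match = re.search(r" [\(\[]", stem)
    return stem[:match.start()] if match else stem

def pick_keepers(filenames):
    """Group files by readable name; in each group keep the file with the
    fewest codes (the first one wins a tie). Everything else is a duplicate."""
    groups = defaultdict(list)
    for name in filenames:
        groups[base_name(name)].append(name)
    return {min(names, key=lambda n: len(CODE_RE.findall(n)))
            for names in groups.values()}

files = ["something (TZA).blah",
         "something [TZZ].another",
         "hello (YTYRRFEW).extension",
         "something (YJTR).another_ext"]
keep = pick_keepers(files)
delete = [f for f in files if f not in keep]  # candidates for os.remove()
```

The actual deletion would then be a loop of os.remove() over `delete`, ideally after printing the list for a manual check.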
I got it! The version I ended up with works (99%) perfectly. Although it needs to make multiple passes, reading, analyzing, and deleting over 2 thousand files took about 2 seconds on my pitiful, slow notebook. My final version is here:
http://pastebin.com/i7SE1mh6
The only tiny bug is that if the final item in the list has a duplicate, it gets left there (and no more than 2). That's very simple to correct manually, so I didn't bother to fix it (ain't nobody got time fo' that and all).
Hope sometime in the future this could actually help somebody other than me.
I didn't get too many answers here, but it WAS a pretty unusual problem, so thanks anyway. See ya.