How do I fix the error "invalid character 'x' in string escape code" in PyMongo? My error looks like this:
Check your BSON documents for the stray 'x' escape. If you already have the data in another mongo instance, you can query for the offending value with the mongo find command. The log clearly shows that 14k records were imported but that some records could not be imported because of an incompatible format. The solution is to query the db, find the broken documents, and remove them.
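For illustration only (a minimal sketch, not the importer's actual code): a truncated `\x` escape, i.e. a backslash-x not followed by two hex digits, is the kind of value that triggers this error, and it can be reproduced in plain Python:

```python
# A literal backslash-x not followed by two hex digits is an invalid
# string escape; decoding it with the unicode_escape codec fails in
# the same way an importer chokes on such a record.
bad = b"DOMAIN\\xZZoardroom"  # hypothetical payload with a broken escape
try:
    bad.decode("unicode_escape")
except UnicodeDecodeError as exc:
    print("invalid escape:", exc.reason)
```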
I am trying to use pyodbc to update an existing MS Access database table with a very long multiline string. The string is actually a csv that has been turned into a string.
The query I am trying to use to update the table is as follows:
query = """
UPDATE Stuff
SET Results = '{}'
WHERE AnalyteName =
'{}'
""".format(df, analytename)
The full printed statement looks as follows:
UPDATE Stuff
SET Results =
'col a,col b,col c,...,col z,
Row 1,a1,b1,c1,
...,...,...,...,
Row 3000,a3000,b3000,c3000'
WHERE AnalyteName = 'Serotonin'
However this does not seem to be working, and I keep getting the following error:
pyodbc.ProgrammingError: ('42000', '[42000] [Microsoft][ODBC Microsoft Access Driver] Syntax error in UPDATE statement. (-3503) (SQLExecDirectW)')
Which I assume is due to the format of the csv string I am trying to use to update the table with.
I have tried using INSERT to insert a new row with the csv string and other relevant information, and that seems to work. However, I need to use UPDATE, as I will eventually be adding other csv strings to these columns. This leads me to believe that either A) something is wrong with the syntax of my UPDATE query (I am new to SQL syntax) or B) I am missing something from the documentation regarding UPDATE queries.
Is executing an UPDATE query like this possible? If so, where am I going wrong?
That would be determined by the table's field type.
For large amounts of text you'd need a blob field in your database table.
A blob field stores binary data, so it will not 'see' illegal characters.
Answering my own question in case anyone else wants to use this.
It turns out what I was missing was brackets around the table column fields from my UPDATE statement. My final code looked something like this.
csv = df.to_csv(index=False)
name = 'some_name'
query = """
UPDATE Stuff
SET
[Results] = ?
WHERE
[AnalyteName] = ?
"""
self.cursor.execute(query, (csv, name))
I've seen several other posts here where brackets were not around the column names. However, since this is MS Access, I believe they were required for this query, or rather this specific query, since it included a very long string in the SET statement.
I welcome anyone else here to provide a more efficient method of performing this task or someone else who can provide more insight into why this is what worked for me.
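For anyone without Access handy, the same parameterized-UPDATE pattern can be sketched with the standard library's sqlite3 driver (table and column names borrowed from the question; this is an illustration, not the Access-specific code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Stuff (AnalyteName TEXT, Results TEXT)")
conn.execute("INSERT INTO Stuff VALUES (?, ?)", ("Serotonin", ""))

# Stand-in for df.to_csv(index=False): a long multiline CSV string.
csv_text = "col a,col b\nRow 1,a1\nRow 2,a2"

# Passing the string as a parameter sidesteps all quoting/escaping issues
# that arise when a multiline string is spliced into the SQL text itself.
conn.execute("UPDATE Stuff SET Results = ? WHERE AnalyteName = ?",
             (csv_text, "Serotonin"))

stored = conn.execute("SELECT Results FROM Stuff WHERE AnalyteName = ?",
                      ("Serotonin",)).fetchone()[0]
print(stored == csv_text)  # True
```

The key point carries over to pyodbc unchanged: the `?` placeholders let the driver handle the value, so the length or content of the string no longer matters.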
This has been asked a million times, but everything I have tried hasn't worked, and the existing answers are all for slightly different issues. I'm losing my mind over it!
I have a Python Script which pulls data from a MySql database - all works well.
Database Information:
I believe the information in the database is correct. I am trying to write multiple records into Word documents, which is why I am not too bothered about accuracy; even if the bad characters are simply removed, that is fine.
The Charset of the database is UTF-8 and the field I am working with is VarChar
I am using mysql.connector python module to connect
However, I am getting errors, and I've realised it's because of values with unicode characters in them, such as this:
The value of this item is "DOMAIN\x08oardroom" (it contains a backspace control character, which may not display).
I have tried:
text = order[11].encode().decode("utf-8")
text = order[11].encode("ascii", errors="ignore").decode()
text = str(order[11].encode("utf8", errors="ignore"))
The last one does run, but it outputs b'DOMAIN\x08oardroom' because the result is a bytes object.
I can print(text) to the screen without error. However, when I try to write it to a word document (using the docx module), it produces an error:
table = document.add_table(rows=total_orders*2, cols=1)
row = table.rows[0].cells
row[0].text = row_text
ValueError: All strings must be XML compatible: Unicode or ASCII, no NULL bytes or control characters
I am not particularly fussy over how it handles the unicode, e.g. remove it if needed, but I just need it to parse without error.
Any thoughts or advice here?
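One possible approach (a sketch, not tested against this exact setup): strip the ASCII control characters that python-docx rejects before assigning the text. Note that a backspace (\x08) is itself an ASCII character, which is why encode("ascii", errors="ignore") leaves it in place:

```python
import re

# XML (and hence docx) forbids most ASCII control characters; tab,
# newline and carriage return are the usual exceptions.
_CONTROL = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f\x7f]")

def strip_controls(s: str) -> str:
    return _CONTROL.sub("", s)

print(strip_controls("DOMAIN\x08oardroom"))  # DOMAINoardroom
```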
Currently I am converting my Python 2 project to Python 3, and during this conversion I have run into the error below.
Partner.objects.filter(name__iexact = name_kv).count()
When I run the above query in Python 2 it works perfectly and returns 0, i.e. an empty result set.
When I run the same query in Python 3 it raises the error below.
django.db.utils.ProgrammingError: function upper(bytea) does not exist
LINE 1: ...alse AND UPPER("partners_partner"."name"::text) = UPPER('\x4...
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
I have searched a lot online and through other SO questions, but have not been able to find any solution.
I figured out that it must be a Python version problem; I get the above error even when my ORM query matches no records.
Try converting your variable name_kv to a string. In Python 3, prefer name_kv.decode() over str(name_kv), since str() on a bytes object produces its repr (e.g. "b'...'") rather than the text.
Update your query like Partner.objects.filter(name__iexact=name_kv.decode()).count().
You are getting the error because the variable contains bytes (which PostgreSQL receives as bytea), so converting the bytes to str should solve your problem.
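The bytes-vs-str difference can be seen without Django at all (a minimal sketch; the value is hypothetical):

```python
name_kv = b"John"  # hypothetical value arriving as bytes from py2-era code

# str() on bytes yields the repr, b-prefix and quotes included:
print(str(name_kv))             # b'John'

# decode() recovers the actual text, which is what the query needs:
print(name_kv.decode("utf-8"))  # John
```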
I have a python script that receives (from a node.js script) an _id for a mongodb document as an argument. Using that value, I'm trying to query the db and retrieve a document.
However, when I try to run the script it throws an error saying:
"'xxxxxxxxxx' is not a valid ObjectId, it must be a 12-byte input or a 24-character hex string".
The line in my script causing the error:
result = db.req.find_one({"_id": ObjectId(sys.argv[1])})
When I check the type() of sys.argv[1] it says str. I thought wrapping the string in ObjectId() would do the trick.
value of sys.argv[1] when printed: '"5902fbdd4d2f430dfe2dded4"'
Does anyone know what's causing the issue?
Thanks in advance.
Resolved:
I removed the JSON.stringify() call from the node.js script that passes the value. All seems fine now.
MongoDB allows storing "_id" in "ObjectId" format.
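The failure can be reproduced without MongoDB: JSON.stringify() wrapped the hex string in literal quote characters, so the argument was 26 characters long instead of the 24 that ObjectId requires (a sketch using only the standard library):

```python
import json

raw = '"5902fbdd4d2f430dfe2dded4"'  # as received from JSON.stringify()
print(len(raw))           # 26 -- the quote characters are part of the string

hex_id = json.loads(raw)  # or raw.strip('"')
print(len(hex_id))        # 24 -- now a valid ObjectId hex string
```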
Say I have a dictionary that I want to store in the db using Python's pickle.
My question is: which Django model field should I use?
So far I've been using a CharField, but there seems to be an error:
I pickle a u'\xe9' (i.e. 'é'), and I get:
Incorrect string value: '\xE1, ist...' for column 'edition' at row 1
(the ,"ist..." was because I have more text after the 'É').
I'm using
data = dict();
data['foo'] = input_that_has_the_caracter
to_save_in_db = cPickle.dumps(data)
Should I use a binary field and pickle with a protocol that uses binary? Because I have to change the db in order to do that, so it is better to be sure first...
You should check whether you are using a proper encoding for your table AND your column in your database backend (I'm assuming MySQL, since your error message looks like one of its). In MySQL, columns can have a different encoding than the table. See whether it's UTF-8.
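A quick sketch of why a binary field is the safer choice regardless of encoding: pickled data is bytes, not text, so storing it in a text column subjects it to the column's character encoding, while a binary column (e.g. Django's BinaryField) does not:

```python
import pickle

data = {"foo": "caf\xe9"}  # contains é, the character from the question

# A binary pickle protocol produces arbitrary bytes, not valid text.
blob = pickle.dumps(data, protocol=pickle.HIGHEST_PROTOCOL)

print(isinstance(blob, bytes))     # True
print(pickle.loads(blob) == data)  # True -- round-trips intact
```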