Receiving an intermittent Snowflake Python connector error while trying to load a file onto a table.
The error occurs in the following code:
exe.execute("""PUT 'file:///Users/oscar/Desktop/data.txt'
'#"db"."schema".%"table"/ui4654116544'""")
I recently deployed Jupyter into Kubernetes, and now I want to read my data and clean it. While I'm running:
data = pd.read_csv("home/ghofrane21/data/Les indices des prix.csv", header=None)
I get this error:
FileNotFoundError: [Errno 2] File home/ghofrane21/data/Les indices des prix.csv does not exist
The file already exists:
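One observation: the path in the read_csv call is relative (home/... rather than /home/...), so pandas resolves it against the notebook's working directory inside the pod. A quick sketch for checking this, reusing the file name from the question:

import os
import pandas as pd

# Absolute path -- note the leading slash, unlike the call above.
path = "/home/ghofrane21/data/Les indices des prix.csv"

print(os.getcwd())           # directory a relative path is resolved against
print(os.path.exists(path))  # confirm the file is visible inside the pod

data = pd.read_csv(path, header=None)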
I cannot seem to create and attach a file to an item in Podio using pypodio, the Python wrapper for Podio's API. I am trying to get the file id but keep getting the error below. I am using Python 3.6.0.
My code is:
import os

path = os.getcwd()
filename = "system_information"
filepath = path + "\\system_information.txt"
filedata = open(filepath)
uploading_response = pcbapp.Files.create(filename, filedata)
I get the error shown below:
File "c:\users\nipun.arora\src\podio-py\pypodio2\encode.py", line 317, in get_headers
boundary = urllib.quote_plus(boundary)
AttributeError: module 'urllib' has no attribute 'quote_plus'
That might be because there is no urllib.quote_plus in Python 3; it moved to urllib.parse.quote_plus. Can you try running the same code in Python 2?
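For reference, the function still exists in Python 3, just under urllib.parse; a quick sketch of the difference:

# Python 3: quote_plus lives in the urllib.parse submodule.
from urllib.parse import quote_plus

print(quote_plus("boundary with spaces"))  # boundary+with+spaces

# Python 2 exposed it at the top level, which is what pypodio2's
# encode.py assumes:
#     import urllib
#     urllib.quote_plus(boundary)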
I am updating data on a Neo4j server using Python (2.7.6) and Py2Neo (1.6.4). My load function is:
from py2neo import neo4j, node, rel, cypher

session = cypher.Session('http://my_neo4j_server.com.mine:7474')

def load_data():
    tx = session.create_transaction()
    for row in dataframe.iterrows():  # dataframe is a pandas DataFrame
        name = row[1].name
        id = row[1].id
        merge_query = "MERGE (a:label {name:'%s', name_var:'%s'}) " % (id, name)
        tx.append(merge_query)
    tx.commit()
When I execute this from Spyder in Windows, it works great: all the data from the dataframe is committed to Neo4j and visible in the graph. However, when I run it from a Linux server (different from the Neo4j server), I get the following error at tx.commit(). Note that I have the same versions of Python and py2neo on both machines.
INFO:py2neo.packages.httpstream.http:>>> POST http://neo4j1.qs:7474/db/data/transaction/commit [1360120]
INFO:py2neo.packages.httpstream.http:<<< 200 OK [chunked]
ERROR:__main__:some part of process failed
Traceback (most recent call last):
File "my_file.py", line 132, in load_data
tx.commit()
File "/usr/local/lib/python2.7/site-packages/py2neo/cypher.py", line 242, in commit
return self._post(self._commit or self._begin_commit)
File "/usr/local/lib/python2.7/site-packages/py2neo/cypher.py", line 208, in _post
j = rs.json
File "/usr/local/lib/python2.7/site-packages/py2neo/packages/httpstream/http.py", line 563, in json
return json.loads(self.read().decode(self.encoding))
File "/usr/local/lib/python2.7/site-packages/py2neo/packages/httpstream/http.py", line 634, in read
data = self._response.read()
File "/usr/local/lib/python2.7/httplib.py", line 543, in read
return self._read_chunked(amt)
File "/usr/local/lib/python2.7/httplib.py", line 597, in _read_chunked
raise IncompleteRead(''.join(value))
IncompleteRead: IncompleteRead(128135 bytes read)
This post (IncompleteRead using httplib) suggests that this is an httplib error. I am not sure how to handle it, since I am not calling httplib directly.
Any suggestions for getting this load to work on Linux, or an explanation of what the IncompleteRead error message means?
UPDATE:
The IncompleteRead error is being caused by a Neo4j error being returned. The line returned in _read_chunked that is causing the error is:
pe}"}]}],"errors":[{"code":"Neo.TransientError.Network.UnknownFailure"
Neo4j docs say this is an unknown network error.
Although I can't say for sure, this implies some kind of local network issue between client and server rather than a bug within the library. Py2neo wraps httplib (which is pretty solid itself) and, from the stack trace, it looks as though the client is expecting more chunks from a chunked response.
To diagnose further, you could make some curl calls from your Linux application server to your database server and see what succeeds and what doesn't. If that works, try writing a quick and dirty python script to make the same calls with httplib directly.
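A minimal sketch of such a script, in Python 2 with httplib to match the question; the host and endpoint come from the log above, and the Cypher statement is a trivial placeholder:

import httplib
import json

# POST one statement straight to the transactional endpoint from the
# question's log, bypassing py2neo entirely.
conn = httplib.HTTPConnection("neo4j1.qs", 7474)
payload = json.dumps({"statements": [{"statement": "RETURN 1"}]})
conn.request("POST", "/db/data/transaction/commit", payload,
             {"Content-Type": "application/json", "Accept": "application/json"})
resp = conn.getresponse()
print(resp.status)
print(resp.read())  # if the network path is at fault, the failure shows up here
conn.close()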
UPDATE 1: Given the update above and the fact that the server streams its responses, I'm thinking that the chunk size might represent the intended payload but the error cuts the response short. Recreating the issue with curl certainly seems like the best next step to help determine whether it is a fault in the driver, the server or something else.
UPDATE 2: Looking again this morning, I notice that you're using Python substitution for the properties within the MERGE statement. As good practice, you should use parameter substitution at the Cypher level:
merge_query = "MERGE (a:label {name:{name}, name_var:{name_var}})"
merge_params = {"name": id, "name_var": name}
tx.append(merge_query, merge_params)
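Folded back into the load function from the question, that looks like this (same py2neo 1.6 transaction API):

def load_data():
    tx = session.create_transaction()
    for row in dataframe.iterrows():
        name = row[1].name
        id = row[1].id
        # The query text stays constant; Cypher quotes and escapes the
        # values, so odd characters in the data cannot break the statement.
        tx.append("MERGE (a:label {name:{name}, name_var:{name_var}})",
                  {"name": id, "name_var": name})
    tx.commit()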
I'm trying to use SQLite with Python, and I'm going over examples from the Python website. One example is to build a shell for SQLite.
This is the beginning of the script:
import sqlite3
con = sqlite3.connect(":memory:")
con.isolation_level = None
cur = con.cursor()
I'm loading the file from a text editor, and I'm confused by the error that I get when I import the file.
>>> import SQLoad
Traceback (most recent call last):
File"<stdin>", line 1, in <module>
File "SQLoad.py", line 1, in <module>
c = conn.cursor()
NameError: name 'conn' is not defined
I'm confused because 'conn' isn't being defined in what I'm uploading. Is it something that has to be defined?
Your first code block shows that the connection variable is named con.
The error message shows that you have written that variable as conn, and that this is in the first line of SQLoad.py, where the connection cannot have been opened yet.
Your first code block looks correct, but it is not what is actually stored in SQLoad.py.
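A version of SQLoad.py that matches the first code block, with one consistent variable name, would be a sketch like:

# SQLoad.py -- the connection must exist, under this exact name,
# before a cursor is created from it.
import sqlite3

con = sqlite3.connect(":memory:")
con.isolation_level = None  # autocommit, as in the example shell
cur = con.cursor()          # raises NameError if written as conn.cursor()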
I am trying to make an app similar to StumbleUpon, using Python as the back end, for a personal project. From the database I retrieve a website name, and then I open that website with webbrowser.open("http://www.website.com"). Sounds pretty straightforward, right? But there is a problem: when I try to open the website with webbrowser.open("website.com"), it returns the following error:
File "fetchall.py", line 18, in <module>
webbrowser.open(x)
File "/usr/lib/python2.6/webbrowser.py", line 61, in open
if browser.open(url, new, autoraise):
File "/usr/lib/python2.6/webbrowser.py", line 190, in open
for arg in self.args]
TypeError: expected a character buffer object
Here is my code:
import sqlite3
import webbrowser
conn = sqlite3.connect("websites.sqlite")
cur = conn.cursor()
cur.execute("SELECT WEBSITE FROM COLUMN")
x = cur.fetchmany(1)
webbrowser.open(x)
EDIT
Okay, thanks for the reply, but now I'm receiving this: "Error showing URL: Error stating file '/home/user/(u'http:bbc.co.uk,)': No such file or directory".
What's going on?
webbrowser.open expects a string (a character buffer), but fetchmany returns a list of row tuples, so x is something like [(u'http://bbc.co.uk',)]. Passing x[0] still hands it a tuple, which is what the error in your edit shows; you need the first column of the first row: webbrowser.open(x[0][0]).
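A sketch of the corrected fetch-and-open step (table and column names as in the question):

import sqlite3
import webbrowser

conn = sqlite3.connect("websites.sqlite")
cur = conn.cursor()
cur.execute("SELECT WEBSITE FROM COLUMN")

row = cur.fetchone()         # a single row tuple, e.g. (u'http://bbc.co.uk',)
if row is not None:
    webbrowser.open(row[0])  # pass the URL string itself, not the tuple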