My Python script fails while running a stored procedure that accesses a view that uses a DB link to reach a remote table.
That stored procedure works fine when run from Oracle SQL Developer, but not when run from the Python script.
From SQL Developer the stored procedure takes about a minute to run; from the Python script it fails after about 16 minutes.
It throws this error:
ORA-03150: end-of-file on communication channel for database link
Why would it fail from Python and not from SQL Developer? Why is it slower from Python? Both log in with the same user ID.
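One commonly reported cause of ORA-03150 on a long-running call is a firewall or idle timeout silently dropping the database-link session while the remote work runs; a failure after roughly 16 minutes is consistent with an idle timeout somewhere on the path. As a hedged sketch (not a confirmed diagnosis for this case): on the database server that opens the link, enabling Dead Connection Detection in sqlnet.ora sends periodic probes, which also keeps stateful firewalls from expiring the connection.

```
# sqlnet.ora on the database server that opens the DB link (a sketch;
# the 10-minute probe interval is an assumption, not a recommendation)
SQLNET.EXPIRE_TIME = 10
```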
I have made an application in Python which takes data from MQTT using the Paho library and stores it in SQL Server tables.
When I run the code with python filename.py, it works fine.
When I build an exe with PyInstaller and run it, it also works fine.
When I run it as a Windows service, it works and stores data in the MS SQL Server tables, but it blocks other applications (SQL Server Management Studio, and an ASP.NET web application used to view data from the same tables) from querying them; they time out. The one exception is VS Code with the SQL Server extension.
If I stop the Windows service, the other applications can immediately access the tables again.
If I remove the INSERT statement for a particular table from the Python code and run it as a Windows service, it no longer blocks access to that table from other applications.
What could be the reason?
Update 1: When building the Windows service, the database connection variable was (by mistake) made local to a class method instead of global as in the console version, while the cursor variable stayed global. Making the connection variable global again solved the problem. What difference was that making?
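A likely explanation for Update 1 (hedged: the service code itself isn't shown) is that the mis-scoped connection left a transaction open and uncommitted, so the locks taken by its INSERTs were never released and blocked other applications. The effect can be reproduced with the standard-library sqlite3 module as a stand-in for SQL Server:

```python
import os
import sqlite3
import tempfile

# Stand-in demo (sqlite3 instead of SQL Server): an INSERT inside an
# uncommitted transaction holds a write lock that blocks other
# connections until commit() is called.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)
writer.execute("CREATE TABLE readings (value REAL)")
writer.commit()

writer.execute("INSERT INTO readings VALUES (42.0)")  # transaction left open

other = sqlite3.connect(path, timeout=0.1)  # second app, short lock wait
try:
    other.execute("INSERT INTO readings VALUES (1.0)")
    blocked = False
except sqlite3.OperationalError:  # "database is locked"
    blocked = True

writer.commit()  # committing releases the lock and unblocks the other app
other.execute("INSERT INTO readings VALUES (1.0)")
other.commit()
print(blocked)  # True
```

The lock-granularity details differ between SQLite and SQL Server, but the principle is the same: a connection object that falls out of scope (or is never committed) can pin an open transaction, and readers of the affected table wait until it ends.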
I have a Python script that updates a SQL database. When I run the script manually in a terminal, everything works fine. But when I run it from a crontab, the database is not updated.
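A frequent cause of this symptom is cron's minimal environment: PATH and other variables differ from an interactive shell, so the interpreter, modules, or DB credentials resolve differently than in the terminal. A hypothetical crontab entry (the paths and schedule are assumptions) that uses absolute paths and captures output so the actual error becomes visible:

```
# m  h  dom mon dow  command
*/5  *   *   *   *   /usr/bin/python3 /home/user/update_db.py >> /tmp/update_db.log 2>&1
```

Checking /tmp/update_db.log after a scheduled run usually reveals whether the script failed to start or failed while connecting to the database.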
I have to execute long-running (~10 hour) Hive queries from my local server using a Python script. My target Hive server is in an AWS cluster.
I've tried to execute it using
pyhs2, execute('<command>')
and
paramiko, exec_command('hive -e "<command>"')
In both cases the query runs on the Hive server and completes successfully. But even after the query completes, my parent Python script keeps waiting for a return value and remains in interruptible sleep (Sl) state indefinitely!
Is there any way I can make my script work with pyhs2 or paramiko? Or is there a better option available in Python?
As I mentioned before, I faced a similar issue in my performance environment.
My use case: I was using the pyhs2 module to run queries with the Hive Tez execution engine. Tez generates a lot of logs (on a seconds scale); the logs get captured into the stdout variable and are only handed back once the query completes successfully.
The way around it is to stream the output as it is generated:
for line in iter(lambda: stdout.readline(2048), ""):
    print(line, end="")
But for this you will have to connect natively to the cluster using Paramiko or Fabric and then issue the Hive command via the CLI or Beeline.
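The drain-as-you-go pattern above is generic over file-like objects; it can be sketched with a local subprocess standing in for the remote Hive command (a paramiko channel's stdout file object is read the same way). The child command here is a stand-in, not a real Hive invocation:

```python
import subprocess
import sys

# Sketch: consume a child's stdout as it is produced instead of waiting
# for the process to finish. The child stands in for `hive -e "<command>"`.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('log line 1'); print('log line 2')"],
    stdout=subprocess.PIPE,
    text=True,
)

lines = []
for line in iter(lambda: proc.stdout.readline(2048), ""):
    lines.append(line.rstrip("\n"))  # each line arrives as soon as written
proc.wait()
print(lines)  # ['log line 1', 'log line 2']
```

Reading the pipe continuously matters because a full, undrained stdout pipe can block the child (or, in the pyhs2 case, delay all output until completion), which is exactly the hang described in the question.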
I am using a search API to fetch results, which are then inserted into the database. The only issue is that it takes too much time. I want the script to keep running and inserting records into the remote database even after I close my laptop. Does running the script on the server like this help:
python myscript.py &
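A plain & only backgrounds the job; when the login session ends, the shell typically sends it SIGHUP and it dies. A common sketch that survives logout uses nohup (myscript.py is the asker's script; substitute the real path, and note tmux/screen are alternatives):

```shell
# & alone backgrounds the job, but SIGHUP on logout can kill it.
# nohup ignores SIGHUP and the redirection captures all output,
# so the job keeps running and inserting after you disconnect.
nohup python myscript.py > myscript.log 2>&1 &
```

You can then log out, reconnect later, and inspect myscript.log to check progress.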
I want to retrieve data from a MySQL database which runs directly on the Arduino Yún itself.
I want to do it without Python, directly using MySQL commands and the Process class. Is it possible?
Every post I found on the internet is about how to retrieve data using Python. But I don't want to use Python, because the connect-to-the-database routine slows down my queries.
I'm retrieving data from the database multiple times and from different tables, so I moved the data-retrieval logic into a MySQL function, and now I just want to call that function using the Process class. The problem is that it works from the mysql console and it works via Python, but it doesn't work directly. By "directly" I mean this:
Process process;
process.begin("python");
process.addParameter("/mnt/sda1/arduino/python/read.py");
process.addParameter(msg);
process.run();
This code works perfectly and uses Python; inside read.py I have the database-call routine. I want to do the same thing, but without Python:
Process process;
process.begin("mysql");
process.addParameter("-u root -parduino --database arduino -e \"select process('my_text')\"");
process.run();
As you can see, this code sample doesn't work, although the same MySQL command works perfectly (and much faster than via Python) when I run it from the console.
Try Process.runShellCommand. It will accept your complete statement as an argument:
Process process;
process.runShellCommand("mysql -u root -parduino --database arduino -e \"select process('my_text')\"");
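If you prefer to keep Process.run(), a likely reason the original sample fails is that each addParameter call contributes exactly one argument, so the whole option string arrives at mysql as a single unparsed argument. A sketch passing the options one at a time (untested on hardware; same credentials and query as the question):

```cpp
Process process;
process.begin("mysql");
process.addParameter("-uroot");    // one argv entry per addParameter call
process.addParameter("-parduino");
process.addParameter("--database");
process.addParameter("arduino");
process.addParameter("-e");
process.addParameter("select process('my_text')");
process.run();
```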
You can use the subprocess module.
https://docs.python.org/2/library/subprocess.html
With it you can issue your mysql command from the Python script just as you would type it at the terminal.
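A minimal sketch of that approach (the flags mirror the question; mysql_args is a hypothetical helper, and the final subprocess.run call is commented out because it needs the mysql client installed):

```python
import subprocess

def mysql_args(user, password, database, query):
    # One list element per argument, so nothing is re-split by a shell
    # and the query string needs no extra quoting.
    return ["mysql", f"-u{user}", f"-p{password}",
            "--database", database, "-e", query]

args = mysql_args("root", "arduino", "arduino", "select process('my_text')")
print(args[0])  # mysql
# On a machine with the mysql client installed, this would execute it:
# result = subprocess.run(args, capture_output=True, text=True, check=True)
```

Passing a list (rather than one big string with shell=True) sidesteps the same argument-splitting pitfall that breaks the Arduino Process sample above.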