I am using a search API to fetch results, which are then inserted into a database. The only issue is that it takes too much time. I want my script to keep running and inserting records into the remote database even after I close my laptop. Does running the script on the server like this help:
python myscript.py &
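Note that a plain & only backgrounds the job; if the shell session ends (for example, the SSH connection drops), the script may be killed by SIGHUP. A minimal sketch that survives logout, assuming a Linux server (the log file name is a placeholder):

nohup python myscript.py > myscript.log 2>&1 &

Tools such as screen or tmux achieve the same effect with a session you can reattach to later.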
I have written a Python script that uses the MySQL Connector for Python to read, write, and update a database. Do I need to install MySQL Server on the client machine (Windows) as well to run the program, or do I just need to make sure the database is reachable at the address used by the script? I tried finding guidance on Google and here but couldn't find what I needed.
No, you don't need to install MySQL Server on the client machine. By definition, a client machine is one that doesn't host the DB/server. Where is this DB located? You should have an IP address or a domain/subdomain where the DB is actually hosted.
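A minimal sketch of connecting from the Windows client to a remote server with the MySQL Connector for Python (the host, credentials, and database names are placeholders):

import mysql.connector

# Only the connector package is needed on the client; the server
# runs elsewhere and is reached over the network.
conn = mysql.connector.connect(
    host="db.example.com",  # IP or domain of the machine hosting MySQL
    port=3306,
    user="appuser",
    password="secret",
    database="appdb",
)
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())
conn.close()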
I have made an application in Python which takes data from MQTT using the paho library and stores it in tables in SQL Server.
When I run the code using python filename.py, it works fine.
When I make an exe file using PyInstaller and run it, it also works fine.
When I run it as a WINDOWS SERVICE, it starts and stores data in the MS SQL Server tables, but it blocks other applications from querying those tables: MS SQL Server Management Studio and an ASP.NET web application (which is used to view data from the same tables) both time out, EXCEPT VS Code with the SQL Server extension.
If I stop the Windows service, the other applications can immediately access the tables again.
When I remove the INSERT statement for a particular table from the Python code and run it as a Windows service, that table is no longer blocked for other applications.
What could be the reason?
Update 1: In the Windows-service build, the database connection variable had (by mistake) been made local to a class function instead of global as in the normal console code, while the cursor variable was still global. Making the connection variable global again solved the problem. What difference was that making?
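One plausible mechanism, sketched below with pyodbc (table name and connection details are hypothetical): if INSERTs run on a connection whose transaction is never committed, SQL Server keeps the locks those INSERTs took until the connection closes, and under the default READ COMMITTED isolation other sessions reading the same table block behind them and time out.

import pyodbc

CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=localhost;DATABASE=mqttdb;Trusted_Connection=yes;")

class Recorder:
    def store(self, payload):
        # A connection local to this method: if commit() is skipped
        # (say, because commits happen on a *different*, global
        # connection), this transaction stays open and its locks on
        # the table block other readers until the connection dies.
        conn = pyodbc.connect(CONN_STR)
        cur = conn.cursor()
        cur.execute("INSERT INTO readings (payload) VALUES (?)", payload)
        conn.commit()  # without this, other sessions block and time out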
My Python script fails while running a stored procedure that accesses a view which uses a DB link to reach a remote table.
That stored procedure works fine when run from Oracle SQL Developer, but not when run from the Python script.
From SQL Developer the stored procedure takes about a minute to run; from the Python script it fails after about 16 minutes.
It throws this error:
ORA-03150: end-of-file on communication channel for database link
Why would it fail from Python and not SQL Developer? Why is it slower from Python? It logs in with the same user ID in both cases.
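For context, a typical way such a procedure is invoked from Python is via cx_Oracle's callproc; the sketch below uses placeholder credentials and a hypothetical procedure name, and the ORA-03150 above is raised while this call is waiting on the DB link:

import cx_Oracle

# Placeholder connection details; same user ID as in SQL Developer.
conn = cx_Oracle.connect("appuser", "secret", "dbhost/ORCLPDB1")
cur = conn.cursor()
cur.callproc("refresh_from_remote")  # hypothetical procedure name
conn.commit()
conn.close()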
I have to execute long-running (~10 hours) Hive queries from my local server using a Python script. My target Hive server is in an AWS cluster.
I've tried to execute them using
pyhs2, execute('<command>')
and
paramiko, exec_command('hive -e "<command>"')
In both cases the query runs on the Hive server and completes successfully. The issue is that even after the query completes, my parent Python script keeps waiting for a return value and remains in interruptible sleep (Sl) state indefinitely!
Is there any way I can make my script work correctly using pyhs2 or paramiko? Or is there a better option available in Python?
As I mentioned before, I faced a similar issue in my performance-testing environment.
My use case: I was using the PYHS2 module to run queries with the HIVE TEZ execution engine. TEZ generates a lot of logs (on a seconds scale); the logs get captured in the STDOUT variable and are only delivered as output once the query completes successfully.
The way to overcome this is to stream the output as it is generated, as shown below:
for line in iter(lambda: stdout.readline(2048), ""):
    print(line)
But for this you will have to use a native connection to the cluster via PARAMIKO or FABRIC and then issue the hive command through the CLI or beeline; see the sketch below.
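A minimal end-to-end sketch of that approach with paramiko (the host, credentials, and query are placeholders):

import paramiko

# Placeholder details for the Hive gateway node in the cluster.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("hive-gateway.example.com", username="hadoop",
               key_filename="/home/me/.ssh/id_rsa")

stdin, stdout, stderr = client.exec_command('hive -e "SELECT COUNT(*) FROM t"')

# Drain stdout as it is produced, so the channel buffer never fills
# and the parent script does not hang after the query finishes.
for line in iter(lambda: stdout.readline(2048), ""):
    print(line, end="")

exit_status = stdout.channel.recv_exit_status()  # returns once hive exits
client.close()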
I am developing a web scraping project on an Ubuntu server with 25 GB of hard disk space. I am using Python Scrapy and MongoDB.
Last night my hard disk filled up because of scraping 60,000 web pages, so MongoDB took a lock and I am unable to access my database; it shows this error:
function (){ return db.getCollectionNames(); }
Execute failed:exception: Can't take a write lock while out of disk space
So I removed all the data stored in /var/lib/mongodb and ran the "reboot" command from the shell to restart the server.
When I try to run mongo on command line, I get this error:
MongoDB shell version: 2.4.5
connecting to: test
Thu Jul 25 15:06:29.323 JavaScript execution failed: Error: couldn't connect to
server 127.0.0.1:27017 at src/mongo/shell/mongo.js:L112
exception: connect failed
Please help me so that I can connect to MongoDB again.
The first thing to do is find out whether MongoDB is actually running. You can do that by running the following commands in the shell:
ps aux | grep mongo
And:
netstat -an | grep ':27017'
If neither of those has any output, then MongoDB is not running.
The best way to find out why it doesn't start is to look in the log file that MongoDB creates. It is usually located at /var/log/mongodb/mongodb.log and it should tell you why MongoDB refuses to start.
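Since the data directory was removed by hand, a likely fix is to recreate it with the right ownership and restart the service; a sketch assuming the stock Ubuntu package layout:

# Recreate the data directory mongod expects and give it back to the
# mongodb user (paths and user name assume the default Ubuntu package).
sudo mkdir -p /var/lib/mongodb
sudo chown -R mongodb:mongodb /var/lib/mongodb
sudo service mongodb start
tail -n 50 /var/log/mongodb/mongodb.log  # if it still fails, the reason is here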