I am developing a web scraping project on an Ubuntu server with 25 GB of hard disk space, using Python Scrapy and MongoDB.
Last night my hard disk filled up after scraping 60,000 web pages, so MongoDB put a lock on the database and I am unable to access it. It shows this error:
function (){ return db.getCollectionNames(); }
Execute failed:exception: Can't take a write lock while out of disk space
So I removed all the data stored in /var/lib/mongodb and ran the "reboot" command from the shell to restart the server.
When I try to run mongo on the command line, I get this error:
MongoDB shell version: 2.4.5
connecting to: test
Thu Jul 25 15:06:29.323 JavaScript execution failed: Error: couldn't connect to
server 127.0.0.1:27017 at src/mongo/shell/mongo.js:L112
exception: connect failed
Please help me connect to MongoDB again.
The first thing to do is to find out whether MongoDB is actually running. You can do that by running the following commands in the shell:
ps aux | grep mongo
And:
netstat -an | grep ':27017'
If neither of those has any output, then MongoDB is not running.
The best way to find out why it doesn't start is to look in the log file that MongoDB creates. It is usually located at /var/log/mongodb/mongodb.log and it should tell you why MongoDB refuses to start.
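Since /var/lib/mongodb was emptied, a likely cause is that mongod now refuses to start because its data directory is missing or has the wrong owner. Check the log first to confirm; if that is indeed the failure, a minimal recovery sketch looks like the following, assuming the default Ubuntu paths and service name:
import subprocess

# Recreate the data directory that was deleted and hand ownership back to
# the mongodb user, then restart the service (default Ubuntu layout
# assumed; adjust the paths and service name to your install).
subprocess.check_call(['sudo', 'mkdir', '-p', '/var/lib/mongodb'])
subprocess.check_call(['sudo', 'chown', '-R', 'mongodb:mongodb', '/var/lib/mongodb'])
subprocess.check_call(['sudo', 'service', 'mongodb', 'start'])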
Related
I'm trying to deploy a Dash app to Heroku that is connected to an external PostgreSQL database. I've tried to use the Heroku Postgres add-on, but I don't quite understand how it works, and I think it also won't have enough space for the data I need. I currently have an external PostgreSQL database I'm using, and I've gone through the steps to attach/detach/config:add the new DATABASE_URL.
Every time I deploy I run into this error (more of the error output is in the image):
Error fetching data from PostgreSQL table could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I've set up my Config Vars such that the DATABASE_URL is postgres://user:password#host_ip:port/database_name
and in my code I'm connecting to the database via a .env file that looks like this:
# Development settings
user="user"
password="password"
host="ip"
database="postgres"
Also, I saw somewhere that it might be useful to check ps -ef | grep postgres; its output shows the correct database connection, but pg_lsclusters shows nothing.
Please let me know if I can provide any more info. Thank you in advance!
I have a back-end API server created with Python and Flask, with MongoDB as my database. I rebuild and rerun docker-compose every time I update my source code, so I always take a backup of my database before stopping and restarting the Docker containers.
From the beginning I have been using this command to take a backup into my default folder:
sudo docker-compose exec -T db mongodump --archive --gzip --db SuperAdminDB > backup.gz
This line worked well previously. Then I restore the database after restarting docker-compose to bring my back-end up with the updated code. I used this command to restore the database:
sudo docker-compose exec -T db mongorestore --archive --gzip < backup.gz
But from today, when I try to take a backup while Docker is still running (as usual), the server freezes like the image below.
I am using an Amazon EC2 server running Ubuntu 20.04.
First, stop redirecting the output of the command. If you don't know whether it is working, you should be looking at all available information, which includes the output.
Then verify you can connect to your deployment using the mongo shell and run commands.
If that succeeds, look at the server log and verify there is a record of a connection from mongodump.
If that works, try dumping other collections.
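As a quick connectivity check for the second step, something like this sketch works from the host, assuming pymongo is installed and the container publishes port 27017 (adjust the URI to your compose setup):
import pymongo

# Fail fast if the server is unreachable rather than hanging.
client = pymongo.MongoClient("mongodb://localhost:27017",
                             serverSelectionTimeoutMS=5000)
print(client.admin.command("ping"))   # {'ok': 1.0} when the server responds
print(client.list_database_names())   # confirms basic read access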
After digging for three days for the right reason, I found that the main cause is Apache.
I had recently installed Apache to host my frontend as well. While Apache is running, the server won't let me dump the MongoDB backup. Somehow Apache was conflicting with Docker.
My solution:
1. Stop apache service
sudo service apache2 stop
2. Then take MongoDB backup
sudo docker-compose exec -T db mongodump --archive --gzip --db SuperAdminDB > backup.gz
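To avoid forgetting the Apache step, the two commands can be wrapped so Apache is stopped only for the duration of the dump and restarted even if the dump fails. A small sketch, assuming the same service and database names as above:
import subprocess

# Stop Apache, take the dump, then bring Apache back up no matter what.
subprocess.check_call(['sudo', 'service', 'apache2', 'stop'])
try:
    with open('backup.gz', 'wb') as out:
        subprocess.check_call(
            ['sudo', 'docker-compose', 'exec', '-T', 'db',
             'mongodump', '--archive', '--gzip', '--db', 'SuperAdminDB'],
            stdout=out)
finally:
    subprocess.check_call(['sudo', 'service', 'apache2', 'start'])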
I have a new PC and am trying to migrate my MongoDB databases over.
On my old PC I'm using MongoDB 3.2.10, and 3.4 on the new one.
I thought the best method would be to simply back up and restore on the new PC; however, when I try to use the command 'mongodump',
I just get the error message:
2017-01-14T16:29:47.416+0900 E QUERY [thread1] ReferenceError: mongodump is not defined :
#(shell):1:1
You're running mongodump inside the mongo shell, but it is a standalone command-line tool, not a shell command! Exit the mongo CLI and run it in bash!
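For the migration itself, the dump/restore pair can be driven from Python as elsewhere in this thread; a minimal sketch, where default connection settings and the dump directory are assumptions:
import subprocess

# On the old PC: dump all databases to ./dump (mongodump's default output).
subprocess.check_call(['mongodump', '--out', 'dump'])

# On the new PC, after copying ./dump across: restore everything.
subprocess.check_call(['mongorestore', 'dump'])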
I have made a script which saves data to a MySQL database using MySQLdb and Python. When I run the script from the console (python myscript.py) it works, but when I run it on reboot using crontab I get an email with the following error:
_mysql_exceptions.OperationalError: (2002, "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)")
When I run the script via crontab at a scheduled time rather than on reboot, I don't get this error.
Do you have any ideas?
What if I called the same script in a while(1) loop over and over (it would start a new background task every time)?
Thanks to Shadow's hint, I have solved the problem.
I simply start the MySQL server manually from the script, and now it works.
I used this command to start the server:
from subprocess import call
call(['sudo','service','mysql','start'])
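An alternative that avoids needing sudo in cron is to wait until MySQL is actually accepting connections before doing any work, since at @reboot the job can fire before the server is up. A small retry sketch, assuming the MySQLdb module from the question (the credentials and database name are placeholders):
import time

import MySQLdb

# At @reboot the cron job may start before MySQL is up; retry until the
# socket is available instead of failing on the first attempt.
for attempt in range(30):
    try:
        db = MySQLdb.connect(user='user', passwd='password', db='mydb')
        break
    except MySQLdb.OperationalError:
        time.sleep(2)  # MySQL not ready yet; wait and try again
else:
    raise RuntimeError('MySQL did not come up in time')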
I am using a search API to fetch some results, which are inserted into the database. The only issue is that it takes too much time. What I want is for my script to keep running even when I close my laptop, continuing to insert records into the remote database. Does running the script on the server like this help:
python myscript.py &