subprocess.Popen and psql - Python

I have a Django application which needs to call psql. I do this in a Celery task, which looks like this:
@task()
def insert_sqldump_threaded(username, database, file):
    host = database.server.db_address
    work = subprocess.Popen([settings.PSQL,
        "-f%s" % file,
        "-d%s" % database.db_name,
        "-h%s" % host,
        "-U%s" % settings.DB_ADMIN_USER
        ], env={'PGPASSFILE': settings.DB_PASSFILE}
    )
    work.wait()
    return work.returncode
On my development server the PGPASSFILE looks like this:
localhost:5432:*:postgres:postgres
which should be fine.
The problem is that all I get when this function gets called is an error from psql:
psql: could not translate host name "localhost" to address: Unknown server error
And now it gets really strange: when I don't pass the env argument, psql seems to recognize the host. At least it asks for a password then.
Any ideas on how to solve this?

I think PostgreSQL needs other environment variables that you clear when you pass env. You can either modify os.environ in place or make a copy of it beforehand, as in the following code:
import os

@task()
def insert_sqldump_threaded(username, database, file):
    d = dict(os.environ)
    d['PGPASSFILE'] = settings.DB_PASSFILE
    host = database.server.db_address
    work = subprocess.Popen([settings.PSQL,
        "-f%s" % file,
        "-d%s" % database.db_name,
        "-h%s" % host,
        "-U%s" % settings.DB_ADMIN_USER
        ], env=d
    )
    work.wait()
    return work.returncode

When you don't pass env, the child process inherits the environment variables of the shell you're running in - see os.environ. psql must be depending on one of those for looking up localhost. You'll need to include that variable in the dictionary, or just copy in everything from os.environ.
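As a side note, on Python 3.5+ the same copy-and-override can be written as a one-line dict merge. A minimal sketch, reusing the settings names and the file variable from the question:

import os
import subprocess

# Inherit the full parent environment, overriding only PGPASSFILE.
child_env = {**os.environ, 'PGPASSFILE': settings.DB_PASSFILE}
work = subprocess.Popen([settings.PSQL, "-f%s" % file], env=child_env)
work.wait()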

Related

How to check if a path exists using Fabric 2.x

I am using Fabric 2 and I don't see an exists method in it to check whether a folder path exists on the remote server. Please let me know how I can achieve this in Fabric 2 (http://docs.fabfile.org/en/stable/).
I have seen a similar question, Check If Path Exists Using Fabric, but that is for Fabric 1.x.
You can execute the test command remotely with the -d option to check whether the path exists and is a directory, passing warn=True to the run method so execution doesn't stop on a non-zero exit status. The failed attribute of the result will then be True if the folder doesn't exist and False otherwise.
folder = '/path/to/folder'
if c.run('test -d {}'.format(folder), warn=True).failed:
    # Folder doesn't exist
    c.run('mkdir {}'.format(folder))
The exists method from fabric.contrib.files was moved to patchwork.files with a small signature change, so you can use it like this:
from fabric2 import Connection
from patchwork.files import exists

conn = Connection('host')
if exists(conn, SOME_REMOTE_DIR):
    do_something()
The code below checks for the existence of a file (-f); just change it to -d to check for the existence of a directory.
from fabric import Connection

c = Connection(host="host")
if c.run('test -f /opt/mydata/myfile', warn=True).failed:
    do.thing()
You can find this in the Fabric 2 documentation here:
https://docs.fabfile.org/en/2.5/getting-started.html?highlight=failed#bringing-it-all-together
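If you need this check in several places, the test pattern from the answers above can be wrapped in a small helper. A minimal sketch for Fabric 2; the helper name and the default -e flag (any file type) are my own choices:

from fabric import Connection

def remote_exists(c, path, flag='-e'):
    # flag='-d' tests for a directory, flag='-f' for a regular file
    return not c.run('test {} {}'.format(flag, path), warn=True, hide=True).failed

c = Connection('host')
if not remote_exists(c, '/path/to/folder', flag='-d'):
    c.run('mkdir -p /path/to/folder')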
That's not so difficult; you can use traditional Python code to check whether a path already exists.
from pathlib import Path
from fabric import Connection as connection, task
import os

@task
def deploy(ctx):
    parent_deploy_dir = '/var/www'
    deploy_dir = '/var/www/my_folder'
    host = 'REMOTE_HOST'
    user = 'USER'
    with connection(host=host, user=user) as c:
        with c.cd(parent_deploy_dir):
            # Note: os.path.isdir checks the *local* filesystem here
            if not os.path.isdir(Path(deploy_dir)):
                c.run('mkdir -p ' + deploy_dir)

os.environ.get() does not return the environment value on Windows?

I have already set the SLACK_TOKEN environment variable, but SLACK_TOKEN = os.environ.get('SLACK_TOKEN') is returning None.
The type of SLACK_TOKEN is NoneType. I think os.environ.get is not fetching the value of the environment variable, so the rest of the code does not execute.
import os
from slackclient import SlackClient

SLACK_TOKEN = os.environ.get('SLACK_TOKEN')  # returning None
print(SLACK_TOKEN)        # None
print(type(SLACK_TOKEN))  # NoneType class
slack_client = SlackClient(SLACK_TOKEN)
print(slack_client.api_call("api.test"))   # {'ok': True}
print(slack_client.api_call("auth.test"))  # {'ok': False, 'error': 'not_authed'}

def list_channels():
    channels_call = slack_client.api_call("channels.list")
    if channels_call['ok']:
        return channels_call['channels']
    return None

def channel_info(channel_id):
    channel_info = slack_client.api_call("channels.info", channel=channel_id)
    if channel_info:
        return channel_info['channel']
    return None

if __name__ == '__main__':
    channels = list_channels()
    if channels:
        print("Channels: ")
        for c in channels:
            print(c['name'] + " (" + c['id'] + ")")
            detailed_info = channel_info(c['id'])
            if detailed_info:
                print(detailed_info['latest']['text'])
    else:
        print("Unable to authenticate.")  # Unable to authenticate
I faced a similar issue. I fixed it by removing the quotes from the values.
Example:
I created a local.env file in which I stored my secret key values:
local.env:
export SLACK_TOKEN=xxxxxyyyyyyyzzzzzzz
settings.py:
SLACK_TOKEN = os.environ.get('SLACK_TOKEN')
In your terminal or console, run the command: source local.env
Include local.env in your .gitignore. Make sure you don't push it to git, since you have to safeguard your information.
This is applicable only to the local server. Hope this helps. Happy coding :)
In my case, I wrote the wrong content in the env file:
SLACK_TOKEN=xxxxxyyyyyyyzzzzzzz
I forgot export before it; the correct version should be:
export SLACK_TOKEN=xxxxxyyyyyyyzzzzzzz
You can use a config file to get the env vars without using export.
In the env file, store the variable normally:
.env:
DATABASE_URL=postgresql+asyncpg://postgres:dina@localhost/mysenseai
Then create a config file that will be used to load the env variables, like so:
config.py:
from pydantic import BaseSettings

class Settings(BaseSettings):
    database_url: str

    class Config:
        env_file = '.env'

settings = Settings()
Then you can use it this way:
from config import settings

url = settings.database_url
If you declared the variable SLACK_TOKEN in the Windows command prompt, you will be able to access it only in that same instance of the command prompt - not anywhere else, including PowerShell and Git Bash. Be careful with that.
Whenever you want to run that Python script, run it in the same command prompt where you declared those variables.
You can always check whether the variable exists in cmd by running echo %SLACK_TOKEN%; if it does not exist, cmd will print %SLACK_TOKEN% back.
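To take the shell out of the picture entirely, here is a quick diagnostic sketch that checks whether the variable is visible to the current Python process:

import os

token = os.environ.get('SLACK_TOKEN')
if token is None:
    # The variable was not inherited by this process: set it in the same
    # session you launch Python from, or set it system-wide and restart.
    raise SystemExit('SLACK_TOKEN is not set in this process environment')
print('SLACK_TOKEN is visible to this process')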

Python daemon can see database, but complains that tables do not exist

I have a vanilla Python app that connects to a SQLite database.
Everything works fine until I try to run it as a daemon. Here is the code I'm using to do that:
def start(self):
    if self.lockfile.is_locked():
        exit_with_code(7, self.pid_file)

    # If we're running in debug, run in the foreground, else daemonise
    if self.options['debug']:
        try:
            self.main()
        except KeyboardInterrupt:
            pass
        finally:
            self.close_gracefully()
    else:
        context = daemon.DaemonContext(
            files_preserve=[self.logger.socket(), self.lockfile]
        )
        context.signal_map = {
            signal.SIGTERM: self.close_gracefully
        }
        with context:
            self.main()
I can run it in the foreground with python -m starter -debug and everything is fine; my app writes into the database. But when I leave the debug flag off, I see the following when I try to write:
no such table: Frontends
I know that the Frontends table exists because I've opened the database up. I assume that Python is finding the database, because there would be an entirely different error message otherwise.
All my files are owned by vagrant, and ls -l shows the following:
-rwxrwxrwx 1 vagrant vagrant 9216 Nov 9 18:09 development.sqlite
Anyone got any tips?
Update
As requested, here is the code for my db
import os
import sqlite3

class Database(object):
    def __init__(self, db_file='/vagrant/my_daemon/db/development.sqlite'):
        # Note: sqlite3.connect creates db_file if it doesn't exist,
        # so the check below always passes
        self.db = sqlite3.connect(db_file)
        if os.path.exists(db_file):
            print "db exists"
And when I run this, it prints "db exists". I instantiate the database in starter.py with a call to Database().
python-daemon closes all open file descriptors (except stdin, stdout and stderr) when you daemonise.
I spent ages trying to figure out which files to keep open to prevent my database from being inaccessible, and in the end I found that it's easier to initialise the database inside the daemon context rather than outside. That way I don't need to worry about which files should stay open.
Now everything is working fine.
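A minimal sketch of that approach, assuming the python-daemon library and the Database class from the question (main is a stand-in for the daemon's real work loop):

import daemon

def start_daemon(logger, lockfile):
    context = daemon.DaemonContext(
        files_preserve=[logger.socket(), lockfile]
    )
    with context:
        # Connect only after daemonising, so DaemonContext cannot close
        # the SQLite connection's file descriptor.
        db = Database()
        main(db)  # stand-in for the real work loop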

Unable to identify the host: Fabric

I'm trying to use the fabric module from a simple Python module.
remoteExc.py:
from fabric.api import *

def clone_repo(IPADDRESS, USER, fPath, git_url):
    env.hosts_string = IPADDRESS
    env.user = USER
    env.key_filename = fPath
    env.disable_known_hosts = 'True'
    run('git clone %s' % (git_url))
mainFile.py:
from remoteExc import clone_repo

clone_repo(ipAddress, user, fPath, git_url)
When I execute it:
python mainfile.py
No hosts found. Please specify (single) host string for connection:
Please enlighten me as to where I made a mistake.
Typo. It should be env.host_string = IPADDRESS - you've got env.hosts_string instead.
Also, generally you run fabric via fab - unless you're trying to do something fairly non-standard, be aware that running it via python probably isn't what you want to do. See the Fabric docs for a pretty good intro.
http://docs.fabfile.org/en/1.7/tutorial.html
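For reference, here is the question's function with just that attribute corrected (Fabric 1.x API, everything else unchanged):

from fabric.api import env, run

def clone_repo(IPADDRESS, USER, fPath, git_url):
    env.host_string = IPADDRESS  # host_string, not hosts_string
    env.user = USER
    env.key_filename = fPath
    env.disable_known_hosts = True
    run('git clone %s' % git_url)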

How to define more than one server environment in Fabric(Python)?

I need to use Fabric to do some operations on a website that uses one machine for the filesystem and another machine for the database server, so I need to handle two hosts. How can I do that?
I have some code, but I cannot get the environment definitions to work.
The idea is to connect to the remote Filesystem server and get the files and then connect to the remote Database server and get the database schema.
The code that I have for now is something like this:
from __future__ import with_statement
from fabric.api import *
from fabric.contrib.console import confirm

'''
Here I define where my "aid"s file structure is
'''
local_root = '/home/andre/test'  # This is the root folder for the audits
code_location = '/remote_code'   # This is the root folder for the customer code inside each audit

#
# ENVIRONMENTS CONFIGURATIONS
#

'''
Here I configure where the remote file server is
'''
def file_server():
    env.user = 'andre'
    env.hosts = ['localhost']

'''
Here I configure where the database server is
'''
def database_server():
    env.user = 'andre'
    env.hosts = ['192.168.5.1']

#
# START SCRIPT
#
def get_install(remote_location, aid):
    ### I will get the files
    '''
    Here I need to load the file_server() definitions
    '''
    working_folder = local_root + '/%s' % aid             # I will define the working folder
    local('mkdir ' + working_folder)                      # I will create the working folder for this audit
    local('mkdir ' + working_folder + code_location)      # I will create the folder to receive the code
    get(remote_location, working_folder + code_location)  # I will download the code to my machine
    ### I will get the database
    '''
    Here I need to load the database_server() definitions
    '''
    local('dir')  # Just to test
How can I load the file_server() and database_server() environment definitions inside get_install()?
Best Regards,
I don't understand exactly what you are trying to do, but maybe you can split your get_install function into two functions, one for each server.
Then limit those functions to the correct servers with the fabric.decorators.hosts(*host_list) decorator:
For example, the following will ensure that, barring an override on the command line, my_func will be run on host1, host2 and host3, and with specific users on host1 and host3:
@hosts('user1@host1', 'host2', 'user2@host3')
def my_func():
    pass
(For more info see http://readthedocs.org/docs/fabric/en/1.1.0/api/core/decorators.html#fabric.decorators.hosts)
And you can then call those two functions in one go by defining your get_install method as:
def get_install():
    func1()
    func2()
You should be able to do this with fab database_server get_install. Basically, fab [environment] [command] should do what you want.
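For illustration, here is a minimal sketch of the split suggested above using the hosts decorator together with execute() (available since Fabric 1.3), which honors each task's host list when one task calls another. The host strings come from the question; the pg_dump command is a hypothetical placeholder:

from fabric.api import execute, get, hosts, local, run

@hosts('andre@localhost')
def get_files(remote_location, working_folder):
    get(remote_location, working_folder + code_location)

@hosts('andre@192.168.5.1')
def get_schema():
    run('pg_dump --schema-only mydb > /tmp/schema.sql')  # hypothetical command

def get_install(remote_location, aid):
    working_folder = local_root + '/%s' % aid
    local('mkdir -p ' + working_folder + code_location)
    execute(get_files, remote_location, working_folder)
    execute(get_schema)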
