I want certain functions in my application to only be accessible if the current user is an administrator.
How can I determine if the current user is in the local Administrators group using Python on Windows?
You could try this:
import ctypes
print(ctypes.windll.shell32.IsUserAnAdmin())
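If you want to gate specific functions on that check, a minimal sketch of a wrapper (the is_admin name and the fallback behaviour are my own additions, not part of the original answer):

import ctypes

def is_admin():
    # True if the current process is running elevated (past UAC), False otherwise.
    try:
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    except Exception:
        # The call is Windows-only; treat any failure as "not an admin".
        return False

def do_admin_only_thing():
    if not is_admin():
        print('This action requires administrator rights.')
        return
    # ... admin-only work goes here ...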
import win32net

def if_user_in_group(group, member):
    members = win32net.NetLocalGroupGetMembers(None, group, 1)
    return member.lower() in [d['name'].lower() for d in members[0]]

# Function usage
print(if_user_in_group('SOME_GROUP', 'SOME_USER'))
Of course, in your case 'SOME_GROUP' will be 'administrators'.
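For example (assuming an English Windows install where the local group is literally named 'Administrators'; on localized systems the group has a different name), you could check the currently logged-in user like this:

import getpass
print(if_user_in_group('Administrators', getpass.getuser()))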
I'd like to give some credit to Vlad Bezden, because without his use of the win32net module this answer would not exist.
If you really want to know whether the user has the ability to act as an admin past UAC, you can do the following. It also lists the groups the current user is in, if needed. It should work on most (all?) language set-ups: the local Administrators group just has to start with "Admin", which it usually does... (Does anyone know of set-ups where this is different?)
To use this code snippet you'll need to have the pywin32 module installed; if you don't have it yet, you can get it from PyPI: pip install pywin32
IMPORTANT TO KNOW:
It may matter to some users/coders that os.getlogin() is only available on Windows since Python 3.1...
python3.1 Documentation
win32net Reference
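If you are stuck on an older interpreter, or os.getlogin() fails in your environment (it can, e.g. when there is no console), one possible fallback of my own suggestion is win32api.GetUserName(), since pywin32 is required for this snippet anyway:

import os
import win32api

try:
    username = os.getlogin()
except (AttributeError, OSError):
    # os.getlogin() missing (pre-3.1 on Windows) or failing: fall back to pywin32.
    username = win32api.GetUserName()
print(username)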
from time import sleep
import os
import win32net

if 'logonserver' in os.environ:
    server = os.environ['logonserver'][2:]
else:
    server = None

def if_user_is_admin(Server):
    groups = win32net.NetUserGetLocalGroups(Server, os.getlogin())
    isadmin = False
    for group in groups:
        if group.lower().startswith('admin'):
            isadmin = True
    return isadmin, groups

# Function usage
is_admin, groups = if_user_is_admin(server)

# Result handling
if is_admin:
    print('You are an admin user!')
else:
    print('You are not an admin user.')

print('You are in the following groups:')
for group in groups:
    print(group)

sleep(10)

# (C) 2018 DelphiGeekGuy @ Stack Overflow
# Don't hesitate to credit the author if you plan to use this snippet for production.
Oh, and where the snippet has from time import sleep and sleep(10): insert your own imports/code there instead...
I need your help: I need a Python script that takes a certain group in Active Directory and builds a list of the users who are in that group.
I have no experience with Active Directory, so I'd appreciate any help.
I managed to solve it. This code takes the list of users from the group and displays it; in addition, blocked (disabled) users are not included in the list.
from ldap3 import Server, Connection, ALL, NTLM, SUBTREE

ldap_cred = {'server_ip': 'SERVER_IP',
             'user': 'USER_LOGIN',
             'password': 'USER_PASS'}

ldap_conn = Connection(Server(ldap_cred['server_ip']),
                       user=ldap_cred['user'],
                       password=ldap_cred['password'],
                       authentication=NTLM,
                       auto_bind=True)

ldap_users_dir = 'OU=Users,DC=test,DC=local'
ldap_servers_dir = 'OU=Groups,DC=test,DC=local'
user_group = 'GROUP_W_USERS'

def get_group(group_name):
    ldap_conn.search(ldap_servers_dir, '(cn={})'.format(group_name))
    return ldap_conn.response[0]['dn']

def get_users(group_path):
    search_filter = ('(&(cn=*)(memberOf={})(objectClass=User)'
                     '(!(userAccountControl=514))(!(userAccountControl=66050)))').format(group_path)
    ldap_conn.search(ldap_users_dir,
                     search_filter,
                     attributes=('sAMAccountName', 'memberof'),
                     search_scope=SUBTREE)
    return ldap_conn.entries

if __name__ == '__main__':
    group_p = get_group(user_group)
    for user in get_users(group_p):
        user_n = user.sAMAccountName.value.lower()
        print(user_n)
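Note that filtering on the literal userAccountControl values 514 and 66050 only catches those exact flag combinations. If you want to exclude every disabled account regardless of its other flags, a hedged alternative (my addition, using the standard LDAP bitwise-AND matching rule rather than anything from the original post) is:

def get_enabled_users(group_path):
    # 1.2.840.113556.1.4.803 is the bitwise-AND matching rule;
    # bit 2 of userAccountControl means ACCOUNTDISABLE.
    search_filter = ('(&(objectClass=user)(memberOf={})'
                     '(!(userAccountControl:1.2.840.113556.1.4.803:=2)))').format(group_path)
    ldap_conn.search(ldap_users_dir,
                     search_filter,
                     attributes=('sAMAccountName',),
                     search_scope=SUBTREE)
    return ldap_conn.entries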
I need to produce a small script that will watch for accidental changes made by users to a large shared file structure.
I have found I can get the change events using the ReadDirectoryChanges API as per
http://timgolden.me.uk/python/win32_how_do_i/watch_directory_for_changes.html
However, I can't see how to identify the user account that made the changes, so that I can send out a notification.
Is it possible to get the name of the user account that moved the file/directory?
Tricky question; I'll answer it in two parts:
First, as an optional part, you can watch the file modifications themselves and add custom actions.
Example of file modification tracking, working on Windows / Linux / Mac / BSD:
import time
import watchdog.events
import watchdog.observers

class StateHandler(watchdog.events.PatternMatchingEventHandler):
    def on_modified(self, event):
        print(event.event_type)
        print(event.key)
        print(event.src_path)
        # Add your code here to do whatever you want on file modification

    def on_created(self, event):
        pass

    def on_moved(self, event):
        pass

    def on_deleted(self, event):
        pass

fs_event_handler = StateHandler()
fs_observer = watchdog.observers.Observer()
fs_observer.schedule(fs_event_handler, r'C:\Users\SomeUser\SomeFolder', recursive=True)
fs_observer.start()

try:
    while True:
        time.sleep(2)
except KeyboardInterrupt:
    fs_observer.stop()
fs_observer.join()
Using the above filesystem observer, you can trigger security event log reviews.
You might also trigger them as a scheduled task, but it's more fun to trigger them on filesystem modifications.
In order for security event logs to contain file modification information, you need to enable file auditing for the required directories using SACL lists (right click on your folder, security, auditing).
Then you can go through the security logs on file events.
Going through security logs can be done with windows_tools.
Get it installed with python -m pip install windows_tools.wmi_queries (obviously only works under Windows)
Then do the following:
from windows_tools.wmi_queries import *

result = query_wmi('SELECT * FROM Win32_NTLogEvent WHERE Logfile="Security" AND TimeGenerated > "{}"'
                   .format(create_current_cim_timestamp(hour_offset=1)))
for r in result:
    print(r)
You can add WHERE clauses like EventCode={integer} in order to filter only the events (file modifications or otherwise) you need.
Usually the event codes you're looking for are 4656, 4660, 4663 and 4670 (open, delete, edit, create).
See this Microsoft article to learn which WHERE clauses the event log class accepts.
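For example, a quick sketch of a query filtered on those event codes, using the same query_wmi / create_current_cim_timestamp helpers as above (the exact codes you need depend on your audit policy):

from windows_tools.wmi_queries import query_wmi, create_current_cim_timestamp

file_event_codes = (4656, 4660, 4663, 4670)
code_filter = ' OR '.join('EventCode={}'.format(code) for code in file_event_codes)
query = ('SELECT * FROM Win32_NTLogEvent WHERE Logfile="Security" '
         'AND ({}) AND TimeGenerated > "{}"').format(code_filter,
                                                     create_current_cim_timestamp(hour_offset=1))
for event in query_wmi(query):
    print(event)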
DISCLAIMER: I'm the author of windows_tools package.
In a UNIX environment you can use os and pwd to look up the user account that owns the changed file:
import os
import pwd

file_stat = os.stat("<changed_file>")
user_name = pwd.getpwuid(file_stat.st_uid).pw_name  # name of the file's owner
I know how to get the current user using os or getpass.getuser(), but is there a way to get a list of all users, not only the current one? I read the os and getpass documentation but didn't find anything.
This is OS-specific.
In Linux, see Python script to list users and groups.
In Windows:
via WMI
parse the output of wmic UserAccount get Name (see the sketch after the wmi example below), or
make the same call with the wmi module:
import wmi
w = wmi.WMI()

# The argument (field filter) is only really needed if browsing a large domain,
# as per the warning at https://learn.microsoft.com/en-us/windows/desktop/cimwin32prov/win32-useraccount
# Included for the sake of completeness.
for u in w.Win32_UserAccount(["Name"]):
    print(u.Name)
del u
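And here is the sketch for the wmic route mentioned above (my own illustration, not from the original answer; wmic output quirks such as encoding and trailing carriage returns may need extra handling on some systems):

import subprocess

# Run `wmic UserAccount get Name` and capture its output as text.
output = subprocess.check_output(['wmic', 'UserAccount', 'get', 'Name'], text=True)

# The first line is the "Name" header; the remaining non-empty lines are account names.
users = [line.strip() for line in output.splitlines()[1:] if line.strip()]
print(users)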
via the NetUserEnum API
parse the output of net user, or
make the same call with pywin32:
import win32net
import win32netcon

names = []
resumeHandle = 0
while True:
    data, _, resumeHandle = win32net.NetUserEnum(None, 0,
                                                 win32netcon.FILTER_NORMAL_ACCOUNT,
                                                 resumeHandle)
    names.extend(e["name"] for e in data)
    if not resumeHandle:
        break
del data, resumeHandle
print(names)
Two ideas for methods that are Windows-specific:
from pathlib import Path
users = [x.name for x in Path(r'C:\Users').glob('*') if x.name not in ['Default', 'Default User', 'Public', 'All Users'] and x.is_dir()]
print(users)
Paths in C:\Users
import os
from pathlib import Path

os.system('net user > users.txt')
users = Path('./users.txt').read_text()
print(users)
Output from net user
I created the following custom management command following this tutorial.
from django.core.management.base import BaseCommand, CommandError
from django.contrib.auth.models import User
from topspots.models import Notification

class Command(BaseCommand):
    help = 'Sends message to all users'

    def add_arguments(self, parser):
        parser.add_argument('message', nargs='?')

    def handle(self, *args, **options):
        message = options['message']
        users = User.objects.all()
        for user in users:
            Notification.objects.create(message=message, recipient=user)
        self.stdout.write(
            self.style.SUCCESS(
                'Message:\n\n%s\n\nsent to %d users' % (message, len(users))
            )
        )
It works exactly as I want it to, but I would like to add a confirmation step so that before the for user in users: loop you are asked if you really want to send message X to N users, and the command is aborted if you choose "no".
I assume this can be easily done because it happens with some of the built-in management commands, but the tutorial doesn't seem to cover it, and even after some searching and looking at the source of the built-in management commands, I have not been able to figure it out on my own.
You can use Python's raw_input/input function. Here's an example method from Django's source code:
from django.utils.six.moves import input

def boolean_input(question, default=None):
    result = input("%s " % question)
    if not result and default is not None:
        return default
    while len(result) < 1 or result[0].lower() not in "yn":
        result = input("Please answer yes or no: ")
    return result[0].lower() == "y"
Be sure to use the import from django.utils.six.moves if your code should be compatible with Python 2 and 3, or use raw_input() if you're on Python 2. input() on Python 2 will evaluate the input rather than converting it to a string.
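To wire that into the management command above, a hedged sketch of what handle() could look like inside the Command class (it assumes boolean_input is defined or imported in the same module; the prompt wording is my own):

    def handle(self, *args, **options):
        message = options['message']
        users = User.objects.all()

        question = 'Send message "%s" to %d users? (y/n)' % (message, len(users))
        if not boolean_input(question, default=False):
            self.stdout.write('Aborted, no messages were sent.')
            return

        for user in users:
            Notification.objects.create(message=message, recipient=user)
        self.stdout.write(
            self.style.SUCCESS(
                'Message:\n\n%s\n\nsent to %d users' % (message, len(users))
            )
        )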
I want to change the env.hosts dynamically because sometimes I want to deploy to one machine first, check if ok then deploy to many machines.
Currently I need to set env.hosts first, how could I set the env.hosts in a method and not in global at script start?
Yes you can set env.hosts dynamically. One common pattern we use is:
from fabric.api import env

def staging():
    env.hosts = ['XXX.XXX.XXX.XXX', ]

def production():
    env.hosts = ['YYY.YYY.YYY.YYY', 'ZZZ.ZZZ.ZZZ.ZZZ', ]

def deploy():
    # Do something...
    pass
You would use this to chain the tasks such as fab staging deploy or fab production deploy.
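deploy() itself can be any ordinary Fabric task; a trivial sketch with placeholder paths and commands of my own:

from fabric.api import cd, run

def deploy():
    # Runs once per host in env.hosts.
    with cd('/srv/myapp'):        # hypothetical project path
        run('git pull')
        run('touch app.wsgi')     # hypothetical reload trigger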
Kind of late to the party, but I achieved this with EC2 like so. (Note that in EC2 you generally do not know in advance what the IP/hostname of an instance will be, so you almost have to set hosts dynamically to account for how the environment/systems come up. Another option would be to use DynDNS, but this approach would still be useful then.)
from fabric.api import *
import datetime
import time
import urllib2
import ConfigParser
from platform_util import *

config = ConfigParser.RawConfigParser()

@task
def load_config(configfile=None):
    '''
    ***REQUIRED*** Pass in the configuration to use - usage load_config:</path/to/config.cfg>
    '''
    if configfile is not None:
        # Load up our config file
        config.read(configfile)

        # Key/secret needed for aws interaction with boto
        # (anyone help figure out a better way to do this with sub modules, please don't say classes :-) )
        global aws_key
        global aws_sec
        aws_key = config.get("main", "aws_key")
        aws_sec = config.get("main", "aws_sec")

        # Stuff for fabric
        env.user = config.get("main", "fabric_ssh_user")
        env.key_filename = config.get("main", "fabric_ssh_key_filename")
        env.parallel = config.get("main", "fabric_default_parallel")

        # Load our role definitions for fabric
        for i in config.sections():
            if i != "main":
                hostlist = []
                if config.get(i, "use-regex") == 'yes':
                    for x in get_running_instances_by_regex(aws_key, aws_sec,
                                                            config.get(i, "security-group"),
                                                            config.get(i, "pattern")):
                        hostlist.append(x.private_ip_address)
                    env.roledefs[i] = hostlist
                else:
                    for x in get_running_instances(aws_key, aws_sec,
                                                   config.get(i, "security-group")):
                        hostlist.append(x.private_ip_address)
                    env.roledefs[i] = hostlist

                if config.has_option(i, "base-group"):
                    if config.get(i, "base-group") == 'yes':
                        print("%s is a base group" % i)
                        print(env.roledefs[i])
                        # env["basegroups"][i] = True
where get_running_instances and get_running_instances_by_regex are utility functions that make use of boto (http://code.google.com/p/boto/)
ex:
import logging
import re

from boto.ec2.connection import EC2Connection
from boto.ec2.securitygroup import SecurityGroup
from boto.ec2.instance import Instance
from boto.s3.key import Key

########################################
# B-O-F get_instances
########################################
def get_instances(access_key=None, secret_key=None, security_group=None):
    '''
    Get all instances. Only within a security group if specified; their state (running/stopped/etc.) doesn't matter.
    '''
    logging.debug('get_instances()')
    conn = EC2Connection(aws_access_key_id=access_key, aws_secret_access_key=secret_key)
    if security_group:
        sg = SecurityGroup(connection=conn, name=security_group)
        instances = sg.instances()
        return instances
    else:
        instances = conn.get_all_instances()
        return instances
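The post only shows get_instances; for completeness, here is a rough sketch of what get_running_instances might look like on top of it (my guess, assuming it is called with a security group as load_config does, so boto returns Instance objects with a .state attribute; the original utility function may differ):

########################################
# B-O-F get_running_instances (sketch)
########################################
def get_running_instances(access_key=None, secret_key=None, security_group=None):
    '''
    Like get_instances(), but only keep instances that are currently running.
    '''
    instances = get_instances(access_key, secret_key, security_group)
    return [i for i in instances if i.state == 'running']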
Here is a sample of what my config looked like:
# Config file for fabric toolset
#
# This specific configuration is for <whatever> related hosts
#
#
[main]
aws_key = <key>
aws_sec = <secret>
fabric_ssh_user = <your_user>
fabric_ssh_key_filename = /path/to/your/.ssh/<whatever>.pem
fabric_default_parallel = 1
#
# Groupings - Fabric knows them as roledefs (check env dict)
#
# Production groupings
[app-prod]
security-group = app-prod
use-regex = no
pattern =
[db-prod]
security-group = db-prod
use-regex = no
pattern =
[db-prod-masters]
security-group = db-prod
use-regex = yes
pattern = mysql-[d-s]01
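Once load_config has populated env.roledefs, tasks can target those groups with Fabric's roles decorator. A sketch of my own (restart command and group name are illustrative; 'app-prod' comes from the sample config above):

from fabric.api import roles, run

@roles('app-prod')
def restart_app():
    # Runs on every host that load_config placed in the app-prod roledef.
    run('sudo service myapp restart')   # hypothetical service name

You would invoke it as fab load_config:/path/to/config.cfg restart_app so the roledefs are populated before the task runs.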
Yet another new answer to an old question. :) But I just recently found myself attempting to dynamically set hosts, and I really have to disagree with the main answer. My idea of dynamic, or at least what I was attempting to do, was to take an instance DNS name that had just been created by boto and access that instance with a fab command. I couldn't do fab staging deploy, because the instance doesn't exist at fabfile-editing time.
Fortunately, fabric does support a truly dynamic host-assignment with execute. (It's possible this didn't exist when the question was first asked, of course, but now it does). Execute allows you to define both a function to be called, and the env.hosts it should use for that command. For example:
def create_EC2_box(data=fab_base_data):
    conn = boto.ec2.connect_to_region(region)
    reservations = conn.run_instances(image_id=image_id, ...)
    ...
    return instance.public_dns_name

def _ping_box():
    run('uname -a')
    run('tail /var/log/cloud-init-output.log')

def build_box():
    box_name = create_EC2_box(fab_base_data)
    new_hosts = [box_name]
    # new_hosts = ['ec2-54-152-152-123.compute-1.amazonaws.com']  # testing
    execute(_ping_box, hosts=new_hosts)
Now I can do fab build_box, and it will fire one boto call that creates an instance, and another fabric call that runs on the new instance - without having to define the instance-name at edit-time.