So of course I'm new to Python and to programming in general...
I am trying to get OS version information from the network. For now I only care about the Windows machines.
Using PyWin32 I can get some basic information, but it's not very reliable. This is an example of what I am doing right now: win32net.NetWkstaGetInfo(myip, 100)
However, it appears as though this would provide me with more appropriate information: platform.win32_ver()
I have no idea how to get the info from a remote machine using this. I need to specify an IP or a range of IPs... I intend to use Google's ipaddr to get a list of network ranges to scan. I will eventually need to scan a large network for this info.
Can someone provide an example?
A good way is to use WMI. The following links from Microsoft contain enough information to write code for your purposes:
Connecting to WMI on a Remote Computer
WMI Tasks: Operating Systems
The missing piece is how to do this in Python. For that, consult Tim Golden's site:
WMI for Python
WMI Cookbook
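For illustration, here is a minimal sketch using Tim Golden's wmi module (pip install wmi); the IP address and credentials are placeholders, and the remote machine must allow DCOM/WMI connections:

import wmi

# Placeholders: substitute a real host and an account with remote-WMI rights
c = wmi.WMI("192.168.1.10", user=r"MYDOMAIN\admin", password="secret")
for os in c.Win32_OperatingSystem():
    print(os.Caption, os.Version, os.BuildNumber)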
By the way, if you're OK with using a command-line program and parsing the output, then I would suggest the freely available PsTools. In particular, psinfo can do what you want.
I had to use the remote registry. The values I needed are under:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion
ProductName, EditionID, CurrentVersion, CurrentBuild
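For example, reading those values remotely with the standard-library winreg module (the IP is a placeholder; the Remote Registry service must be running and reachable on the target):

import winreg

# Connect to the remote machine's HKLM hive (requires the Remote Registry service)
hive = winreg.ConnectRegistry(r"\\192.168.1.10", winreg.HKEY_LOCAL_MACHINE)
key = winreg.OpenKey(hive, r"SOFTWARE\Microsoft\Windows NT\CurrentVersion")
for name in ("ProductName", "EditionID", "CurrentVersion", "CurrentBuild"):
    value, _ = winreg.QueryValueEx(key, name)
    print(name, "=", value)
winreg.CloseKey(key)
winreg.CloseKey(hive)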
I am using Python 3.6.8 with pysnmp 4.4.12, and the device I am polling works fine with easysnmp or with snmpwalk/snmpget from the command line. I have figured out the problem: while I am sending community string xxxxxx, the return packets show the community string as public. I changed mine temporarily to public to see if that would work, and it did. My question is: is there some way to tell pysnmp to ignore the community string on incoming packets?
The original developer Ilya made it clear in many places that PySNMP aims to be standards compliant, so any violation can trigger issues like this.
I took a glance at the related files and didn't see any option to skip the community name check.
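For context, a basic v2c GET with pysnmp looks like the sketch below (the host and OID are placeholders; the community string is the one from the question). Because the engine validates the community string on incoming packets, a mismatched response is dropped and the request appears to time out, which is consistent with what you saw:

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

errorIndication, errorStatus, errorIndex, varBinds = next(
    getCmd(SnmpEngine(),
           CommunityData('xxxxxx'),                 # community sent in the request
           UdpTransportTarget(('192.0.2.1', 161)),  # placeholder device address
           ContextData(),
           ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0))))

if errorIndication:
    print(errorIndication)  # a community mismatch surfaces as a timeout here
else:
    for varBind in varBinds:
        print(varBind)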
The project that I am working on is a bit confidential, but I will try to explain my issues and be as clear as possible, because I need your opinion.
Project:
They asked me to set up a local ELK environment and to use Python scripts to communicate with this stack (ELK): to store data, retrieve it, analyse it, and visualise it with Kibana; finally, there is decision making based on that data (AI). So as you can see, it is a data engineering project with some AI for the decision-making process. The issues that I am facing are:
I don't know how to use Python to communicate with the stack; I didn't find resources about it
Since the data is confidential, how can I ensure a high level of security?
How many instances should I use?
I am lost because I am new to ELK and my team is not dev oriented
I am new to ELK, so any advice would be really helpful!
I don't know how to use Python to communicate with the stack; I didn't find resources about it
For learning how to interact with your stack, use the official Python client library, elasticsearch.
You can install it with pip3 install elasticsearch, and the following links contain a wealth of tutorials on almost anything you would need to do.
https://kb.objectrocket.com/category/elasticsearch?filter=python
Suggest you start with these two:
https://kb.objectrocket.com/elasticsearch/how-to-parse-lines-in-a-text-file-and-index-as-elasticsearch-documents-using-python-641
https://kb.objectrocket.com/elasticsearch/how-to-query-elasticsearch-documents-in-python-268
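As a quick illustration, a minimal index-and-query round trip with the elasticsearch client might look like this (host, credentials, and index name are placeholders; the keyword arguments assume the 8.x client):

from elasticsearch import Elasticsearch

# Placeholders: point this at your own cluster and credentials
es = Elasticsearch("https://localhost:9200", basic_auth=("elastic", "changeme"))

# Store a document
es.index(index="measurements", document={"sensor": "unit-1", "value": 42.0})

# Retrieve matching documents
resp = es.search(index="measurements", query={"match": {"sensor": "unit-1"}})
for hit in resp["hits"]["hits"]:
    print(hit["_source"])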
Since the data is confidential, how can I ensure a high level of security?
You can mask the data or restrict index access.
https://www.elastic.co/guide/en/elasticsearch/reference/current/authorization.html
https://nl.devoteam.com/expert-view/field-level-security-and-data-masking-in-elasticsearch/
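For instance, restricting index access with field-level security can be done through a role; a hedged sketch with the 8.x client (the role, index, and field names are hypothetical):

from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", basic_auth=("elastic", "changeme"))

# Hypothetical role: read-only access to one index, with only two fields visible
es.security.put_role(
    name="analyst_read_only",
    indices=[{
        "names": ["measurements"],
        "privileges": ["read"],
        "field_security": {"grant": ["sensor", "value"]},
    }],
)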
How many instances should I use?
I am lost because I am new to ELK and my team is not dev oriented
I suggest you start with 1 Elasticsearch node; if you're on AWS, use a t3a.large or equivalent, and run Elasticsearch, Kibana, and Logstash all on the same machine.
For setting it up: https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-stack-docker.html#run-docker-secure
If you want to use Python as your integration tool for Elasticsearch, you can use the elasticsearch Python client.
Alternatively, you can use Python to produce the results and save them to a log file or insert them into a database, and then Logstash will pick up your data.
For security, ELK has good coverage, from API authorization and user authentication to cluster security; see Secure the Elastic Stack.
I just use 1 instance, but feel free to separate Kibana, Elasticsearch, and Logstash (if you use it), or you can use Docker to separate them.
Based on my experience, if you are going to load a lot of data in a short time, it is wise to separate them so the processes don't interfere with each other.
I am trying to make an application using Python that registers students' attendance. I'm planning to use my laptop's built-in fingerprint reader to identify the students and register the attendance.
I've tried some web searches but I couldn't find any way to use built-in fingerprint devices for applications with Python. Do you know any way to do it?
The device that I want to use for fingerprints is a Lenovo ThinkPad L540.
I managed to find some things like the Windows Biometric Framework, but those were to be used with other languages.
https://learn.microsoft.com/en-us/windows/win32/secbiomet/biometric-service-api-portal?redirectedfrom=MSDN
This cannot be done for now. The fingerprint sensor on a laptop/mobile device can be used for authentication purposes only. That is, you can enroll the fingerprints of the people who are eligible to access the device, and the device will then allow any one of them to unlock it. It will not record whose fingerprint it is; it will just report whether a fingerprint is authenticated or not.
For recording attendance, you must go with a time-and-attendance system. If you want to build a software-based attendance system around a scanner, then you have to use a standalone fingerprint scanner such as the MFS100, ZK7500, etc.
From what I can tell, this absolutely can be done. The following link is for a python wrapper around the Windows Biometric Framework. It is around 4 years old, but the functionality it offers still seems to work fine.
https://github.com/luspock/FingerPrint
The identify function in this wrapper prints out the Sub Factor value whenever someone places a matching finger on the scanner. In my experimentation, the returned Sub Factor is unique to each finger that is stored. In the first day you use this, you would just fill a dictionary with sub factors and student names, then that is everything you need for your use case.
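As a sketch of that dictionary idea (hypothetical glue code; the wrapper's actual identify function and its return values should be checked against the repo):

from datetime import datetime

# Built up on the first day: sub factor -> student name (values are made up)
students = {1: "Alice", 2: "Bob", 3: "Carol"}

def record_attendance(sub_factor):
    # sub_factor is assumed to come from the wrapper's identify call
    name = students.get(sub_factor)
    if name is None:
        print("Unknown fingerprint")
    else:
        print(f"{datetime.now():%Y-%m-%d %H:%M} attendance: {name}")

You would wire record_attendance() to whatever sub factor value the wrapper's identify function returns when a finger matches.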
Considering that this wrapper only makes use of the system biometric unit pool, the drawback here is that you have to add all of your students' fingerprints to your PC through the Windows sign-in options, meaning they would be able to unlock it. If you are okay with that, it seems like this will suit your needs.
It would also be possible for you to disable login with fingerprint and only use the system pool for this particular use case. That would give you what you want and keep your PC safe from anyone that has their fingerprint stored in the system pool.
If you want to make use of a private pool, you would have to add that functionality to the wrapper yourself. That's totally possible, but it would be a lot of work.
One thing to note about the Windows Biometric Framework is that it requires the process calling the function to have focus. In order for me to test the wrapper, I used the command-line through the Windows Console Host. Windows Terminal doesn't work, because it doesn't properly acquire focus. You can also use tkinter and call the functions with a button.
I am working on a project where I have been using Python to make API calls to our organization's various technologies to get data, which I then push to Power BI to track metrics over time relating to IT Security.
My boss wants to see info added from Exchange Online Protection such as malware detected in emails, spam blocks etc., essentially replicating some of the email and collaboration reports you'd see in M365 defender > reports > email and collaboration (security.microsoft.com/emailandcollabreport).
I have tried the Defender API and MS Graph API, read through a ton of documentation, and can't seem to find anywhere to pull this info from. Has anyone done something similar, or know where this data can be pulled from?
Thanks in advance.
You can try the Microsoft Graph Security API, which lets you get alerts, information protection, and secure score data. Also refer to the alerts section in the documentation, which lists the providers currently supported by the Microsoft Graph Security API.
In case anyone else runs into this, this is the solution I ended up using (hacky as it may be):
The only way to extract the pertinent info seems to be through PowerShell; you need the ExchangeOnlineManagement and PSWSMan modules, so those will need to be installed.
You need to add an app to your Azure instance with at least the Global Reader role (or something custom), and generate and upload self-signed certificates to the app.
I then ran the following lines as a ps1 script:
Connect-ExchangeOnline -CertificateFilePath "<PATH>" -AppID "<APPID>" -Organization "<ORG>.onmicrosoft.com" -CertificatePassword (ConvertTo-SecureString -String '<PASSWORD>' -AsPlainText -Force)
$dte = (Get-Date).AddDays(-30)
Get-MailflowStatusReport -StartDate $dte -EndDate (Get-Date)
Disconnect-ExchangeOnline
I used Python to call the PowerShell script, then extracted the info I needed from the output and pushed it to Power BI.
I'm sure there is a more secure and efficient way to do this but I was able to accomplish the task this way.
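The Python side can be as simple as the sketch below (the script filename is a placeholder for the ps1 above; swap pwsh for powershell.exe if you're on Windows PowerShell):

import subprocess

# Placeholder path: the ps1 script shown above
result = subprocess.run(
    ["pwsh", "-File", "get_mailflow_report.ps1"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # parse the report from stdout before pushing to Power BI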
I am using the python-ldap module to (amongst other things) search for groups, and am running into the server's size limit and getting a SIZELIMIT_EXCEEDED exception. I have tried both synchronous and asynchronous searches and hit the problem both ways.
You are supposed to be able to work around this by setting a paging control on the search, but according to the python-ldap docs these controls are not yet implemented for search_ext(). Is there a way to do this in Python? If the python-ldap library does not support it, is there another Python library that does?
Here are some links related to paging in python-ldap.
Documentation: http://www.python-ldap.org/doc/html/ldap-controls.html#ldap.controls.SimplePagedResultsControl
Example code using paging: http://www.novell.com/coolsolutions/tip/18274.html
More example code: http://google-apps-for-your-domain-ldap-sync.googlecode.com/svn/trunk/ldap_ctxt.py
After some discussion on the python-ldap-dev mailing list, I can answer my own question.
Paged controls ARE supported by the python-ldap module, but the docs had not been updated for search_ext() to show that. The example linked by Gorgapor shows how to use ldap.controls.SimplePagedResultsControl to read the results in pages.
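For reference, here is a minimal paged-search sketch (assuming python-ldap 2.4+; the server URI, bind credentials, base DN, and filter are placeholders):

import ldap
from ldap.controls import SimplePagedResultsControl

conn = ldap.initialize("ldap://ldap.example.com")  # placeholder server
conn.simple_bind_s("cn=admin,dc=example,dc=com", "secret")

ctrl = SimplePagedResultsControl(True, size=500, cookie="")
results = []
while True:
    msgid = conn.search_ext("dc=example,dc=com", ldap.SCOPE_SUBTREE,
                            "(objectClass=group)", serverctrls=[ctrl])
    rtype, rdata, rmsgid, serverctrls = conn.result3(msgid)
    results.extend(rdata)
    # Pull the cookie out of the returned paged-results control
    pctrls = [c for c in serverctrls
              if c.controlType == SimplePagedResultsControl.controlType]
    if not pctrls or not pctrls[0].cookie:
        break  # no cookie means the last page has been read
    ctrl.cookie = pctrls[0].cookie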
However, there is a gotcha. This will work with Microsoft Active Directory servers, but not with OpenLDAP servers (and possibly others, such as Sun's). The LDAP controls RFC is ambiguous as to whether paged controls should be allowed to override the server's sizelimit setting. On Active Directory servers they can by default, while on OpenLDAP they cannot, though I believe there is a server setting that will allow them to.
So even if you implement the paged control, there is still no guarantee that it will get all the objects you want. Sigh.
Also, paged controls are only available with LDAP v3, but I doubt that there are many v2 servers in use.