How to get GPO Client Side Extension Details Using Python?

I'm trying to build a GPO parser in Python. In order to do so, I'm using Impacket. For getting the general details about the GPO I'm using LDAP, and for reading the files from SYSVOL I'm using SMB. So far I have managed to get a GPO by its GUID. There is a field which lists the extensions (Computer or User) for any GPO; the list contains GUIDs of the Client Side Extensions (CSEs) that the GPO is using.
In order to process a CSE I have tried to read the files related to that extension (after researching it). However, there are pieces of information that do not appear in the files (as far as I can tell) but do show up in other tools that do the same thing (MMC, Get-GPOReport, gpresult).
To make my problem clear I'll give an example. The security CSE defines several things, including password length, logon attempts and more. One of the things also defined there is whether or not to store the LM hash; this one is set by a registry key. Those settings are stored in a file called GptTmpl.inf in SYSVOL. The problem is that the display name doesn't appear there, yet tools like Get-GPOReport or MMC manage to get it. My goal is to get it as well without running any new process or tool (CMD or PowerShell), meaning using Python only.
I have tried looking in ADM/ADML files, the /root/rsop/computer namespace and more, without success. Does someone know how to get this information?
To be clear, the question I'm trying to answer is: how can I get any piece of information about a GPO CSE, the way MMC, Get-GPOReport and gpresult can, using Python?
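For the SYSVOL side of this, here is a minimal sketch of parsing the [Registry Values] section of GptTmpl.inf (already copied from SYSVOL) and attaching human-readable names from a hand-maintained table. The NoLMHash entry and the display-name mapping are illustrative assumptions; as far as I know, MMC and Get-GPOReport take those strings from local resources (e.g. SceRegVl.inf on a Windows machine), which is why they never appear in SYSVOL itself.

# Sketch: parse the [Registry Values] section of a GptTmpl.inf copied from SYSVOL
# and attach display names from a hand-maintained table (an assumption here;
# the names themselves are not stored anywhere in SYSVOL).
import configparser

DISPLAY_NAMES = {
    # Illustrative entry for the LM hash example from the question.
    r"MACHINE\System\CurrentControlSet\Control\Lsa\NoLMHash":
        "Network security: Do not store LAN Manager hash value on next password change",
}

def parse_gpttmpl(path):
    parser = configparser.ConfigParser(delimiters=("=",), strict=False,
                                       interpolation=None)
    parser.optionxform = str  # keep registry paths case-sensitive

    # GptTmpl.inf is usually UTF-16 LE with a BOM; fall back to plain text otherwise.
    raw = open(path, "rb").read()
    encoding = "utf-16" if raw.startswith(b"\xff\xfe") else "utf-8"
    parser.read_string(raw.decode(encoding, errors="replace"))

    settings = []
    if parser.has_section("Registry Values"):
        for key, value in parser.items("Registry Values"):
            reg_type, _, data = value.partition(",")
            settings.append({
                "registry_path": key,
                "type": reg_type,
                "value": data,
                "display_name": DISPLAY_NAMES.get(key, key),  # fall back to the raw path
            })
    return settings

for entry in parse_gpttmpl("GptTmpl.inf"):
    print(entry["display_name"], "=", entry["value"])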

Related

Getting compiler error when trying to verify a contract importing from @uniswap/v3-periphery

I'm trying to perform a simple swap from DAI to WETH with Uniswap in my own smart contract on the Kovan testnet. Unfortunately my transaction keeps getting reverted even after setting the gas limit manually.
I also discovered that I cannot verify the contract on Kovan via the Etherscan API or manually. Instead I keep getting this error for every library I import:
Source "@uniswap/v3-periphery/contracts/interfaces/ISwapRouter.sol" not found: File import callback not supported
Accordingly I have the feeling that something is going wrong during compilation, and I'm stuck without any further ideas for working out my problem.
Here are a couple of notes on what I've tried so far and how to reproduce:
Brownie Version 1.16.4, Tested on Windows 10 and Ubuntu 21.04
I've tried:
Importing libraries with Brownie package manager
Importing libraries with npm and using relative paths
All kinds of different compiler remappings in the brownie-config.yaml
Adding all dependency files to project folders manually
Here's a link to my code for reproducing my error:
https://github.com/MjCage/swap-demo
It'd be fantastic if someone could help.
It's very unlikely that something is "going wrong during compilation". If your contract compiles but what it does does not match the sources, you have found a very serious codegen bug in the compiler and you should report it so that it can be fixed quickly. From experience I'd say that it's much more likely that you have a bug in your contract though.
As for the error during verification - the problem is that to properly compile a multi-file project, you have to provide all the source files and have them in the right directories. This applies to library code as well, so if your contract imports ISwapRouter.sol, you need to submit that file and all the files it in turn imports as well.
The next hurdle is that as far as I can tell, the multi-file verification option at Etherscan only allows you to submit files from a single directory so it only gets their names, not the whole paths (not sure if it's different via the API). You need Etherscan to see the file as #uniswap/v3-periphery/contracts/interfaces/ISwapRouter.sol but it sees just ISwapRouter.sol instead and the compiler will not treat them as the same (both could exist after all).
The right solution is to use the Standard JSON verification option - this way you submit the whole JSON input that your framework passes to the compiler and that includes all files in the project (including libraries) and relevant compiler options. The issue is that Brownie does not give you this input directly. You might be able to recreate it from the JSON it stores on disk (Standard JSON input format is documented at Compiler Input and Output JSON Description) but that's a bit of manual work. Unfortunately Brownie does not provide any way to request this on the command line. The only other way to get it that I know of is to use Brownie's API and call compiler.generate_input_json().
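If you want to try recreating the Standard JSON input by hand rather than going through Brownie's internals, here is a rough sketch. The paths and optimizer settings are assumptions; take the real values from your brownie-config.yaml, and note that the source keys have to match the import paths your contract uses.

# Sketch: build a solc Standard JSON input from the sources on disk so it can be
# submitted to Etherscan's "Standard JSON" verification. Paths and optimizer
# settings are assumptions; copy the real ones from your project config.
import json
from pathlib import Path

sources = {}

# Your own contracts.
for path in Path("contracts").rglob("*.sol"):
    sources[path.as_posix()] = {"content": path.read_text()}

# Library code pulled in via npm. The keys must match the import paths used in
# the contracts, so the node_modules/ prefix is stripped here.
for path in Path("node_modules/@uniswap").rglob("*.sol"):
    sources[path.relative_to("node_modules").as_posix()] = {"content": path.read_text()}

standard_json = {
    "language": "Solidity",
    "sources": sources,
    "settings": {
        "optimizer": {"enabled": True, "runs": 200},
        "outputSelection": {"*": {"*": ["abi", "evm.bytecode"]}},
    },
}

Path("verify-input.json").write_text(json.dumps(standard_json, indent=2))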
Since this is a simple project with just one contract and does not have deep dependencies, it might be easier for you to follow @Jacopo Mosconi's answer and just "flatten" the contract by replacing all imports by sources pasted directly into the main contract. You might also try copying the file to your project dir and altering the import so that it only contains the file name, without any path component - this might pass the multi-file verification. Flattening is ultimately how Brownie and many other frameworks currently do verification and Etherscan's check is lax enough to allow sources modified in such a way - it only checks bytecode so you can still verify even if you completely change the import structure, names, comments or even any code that gets removed by the optimizer.
The compiler can't find ISwapRouter.sol.
You can add the code of ISwapRouter.sol directly to your swap.sol and delete the import line from your code. The source is here: https://github.com/Uniswap/v3-periphery/blob/main/contracts/interfaces/ISwapRouter.sol

Python-based PDF parser integrated with Zapier

I am working for a company which is currently storing PDF files on a remote drive and subsequently manually inserting values found within these files into an Excel document. I would like to automate the process using Zapier and make it scalable (we receive a large number of PDF files). Would anyone know of any useful, and possibly free, applications for converting PDFs into Excel docs that integrate with Zapier? Alternatively, would it be possible to create a Python script in Zapier to access the information and store it in an Excel file?
This option came to mind. I'm using Google Drive as an example; you didn't say what you were using as storage, but Zapier should have an option for it.
Use CloudConvert or a doc parser (depending on what you want to pay; CloudConvert at least gives you some free conversion time per month, so that may be the closest you can get).
Create a Zap with these steps:
Trigger on new file in drive (Name: Convert new Google Drive files with CloudConvert)
Convert file with CloudConvert
Those are two options by Zapier that I can find. But you could also do it in Python from your desktop by following something like this idea, then set up an event trigger in Windows (e.g. via Task Scheduler) to handle the upload/download.
Unfortunately it doesn't seem that you can import JS/Python libraries into Zapier, though I may be wrong on that. If you can, or find a way to do so, then just use PDFMiner with "Code by Zapier". Someone might have to confirm this though; I've never gotten libraries to work in Zaps.
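For what it's worth, the extraction side outside Zapier is only a few lines with pdfminer.six; the file name and the regex for picking out a value are placeholders for whatever your PDFs actually contain.

# Sketch: pull the text out of a PDF with pdfminer.six and grep a value out of it.
# The "Invoice total" pattern and file name are placeholders; adapt them to your documents.
import re
from pdfminer.high_level import extract_text

text = extract_text("incoming.pdf")            # whole document as plain text
match = re.search(r"Invoice total:\s*([\d.,]+)", text)
if match:
    print("Extracted value:", match.group(1))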
Hope that helps!

Python and downloading Google Sheets feeds

I'm trying to download a spreadsheet from Google Drive inside a program I'm writing (so the data can be easily updated across all users), but I've run into a few problems:
First, and perhaps foolishly, I only want to use the basic Python distribution, so I'm not requiring people to download multiple modules to run it. The urllib.request module seems to work well enough for basic downloading, specifically the urlopen() function, when I've tested it on normal webpages (more on why I say "normal" below).
Second, most questions and answers on here deal with retrieving a .csv from the spreadsheet. While this might work even better than trying to parse the feeds (and I have actually gotten it to work), using only the basic address means only the first sheet is downloaded, and I need to add a non-obvious gid to get the others. I want to have the program independent of the spreadsheet, so I only have to add new data online and the clients are automatically updated; trying to find a gid programmatically gives me trouble because:
Third, I can't actually get the feeds (interface described here) to be downloaded correctly. That does seem to be the best way to get what I want—download the overview of the entire spreadsheet, and from there obtain the addresses to each sheet—but if I try to send that through urlopen(feed).read() it just returns b''. While I'm not exactly sure what the problem is, I'd guess that the webpage is empty very briefly when it's first loaded, and that's what urlopen() thinks it should be returning. I've included what little code I'm using below, and was hoping someone had a way of working around this. Thanks!
import urllib.request as url
key = '1Eamsi8_3T_a0OfL926OdtJwLoWFrGjl1S2GiUAn75lU'
gid = '1193707515'
# Single sheet in CSV format
# feed = 'https://docs.google.com/spreadsheets/d/' + key + '/export?format=csv&gid=' + gid
# Document feed
feed = 'https://spreadsheets.google.com/feeds/worksheets/' + key + '/private/full'
csv = url.urlopen(feed).read()
(I don't actually mind publishing the key/gid, because I am planning on releasing this if I ever finish it.)
Requires OAuth2 or a password.
If you log out of Google and try again with your browser, it fails (it failed when I logged out). It looks like it requires a Google account.
I did have it working with an application password a while ago, but I now use OAuth2. Both are quite a bit of messing about compared to CSV.
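For comparison, here is the CSV route from the question done with the standard library only; it works when the spreadsheet is shared publicly (or published to the web), and it still leaves the gid-discovery problem the question mentions.

# Sketch: fetch one sheet of a publicly shared spreadsheet as CSV, stdlib only.
# Works only if the spreadsheet is accessible without signing in.
import csv
import io
import urllib.request

key = '1Eamsi8_3T_a0OfL926OdtJwLoWFrGjl1S2GiUAn75lU'
gid = '1193707515'
url = ('https://docs.google.com/spreadsheets/d/' + key
       + '/export?format=csv&gid=' + gid)

with urllib.request.urlopen(url) as resp:
    rows = list(csv.reader(io.StringIO(resp.read().decode('utf-8'))))

for row in rows[:5]:
    print(row)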
This sounds like a perfect use case for a wrapper library I once wrote. Let me know if you find it useful.

Trying to automate the fpga build process in Xilinx using python scripts

I want to automate the entire process of creating ngs, bit and mcs files in Xilinx and have these files automatically associated with certain folders in the SVN repository. What I need to know is whether there is a log file created in the back end of the Xilinx GUI which records all the commands I run, e.g. open project, load file, synthesize, etc.
The other part that I have not been able to find is a log file that records the entire process of synthesis, map, place and route, and generate programming file, and especially records any errors that the tool encountered during these processes.
If any of you can point me to such files, if they exist, it would be great. I haven't gotten much out of my search, but maybe I didn't look hard enough.
Thanks!
Well, it is definitely a nice project idea, but a good amount of work. There's always a reason why an IDE was built. A simple search yields the "Command Line Tools User Guide" for various versions of Xilinx ISE, like the one for 14.3: 380 pages covering
Overview and list of features
Input and output files
Command line syntax and options
Report and message information
ISE is a GUI for various command-line executables; most of them are located in the subfolder 14.5/ISE_DS/ISE/bin/lin/ (in this case: Linux executables for version 14.5) of your ISE installation root. You can review your current parameters for each action by right-clicking the item in the process tree and selecting "Process properties".
On the Python side, consider using the subprocess module:
The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes.
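As a sketch of what calling one of those executables from Python could look like (the xst command line and file names below are examples, not the exact invocation; lift the real one from "Process properties" or the user guide):

# Sketch: run one ISE command-line step and capture its output.
# The executable name and arguments are examples; take the real ones from
# "Process properties" in ISE or from the Command Line Tools User Guide.
import subprocess

result = subprocess.run(
    ["xst", "-ifn", "toplevel.xst", "-ofn", "toplevel.syr"],
    capture_output=True, text=True,
)

print(result.stdout)
if result.returncode != 0:
    print("Synthesis failed:")
    print(result.stderr)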
Is this the entry point you were looking for?
As phineas said, what you are trying to do is quite an undertaking.
I've been there done that, and there are countless challenges along the way. For example, if you want to move generated files to specific folders, how do you classify these files in order to figure out which files are which? I've created a project called X-MimeTypes that attempts to classify the files, but you then need a tool to parse the EDA mime type database and use that to determine which files are which.
However there is hope, so to answer the two main questions you've pointed out:
To be able to automatically move generated files to predetermined paths. From what you are saying, it seems like you want to do this to make the versioning process easier? There is already a tool that does this for you based on "design structures" that you create and that can be shared within a team. The tool is called Scineric Workspace, so check it out. It also has built-in Git and SVN support, which ignores things according to the design structure and in most cases filters out everything generated by the vendor tools without you having to worry about it.
You are looking for a log file that shows all commands that were run. As phineas said, you can check out the Command Line Tools User Guides for ISE, but be aware that the commands to run have changed again in Vivado. The log file of each process also usually states the exact command, with its parameters, that was called; this should be close to the top of the report. If you are looking for one log file that contains everything, that does not exist. Again, Scineric Workspace supports invoking flows from the major vendors (ISE, Vivado, Quartus) and produces one log file for all processes together, while still allowing each process to create its own log file. Errors, warnings, etc. are also marked properly in this big report. Scineric has a Tcl shell mode as well, so your Python tool can run it in the background and parse the complete log file it creates.
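A rough sketch of the "run it and parse the log" part, assuming a report where messages use the usual ERROR:/WARNING: prefixes (the file name is an example):

# Sketch: scan a tool report for errors/warnings after the run finishes.
# Assumes the common ISE/Vivado convention of "ERROR:" / "WARNING:" prefixes.
def summarize_log(path):
    errors, warnings = [], []
    with open(path, errors="replace") as fh:
        for line in fh:
            if line.startswith("ERROR:"):
                errors.append(line.rstrip())
            elif line.startswith("WARNING:"):
                warnings.append(line.rstrip())
    return errors, warnings

errors, warnings = summarize_log("toplevel.syr")
print(f"{len(errors)} errors, {len(warnings)} warnings")
for line in errors:
    print(line)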
If you have more questions on the above, I will be happy to help.
Hope this helps,
Jaco

Convert CVS/SVN to a Programming Snippets Site

I use CVS to maintain all my Python snippets, notes, and C/C++ code. As the hosting provider also provides a public web server, I was thinking that I should automatically convert the CVS repository into a programming snippets website.
cvsweb is not what I mean.
doxygen is for a complete project and for browsing self-referencing code online. I think doxygen is more like a web-based ctags.
I tried rest2web; it requires that I write restweb headers and that the files be .txt files, and it will interfere with the programming language syntax.
An approach I have thought of is:
1) Run source-highlight and create .html pages for all the scripts.
2) Write a script to index those .html pages and create a webpage (a rough sketch of this step is shown after the list).
3) Create the website from those pages.
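A minimal sketch of step 2, assuming the highlighted pages already sit under a snippets/ directory (the directory name and page title are assumptions):

# Sketch of the indexing step: collect the generated .html pages and write a
# plain index.html linking to them. Directory name and title are assumptions.
import os

pages = []
for root, _dirs, files in os.walk("snippets"):
    for name in sorted(files):
        if name.endswith(".html"):
            pages.append(os.path.join(root, name))

with open("index.html", "w") as out:
    out.write("<html><head><title>Code snippets</title></head><body>\n<ul>\n")
    for page in pages:
        out.write('<li><a href="%s">%s</a></li>\n' % (page, page))
    out.write("</ul>\n</body></html>\n")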
Before proceeding, I thought I should discuss it here, in case the members have any suggestions.
What do you do when you want to maintain your snippets and notes in CVS and also auto-generate a good website from them? I like rest2web for converting notes to HTML.
Run Trac on the server linked to the (svn) repository. The Trac wiki can conveniently refer to files and changesets. You get TODO tickets, too.
enscript or pygmentize (part of pygments) can be used to convert code to HTML. You can use a custom header or footer to link to the actual code for download.
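In case it helps, a small sketch of the pygmentize route done through the pygments API, turning one source file into a standalone HTML page (the file names are arbitrary):

# Sketch: highlight one source file to a standalone HTML page with pygments.
from pygments import highlight
from pygments.lexers import get_lexer_for_filename
from pygments.formatters import HtmlFormatter

source_path = "example.py"                      # any snippet in the repository
with open(source_path) as fh:
    code = fh.read()

html = highlight(code, get_lexer_for_filename(source_path),
                 HtmlFormatter(full=True, linenos=True))

with open(source_path + ".html", "w") as out:
    out.write(html)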
I finally settled on rest2web. I had to do the following.
Use a separate Python script to recursively copy the files from CVS to a separate directory.
Add extra index.txt and template.txt files to all the directories which I wanted to appear on the website.
The best thing about rest2web is that it supports Python scripting within template.txt, so I just ran a loop over the contents and indexed them in the page.
There is still a lot more to do to automate the entire process, e.g. inline viewing of programs and colorization, which I think can be done with some more experimentation.
I have the completed website here; it is called uthcode.
