How to replace get_declared_classes() from CakePHP in Python?

I have been moving a website made in CakePHP to Django. In one place I found get_declared_classes(). I think this function returns the list of classes that have been declared before the current class runs.
The first time I encountered this, I just stored the list of classes manually in one file and used that. That solution worked only for one particular web page, but now I have multiple pages calling this class and the list of class names is different every time, so I cannot store a fixed list of class names.
This code actually connects to and fetches data from here, and I want to rewrite this class in Python (we are replacing the whole website). The only problem I have is how to replace get_declared_classes().
class HrbcConnect{
    private $scope = '';
    private $token = null;
    private $token_expire = 0;
    private $request_opt = array();
    public $id;
    public $id_arr = array();
    public $write_error;

    public function read($resource,$condition='',$field=array(),$order=null){
        $declared_classes = get_declared_classes();
        foreach(App::objects('Model') as $v){
            if(in_array($v,$declared_classes)){
                $instance = new $v;
                if(property_exists($instance,'hrbc_cols') && array_key_exists(ucfirst($resource),$instance->hrbc_cols)){
                    foreach($instance->hrbc_cols[ucfirst($resource)] as $vv){
                        if(is_array($vv)){
                            $field[] = $vv[0];
                        }else{
                            $field[] = $vv;
                        }
                    }
                }elseif(property_exists($instance,'hrbc_cols_arr')){
                    foreach($instance->hrbc_cols_arr as $kk=>$vv){
                        if(array_key_exists(ucfirst($resource),$vv)){
                            foreach($vv[ucfirst($resource)] as $vvv){
                                if(is_array($vvv) && !in_array($vvv[0],$field)){
                                    $field[] = $vvv[0];
                                }elseif(!is_array($vvv) && !in_array($vvv,$field)){
                                    $field[] = $vvv;
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
When I print $v in the above code to find out which classes are being used, I get the list of classes defined in my models.
If the question is not clear, please let me know; I can provide more information.
Is there any library that can replace this function in Python? Or is there any other solution I can try?
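There is no direct Python equivalent of get_declared_classes(), but since the PHP loop above only cares about model classes, one possible approach in Django is to iterate the registered models instead of "every declared class". The sketch below is only an illustration under that assumption; it covers just the hrbc_cols branch, and the function name is made up for the example:

# Hedged sketch: approximating the PHP loop with Django's model registry.
from django.apps import apps

def collect_hrbc_fields(resource, field=None):
    """Collect hrbc_cols entries for a resource from every installed model."""
    field = list(field or [])
    resource = resource[:1].upper() + resource[1:]   # mirrors ucfirst($resource)
    for model in apps.get_models():                  # every model Django has registered
        cols = getattr(model, 'hrbc_cols', None)
        if cols and resource in cols:
            for col in cols[resource]:
                value = col[0] if isinstance(col, (list, tuple)) else col
                if value not in field:
                    field.append(value)
    return field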

Related

Deserialize Nested Json Record From Postgres To POJO using Jackson

I am pretty new to Jackson and to how it really works under the hood when things get as complicated as the issue I am facing. I have a record coming from the database which is stored as a JSON type (using Postgres). Below is a sample of how it looks in the database:
{"flutterwave": {"secret": "SECRET KEYS"}, "dlocal": {"xkey": "X KEY VALUE", "xlogin": "X LOGIN VALUE"}}
Coming from the Python world, I would have just done json.loads(DATA_FROM_DB_IN_JSON) and it would automatically convert the result to a dictionary, from which I can easily retrieve and use the keys as I want. However, with the Jackson library in Java, I haven't been able to get it to work.
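For reference, the Python behaviour described above, using the sample JSON from the database, is roughly this (self-contained illustration, not part of the original question):

# Illustration of the json.loads() behaviour the question refers to.
import json

data_from_db = '{"flutterwave": {"secret": "SECRET KEYS"}, "dlocal": {"xkey": "X KEY VALUE", "xlogin": "X LOGIN VALUE"}}'
data = json.loads(data_from_db)            # the JSON string becomes nested dicts
secret = data["flutterwave"]["secret"]     # -> "SECRET KEYS"
print(secret)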
Below is what I have done in Java, and I haven't gotten it to work the way I would have expected if it were Python.
public class PaymentConfigDTO {
    @JsonAlias({"secrets"})
    @JsonDeserialize(using = KeepAsJsonDeserializer.class)
    private String processorCredentials;
}
DESERIALIZER CLASS
public class KeepAsJsonDeserializer extends JsonDeserializer<String> {
    @Override
    public String deserialize(JsonParser jsonParser, DeserializationContext deserializationContext) throws IOException, JacksonException {
        TreeNode tree = jsonParser.getCodec().readTree(jsonParser);
        return tree.toString();
    }
}
In summary, what I want to achieve is to convert the JSON coming from the DB into a Map<String, Map<String, String>>, or some better structure that lets me get the nested values without much stress.
If you want to get a Map<String, Map<String, String>> you can do:
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.Map;
...
var mapper = new ObjectMapper();
var result = mapper.readValue(json, new TypeReference<Map<String, Map<String, String>>>() {});
But it would be better to use types:
record Config(Flutterwave flutterwave, DLocal dlocal) {}
record Flutterwave(String secret) {}
record DLocal(String xkey, String xlogin) {}
...
var mapper = new ObjectMapper();
var result = mapper.readValue(json, Config.class);
var secret = result.flutterwave().secret();
Assuming that you already have the JSON String, it is very simple to deserialize it to a Map<String, Object>. You don't need to write your own deserializer class. All you need to do is:
ObjectMapper om = new ObjectMapper();
try {
    Map<String, Object> map = om.readValue(jsonStr, Map.class);
} catch (IOException ioe) {
    ...
}
See the Javadoc for ObjectMapper.
Also, if you want it even simpler, I wrote my own JsonUtils class where you don't even have to instantiate ObjectMapper. Your code would look like this:
try {
    Map<String, Object> map = JsonUtils.readObjectFromJsonString(jsonStr, Map.class);
} catch (IOException ioe) {
    ...
}
In this example, the JsonUtils class comes with the open-source MgntUtils library written and maintained by me. See the Javadoc for the JsonUtils class. The MgntUtils library can be obtained from Maven Central as a Maven artifact, or from GitHub along with its source code and Javadoc.
So what I did, basically, was update the custom deserializer class to use a JsonNode instead of a TreeNode.
It was re-written as:
@Override
public String deserialize(JsonParser jsonParser, DeserializationContext deserializationContext) throws IOException, JacksonException {
    JsonNode node = jsonParser.getCodec().readTree(jsonParser);
    return node.get("value").textValue();
}
Then, to convert it to a map, I did:
Map<String, Object> toMap = objectMapper.readValue(JSON, Map.class);

How to trigger a Python or C# script to run when .Net/C# Windows service stops?

We have a C#/.NET Windows service that parses big log files for us and updates a meta table as it does so. The problem is that whenever you need to stop the service (or services; we have multiple of them running), you must manually delete the files that are in the process of being parsed in the local folder, and also update the queue DB table that tracks the files to process.
I want to automate this. I am much more familiar with Python, so ideally it would be a Python script rather than .NET. Is it possible to have a script that triggers when the service is stopped? How would one do this?
I have tried doing this internally in the .NET service, but since it's multithreaded, the files don't get cleaned up neatly. There's always a "can't stop service because another process is using it" error. It's as if the service gets stuck trying to delete files when the OnStop() method is called. This is how I had tried to do it internally within the service:
protected override void OnStop()
{
    ProducerConsumerQueue.Dispose();
    Logger.Info($"{ProducerConsumerQueue.Count()} logs will be canceled");
    CancellationTokenSource.Cancel();
    FileUtil.DeleteFilesInProgress(Constants.ODFS_STAGING);
    MetadataDbContext.UpdateServiceEntriesOnServiceReset();
    //look into some staging directory, delete all files.
    Logger.Info($"{ProducerConsumerQueue.Count()} logs canceled");
}

public static void DeleteFilesInProgress(string directory)
{
    var constantsutil = new ConstantsUtil();
    constantsutil.InitializeConfiguration();
    try
    {
        System.IO.DirectoryInfo di = new DirectoryInfo(directory);
        foreach (FileInfo file in di.GetFiles())
        {
            file.Delete();
        }
    }
    catch (Exception ex)
    {
        Logger.Error(ex.Message);
        string subject = Constants.GENERAL_EMAIL_SUBJECT;
        string body = "The following error occured in Client.Util.ConstantsUtil:";
        string error = ex.ToString(); //ex.ToString makes it more verbose so you can trace it.
        var result = EmailUtil.Emailalert(subject, body, error);
    }
}

public static int UpdateServiceEntriesOnServiceReset()
{
    int rowsAffected = 0;
    try
    {
        string connectionString = GetConnectionString();
        using (SqlConnection connection = new SqlConnection())
        {
            connection.ConnectionString = connectionString;
            SqlCommand cmd = new SqlCommand();
            cmd.CommandType = CommandType.Text;
            cmd.CommandText = $"UPDATE {Constants.SERVICE_LOG_TBL} SET STATUS = '0'";
            cmd.Connection = connection;
            connection.Open();
            rowsAffected = cmd.ExecuteNonQuery();
        }
    }
    catch (Exception ex)
    {
        Logger.Error($"{ex.Message.ToString()}");
        string subject = Constants.GENERAL_EMAIL_SUBJECT;
        string body = "The following error occured in Client.MetadatDbContext while Parser was processing:";
        string error = ex.ToString(); //ex.ToString makes it more verbose so you can trace it.
        var result = EmailUtil.Emailalert(subject, body, error);
    }
    return rowsAffected;
}
You can run your script from OnStop:
using System.Diagnostics;
Process.Start("python", "yourscript.py");
// or whatever the command for executing your Python script is on your system.
And then use something like pywin32's win32service to find out the status of the service that launched the script, and then wait for it to die and release its hold on the files.
Then wipe them.
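A minimal sketch of what that Python side could look like, assuming pywin32 is installed; the service name, the staging folder, and the idea of updating the queue table afterwards are placeholders based on the question, not working values:

# Hedged sketch: wait for the Windows service to stop, then clean up its staging files.
import os
import time

import win32service
import win32serviceutil

SERVICE_NAME = "MyLogParserService"   # placeholder: the actual service name
STAGING_DIR = r"C:\odfs\staging"      # placeholder: the folder behind Constants.ODFS_STAGING

def wait_for_stop(service_name, poll_seconds=2):
    """Block until the service reports SERVICE_STOPPED."""
    while True:
        status = win32serviceutil.QueryServiceStatus(service_name)
        if status[1] == win32service.SERVICE_STOPPED:
            return
        time.sleep(poll_seconds)

def delete_in_progress_files(directory):
    """Delete every file left behind in the staging directory."""
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path):
            os.remove(path)

if __name__ == "__main__":
    wait_for_stop(SERVICE_NAME)        # by now the service has released its file handles
    delete_in_progress_files(STAGING_DIR)
    # Resetting the STATUS column in the queue table would go here (e.g. with pyodbc)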

How do I block the IP for those who enter my website twice?

I have this code:
$accounts = fopen("accounts.txt", "r+");
$give_account = fgets($accounts);
$fileContents = file_get_contents("ips.txt");
fclose($accounts);
if (strpos($fileContents, $ip) !== false) {
    echo $give_account;
} else {
    echo "you have already received an account";
}
I want to get the visitor's IP (I've got that part working), give them an account, delete the account I gave from the list so they cannot receive another one, and show the else message otherwise. If you can help me with something, even a little, I would appreciate it. Thanks.
Imagining that you do not have any kind of database behind your website, this code should give you a quick heads-up on how to handle your scenario.
TIP: Try not to parse files like plain strings. Instead, use JSON, XML, or some other structured format.
<?php
if (!function_exists('get_client_ip')) {
    function get_client_ip() {
        $ipaddress = '';
        if (isset($_SERVER['HTTP_CLIENT_IP']))
            $ipaddress = $_SERVER['HTTP_CLIENT_IP'];
        else if (isset($_SERVER['HTTP_X_FORWARDED_FOR']))
            $ipaddress = $_SERVER['HTTP_X_FORWARDED_FOR'];
        else if (isset($_SERVER['HTTP_X_FORWARDED']))
            $ipaddress = $_SERVER['HTTP_X_FORWARDED'];
        else if (isset($_SERVER['HTTP_FORWARDED_FOR']))
            $ipaddress = $_SERVER['HTTP_FORWARDED_FOR'];
        else if (isset($_SERVER['HTTP_FORWARDED']))
            $ipaddress = $_SERVER['HTTP_FORWARDED'];
        else if (isset($_SERVER['REMOTE_ADDR']))
            $ipaddress = $_SERVER['REMOTE_ADDR'];
        else
            $ipaddress = null;
        return $ipaddress;
    }
}

if (!function_exists('log_ip_address')) {
    function log_ip_address($ip, $account_id) {
        // Get the JSON object from the IP logs file and add this IP to the object
        // $ip_logged is true if the JSON object is updated, false otherwise
        $ip_logged = true;
        return $ip_logged ? true : false;
    }
}

if (!function_exists('allot_account_to_client')) {
    function allot_account_to_client($account_id, $ip) {
        if (empty($account_id) || empty($ip)) {
            throw new Exception("Parameter missing");
        }
        // Do your magic here and allot an account to your client if he deserves it
        $alloted = true;
        // When alloted, remove the current $account_id from the accounts JSON object so that you won't allot the same account_id to anyone else
        // Log the IP address
        $log = log_ip_address($ip, $account_id);
        // Return true only if the IP is logged. If IP logging fails, write a mechanism to put the account_id back into the accounts array.
        return $log ? true : false;
    }
}

$ip = get_client_ip();
if ($ip == null) {
    throw new Exception("Unable to get IP. Do you even exist?");
}

// Contains a JSON object of IP addresses that an account was taken from, and its details
// (decoded as an associative array so it can be indexed by IP)
$list_of_ips = json_decode(file_get_contents($_SERVER['DOCUMENT_ROOT'].'/logs/ips.json'), true);
// Contains a JSON array of account_ids
$accounts = json_decode(file_get_contents($_SERVER['DOCUMENT_ROOT'].'/accounts/available_accounts.json'), true);

if (isset($list_of_ips[$ip])) {
    $taken_on = new DateTime($list_of_ips[$ip]['taken_on']);
    throw new Exception("You already got your account on ".$taken_on->format('m-d-Y H:i:sP'));
}

$available_account_id = $accounts[0];
// If he gets this far, he deserves the account
$alloted = allot_account_to_client($available_account_id, $ip);
if (!$alloted) {
    echo "Unable to allot you an account.";
    exit(0);
}
echo "Your account id is: ".$available_account_id;
?>
EDIT:
I do not suggest using files located on your server like this, as they may be accessible to users and that compromises whatever you are doing. Use a database instead.
If you are determined to use files, then instead of storing the data unencrypted, use some encryption so that at least a normal, non-techie user cannot tell what it is.
You can write the visitors' IPs to a local file on your server. When someone visits your site, either read the whole file in as an array and check whether the IP is in your list (for example with in_array, or strpos(...) !== false on the raw contents), or read it line by line and check whether the lines are equal.
If the IP is already there, you can redirect to another page, perhaps something like "Oops, you're here again. I only like new visitors", or just wrap the rest of the code in an if block so that it won't be evaluated.

How to create terraform backend.tf file from python before execution to eliminate the interpolation state file issue

We actually built a web app, and from there we are passing variables to Terraform like below:
terraform apply -input=false -auto-approve -var ami="%ami%" -var region="%region%" -var icount="%count%" -var type="%instance_type%"
The problem here is that the backend block does not support variables, and I need to pass those values from the app as well.
To resolve this, I found suggestions that we need to create backend.tf before execution.
But I can't work out how to do it; if anyone has any examples of this, please help me.
Thanks in advance.
I need to create the backend.tf file from Python using the variables below.
I need to set key = "${profile}/tfstate", and for each profile the key needs to be replaced accordingly.
I am thinking of using a Git repo: using Git we create the files, pull the values, then commit again and execute.
Please help me with some examples and ideas.
The code is like below.
My main.tf:
terraform {
  backend "s3" {
    bucket  = "terraform-007"
    key     = "key"
    region  = "ap-south-1"
    profile = "venu"
  }
}

provider "aws" {
  profile = "${var.aws_profile}"
  region  = "${var.aws_region}"
}

resource "aws_instance" "VM" {
  count         = var.icount
  ami           = var.ami
  instance_type = var.type
  tags = {
    Environment = "${var.env_indicator}"
  }
}
vars.tf:
variable "aws_profile" {
  default     = "default"
  description = "AWS profile name, as set in ~/.aws/credentials"
}

variable "aws_region" {
  type        = "string"
  default     = "ap-south-1"
  description = "AWS region in which to create resources"
}

variable "env_indicator" {
  type        = "string"
  default     = "dev"
  description = "What environment are we in?"
}

variable "icount" {
  default = 1
}

variable "ami" {
  default = "ami-54d2a63b"
}

variable "bucket" {
  default = "terraform-002"
}

variable "type" {
  default = "t2.micro"
}
output.tf:
output "ec2_public_ip" {
  value = ["${aws_instance.VM.*.public_ip}"]
}

output "ec2_private_ip" {
  value = ["${aws_instance.VM.*.private_ip}"]
}
Since the configuration for the backend cannot use interpolation, we have used a configuration by convention approach.
The terraform for all of our state collections (microservices and other infrastructure) use the same S3 bucket for state storage and the same DynamoDB table for locking.
When executing terraform, we use the same IAM role (a dedicated terraform only user).
We define the key for the state via convention, so that it does not need to be generated.
key = "platform/services/{name-of-service}/terraform.tfstate"
I would avoid a process that changes the infrastructure code as it is being deployed, to ensure maximum understandability for the engineers reading and maintaining the code.
EDIT: Adding key examples
For the user service:
key = "platform/services/users/terraform.tfstate"
For the search service:
key = "platform/services/search/terraform.tfstate"
For the product service:
key = "platform/services/products/terraform.tfstate"

NativeProcess communication giving error

I am trying to communicate with a Python script through ActionScript. It gives me an error on the line:
var stdOut:ByteArray = process.standardOutput;
in the function shown below:
public function onOutputData(event:ProgressEvent):void
{
    var stdOut:ByteArray = process.standardOutput; //error
    var data:String = stdOut.readUTFBytes(process.standardOutput.bytesAvailable);
    trace("Got: ", data);
}
Error is:
Implicit coercion of a value with static type IDataInput to a possibly
unrelated type ByteArray.
I am following the same approach as on Adobe's page. Here is some testable code:
package
{
    import flash.display.Sprite;
    import flash.desktop.NativeProcessStartupInfo;
    import flash.filesystem.File;
    import flash.desktop.NativeProcess;
    import flash.events.ProgressEvent;
    import flash.utils.ByteArray;

    public class InstaUtility extends Sprite
    {
        public var nativeProcessStartupInfo:NativeProcessStartupInfo = new NativeProcessStartupInfo();
        public var file:File = new File("C:/Python27/python.exe");
        public var process:NativeProcess = new NativeProcess();

        public function InstaUtility()
        {
            nativeProcessStartupInfo.executable = file;
            nativeProcessStartupInfo.workingDirectory = File.applicationDirectory.resolvePath(".");
            trace("Location " + File.applicationDirectory.resolvePath(".").nativePath);
            var processArgs:Vector.<String> = new Vector.<String>();
            processArgs[0] = "test.py";
            nativeProcessStartupInfo.arguments = processArgs;
            var process:NativeProcess = new NativeProcess();
            process.addEventListener(ProgressEvent.STANDARD_OUTPUT_DATA, onOutputData);
            process.start(nativeProcessStartupInfo);
        }

        public function onOutputData(event:ProgressEvent):void
        {
            var stdOut:ByteArray = process.standardOutput; //error
            var data:String = stdOut.readUTFBytes(process.standardOutput.bytesAvailable);
            trace("Got: ", data);
        }
    }
}
The NativeProcess could not be started. Not supported in current
profile.
Are you testing in the Flash IDE?
Test within the IDE: in your AIR publish settings, make sure you ticked only "Extended Desktop" when debugging through the IDE. This way you also get traces etc.
Test after publishing: you must tick both "Desktop" and "Extended Desktop", and also tick "Windows Installer (.exe)". Install your app using the generated .exe file (not the .air file).
Implicit coercion of a value with static type IDataInput to a possibly
unrelated type ByteArray.
var stdOut:ByteArray = process.standardOutput; //error is not how it's done. Don't create a new var each time the progress event fires. Each firing holds only around 32 KB or 64 KB of bytes (can't remember which), so if the expected result is larger, the event will continue to fire with multiple chunks. Use and recycle a single public ByteArray to hold all the result data.
Try a setup like below :
//# Declare the public variables
public var stdOut : ByteArray = new ByteArray();
public var data_String : String = "";
Your process also needs a NativeProcessExitEvent.EXIT listener.
process.addEventListener(NativeProcessExitEvent.EXIT, on_Process_Exit );
Before you .start a process, also clear the byteArray ready for new data with stdOut.clear();.
Now your progressEvent can look like this below... (Process puts result data into stdOut bytes).
public function onOutputData (event:ProgressEvent) : void
{
    //var stdOut:ByteArray = process.standardOutput; //error
    //# Progress could fire many times so keep adding data to build the final result
    //# "stdOut.length" will be zero at first but add more data to tail end (ie: length)
    process.standardOutput.readBytes( stdOut, stdOut.length, process.standardOutput.bytesAvailable );

    //# Below should be in a Process "Exit" listener but might work here too
    stdOut.position = 0; //move pointer back before reading bytes
    data_String = stdOut.readUTFBytes( stdOut.length );
    trace("function onOutputData -- Got : " + data_String );
}
But you really should add an "onProcessExit" listener and only check for results when the process itself has completed. (Tracing there is much safer for a guaranteed result.)
public function on_Process_Exit (event : NativeProcessExitEvent) : void
{
    trace ("PYTHON Process finished : ############# " )
    stdOut.position = 0; //# move pointer back before reading bytes
    data_String = stdOut.readUTFBytes( stdOut.length );
    trace("PYTHON Process Got : " + data_String );
}
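For completeness, a minimal test.py that the AIR code above could launch (the file name matches the one assumed in the ActionScript snippet; flushing stdout just makes sure the data reaches the STANDARD_OUTPUT_DATA listener promptly rather than sitting in the pipe buffer):

# test.py -- minimal sketch of the script the AIR app starts (assumed content).
import sys

# Anything written to stdout arrives on the ActionScript side via STANDARD_OUTPUT_DATA.
sys.stdout.write("Hello from Python\n")
sys.stdout.flush()  # flush so the AIR side gets the data without waiting for process exit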
