I have a question regarding the Maximo location hierarchy. I want to delete location hierarchy records through an enterprise service via a CSV file. An additional field, "MARKFORDELETE", has been created, and in the CSV file the user needs to enter "1" in the MARKFORDELETE field. I have created the following action automation script on the LOCHIERARCHY object:
from psdi.util.logging import MXLogger
from psdi.util.logging import MXLoggerFactory
from psdi.mbo import MboConstants
from psdi.mbo import MboSetRemote
from psdi.server import MXServer
from psdi.util import MXException

mxServer = MXServer.getMXServer()
userInfo = mbo.getUserInfo()

if launchPoint == 'LOCHIERDEL2':
    locHierarchySet = mbo.getMboSet("LOCHIERARCHYDEL")
    locHierarchySet.setWhere("markfordelete = 1")
    locHierarchySet.reset()
    locHier = locHierarchySet.moveFirst()
    while locHier is not None:
        locHierarchy = locHierarchySet.getMbo(0)
        locAncestorSet = mxServer.getMboSet("LOCANCESTOR", userInfo)
        locAncestorSet.setWhere("location='" + locHierarchy.getString("LOCATION") + "' and ancestor='" + locHierarchy.getString("PARENT") + "' and systemid='" + locHierarchy.getString("SYSTEMID") + "' and siteid='" + locHierarchy.getString("SITEID") + "'")
        locAncestorSet.reset()
        locAnc = locAncestorSet.moveFirst()
        if locAncestorSet.count() == 1:
            locAncestor = locAncestorSet.getMbo(0)
            locAncestor.delete(11L)
            locAncestorSet.save(11L)
        locHierarchy.delete(11L)
        locHierarchySet.save(11L)
        locHierarchySet2 = mbo.getMboSet("LOCHIERARCHYDEL3")
        locHier2 = locHierarchySet2.moveFirst()
        while locHier2 is not None:
            locHierarchy2 = locHierarchySet2.getMbo(0)
            locHierarchy2.delete(11L)
            locHierarchySet2.save()
            locHier2 = locHierarchySet2.moveNext()
        locHier = locHierarchySet.moveNext()
And the following is the CSV file:
EXTSYS1,LOCHIER_DEL,AddChange,EN
LOCATION,PARENT,SYSTEMID,CHILDREN,SITEID,ORGID,MARKFORDELETE
45668,XY_10603,NETWORK,0,ABC,ORG1,1
45668,XY_10604,NETWORK,0,ABC,ORG1,1
45669,XY_10606,NETWORK,0,ABC,ORG1,1
45669,XY_10607,NETWORK,0,ABC,ORG1,1
I created an escalation with an action that uses the above action script, with the where clause markfordelete=1. The escalation works fine, and the records from the above CSV file were deleted from the LOCHIERARCHY table. However, ALL records in the LOCANCESTOR table with SYSTEMID = NETWORK were deleted, and I noticed that when a LOCHIERARCHY record is deleted, a new record is created with a null parent.
Is there something that I have done wrong in writing the code, or have I missed something?
Any suggestions or pointers would be great.
That is out-of-the-box behaviour: deleting the record will create new records in the LOCHIERARCHY and LOCANCESTOR tables. Please look into the delete method of Lochierarchy.class.
I am using the sample program from the Snowflake documentation on using Python to ingest data into the destination table.
So basically, I have to execute a PUT command to load data into the internal stage and then run the Python program to notify the Snowpipe to ingest the data into the table.
This is how I create the internal stage and pipe:
create or replace stage exampledb.dbschema.example_stage;
create or replace pipe exampledb.dbschema.example_pipe
as copy into exampledb.dbschema.example_table
from
(
select
t.*
from
@exampledb.dbschema.example_stage t
)
file_format = (TYPE = CSV) ON_ERROR = SKIP_FILE;
put command:
put file://E:\\example\\data\\a.csv @exampledb.dbschema.example_stage OVERWRITE = TRUE;
This is the sample program I use:
from logging import getLogger
from snowflake.ingest import SimpleIngestManager
from snowflake.ingest import StagedFile
from snowflake.ingest.utils.uris import DEFAULT_SCHEME
from datetime import timedelta
from requests import HTTPError
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.serialization import load_pem_private_key
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.serialization import Encoding
from cryptography.hazmat.primitives.serialization import PrivateFormat
from cryptography.hazmat.primitives.serialization import NoEncryption
import time
import datetime
import os
import logging

logging.basicConfig(
    filename='/tmp/ingest.log',
    level=logging.DEBUG)
logger = getLogger(__name__)

# If you generated an encrypted private key, implement this method to return
# the passphrase for decrypting your private key.
def get_private_key_passphrase():
    return '<private_key_passphrase>'

with open("E:\\ssh\\rsa_key.p8", 'rb') as pem_in:
    pemlines = pem_in.read()
    private_key_obj = load_pem_private_key(pemlines,
                                           get_private_key_passphrase().encode(),
                                           default_backend())

private_key_text = private_key_obj.private_bytes(
    Encoding.PEM, PrivateFormat.PKCS8, NoEncryption()).decode('utf-8')

# Assume the public key has been registered in Snowflake:
# private key in PEM format

# List of files in the stage specified in the pipe definition
file_list = ['a.csv.gz']

ingest_manager = SimpleIngestManager(account='<account_identifier>',
                                     host='<account_identifier>.snowflakecomputing.com',
                                     user='<user_login_name>',
                                     pipe='exampledb.dbschema.example_pipe',
                                     private_key=private_key_text)

# List of files, but wrapped into a class
staged_file_list = []
for file_name in file_list:
    staged_file_list.append(StagedFile(file_name, None))

try:
    resp = ingest_manager.ingest_files(staged_file_list)
except HTTPError as e:
    # HTTP error, may need to retry
    logger.error(e)
    exit(1)

# This means Snowflake has received file and will start loading
assert(resp['responseCode'] == 'SUCCESS')

# Needs to wait for a while to get result in history
while True:
    history_resp = ingest_manager.get_history()

    if len(history_resp['files']) > 0:
        print('Ingest Report:\n')
        print(history_resp)
        break
    else:
        # wait for 20 seconds
        time.sleep(20)

hour = timedelta(hours=1)
date = datetime.datetime.utcnow() - hour
history_range_resp = ingest_manager.get_history_range(date.isoformat() + 'Z')

print('\nHistory scan report: \n')
print(history_range_resp)
After running the program, I just need to remove the file in the internal stage:
REMOVE #exampledb.dbschema.example_stage;
The code works as expected the first time, but when I truncate the data in that table and run the code again, the table in Snowflake doesn't have any data in it.
Am I missing something here? How can I make this code run multiple times?
Update:
I found that if I use a file with a different name each time I run, the data can load to the snowflake table.
So how can I run this code without changing the data filename?
Snowflake uses file loading metadata to prevent reloading the same files (and duplicating data) in a table. Snowpipe prevents loading files with the same name even if they were later modified (i.e. have a different eTag).
The file loading metadata is associated with the pipe object rather than the table. As a result:
Staged files with the same name as files that were already loaded are ignored, even if they have been modified, e.g. if new rows were added or errors in the file were corrected.
Truncating the table using the TRUNCATE TABLE command does not delete the Snowpipe file loading metadata.
However, note that pipes only maintain the load history metadata for 14 days. Therefore:
Files modified and staged again within 14 days:
Snowpipe ignores modified files that are staged again. To reload modified data files, it is currently necessary to recreate the pipe object using the CREATE OR REPLACE PIPE syntax.
Files modified and staged again after 14 days:
Snowpipe loads the data again, potentially resulting in duplicate records in the target table.
For more information have a look here
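If recreating the pipe on every run is not desirable, another option that follows from the update above is to stage the file under a new name each time. This is a minimal sketch of my own (not from the original post; the paths and naming scheme are assumptions), staying with the same PUT workflow:

import shutil
import time

src = r"E:\example\data\a.csv"
unique_name = "a_%d.csv" % int(time.time())          # e.g. a_1700000000.csv
staged_copy = r"E:\example\data" + "\\" + unique_name
shutil.copy(src, staged_copy)                        # copy under a name Snowpipe has never seen

# PUT the copy instead of a.csv (same command as above, new file name):
#   put file://E:\example\data\a_1700000000.csv @exampledb.dbschema.example_stage OVERWRITE = TRUE;

# and point the ingest request at the gzipped staged name:
file_list = [unique_name + '.gz']

Only the staged file name matters to Snowpipe's load history, so the local source file can keep its original name. Alternatively, as described above, recreating the pipe with CREATE OR REPLACE PIPE clears its load history.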
I have a bunch of msg files in a directory, and I'd like to put/zip them into a pst file. I've seen some solutions like Aspose.Email, which needs a JVM on the machine. I want to do it with Outlook itself using win32com.client. Please post if there is a way to do it. Thanks
In the Outlook Object Model, call Namespace.AddStore (or AddStoreEx) to add a new PST file, find the store in the Namespace.Stores collection (AddStore does not return the newly added store), call Store.GetRootFolder() to get the top-level folder, and add folders and items. Keep in mind that OOM code cannot run in a service, and that you need to log in to an existing profile in Outlook first (Namespace.Logon).
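Translated into Python with win32com.client, that approach might look roughly like the following untested sketch (the folder name, paths, and the use of OpenSharedItem/Move to file each .msg are my assumptions, not part of the answer above):

import glob
import os
import win32com.client

PST_PATH = r"c:\temp\test.pst"      # hypothetical paths
MSG_DIR = r"c:\temp\msgfiles"

outlook = win32com.client.Dispatch("Outlook.Application")
ns = outlook.GetNamespace("MAPI")
ns.Logon()                          # log in to an existing profile first
ns.AddStoreEx(PST_PATH, 3)          # 3 = olStoreUnicode; creates the PST if it does not exist

# AddStoreEx does not return the store, so find it in Namespace.Stores
pst_store = None
for store in ns.Stores:
    if store.FilePath and store.FilePath.lower() == PST_PATH.lower():
        pst_store = store
        break

root = pst_store.GetRootFolder()
target = root.Folders.Add("Imported messages")

for msg_file in glob.glob(os.path.join(MSG_DIR, "*.msg")):
    item = ns.OpenSharedItem(msg_file)   # open the .msg from disk
    item.Move(target)                    # file it into the PST folder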
If using Redemption (I am its author) is an option, it can run in a service and does not require logging in to an existing profile first, and it can create a PST file as well as create folders and messages in it. In VB script:
set Session = CreateObject("Redemption.RDOSession")
set pstStore = Session.LogonPstStore("c:\temp\test.pst", 1, "Test PST Store" )
set RootFolder = pstStore.IPMRootFolder
set newFolder = RootFolder.Folders.OpenOrAdd("Test folder")
set newItem = newFolder.Items.Add("IPM.Note")
newItem.Sent = true
newItem.Subject = "test"
newItem.HTMLBody = "test <b>bold</b> text"
newItem.Recipients.AddEx "The user", "user@domain.demo", "SMTP", olTo
vSenderEntryId = Session.AddressBook.CreateOneOffEntryID("Joe The Sender", "SMTP", "joe@domain.demo", false, true)
set vSender = Session.AddressBook.GetAddressEntryFromID(vSenderEntryId)
newItem.Sender = vSender
newItem.SentOnBehalfOf = vSender
newItem.SentOn = Now
newItem.ReceivedTime = Now
newItem.Save
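Since the question asks specifically about Python, the same Redemption objects can also be driven through win32com.client. A rough, untested sketch of the core calls (the sender/one-off entry part of the VB script is omitted, and olTo is replaced by its numeric value):

import win32com.client

session = win32com.client.Dispatch("Redemption.RDOSession")
pst_store = session.LogonPstStore(r"c:\temp\test.pst", 1, "Test PST Store")
root_folder = pst_store.IPMRootFolder
new_folder = root_folder.Folders.OpenOrAdd("Test folder")

new_item = new_folder.Items.Add("IPM.Note")
new_item.Sent = True
new_item.Subject = "test"
new_item.HTMLBody = "test <b>bold</b> text"
new_item.Recipients.AddEx("The user", "user@domain.demo", "SMTP", 1)  # 1 = olTo
new_item.Save()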
How can I get the username value from the "Last saved by" property of any Windows file?
e.g.: I can see this info by right-clicking on a Word file and opening the Details tab. See the picture below:
Does anybody know how I can get it using Python code?
Following the comment from @user1558604, I searched a bit on Google and reached a solution. I tested it on the extensions .docx, .xlsx, .pptx.
import zipfile
import xml.dom.minidom

# Open the MS Office file to see the XML structure.
filePath = r"C:\Users\Desktop\Perpetual-Draft-2019.xlsx"
document = zipfile.ZipFile(filePath)

# Open/read the core.xml (contains the last user and modified date).
uglyXML = xml.dom.minidom.parseString(document.read('docProps/core.xml')).toprettyxml(indent=' ')

# Split lines in order to create a list.
asText = uglyXML.splitlines()

# Loop the list in order to get the value you need. In my case lastModifiedBy and the date.
for item in asText:
    if 'lastModifiedBy' in item:
        itemLength = len(item) - 20
        print('Modified by:', item[21:itemLength])
    if 'dcterms:modified' in item:
        itemLength = len(item) - 29
        print('Modified On:', item[46:itemLength])
The result in the console is:
Modified by: adm.UserName
Modified On: 2019-11-08"
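If the string slicing feels brittle, a variant of the same core.xml idea (my own sketch, not from the answer above) reads the values from the parsed DOM elements instead of slicing lines:

import zipfile
import xml.dom.minidom

filePath = r"C:\Users\Desktop\Perpetual-Draft-2019.xlsx"  # same example file as above
core = xml.dom.minidom.parseString(zipfile.ZipFile(filePath).read('docProps/core.xml'))

# core.xml stores these values in <cp:lastModifiedBy> and <dcterms:modified> elements
modified_by = core.getElementsByTagName('cp:lastModifiedBy')[0].firstChild.data
modified_on = core.getElementsByTagName('dcterms:modified')[0].firstChild.data

print('Modified by:', modified_by)
print('Modified On:', modified_on)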
I have a plugin (PodcastPlugin) that contains two ManyToManyFields (podcasts and custom_podcasts). I want to create a Django command that creates a new plugin on the same page and placeholder with the old instances.
[Old Plugin screenshot]
I can create a new plugin but it does not copy the old instances of podcasts and custom_podcasts into the newly created PodcastPlugin.
Here is my code:
from cms.models.pagemodel import Page
from cms.api import add_plugin

for page in Page.objects.all():
    for placeholder in page.placeholders.filter(page=263):
        for plugin in placeholder.get_plugins_list():
            if plugin.plugin_type == 'PodcastPlugin':
                for custom_ids in plugin.get_plugin_instance()[0].custom_podcasts.values_list('id'):
                    for podcasts_ids in plugin.get_plugin_instance()[0].podcasts.values_list('id'):
                        add_plugin(
                            placeholder=placeholder,
                            plugin_type='PodcastPlugin',
                            podcasts=[podcasts_ids[0]],
                            cmsplugin_ptr_id=plugin.id,
                            custom_podcasts=[custom_ids[0]],
                            title='New Podcast',
                            language='de'
                        )
I solved the problem by looping through the instances of both ManyToMany fields (podcasts and custom_podcasts), collecting their ids, and then using .add() on the new plugin before saving it.
Here is my solution in case anyone comes across this:
for page in Page.objects.all():
    for placeholder in page.placeholders.all():
        for plugin in placeholder.get_plugins_list():
            if plugin.plugin_type == 'PodcastPlugin':
                # collect the ids of both ManyToMany fields from the old plugin
                podcast_ids_list = []
                custom_ids_list = []
                for podcasts_ids in plugin.get_plugin_instance()[0].podcasts.values_list('id'):
                    podcast_ids_list.append(podcasts_ids[0])
                for custom_ids in plugin.get_plugin_instance()[0].custom_podcasts.values_list('id'):
                    custom_ids_list.append(custom_ids[0])
                new_plugin = add_plugin(
                    placeholder=placeholder,
                    plugin_type='PodcastPlugin',
                    cmsplugin_ptr_id=plugin.id,
                    created_at=plugin.get_plugin_instance()[0].created_at,
                    podcasts_by_topic=plugin.get_plugin_instance()[0].podcasts_by_topic,
                    publication_date=plugin.get_plugin_instance()[0].publication_date,
                    publication_end_date=plugin.get_plugin_instance()[0].publication_end_date,
                    limit=plugin.get_plugin_instance()[0].limit,
                    collapse=plugin.get_plugin_instance()[0].collapse,
                    title=plugin.get_short_description() + ' (NEW)',
                    language='de'
                )
                # add the collected instances to the new plugin
                for values in podcast_ids_list:
                    new_plugin.podcasts.add(values)
                for values in custom_ids_list:
                    new_plugin.custom_podcasts.add(values)
                new_plugin.save()
I am new to Python and Firebase and I am trying to flatten my Firebase database.
I have a database in this format:
Each cat has thousands of records in it. All I want is to fetch the cat names and put them in an array, for example I want the output to be ['cat1', 'cat2', ...].
I was using this tutorial
http://ozgur.github.io/python-firebase/
from firebase import firebase
firebase = firebase.FirebaseApplication('https://your_storage.firebaseio.com', None)
result = firebase.get('/Data', None)
The problem with the above code is that it'll attempt to fetch all the data under Data. How can I only fetch the "cats"?
If you want to get the values inside the cats as columns, try using pyrebase. Install it with pip install pyrebase at a cmd / Anaconda prompt (the latter is preferred if you didn't set up pip or Python in your environment paths). After installing:
import pyrebase

config = {"apiKey": yourapikey,
          "authDomain": yourauthdomain,
          "databaseURL": yourdatabaseurl,
          "storageBucket": yourstoragebucket,
          "serviceAccount": yourserviceaccount
          }
Note: you can find all the information above at your Firebase's console:
https://console.firebase.google.com/project/ >>> your project >>> click on the icon "<'/>" with the tag "add firebase to your web app
Back to the code...
Make a neat definition so you can store it in a .py file:
def connect_firebase():
    # add a way to encrypt these; I'm a starter myself and don't know how
    username = "usernameyoucreatedatfirebase"
    password = "passwordforaboveuser"

    firebase = pyrebase.initialize_app(config)
    auth = firebase.auth()

    # authenticate a user > figure out how not to leave this hardcoded
    user = auth.sign_in_with_email_and_password(username, password)

    # user['idToken']
    # At pyrebase's git the author said the token expires every 1 hour, so it needs to be refreshed
    user = auth.refresh(user['refreshToken'])

    # set database
    db = firebase.database()

    return db
Ok, now save this into a neat .py file.
Next, in your new notebook or main .py you're going to import this new .py file, which we'll call auth.py from now on...
import pandas as pd

from auth import *

# assign the connection to a variable
db = connect_firebase()

# and now the hard/easy part that took me a while to figure out:
# notice the value inside .child(); it should be the parent node with all the cat keys
values = db.child('cats').get()

# to add it all to a dataframe you'll need to use .val()
data = pd.DataFrame(values.val())
And that's it. print(data.head()) to check if the values / columns are where they're expected to be.
Firebase Realtime Database is one big JSON tree:
when you fetch data at a location in your database, you also retrieve all of its child nodes.
The best practice is to denormalize your data, creating multiple locations (nodes) for the same data:
Many times you can denormalize the data by using a query to retrieve a subset of the data
In your case, you may create a second node named "categories" where you list "only" the category names.
/cat1
/...
/cat2
/...
/cat3
/...
/cat4
/...
/categories
/cat1
/cat2
/cat3
/cat4
In this scenario you can use the update() method to write to more than one location at the same time.
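A rough pyrebase sketch of such a multi-location update (node names are placeholders of my own; this assumes the same config dict as in the first answer and relies on the Realtime Database treating path-like keys as separate write locations):

import pyrebase

firebase = pyrebase.initialize_app(config)   # same config as in the first answer
db = firebase.database()

# one update() call writes the cat's data and its /categories entry together
db.update({
    "cat5/someKey": "some value",   # the full cat data stays under its own node
    "categories/cat5": True         # only the name is listed under /categories
})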
I was exploring the pyrebase documentation. As per that, we may extract only the keys from some path.
To return just the keys at a particular path use the shallow() method.
all_user_ids = db.child("users").shallow().get()
In your case, it'll be something like:
firebase = pyrebase.initialize_app(config)
db = firebase.database()
allCats = db.child("data").shallow().get()
Let me know if this doesn't help.