No module named linq error only when job is using Python

I have a table in a SQL database in which I stored a Python script as data.
I want to map one entity to another (the first one is mine, the second comes from a web service), so I use Python for the mapping. I use this Python code in two places, and in both of them it just maps the entities: the first place is a job service that runs in parallel, and the second place maps the entity and shows the result in XML format.
The problem is that when the job does the mapping, Python says No module named linq!
But when I view the mapped entity, the mapping succeeds and the XML shows the mapped entity without any problem. What's wrong? The code is identical, yet only one place raises the error.
Here is part of my Python script. Any ideas?
import System
import clr
import sys

clr.AddReference('System.Core')
clr.AddReference('Tools.Cmn')
clr.AddReference('Tools.ThinCmn')
clr.AddReference('Common.Cmn')
clr.AddReference('Common.SA')
clr.AddReference('Life.SA')
clr.AddReference('Life.Cmn')
clr.AddReference('Life.BL')

from Tools.Cmn import *
from Common.Cmn import *
from Common.SA import *
from Life.Cmn import *
from Life.BL import *
from Life.SA.CiiRegLifePlcyServices import *
from Life.SA import *
from System import Array
from System import DateTime
from System import Func
from System.Linq import Enumerable
from System.Collections.Generic import *

StrTools = StringToolsExposeToScript()
CIHelper = LifeCIWebServiceHelper()

########################################
def Map(bnVer, lifPlcyReq, extraInfo):
    isEdm = 0 if bnVer.ElhNo == 0 else 1
    isPrps = extraInfo["IsPrps"]
    try:
        calcList = bnVer.LifePolicyCalcGeneralList.LoadAll()
        for calc in calcList:
            clc = System.Activator.CreateInstance(LifPlcyYrReq)
            clc.YrNo = calc.Year
            clc.PayCnt = calc.InstallmentCnt
            clc.Svng = calc.PrmReserved
            clc.GrntPrftRte = calc.PolicyYearInterest
            clc.CfcChngPrm = calc.PolicyYearPrmChangeZarib
            LifPlcyYrLst.Add(clc)
        lifPlcyReq.LifPlcyYrLst = LifPlcyYrLst.ToArray()
    except TypeError, err:
        raise Exception(str(err) + " line : " + str(sys.exc_info()[-1].tb_lineno))
    except:
        raise
    return

Related

How to read custom message type using ros2bag

So I have this code, which works great for reading messages from predefined topics and printing them to the screen. The rosbags come with a rosbag_name.db3 (SQLite) database and a metadata.yaml file:
from rosbags.rosbag2 import Reader as ROS2Reader
import sqlite3
from rosbags.serde import deserialize_cdr
import matplotlib.pyplot as plt
import os
import collections
import argparse

parser = argparse.ArgumentParser(description='Extract images from rosbag.')
# input will be the folder containing the .db3 and metadata.yml file
parser.add_argument('--input', '-i', type=str, help='rosbag input location')
# run with python filename.py -i rosbag_dir/
args = parser.parse_args()

rosbag_dir = args.input
topic = "/topic/name"
frame_counter = 0

with ROS2Reader(rosbag_dir) as ros2_reader:
    ros2_conns = [x for x in ros2_reader.connections]
    # This prints a list of all topic names for sanity
    print([x.topic for x in ros2_conns])
    ros2_messages = ros2_reader.messages(connections=ros2_conns)
    for m, msg in enumerate(ros2_messages):
        (connection, timestamp, rawdata) = msg
        if connection.topic == topic:
            print(connection.topic)          # shows topic
            print(connection.msgtype)        # shows message type
            print(type(connection.msgtype))  # shows it's of type string
            # TODO
            # this is where things crash when it's a custom message type
            data = deserialize_cdr(rawdata, connection.msgtype)
            print(data)
The issue is that I can't seem to figure out how to read in custom message types. deserialize_cdr takes a string for the message type field, but it's not clear to me how to replace this with a path or how to otherwise pass in a custom message.
Thanks
One approach is to declare the message definition as a string and register it with the type system:
from rosbags.typesys import get_types_from_msg, register_types

MY_CUSTOM_MSG = """
std_msgs/Header header
string foo
"""

register_types(get_types_from_msg(
    MY_CUSTOM_MSG, 'my_custom_msgs/msg/MyCustomMsg'))

from rosbags.typesys.types import my_custom_msgs__msg__MyCustomMsg as MyCustomMsg
Next, using:
msg_type = MyCustomMsg.__msgtype__
you can get the message type that you can pass to deserialize_cdr.
Also, see here for a quick example.
Another approach is to load the type directly from the message definition.
Essentially, you would read the definition file:
from pathlib import Path
custom_msg_path = Path('/path/to/my_custom_msgs/msg/MyCustomMsg.msg')
msg_def = custom_msg_path.read_text(encoding='utf-8')
and then follow the same steps as above starting with get_types_from_msg().
A more detailed example of this approach is given here.
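Putting the second approach together: read the definition file, derive the full type name from the package layout, then register as above. In this sketch the package name and file contents are hypothetical, the .msg file is written to a temp directory so the snippet runs standalone, and the rosbags calls are left as comments:

```python
from pathlib import Path
import tempfile

# hypothetical custom message definition; written to a temp file here so the
# sketch is self-contained (normally this file lives in your message package)
tmpdir = Path(tempfile.mkdtemp())
custom_msg_path = tmpdir / "MyCustomMsg.msg"
custom_msg_path.write_text("std_msgs/Header header\nstring foo\n", encoding="utf-8")

# read the definition and build the full type name from the package layout:
# <package>/msg/<MessageName>
msg_def = custom_msg_path.read_text(encoding="utf-8")
msg_name = "my_custom_msgs/msg/" + custom_msg_path.stem

print(msg_name)  # my_custom_msgs/msg/MyCustomMsg

# with rosbags installed, registration then mirrors the first approach:
# from rosbags.typesys import get_types_from_msg, register_types
# register_types(get_types_from_msg(msg_def, msg_name))
```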

Import data from Zoho Analytics to Python

I'm trying to connect Zoho Analytics and Python via Zoho client library here: https://www.zoho.com/analytics/api/#python-library
I downloaded the client library file, but now have no idea how to use it. What I want to do is import data from Zoho Analytics into Python, and the suggested code on Zoho is:
from __future__ import with_statement
from ReportClient import ReportClient
import sys

class Sample:
    LOGINEMAILID = "abc@zoho.com"
    AUTHTOKEN = "************"
    DATABASENAME = "Workspace Name"
    TABLENAME = "Table Name"
    rc = None
    rc = ReportClient(self.AUTHTOKEN)

    def importData(self, rc):
        uri = rc.getURI(self.LOGINEMAILID, self.DATABASENAME, self.TABLENAME)
        try:
            with open('StoreSales.csv', 'r') as f:
                importContent = f.read()
        except Exception, e:
            print "Error Check if file StoreSales.csv exists in the current directory"
            print "(" + str(e) + ")"
            return
        impResult = rc.importData(uri, "APPEND", importContent, None)
        print "Added Rows :" + str(impResult.successRowCount) + " and Columns :" + str(impResult.selectedColCount)

obj = Sample()
obj.importData(obj.rc)
How do I make from ReportClient import ReportClient work?
Also, how does rc = ReportClient(self.AUTHTOKEN) work if self wasn't predefined?
On the site you linked, you can download a zip file containing the file Zoho/ZohoReportPythonClient/com/adventnet/zoho/client/report/python/ReportClient.py. I'm not sure why it's so deeply nested, or why most of the folders contain an __init__.py file which only has #$Id$ in it.
You'll need to extract that file, and place it somewhere where your Python interpreter can find it. For more information about where Python will look for the module (ReportClient.py), see this question: How does python find a module file if the import statement only contains the filename?
Please note that the file is Python 2 code. You'll need to use a Python 2 interpreter, or convert it to Python 3 code. Once you've got it importing properly, you can use their API reference to start writing code with it: https://css.zohostatic.com/db/api/v7_m2/docs/python/
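To illustrate the path mechanics, here is a self-contained sketch; the module name and contents are invented stand-ins, and with Zoho you would instead point sys.path at the folder holding ReportClient.py:

```python
import importlib
import os
import sys
import tempfile

# create a stand-in module file in a temp folder, mimicking a dropped-in
# third-party file such as ReportClient.py
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "fakeclient.py"), "w") as f:
    f.write("class ReportClient(object):\n"
            "    def __init__(self, token):\n"
            "        self.token = token\n")

# make the folder visible to the interpreter, then import normally
sys.path.insert(0, tmpdir)
importlib.invalidate_caches()
fakeclient = importlib.import_module("fakeclient")

rc = fakeclient.ReportClient("dummy-token")
print(rc.token)  # dummy-token
```

Once the real ReportClient.py sits in a folder on sys.path, `from ReportClient import ReportClient` resolves the same way.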

How to pass a Jinja2 to HiveQL using Python

I'm using Gcloud Composer as my Airflow. When I try to use Jinja in my HQL code, it is not translated correctly.
I know that the HiveOperator has a Jinja translator, as I'm used to it, but the DataProcHiveOperator doesn't.
I've tried using hiveconf variables directly in my HQL files, but when setting those values for my partition (i.e. INSERT INTO TABLE abc PARTITION (ds = ${hiveconf:ds})), it doesn't work.
I have also added the following to my HQL file:
SET ds=to_date(current_timestamp());
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
But it didn't work, as Hive transforms the formula above into a STRING.
So my idea was to combine both operators to get the Jinja translator working, but when I do that, I get the following error: ERROR - submit() takes from 3 to 4 positional arguments but 5 were given.
I'm not very familiar with Python coding and any help would be great; see below for the operator I'm trying to build.
Header of the Python File (please note that the file contains other Operators not mentioned in this question):
import ntpath
import os
import re
import time
import uuid
from datetime import timedelta
from airflow.contrib.hooks.gcp_dataproc_hook import DataProcHook
from airflow.contrib.hooks.gcs_hook import GoogleCloudStorageHook
from airflow.exceptions import AirflowException
from airflow.models import BaseOperator
from airflow.utils.decorators import apply_defaults
from airflow.version import version
from googleapiclient.errors import HttpError
from airflow.utils import timezone
from airflow.utils.operator_helpers import context_to_airflow_vars
Modified DataProcHiveOperator:
class DataProcHiveOperator(BaseOperator):
    template_fields = ['query', 'variables', 'job_name', 'cluster_name', 'dataproc_jars']
    template_ext = ('.q',)
    ui_color = '#0273d4'

    @apply_defaults
    def __init__(
            self,
            query=None,
            query_uri=None,
            hiveconfs=None,
            hiveconf_jinja_translate=False,
            variables=None,
            job_name='{{task.task_id}}_{{ds_nodash}}',
            cluster_name='cluster-1',
            dataproc_hive_properties=None,
            dataproc_hive_jars=None,
            gcp_conn_id='google_cloud_default',
            delegate_to=None,
            region='global',
            job_error_states=['ERROR'],
            *args,
            **kwargs):
        super(DataProcHiveOperator, self).__init__(*args, **kwargs)
        self.gcp_conn_id = gcp_conn_id
        self.delegate_to = delegate_to
        self.query = query
        self.query_uri = query_uri
        self.hiveconfs = hiveconfs or {}
        self.hiveconf_jinja_translate = hiveconf_jinja_translate
        self.variables = variables
        self.job_name = job_name
        self.cluster_name = cluster_name
        self.dataproc_properties = dataproc_hive_properties
        self.dataproc_jars = dataproc_hive_jars
        self.region = region
        self.job_error_states = job_error_states

    def prepare_template(self):
        if self.hiveconf_jinja_translate:
            self.query_uri = re.sub(
                r"(\$\{(hiveconf:)?([ a-zA-Z0-9_]*)\})", r"{{ \g<3> }}", self.query_uri)

    def execute(self, context):
        hook = DataProcHook(gcp_conn_id=self.gcp_conn_id,
                            delegate_to=self.delegate_to)
        job = hook.create_job_template(self.task_id, self.cluster_name, "hiveJob",
                                       self.dataproc_properties)
        if self.query is None:
            job.add_query_uri(self.query_uri)
        else:
            job.add_query(self.query)

        if self.hiveconf_jinja_translate:
            self.hiveconfs = context_to_airflow_vars(context)
        else:
            self.hiveconfs.update(context_to_airflow_vars(context))

        job.add_variables(self.variables)
        job.add_jar_file_uris(self.dataproc_jars)
        job.set_job_name(self.job_name)

        job_to_submit = job.build()
        self.dataproc_job_id = job_to_submit["job"]["reference"]["jobId"]

        hook.submit(hook.project_id, job_to_submit, self.region, self.job_error_states)
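As a standalone check of the `prepare_template` substitution above, the same pattern can be run on a made-up query string (the HQL snippet here is just an example):

```python
import re

# hypothetical HQL snippet using hiveconf-style placeholders
query = "INSERT INTO TABLE abc PARTITION (ds = ${hiveconf:ds}) SELECT * FROM src WHERE d = ${ds}"

# same pattern as prepare_template: ${hiveconf:name} or ${name} -> {{ name }}
translated = re.sub(r"(\$\{(hiveconf:)?([ a-zA-Z0-9_]*)\})", r"{{ \g<3> }}", query)
print(translated)
# INSERT INTO TABLE abc PARTITION (ds = {{ ds }}) SELECT * FROM src WHERE d = {{ ds }}
```

The resulting `{{ ds }}` placeholders are what Airflow's template engine then fills in at render time.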
I would like to be able to use Jinja templating inside my HQL code to automate partitioning in my data pipeline.
P.S.: I'll use the Jinja templating mostly for the partition datestamp.
Does anyone know what the error message I'm getting means, and how to solve it?
ERROR - submit() takes from 3 to 4 positional arguments but 5 were given
Thank you!
It is because of the 5th argument, job_error_states, which is only in master and not in the current stable release (1.10.1).
Source code for 1.10.1: https://github.com/apache/incubator-airflow/blob/76a5fc4d2eb3c214ca25406f03b4a0c5d7250f71/airflow/contrib/hooks/gcp_dataproc_hook.py#L219
So remove that parameter and it should work.

What is the correct way to import an external module when using this in my own module?

#SteeringParams.py
import csv

def GetSteeringParams(x):
    steeringdict = {'steeringsymbol': 'steeringparam'}
    with open("/Users/xyz/Documents/CI-data/steeringdict.csv", "r") as paramsfile:
        paramline = csv.reader(paramsfile, delimiter=',')
        for row in paramline:
            if len(row) > 0:
                steeringdict[row[0]] = row[1]
    paramsfile.close()
    return steeringdict

if __name__ == "__main__":
    print GetSteeringParams(0)
This code imports a set of steering parameters into a dictionary. Works like a champ.
Next thing I want to do is to import this as a module as follows:
import csv
from SteeringParams import *
print GetSteeringParams(0)
And it returns:
NameError: global name 'csv' is not defined
Both scripts run from the same directory. Even though I am (also) globally importing csv, I get this NameError. How is this explained, and what should be done to import and run SteeringParams.py as a module?
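For what it's worth, a function defined inside a module looks up names such as `csv` in that module's own globals, not in the importer's, so the star-import alone should not normally cause this NameError. A minimal self-contained demonstration (the module name and CSV content are made up; the CSV is inlined instead of read from disk):

```python
import types

# build a tiny module in memory; its function uses `csv`, which is imported
# only inside that module, never by the importing script
mod = types.ModuleType("steeringdemo")
exec(
    "import csv, io\n"
    "def GetSteeringParams(x):\n"
    "    return {row[0]: row[1] for row in csv.reader(io.StringIO('a,1\\nb,2'))}\n",
    mod.__dict__,
)

# simulate `from steeringdemo import *` in the calling script
GetSteeringParams = mod.GetSteeringParams
print(GetSteeringParams(0))  # {'a': '1', 'b': '2'}
```

The function resolves `csv` in `mod`'s globals even though the caller never imported it, which is how the original two-script setup is supposed to behave.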

How To define arguments in ironpython for the Microsoft.SqlServer.SMO.Scripter.Script Class

I want to script my database objects using the IronPython code below:
import sys
import clr

database_name = r'localhost\SQLEXPRESS'
dir_assemblies = r'D:\programfiles\Microsoft SQL Server\100\SDK\Assemblies'

# Import SMO Namespace
sys.path.append(dir_assemblies)
clr.AddReferenceToFile('Microsoft.SqlServer.Smo.dll')
import Microsoft.SqlServer.Management.Smo as SMO

db = SMO.Server(database_name)
scripter = SMO.Scripter(db)
for database in db.Databases:
    for table in database.Tables:
        # TypeError: expected Array[Urn], got Table
        scripter.Script(table)
When executing this code, I get the following error:
File "SMOtest2.py", line 18, in <module>
TypeError: expected Array[Urn], got Table
The SMO.Scripter documentation gives me the following signatures:
Script(self: Scripter, urns: Array[Urn]) -> StringCollection
Script(self: Scripter, list: UrnCollection) -> StringCollection
Script(self: Scripter, objects: Array[SqlSmoObject]) -> StringCollection
I tried creating an Array[Urn] or an Array[SqlSmoObject], but without any success.
Does anyone have an idea how I can create the right argument for the SMO.Scripter.Script method?
I want to write the VB code below in python.
Taken from: http://msdn.microsoft.com/en-us/library/ms162160(v=SQL.90).aspx
'Connect to the local, default instance of SQL Server.
Dim srv As Server
srv = New Server

'Reference the AdventureWorks database.
Dim db As Database
db = srv.Databases("AdventureWorks")

'Define a Scripter object and set the required scripting options.
Dim scrp As Scripter
scrp = New Scripter(srv)
scrp.Options.ScriptDrops = False
scrp.Options.WithDependencies = True

'Iterate through the tables in database and script each one. Display the script.
'Note that the StringCollection type needs the System.Collections.Specialized namespace to be included.
Dim tb As Table
Dim smoObjects(1) As Urn
For Each tb In db.Tables
    smoObjects = New Urn(0) {}
    smoObjects(0) = tb.Urn
    If tb.IsSystemObject = False Then
        Dim sc As StringCollection
        sc = scrp.Script(smoObjects)
        Dim st As String
        For Each st In sc
            Console.WriteLine(st)
        Next
    End If
Next
I found the solution:
arg = System.Array[SMO.SqlSmoObject]([table])
The full script looks like:
import sys
import clr
# import .NET Array
import System.Array

database_name = r'localhost\SQLEXPRESS'
dir_assemblies = r'D:\programfiles\Microsoft SQL Server\100\SDK\Assemblies'

# Import SMO Namespace
sys.path.append(dir_assemblies)
clr.AddReferenceToFile('Microsoft.SqlServer.Smo.dll')
import Microsoft.SqlServer.Management.Smo as SMO

db = SMO.Server(database_name)
scripter = SMO.Scripter(db)
for database in db.Databases:
    for table in database.Tables:
        # create a .NET Array as an argument for the scripter
        arg = System.Array[SMO.SqlSmoObject]([table])
        script = scripter.Script(arg)
        # output script
        for line in script:
            print line
Never used IronPython or SMO, but it looks like it expects a collection of some sort. Have you tried:
scripter.Script(database.Tables)
instead of scripting one table at a time?