I'm using the fhir.resources Python library. I have trouble assigning a value to the status field of a ServiceRequest; it shows this error:
Traceback (most recent call last):
File "D:\workspace\nar-fhir\main.py", line 5, in
sr.status = 'completed'
File "pydantic\main.py", line 357, in pydantic.main.BaseModel.setattr
pydantic.error_wrappers.ValidationError: 1 validation error for ServiceRequest
root -> intent
none is not an allowed value (type=type_error.none.not_allowed)
This is my code:
from fhir.resources.servicerequest import ServiceRequest
sr = ServiceRequest.construct()
sr.status = 'completed'
sr.intent = 'directive'
When I look at the ServiceRequest source code, the intent field is defined like this:
intent: fhirtypes.Code = Field(
None,
alias="intent",
title=(
"proposal | plan | directive | order | original-order | reflex-order | "
"filler-order | instance-order | option"
),
description=(
"Whether the request is a proposal, plan, an original order or a reflex"
" order."
),
# if property is element of this resource.
element_property=True,
element_required=True,
# note: Enum values can be used in validation,
# but use in your own responsibilities, read official FHIR documentation.
enum_values=[
"proposal",
"plan",
"directive",
"order",
"original-order",
"reflex-order",
"filler-order",
"instance-order",
"option",
],
)
Any help?
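(For reference, one workaround sketch, based on the validate-on-assignment behavior visible in the traceback: since intent is element_required, assigning status alone leaves intent as None and fails validation. Passing both required fields to construct() together avoids the half-initialized state; untested against your exact library version.)
from fhir.resources.servicerequest import ServiceRequest

# construct() skips pydantic validation, so the required primitive fields
# can be supplied together up front (a sketch, not necessarily the canonical fix)
sr = ServiceRequest.construct(status='completed', intent='directive')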
I have the below proto file:
syntax = "proto3";
package my_proto;
message ID
{
optional uint64 id_upper = 1;
optional uint64 id_lower = 2;
}
message mymessage
{
optional ID id = 1;
}
And I tried assigning values to the ID field using the below Python script:
import my_proto_pb2
message = my_proto_pb2.mymessage()
id = message.id()
id.id_upper = 10
id.id_lower = 20
But I got the below error:
Traceback (most recent call last):
File "create_message.py", line 4, in <module>
id = message.id()
TypeError: 'ID' object is not callable
May I know why it is throwing this error? Also, how can I assign a value to an 'optional' message field?
Try removing the parentheses here, changing id = message.id() to id = message.id. In the generated Python API, message.id is a submessage attribute, not a method, so calling it raises the TypeError you saw.
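Applied to your script, that change alone should be enough (same generated module as in the question):
import my_proto_pb2

message = my_proto_pb2.mymessage()
id = message.id      # no parentheses: this returns the ID submessage
id.id_upper = 10     # mutating the submessage also marks the field as set
id.id_lower = 20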
Another way to do this is to build the submessage separately and copy it in. Note that protobuf does not allow direct assignment to a composite (message-typed) field, so you need CopyFrom:
thisid = my_proto_pb2.ID()
thisid.id_upper = 10
thisid.id_lower = 20
message = my_proto_pb2.mymessage()
message.id.CopyFrom(thisid)
I am having an issue parsing inconsistent datatypes in PySpark. As shown in the example file below, the SA key usually contains a dictionary, but sometimes it appears as a string value. When I try to fetch the column SA.SM.Name, I get the exception shown below.
How do I put null in the SA.SM.Name column in PySpark/Hive for the rows whose SA value is not a JSON object? Can someone help me, please?
I tried casting to different datatypes, but nothing worked, or maybe I was doing something wrong.
Input file contents (my_path):
{"id":1,"SA":{"SM": {"Name": "John","Email": "John#example.com"}}}
{"id":2,"SA":{"SM": {"Name": "Jerry","Email": "Jerry#example.com"}}}
{"id":3,"SA":"STRINGVALUE"}
df=spark.read.json(my_path)
df.registerTempTable("T")
spark.sql("""select id,SA.SM.Name from T """).show()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/spark/python/pyspark/sql/session.py", line 767, in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
  File "/usr/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
  File "/usr/lib/spark/python/pyspark/sql/utils.py", line 69, in deco
    raise AnalysisException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.AnalysisException: "Can't extract value from SA#6.SM: need struct type but got string; line 1 pos 10"
That is not possible using dataframes directly, since the column SA is read as a string when Spark loads the file. But you can load the file/table through sparkContext as an RDD and then use a cleaner function that maps an empty dict onto the string-valued SA. Here I loaded the file as textFile, but do the necessary implementation if it is a Hadoop file.
import json

def cleaner(record):
    output = {}  # default to a dict so .get() works even if parsing fails
    try:
        output = json.loads(record)
    except Exception:
        print("exception happened")
    finally:
        # normalize: a string-valued SA becomes an empty dict
        if isinstance(output.get("SA"), str):
            output["SA"] = {}
    return output
dfx = spark.sparkContext.textFile("file://"+my_path)
dfx2 = dfx.map(cleaner)
new_df = spark.createDataFrame(dfx2)
new_df.show(truncate=False)
+---------------------------------------------------+---+
|SA |id |
+---------------------------------------------------+---+
|[SM -> [Email -> John@example.com, Name -> John]]  |1  |
|[SM -> [Email -> Jerry@example.com, Name -> Jerry]]|2  |
|[] |3 |
+---------------------------------------------------+---+
new_df.printSchema()
root
|-- SA: map (nullable = true)
| |-- key: string
| |-- value: map (valueContainsNull = true)
| | |-- key: string
| | |-- value: string (valueContainsNull = true)
|-- id: long (nullable = true)
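With SA now typed as a map, the original lookup can be expressed with map access, and rows where SA was a string come back as null, e.g.:
# absent map keys yield null, so id=3 gets Name = null
new_df.selectExpr("id", "SA['SM']['Name'] AS Name").show()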
Note: if the output value of Name has to be written back to the same table/column, this solution might not work. If you try to write the loaded dataframe back to the same table, it will cause the SA column to break, and you will get a list of names and emails as per the schema provided in the comments of the question.
I have a use case in which I have an existing SNS topic, and I am creating Lambda functions using CloudFormation and troposphere. I have to create my stack in such a way that the topic sends subscriptions to my Lambda functions, but the topic itself should not be recreated.
Below is my code:
from troposphere import FindInMap, GetAtt, Join, Output
from troposphere import Template, Ref
from troposphere.awslambda import Function, Code, Permission
from troposphere.sns import Topic, SubscriptionResource
folder_names = ["welt", "jukin"]
t = Template()
t.set_version("2010-09-09")
t.add_mapping("MapperToTenantId",
{
u'welt': {'id': u't-012'},
u'jukin': {'id': u't-007'}
}
)
t.add_mapping("LambdaExecutionRole",
{u'lambda-execution-role': {u'ARN': u'arn:aws:iam::498129003450:role/service-role/lambda_execution_role'}}
)
code = [
    "def lambda_handler(event, context):\n",
    "    message = event['Records'][0]['Sns']['Message']\n",
    "    print('From SNS: ' + message)\n",
    "    return message\n"
]
for cp in folder_names:
    lambda_function = t.add_resource(Function(
        f"{cp}MapperLambda",
        Code=Code(
            ZipFile=Join("", code)
        ),
        Handler="index.lambda_handler",  # must match the handler name defined in the inline code
        Role=FindInMap("LambdaExecutionRole", "lambda-execution-role", "ARN"),
        Runtime="python3.6",
    ))
    t.add_resource(Permission(
        f"InvokeLambda{cp}Permission",
        FunctionName=GetAtt(lambda_function, "Arn"),
        Action="lambda:InvokeFunction",
        SourceArn='arn:aws:sns:us-west-2:498129003450:IngestStateTopic',
        Principal="sns.amazonaws.com"
    ))
    t.add_resource(SubscriptionResource(
        EndPoint=GetAtt(lambda_function, "Arn"),
        Protocol='lambda',
        TopicArn='arn:aws:sns:us-west-2:498129003450:IngestStateTopic'
    ))
with open('mapper_cf.yaml', 'w') as y:
y.write(t.to_yaml())
I am getting the below error, and I am not able to figure a way out:
Traceback (most recent call last):
File "create_cloudformation.py", line 54, in <module>
TopicArn='arn:aws:sns:us-west-2:498129003450:IngestStateTopic'
TypeError: __init__() missing 1 required positional argument: 'title'
Is this possible to do in troposphere? I don't want to hardcode the block in CloudFormation; I want to generate it in troposphere. Is this even possible? Kindly give me some hints.
The error you are getting is related to not specifying a title string: in troposphere, every resource takes its title as the first positional argument, and that title becomes the resource's logical ID in the template. Since the topic is referenced only by its hardcoded ARN, CloudFormation will create just the subscriptions and permissions; the existing topic is not recreated. Try this:
t.add_resource(SubscriptionResource(
f"{cp}Subscription",
EndPoint=GetAtt(lambda_function, "Arn"),
Protocol='lambda',
TopicArn='arn:aws:sns:us-west-2:498129003450:IngestStateTopic'
))
I'm trying to use the twint module to get some information from twitter, in particular the bio. The code example works just fine:
import twint
c = twint.Config()
c.Username = "twitter"
twint.run.Lookup(c)
yields
783214 | Twitter | #Twitter | Private: 0 | Verified: 1 | Bio: What’s happening?! | Location: Everywhere | Url: https://about.twitter.com/ | Joined: 20 Feb 2007 6:35 AM | Tweets: 10816 | Following: 140 | Followers: 56328970 | Likes: 5960 | Media: 1932 | Avatar: https://pbs.twimg.com/profile_images/1111729635610382336/_65QFl7B_400x400.png
Thing is, I only need the bio data. According to the site, you can use
c.Format = 'bio: {bio}'
Unfortunately, this yields
CRITICAL:root:twint.get:User:replace() argument 2 must be str, not None
I think this may be due to the following code line (from here):
output += output.replace("{bio}", u.bio)
Where the u.bio value is assigned here:
u.bio = card(ur, "bio")
The card function does the following when our type is "bio":
if _type == "bio":
try:
ret = ur.find("p", "ProfileHeaderCard-bio u-dir").text.replace("\n", " ")
except:
ret = None
I think the problem may lie in the second part, where the value is assigned to u.bio: either the function is not being called at all, or it is returning None for some reason. Unfortunately, I do not know how to fix that or how to call the function.
I've had a similar problem before with a different function, twint.run.Following(c), but was able to solve it by not setting c.User_full = True.
Could anyone help me out?
The format should be of the form
c.Format = "{bio}"
If you want multiple fields:
c.Format = "{bio} | {name}"
I find you hit a rate limit after about 250 items, at which point a blocker drops down and you need to wait a few minutes for it to lift.
I have two Python classes, Note and Link, mapping to PostgreSQL tables. Note has a foreign-key reference to Link, while Link points back to the note through a piece of JSON text. Links point to other things besides Notes, but that doesn't matter here.
Note
+------+------------------+---------+
| ID | NAME | NOTE_ID |
+------+------------------+---------+
| 1 | Alice | 5 |
| 2 | Bob | 20 |
| 3 | Carla | 6 |
+------+------------------+---------+
Link
+------+--------------+
| ID | CONTENT |
+------+--------------+
| ... | ... |
| 5 | {"t":1} |
| 6 | {"t":3} |
| ... | ... |
| 20 | {"t":2} |
+------+--------------+
Now what I would like is that whenever I create a new Note
note = Note('Danielle')
it would automatically enter the row
(4, 'Danielle', 21)
into Note, AND enter
(21, '{"t":4}')
into Link. Here's what I have tried so far: I create the Note object and THEN try to create the Link in the @events.after_insert event:
class Note(Entity):
name = Field(Unicode)
link = ManyToOne('Link')
. . .
@events.after_insert
def create_link(self):
"""
Create and persist the short link for this note. Must be done
in this after_insert event because the link table has a foreign
key that points back to the note. We need the note to be
already inserted so we can use its id.
"""
self.link = Link.build_link_for_note(self)
elixir.session.flush()
print("NOTE %s GOT LINK %s" % (self, self.link))
In the Link class I have
class Link(Entity):
. . .
@classmethod
def build_link_for_note(cls, note):
return Link(content='{"id": %d}' % note.id)
Both tables have autoincremented primary keys, so no worries there. The error that I get with this code is:
File ".../sqlalchemy/orm/session.py", line 1469, in flush
raise sa_exc.InvalidRequestError("Session is already flushing")
InvalidRequestError: Session is already flushing
I'll buy that. The @after_insert event gets called (I think) after the Note got stored to the database, which happened during the current session flush. And of course, if I remove the elixir.session.flush() call, then it prints
NOTE <Note id:4 name:Danielle> GOT LINK <id:None content:{"t": 4}>
which again makes sense since I haven't been able to persist the link!
So my question is: how can I create both a Note and a Link in a single request, so that the mutually dependent ids are available and properly recorded?
P.S. I understand that the schema here is a little unusual, and that I can solve this issue by either (1) spawning a task to create the Link asynchronously or (2) making the Link.content method create the link lazily. These solutions require some concurrency attention, so I am really hoping that a simple, direct SQLAlchemy solution with one session can work.
I'd advise against using Elixir's methods such as save(), which misuse SQLAlchemy's API. Here is the aforementioned approach using standard SQLAlchemy events; everything is achieved in one flush as well.
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import event
import json
Base = declarative_base()
class Note(Base):
__tablename__ = "note"
id = Column(Integer, primary_key=True)
name = Column(String)
note_id = Column(Integer, ForeignKey('link.id'))
link = relationship("Link")
# if using __init__ here is distasteful to you,
# feel free to use the "init" event illustrated
# below instead
def __init__(self, name):
self.name = name
self.link = Link()
class Link(Base):
__tablename__ = "link"
id = Column(Integer, primary_key=True)
content = Column(String)
# using an event instead of Note.__init__
# @event.listens_for(Note, "init")
# def init(target, args, kwargs):
#     target.link = Link()
@event.listens_for(Note, "after_insert")
def after_insert(mapper, connection, target):
    # target.note_id is already populated here: the Link row is inserted
    # before the Note because of the foreign-key dependency
    connection.execute(
        Link.__table__.update().
            where(Link.__table__.c.id == target.note_id).
            values(content=json.dumps({"t": target.id}))
    )
e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)
s = Session(e)
note = Note('Danielle')
s.add(note)
s.commit()
note = s.query(Note).first()
assert s.query(Link.content).scalar() == ('{"t": %d}' % note.id)
Since both objects have autogenerated IDs that come from the DB and want to store each other's IDs, you need to save both objects first, then save one of the objects once more with the updated ID of the other object. So I'd go with removing the flush call and maybe calling save explicitly for each of the objects involved.
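For concreteness, a rough sketch of that two-pass approach with a plain SQLAlchemy session, reusing Note, Link, and the session s from the code above (and assuming no after_insert event is registered):
note = Note('Danielle')          # Note.__init__ also creates note.link
s.add(note)
s.flush()                        # first pass: note.id and the link row's id are assigned
note.link.content = json.dumps({"t": note.id})  # backfill the mutual reference
s.commit()                       # second pass: the UPDATE persists the content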