I've made a simple pipeline in Python to read from Kafka. The Kafka cluster is on Confluent Cloud, and I am having some trouble connecting to it.
I'm getting the following log on the Dataflow job:
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:820)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:631)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:612)
at org.apache.beam.sdk.io.kafka.KafkaIO$Read$GenerateKafkaSourceDescriptor.processElement(KafkaIO.java:1495)
Caused by: java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
So I think I'm missing something while passing the config, since the error mentions something related to it. I'm really new to all of this and I know nothing about Java, so I don't know how to proceed even after reading the JAAS documentation.
The code of the pipeline is the following:
import apache_beam as beam
from apache_beam.io.kafka import ReadFromKafka
from apache_beam.options.pipeline_options import PipelineOptions
import os
import json
import logging

os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'credentialsOld.json'

# Load the Confluent Cloud connection settings from a local JSON file;
# the with-block closes the file, so no explicit close() is needed.
with open('cluster.configuration.json') as cluster:
    data = json.load(cluster)

def logger(element):
    logging.info('Something was found')  # logging.info, not logging.INFO (a level constant, not callable)

def main():
    config = {
        "bootstrap.servers": data["bootstrap.servers"],
        "security.protocol": data["security.protocol"],
        "sasl.mechanisms": data["sasl.mechanisms"],
        "sasl.username": data["sasl.username"],
        "sasl.password": data["sasl.password"],
        "session.timeout.ms": data["session.timeout.ms"],
        "auto.offset.reset": "earliest"
    }
    beam_options = PipelineOptions(
        runner='DataflowRunner',
        project='project',
        experiments=['use_runner_v2'],
        streaming=True,
        save_main_session=True,
        job_name='kafka-stream-test'
    )
    with beam.Pipeline(options=beam_options) as p:
        msgs = p | 'ReadKafka' >> ReadFromKafka(
            consumer_config=config,
            topics=['users'],
            expansion_service="localhost:8088"
        )
        msgs | beam.Map(logger)  # Map, not FlatMap: logger returns None, and FlatMap expects an iterable

if __name__ == '__main__':
    main()
I read something about passing a property java.security.auth.login.config in the config dictionary, but since that example is in Java and I am using Python, I'm really lost as to what I have to pass, or even whether that is the right property.
By the way, I'm getting the API key and secret from here, and this is what I am passing to sasl.username and sasl.password.
I faced the same error the first time I tried Beam's expansion service. The key sasl.mechanisms that you are supplying is incorrect; try sasl.mechanism instead. You also do not need to supply the username and password separately, since the connection is authenticated via JAAS. Basically, a consumer_config like the one below worked for me:
config = {
    "bootstrap.servers": data["bootstrap.servers"],
    "security.protocol": data["security.protocol"],
    "sasl.mechanism": data["sasl.mechanisms"],
    "session.timeout.ms": data["session.timeout.ms"],
    "group.id": "tto",
    "sasl.jaas.config": f'org.apache.kafka.common.security.plain.PlainLoginModule required serviceName="Kafka" username="{data["sasl.username"]}" password="{data["sasl.password"]}";',
    "auto.offset.reset": "earliest"
}
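For readability, the same JAAS entry can be assembled separately before it goes into the dict; a small sketch using the same data keys as above:

api_key = data["sasl.username"]      # Confluent Cloud API key, as in the config above
api_secret = data["sasl.password"]   # Confluent Cloud API secret
config["sasl.jaas.config"] = (
    'org.apache.kafka.common.security.plain.PlainLoginModule required '
    f'serviceName="Kafka" username="{api_key}" password="{api_secret}";'
)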
I got a partial answer to this question: I fixed this problem but ran into another one:
config = {
    "bootstrap.servers": data["bootstrap.servers"],
    "security.protocol": data["security.protocol"],
    "sasl.mechanisms": data["sasl.mechanisms"],
    "sasl.username": data["sasl.username"],
    "sasl.password": data["sasl.password"],
    "session.timeout.ms": data["session.timeout.ms"],
    "group.id": "tto",
    "sasl.jaas.config": f'org.apache.kafka.common.security.plain.PlainLoginModule required serviceName="Kafka" username="{data["sasl.username"]}" password="{data["sasl.password"]}";',
    "auto.offset.reset": "earliest"
}
I needed to provide the sasl.jaas.config property with the API key and secret of my cluster, and also the service name. However, now I'm facing a different error when running the pipeline on Dataflow:
Caused by: org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
This error shows up after 4-5 minutes of trying to run the job on Dataflow. I have no idea how to fix it, but I think it is related to my broker on Confluent rejecting the connection; it could be a zone issue, since the cluster is in a different zone than the job region.
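Before settling on the zone theory, it may be worth ruling out credential and network issues by fetching metadata directly from your machine. A diagnostic sketch using the confluent-kafka client (a different library than Beam's KafkaIO, used here purely as a probe; assumes pip install confluent-kafka):

from confluent_kafka import Consumer

# data: the dict loaded from cluster.configuration.json in the question.
# If this metadata fetch also times out, the problem is connectivity or
# credentials rather than Dataflow itself.
probe = Consumer({
    "bootstrap.servers": data["bootstrap.servers"],
    "security.protocol": data["security.protocol"],
    "sasl.mechanisms": data["sasl.mechanisms"],
    "sasl.username": data["sasl.username"],
    "sasl.password": data["sasl.password"],
    "group.id": "connectivity-probe",  # hypothetical group id, any name works
})
print(list(probe.list_topics(timeout=10).topics))
probe.close()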
UPDATE:
I tested the code on Linux/Ubuntu and, I don't know why, the expansion service gets downloaded automatically, so you won't get the unsupported signal error; I'm still having some issues trying to authenticate to Confluent Kafka, though.
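For reference, a sketch of what the read looks like when Beam manages the expansion service itself (assumes a Java runtime on the PATH, with config and beam_options as defined earlier):

import apache_beam as beam
from apache_beam.io.kafka import ReadFromKafka

# With Java available, omitting expansion_service lets the Python SDK
# download and start the Kafka expansion service jar on its own.
with beam.Pipeline(options=beam_options) as p:
    msgs = p | 'ReadKafka' >> ReadFromKafka(
        consumer_config=config,  # the working config with sasl.jaas.config
        topics=['users'],
    )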
I'm trying to get the number of symbols from MetaTrader5 and I'm getting an error:
TypeError: '>' not supported between instances of 'NoneType' and 'int'
Link to the documentation: https://www.mql5.com/en/docs/integration/python_metatrader5/mt5symbolstotal_py
code:
import MetaTrader5 as mt5

print("MetaTrader5 package author: ", mt5.__author__)
print("MetaTrader5 package version: ", mt5.__version__)

# Connect to the MetaTrader 5 terminal.
if not mt5.initialize():
    print("initialize() failed, error code =", mt5.last_error())
    quit()

symbols = mt5.symbols_total()
if symbols > 0:
    print("Total symbols =", symbols)
else:
    print("symbols not found")

mt5.shutdown()
The problem is that the function is returning None instead of a number.
Why is it returning None? How can I get the list of symbols/stocks?
Any clue?
I had the same problem too. If you are currently using an MT5 terminal downloaded from your broker, you can try using the official MT5 terminal instead; that seemed to fix my issue. Don't forget to specify the path to the correct MT5 terminal.exe afterwards within the initialization function, initialize(path=...).
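For example, a minimal sketch with a placeholder install path (adjust it to wherever the official terminal actually lives):

import MetaTrader5 as mt5

# Placeholder path: point this at the official terminal's terminal64.exe.
TERMINAL_PATH = r"C:\Program Files\MetaTrader 5\terminal64.exe"

if not mt5.initialize(path=TERMINAL_PATH):
    print("initialize() failed, error code =", mt5.last_error())
    quit()

print("Total symbols =", mt5.symbols_total())
mt5.shutdown()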
As to why this was causing an issue, I'm unsure myself. I happened across this post, and it mentioned that there may have been modifications made by brokers.
Anyway, hope this works for you too!
To connect to your broker's server afterwards, within the MT5 terminal go to Navigator -> Accounts (right click) -> Open an account -> search for your broker and enter your credentials.
My code:
import mido
import time
mido.set_backend('mido.backends.pygame')
output = mido.open_output()
output.send(mido.Message('note_on', note=64, velocity=60))
time.sleep(3)
output.close()
After the last line, the following error is printed:
Exception Exception: "PortMidi: `Bad pointer'" in <pypm.Output object at 0x025FF0B0> ignored
Apart from that, everything seems to work fine. However, I'm developing a console app and this output is annoying. How can I get rid of this error?
I am using Windows 7 and Python 2.7.
Switch to the RtMidi backend instead of pygame; the "Bad pointer" message comes from pygame's PortMidi wrapper (pypm). You don't even have to set the RtMidi backend explicitly, as it is the default; see the mido backend documentation.
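That is, the original snippet minus the backend line; a sketch assuming the python-rtmidi package is installed:

import time
import mido

# No set_backend() call: mido defaults to mido.backends.rtmidi, which
# avoids the pygame/PortMidi cleanup error on close().
output = mido.open_output()
output.send(mido.Message('note_on', note=64, velocity=60))
time.sleep(3)
output.close()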
I have encountered a problem after implementing named parameters in raw SQL queries, as per the Python DB-API.
Earlier, my code was as follows (and it works fine, both on my DEV server and my client's test server):
cursor.execute("SELECT DISTINCT(TAG_STATUS) FROM TAG_HIST WHERE TAG_NBR = '%s' " %(TAG_NBR))
I changed it to the following:
cursor.execute("SELECT DISTINCT(TAG_STATUS) FROM TAG_HIST WHERE TAG_NBR = :TAG_NBR " ,{'TAG_NBR':TAG_NBR})
This changed version (with named parameters) works fine on my development server:
Windows XP Oracle XE
SQL*Plus: Release 11.2.0.2.0
cx_Oracle-5.1.2-11g.win32-py2.7
However, when deployed on my client's test server, it does not; execution of all queries fails.
The characteristics of my client's server are as follows:
Windows Server 2003
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit
cx_Oracle-5.1.2-10g.win32-py2.7
The error that I get is as follows:
Traceback (most recent call last):
File "C:/Program Files/App_Logic/..\apps\views.py", line 400, in regularize_TAG
T_cursor.execute("SELECT DISTINCT(TAG_STATUS) FROM TAG_HIST WHERE TAG_NBR = :TAG_NBR " ,{'TAG_NBR':TAG_NBR})
DatabaseError: ORA-01460: unimplemented or unreasonable conversion requested
I'd appreciate it if someone could help me through this issue.
This issue presents itself only when the cx_Oracle code is run inside the web app (hosted on Apache). If I run the same code with named parameters from the Python command line, the query runs just fine.
Here is how this got solved.
I tried typecasting unicode to str, and the results were positive. This worked, for example:
T_cursor.execute("SELECT DISTINCT(TAG_STATUS) FROM TAG_HIST WHERE TAG_NBR = :TAG_NBR", {'TAG_NBR': str(TAG_NBR)})
So in effect, unicode was getting mangled by getting encoded into the potentially non-unicode database character set.
To solve that, here is another option:

import os

# Force the Oracle client into UTF-8 mode before cx_Oracle is imported;
# OCI reads NLS_LANG from the environment when the library loads.
os.environ.update([
    ('NLS_LANG', '.UTF8'),
    ('ORA_NCHAR_LITERAL_REPLACE', 'TRUE'),
])

import cx_Oracle

The above guarantees that we are really in UTF8 mode. The second environment variable is not an absolute necessity, and AFAIK there is no other way to set these variables (except before running the app itself), because NLS_LANG is read by the OCI libraries from the environment.
I started using CouchDB with python-couchdb recently. The problem is that when I use Futon to run my views written in Python, I get the following error message:
Error: os_process_error
{exit_status,1}
It crashes even for the default view:

def fun(doc):
    yield None, doc
I haven't found much information regarding this issue yet, so at this point I'm really lost. This is the log I get from CouchDB:
{<0.3907.0>,crash_report,
[[{initial_call,{couch_file,init,['Argument__1']}},
{pid,<0.3907.0>},
{registered_name,[]},
{error_info,
{exit,
{os_process_error,{exit_status,1}},
[{gen_server,terminate,6},{proc_lib,init_p_do_apply,3}]}},
{ancestors,
[<0.3906.0>,couch_view,couch_secondary_services,couch_server_sup,
<0.33.0>]},
{messages,[]},
{links,[#Port<0.1483>,<0.3910.0>]},
{dictionary,[]},
{trap_exit,true},
{status,running},
{heap_size,377},
{stack_size,24},
{reductions,1423}],
[{neighbour,
[{pid,<0.3910.0>},
{registered_name,[]},
{initial_call,{couch_ref_counter,init,['Argument__1']}},
{current_function,{gen_server,loop,6}},
{ancestors,
[<0.3906.0>,couch_view,couch_secondary_services,
couch_server_sup,<0.33.0>]},
{messages,[]},
{links,[#Port<0.1483>,<0.3910.0>]},
{dictionary,[]},
{trap_exit,true},
{status,running},
{heap_size,377},
{stack_size,24},
{reductions,1423}],
[{neighbour,
[{pid,<0.3910.0>},
{registered_name,[]},
{initial_call,{couch_ref_counter,init,['Argument__1']}},
{current_function,{gen_server,loop,6}},
{ancestors,
[<0.3906.0>,couch_view,couch_secondary_services,
couch_server_sup,<0.33.0>]},
{messages,
[{'DOWN',#Ref<0.0.0.16475>,process,<0.3906.0>,
{os_process_error,{exit_status,1}}}]},
{links,[<0.3907.0>]},
{dictionary,[]},
{trap_exit,false},
{status,runnable},
{heap_size,233},
{stack_size,9},
{reductions,47}]}]]}}
I'm running this on Ubuntu 10.04, with Django, CouchDB and python-couchdb. The views in JavaScript work fine.
For the couchdb-python query server, exit status 1 means some error occurred.
Which versions of couchdb/couchdb-python are you using?
What output do you get if you run couchpy (or /usr/local/bin/couchpy, or whatever you have set in the query_servers section for the python key) directly from the command line? Example of how it should behave:
~ $ couchpy
["reset"]
true
["add_fun", "def fun(doc): yield None, None"]
true
["map_doc", {}]
[[[null, null]]]
If running couchpy directly works fine, try enabling the CouchDB debug log level to trace the query server commands and notice where it fails. If you're sure that this is a Python query server bug, please write up the details. Thanks (:
P.S. I hope you have set up the Python query server correctly, but double-checking never hurts (;
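For reference, a sketch of the relevant local.ini sections (the couchpy path is just an example; use whatever "which couchpy" reports on your machine):

[query_servers]
python = /usr/local/bin/couchpy

[log]
level = debug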