Superset. Error while sending report by email - python

I need help solving an issue with Superset (version 1.0.1) running on Docker.
Sending reports (dashboards) by email is configured, and small dashboards are sent fine,
but sending some of the reports ends with this error:
Report Schedule execution failed when generating a screenshot.
[05/Oct/2021:16:00:06 +0300] "GET /static/assets/5d82d1b53c008164c101.chunk.js HTTP/1.1" 200 308964 "http://192.168.90.132:8088/superset/dashboard/49/" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"
Message: Failed to convert data to an object
[2021-10-05 16:00:08,431: ERROR/ForkPoolWorker-7] Message: Failed to convert data to an object
[2021-10-05 16:00:13,752: WARNING/ForkPoolWorker-7] Message: Failed to decode response from marionette
, retrying in 0 seconds...
[2021-10-05 16:00:13,754: WARNING/ForkPoolWorker-7] Message: Tried to run command without establishing a connection
, retrying in 0 seconds...
[2021-10-05 16:00:13,755: WARNING/ForkPoolWorker-7] Message: Tried to run command without establishing a connection
, retrying in 0 seconds...
[2021-10-05 16:00:13,756: WARNING/ForkPoolWorker-7] Message: Tried to run command without establishing a connection
, retrying in 0 seconds...
Failed at generating thumbnail 'WebDriver' object has no attribute 'screenshot'
[2021-10-05 16:00:13,758: ERROR/ForkPoolWorker-7] Failed at generating thumbnail 'WebDriver' object has no attribute 'screenshot'
Report Schedule execution failed when generating a screenshot.
Traceback (most recent call last):
File "/app/superset/utils/celery.py", line 50, in session_scope
yield session
File "/app/superset/reports/commands/execute.py", line 374, in run
raise ex
File "/app/superset/reports/commands/execute.py", line 371, in run
session, self._model, self._scheduled_dttm
File "/app/superset/reports/commands/execute.py", line 344, in run
self._session, self._report_schedule, self._scheduled_dttm
File "/app/superset/reports/commands/execute.py", line 261, in next
raise ex
File "/app/superset/reports/commands/execute.py", line 257, in next
self.send()
File "/app/superset/reports/commands/execute.py", line 195, in send
notification_content = self._get_notification_content()
File "/app/superset/reports/commands/execute.py", line 175, in _get_notification_content
screenshot_data = self._get_screenshot()
File "/app/superset/reports/commands/execute.py", line 167, in _get_screenshot
raise ReportScheduleScreenshotFailedError()
superset.reports.commands.exceptions.ReportScheduleScreenshotFailedError: Report Schedule execution failed when generating a screenshot.
[2021-10-05 16:00:13,775: ERROR/ForkPoolWorker-7] Report Schedule execution failed when generating a screenshot.
Traceback (most recent call last):
File "/app/superset/utils/celery.py", line 50, in session_scope
yield session
File "/app/superset/reports/commands/execute.py", line 374, in run
raise ex
File "/app/superset/reports/commands/execute.py", line 371, in run
session, self._model, self._scheduled_dttm
File "/app/superset/reports/commands/execute.py", line 344, in run
self._session, self._report_schedule, self._scheduled_dttm
File "/app/superset/reports/commands/execute.py", line 261, in next
raise ex
File "/app/superset/reports/commands/execute.py", line 257, in next
self.send()
File "/app/superset/reports/commands/execute.py", line 195, in send
notification_content = self._get_notification_content()
File "/app/superset/reports/commands/execute.py", line 175, in _get_notification_content
screenshot_data = self._get_screenshot()
File "/app/superset/reports/commands/execute.py", line 167, in _get_screenshot
raise ReportScheduleScreenshotFailedError()
superset.reports.commands.exceptions.ReportScheduleScreenshotFailedError: Report Schedule execution failed when generating a screenshot.
Report state: Report Schedule execution failed when generating a screenshot.
[2021-10-05 16:00:13,776: INFO/ForkPoolWorker-7] Report state: Report Schedule execution failed when generating a screenshot.
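For reference, the screenshot step is performed by a headless browser driven from the Celery worker, which is configured in superset_config.py. A minimal sketch of the settings usually involved (key names follow the 1.x default config; the values below are illustrative assumptions, not a confirmed fix):
# superset_config.py -- sketch of the report/screenshot knobs (values are examples)
WEBDRIVER_TYPE = "firefox"                    # geckodriver must be available in the worker image
WEBDRIVER_BASEURL = "http://superset:8088/"   # how the worker reaches the web app
WEBDRIVER_WINDOW = {"dashboard": (1600, 2000), "slice": (3000, 1200)}

# Large dashboards may need more time to render before the screenshot is taken.
SCREENSHOT_LOCATE_WAIT = 100
SCREENSHOT_LOAD_WAIT = 600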

Related

Quickstart.py - failed to connect to all addresses

I installed the Python libraries on my local PC and added my GA-4 property ID. When I attempt to run the script I get the following error. The error actually makes sense, because the IP address that is failing doesn't have a valid SSL certificate:
Traceback (most recent call last):
File "\\usalbodd01\bod_Share\BODS_Tools\google\api_core\grpc_helpers.py", line 72, in error_remapped_callable
return callable_(*args, **kwargs)
File "\\usalbodd01\bod_Share\BODS_Tools\grpc\_channel.py", line 946, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "\\usalbodd01\bod_Share\BODS_Tools\grpc\_channel.py", line 849, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses; last error: UNKNOWN: ipv4:142.250.190.10:443: Ssl handshake failed: SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED"
debug_error_string = "UNKNOWN:failed to connect to all addresses; last error: UNKNOWN: ipv4:142.250.190.10:443: Ssl handshake failed: SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED {grpc_status:14, created_time:"2022-11-28T18:41:22.060505311+00:00"}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "\\usalbodd01\bod_Share\BODS_Tools\GoogleQuickstart.py", line 51, in <module>
sample_run_report("")
File "\\usalbodd01\bod_Share\BODS_Tools\GoogleQuickstart.py", line 43, in sample_run_report
response = client.run_report(request)
File "\\usalbodd01\bod_Share\BODS_Tools\google\analytics\data_v1beta\services\beta_analytics_data\client.py", line 511, in run_report
response = rpc(
File "\\usalbodd01\bod_Share\BODS_Tools\google\api_core\gapic_v1\method.py", line 154, in __call__
return wrapped_func(*args, **kwargs)
File "\\usalbodd01\bod_Share\BODS_Tools\google\api_core\grpc_helpers.py", line 74, in error_remapped_callable
raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.ServiceUnavailable: 503 failed to connect to all addresses; last error: UNKNOWN: ipv4:142.250.190.10:443: Ssl handshake failed: SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED
Have any of you run into this when attempting to run the quickstart.py script locally?
Regards,
Greg
I attempted to run the script after completing the "TODOs". I installed the Google certificates locally, then tested the IP address the script is failing on; the address is not secured:
Your connection isn't private
Attackers might be trying to steal your information from 142.250.190.10 (for example, passwords, messages, or credit cards).
NET::ERR_CERT_AUTHORITY_INVALID
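If the verification failure is caused by a corporate proxy or firewall re-signing TLS traffic (which the browser warning above suggests), one workaround to try is exporting that proxy's root CA and pointing gRPC at it before the client is created. This is only a sketch: proxy-ca.pem is a hypothetical path, and the property ID is a placeholder.
import os

# Hypothetical path to the corporate root CA exported in PEM format.
CA_BUNDLE = r"\\usalbodd01\bod_Share\BODS_Tools\proxy-ca.pem"
os.environ["GRPC_DEFAULT_SSL_ROOTS_FILE_PATH"] = CA_BUNDLE
os.environ["REQUESTS_CA_BUNDLE"] = CA_BUNDLE

from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import DateRange, Dimension, Metric, RunReportRequest

client = BetaAnalyticsDataClient()
request = RunReportRequest(
    property="properties/YOUR-GA4-PROPERTY-ID",  # placeholder property ID
    dimensions=[Dimension(name="city")],
    metrics=[Metric(name="activeUsers")],
    date_ranges=[DateRange(start_date="2022-11-01", end_date="today")],
)
response = client.run_report(request)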

Forbidden: Request forbidden -- authorization will not help ([GET] https://api.anaconda.org/user -> 403)

When I open Anaconda Navigator, it gives me the following error. The URL at which the error appears is file:///C:/Users/Heaper/AppData/Local/Temp/tmpmdsmzme5.html.
Strangely, I could open Anaconda Navigator yesterday; today, after I turned on my laptop and clicked on Anaconda Navigator, this error showed up.
The error:
Navigator Error
An unexpected error occurred on Navigator start-up
Report
Please report this issue in the anaconda issue tracker
Main Error
('Forbidden: Request forbidden -- authorization will not help ([GET] https://api.anaconda.org/user -> 403)', 403)
Traceback
Traceback (most recent call last):
File "C:\Users\Heaper\anaconda3\lib\site-packages\anaconda_navigator\exceptions.py", line 72, in exception_handler
return_value = func(*args, **kwargs)
File "C:\Users\Heaper\anaconda3\lib\site-packages\anaconda_navigator\app\start.py", line 146, in start_app
window = run_app(splash)
File "C:\Users\Heaper\anaconda3\lib\site-packages\anaconda_navigator\app\start.py", line 65, in run_app
window = MainWindow(splash=splash)
File "C:\Users\Heaper\anaconda3\lib\site-packages\anaconda_navigator\widgets\main_window.py", line 165, in __init__
self.api = AnacondaAPI()
File "C:\Users\Heaper\anaconda3\lib\site-packages\anaconda_navigator\api\anaconda_api.py", line 1518, in AnacondaAPI
ANACONDA_API = _AnacondaAPI()
File "C:\Users\Heaper\anaconda3\lib\site-packages\anaconda_navigator\api\anaconda_api.py", line 83, in __init__
self._client_api = ClientAPI(config=self.config)
File "C:\Users\Heaper\anaconda3\lib\site-packages\anaconda_navigator\api\client_api.py", line 659, in ClientAPI
CLIENT_API = _ClientAPI(config=config)
File "C:\Users\Heaper\anaconda3\lib\site-packages\anaconda_navigator\api\client_api.py", line 95, in __init__
self.reload_client()
File "C:\Users\Heaper\anaconda3\lib\site-packages\anaconda_navigator\api\client_api.py", line 326, in reload_client
client.user()
File "C:\Users\Heaper\anaconda3\lib\site-packages\binstar_client\__init__.py", line 245, in user
self._check_response(res)
File "C:\Users\Heaper\anaconda3\lib\site-packages\binstar_client\__init__.py", line 230, in _check_response
raise ErrCls(msg, res.status_code)
binstar_client.errors.BinstarError: ('Forbidden: Request forbidden -- authorization will not help ([GET] https://api.anaconda.org/user -> 403)', 403)
A lot of sites and repositories are experiencing outages because of the Let's Encrypt root certificate expiration. Things should come back to normal within a few days: https://letsencrypt.org/docs/dst-root-ca-x3-expiration-september-2021/
This issue occurs when your internet connection is filtered or partially blocked; for example, some companies block certain websites on their Wi-Fi. If you change your connection and run Anaconda again, it should be fixed.
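A quick way to check from the affected machine whether the TLS chain to api.anaconda.org validates at all; a plain 403 is the normal unauthenticated response, while an SSLError points at the expired-certificate or filtered-connection causes described above:
import requests

try:
    resp = requests.get("https://api.anaconda.org/user", timeout=10)
    print(resp.status_code)     # 403 without a token is expected and means TLS itself is fine
except requests.exceptions.SSLError as exc:
    print("TLS problem:", exc)  # expired or intercepted certificate chain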

BrokenPipeError: [Errno 32] Broken pipe Neo4j and Python using Multi Processing

I am trying to use Neo4j from Python via the official driver. I implemented a program that frequently exchanges data between Neo4j and Python, where each iteration of the program is independent. The program runs perfectly when no parallel processing is used. I am now trying to parallelize these independent iterations with Python's multiprocessing on a 24-core machine, so I can run quite a few processes. In parallel execution the program works as long as the number of processes is at most 5; for anything greater, about 90% of the time I get the following error.
---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 828, in close
self.sync()
File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 793, in sync
self.session.sync()
File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 538, in sync
detail_count, _ = self._connection.sync()
File "/usr/local/lib/python3.7/site-packages/neobolt/direct.py", line 526, in sync
self.send()
File "/usr/local/lib/python3.7/site-packages/neobolt/direct.py", line 388, in send
self._send()
File "/usr/local/lib/python3.7/site-packages/neobolt/direct.py", line 408, in _send
self.socket.sendall(data)
File "/usr/local/lib/python3.7/ssl.py", line 1015, in sendall
v = self.send(byte_view[count:])
File "/usr/local/lib/python3.7/ssl.py", line 984, in send
return self._sslobj.write(data)
BrokenPipeError: [Errno 32] Broken pipe
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/multiprocessing/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/usr/local/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "<ipython-input-4-616f793afd51>", line 9, in func
d2=run_query(streaming_query)
File "<ipython-input-2-01a2f4205218>", line 6, in run_query
result = session.read_transaction(lambda tx: tx.run(streaming_query))
File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 710, in read_transaction
return self._run_transaction(READ_ACCESS, unit_of_work, *args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 686, in _run_transaction
tx.close()
File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 835, in close
self.session.commit_transaction()
File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 630, in commit_transaction
self._connection.sync()
File "/usr/local/lib/python3.7/site-packages/neobolt/direct.py", line 526, in sync
self.send()
File "/usr/local/lib/python3.7/site-packages/neobolt/direct.py", line 388, in send
self._send()
File "/usr/local/lib/python3.7/site-packages/neobolt/direct.py", line 408, in _send
self.socket.sendall(data)
File "/usr/local/lib/python3.7/ssl.py", line 1015, in sendall
v = self.send(byte_view[count:])
File "/usr/local/lib/python3.7/ssl.py", line 984, in send
return self._sslobj.write(data)
BrokenPipeError: [Errno 32] Broken pipe
"""
The above exception was the direct cause of the following exception:
BrokenPipeError Traceback (most recent call last)
<ipython-input-5-da15b33c8ad4> in <module>
7 pool = multiprocessing.Pool(processes=num_processes)
8 start = time.time()
----> 9 result = pool.map(func, chunks)
10 end = time.time()
11 print(end-start)
/usr/local/lib/python3.7/multiprocessing/pool.py in map(self, func, iterable, chunksize)
288 in a list that is returned.
289 '''
--> 290 return self._map_async(func, iterable, mapstar, chunksize).get()
291
292 def starmap(self, func, iterable, chunksize=None):
/usr/local/lib/python3.7/multiprocessing/pool.py in get(self, timeout)
681 return self._value
682 else:
--> 683 raise self._value
684
685 def _set(self, i, obj):
BrokenPipeError: [Errno 32] Broken pipe
I am also receiving the following warnings:
Failed to read from defunct connection Address(host='127.0.0.1', port=7687) (Address(host='127.0.0.1', port=7687))
Failed to read from defunct connection Address(host='127.0.0.1', port=7687) (Address(host='127.0.0.1', port=7687))
Failed to read from defunct connection Address(host='127.0.0.1', port=7687) (Address(host='127.0.0.1', port=7687))
Failed to read from defunct connection Address(host='127.0.0.1', port=7687) (Address(host='127.0.0.1', port=7687))
Failed to read from defunct connection Address(host='127.0.0.1', port=7687) (Address(host='127.0.0.1', port=7687))
Failed to write data to connection Address(host='127.0.0.1', port=7687) (Address(host='127.0.0.1', port=7687)); ("32; 'Broken pipe'")
Transaction failed and will be retried in 1.1551515321361832s (Failed to write to closed connection Address(host='127.0.0.1', port=7687) (Address(host='127.0.0.1', port=7687)))
The Neo4j server debug log is as follows:
2020-04-09 13:07:26.033+0000 INFO [o.n.l.i.StoreLogService] Rotated internal log file
2020-04-09 13:08:16.724+0000 ERROR [o.n.b.t.p.HouseKeeper] Fatal error occurred when handling a client connection: [id: 0xdb5b2521, L:/127.0.0.1:7687 ! R:/127.0.0.1:58086] javax.net.ssl.SSLException: bad record MAC
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLException: bad record MAC
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:472)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:799)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:433)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:330)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909)
at java.lang.Thread.run(Thread.java:748)
Caused by: javax.net.ssl.SSLException: bad record MAC
at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1709)
at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:970)
at sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:896)
at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:766)
at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:624)
at io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:295)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1301)
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1203)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1247)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441)
... 17 more
Caused by: javax.crypto.BadPaddingException: bad record MAC
at sun.security.ssl.EngineInputRecord.decrypt(EngineInputRecord.java:238)
at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:963)
... 26 more
A few points I would like to mention:
I tried using a single driver for the whole program and setting up sessions as and when required.
I tried using a separate driver for each process, and I am still facing the issue.
This sounds odd, but I also tried setting up a driver whenever a DB call is required and closing it immediately after fetching the data. With that approach I do not hit the broken-pipe error, but connections start being refused after hitting the connection limit.
I want to know the ideal way to set up the driver. Please also help me fix these issues.
Thanks
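Not a definitive fix, but a common pattern with the 1.7-era driver is to give each worker process its own driver (created in the Pool initializer, so no Bolt socket opened in the parent is inherited across fork()) and to consume results inside the transaction function. A minimal sketch, with placeholder credentials and query:
import multiprocessing
from neo4j import GraphDatabase

URI = "bolt://127.0.0.1:7687"   # taken from the warnings above
AUTH = ("neo4j", "password")    # placeholder credentials

_driver = None                  # per-process handle, set in the initializer

def init_worker():
    global _driver
    _driver = GraphDatabase.driver(URI, auth=AUTH)

def run_query(query):
    with _driver.session() as session:
        # Consume the result inside the transaction so nothing is streamed after commit.
        return session.read_transaction(lambda tx: tx.run(query).data())

def func(chunk):
    return run_query("MATCH (n) RETURN count(n) AS c")  # placeholder query

if __name__ == "__main__":
    with multiprocessing.Pool(processes=10, initializer=init_worker) as pool:
        results = pool.map(func, range(10))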

docker container in detach mode exits instantly

I am using the Docker SDK for Python and trying to create a container. The following is the code I am executing:
import docker
client = docker.DockerClient(base_url='tcp://10.41.70.76:2375')
image = client.images.get('siab_user_one')
container = client.containers.run(image.tags[0], detach=True)
container.exec_run("ls")
But, the above code throws the following error:
Traceback (most recent call last):
File "/Users/aditya/workspace/term/lib/python3.6/site-packages/docker/api/client.py", line 261, in _raise_for_status
response.raise_for_status()
File "/Users/aditya/workspace/term/lib/python3.6/site-packages/requests/models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 409 Client Error: Conflict for url: http://10.41.70.76:2375/v1.35/containers/ccdb556fb234eeb86b19d37c30e9d64e428bf42a8d2b70784225dcf3c5347859/exec
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "dock.py", line 7, in <module>
container.exec_run("ls")
File "/Users/aditya/workspace/term/lib/python3.6/site-packages/docker/models/containers.py", line 196, in exec_run
workdir=workdir,
File "/Users/aditya/workspace/term/lib/python3.6/site-packages/docker/utils/decorators.py", line 19, in wrapped
return f(self, resource_id, *args, **kwargs)
File "/Users/aditya/workspace/term/lib/python3.6/site-packages/docker/api/exec_api.py", line 80, in exec_create
return self._result(res, True)
File "/Users/aditya/workspace/term/lib/python3.6/site-packages/docker/api/client.py", line 267, in _result
self._raise_for_status(response)
File "/Users/aditya/workspace/term/lib/python3.6/site-packages/docker/api/client.py", line 263, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "/Users/aditya/workspace/term/lib/python3.6/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 409 Client Error: Conflict ("Container ccdb556fb234eeb86b19d37c30e9d64e428bf42a8d2b70784225dcf3c5347859 is not running")
Even after running the container in detached mode, the container exits soon after creation.
PS: The Docker image is present locally and was created manually.
The code works fine with remote images.
I think you need to supply a command to prevent the container from exiting; try this:
container = client.containers.run(image.tags[0], command=["tail", "-f", "/dev/null"], detach=True)
Alternatively, using tty=True together with detach=True will also keep the container running.
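Putting that suggestion together with the original snippet, a minimal sketch (base_url and image name are taken from the question; the exec_run unpacking assumes a recent SDK where it returns an (exit_code, output) tuple, and the stop/remove calls are optional cleanup):
import docker

client = docker.DockerClient(base_url='tcp://10.41.70.76:2375')
image = client.images.get('siab_user_one')

# Keep PID 1 alive so the container stays running; tty=True would work as well.
container = client.containers.run(
    image.tags[0],
    command=["tail", "-f", "/dev/null"],
    detach=True,
)

exit_code, output = container.exec_run("ls")
print(exit_code, output.decode())

container.stop()
container.remove()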

IOError: unsupported XML-RPC protocol while running yum command

When I try to run any yum command I get the following message. I disabled and re-enabled SSL before this error occurred. Because the system said RHNS-CA-CERT had expired, I removed the certificate and downloaded it again using wget. Then I tried to update the certificate using yum, and that is where the problem started.
Here's the error message:
Loaded plugins: rhnplugin
Exception RuntimeError: 'maximum recursion depth exceeded in __subclasscheck__' in <type 'exceptions.AttributeError'> ignored
Traceback (most recent call last):
File "/usr/bin/yum", line 29, in <module>
yummain.user_main(sys.argv[1:], exit_code=True)
File "/usr/share/yum-cli/yummain.py", line 285, in user_main
errcode = main(args)
File "/usr/share/yum-cli/yummain.py", line 105, in main
base.getOptionsConfig(args)
File "/usr/share/yum-cli/cli.py", line 228, in getOptionsConfig
self.conf
File "/usr/lib/python2.6/site-packages/yum/__init__.py", line 891, in <lambda>
conf = property(fget=lambda self: self._getConfig(),
File "/usr/lib/python2.6/site-packages/yum/__init__.py", line 362, in _getConfig
self.plugins.run('init')
File "/usr/lib/python2.6/site-packages/yum/plugins.py", line 184, in run
func(conduitcls(self, self.base, conf, **kwargs))
File "/usr/share/yum-plugins/rhnplugin.py", line 118, in init_hook
login_info = up2dateAuth.getLoginInfo(timeout=timeout)
File "/usr/share/rhn/up2date_client/up2dateAuth.py", line 219, in getLoginInfo
login(timeout=timeout)
File "/usr/share/rhn/up2date_client/up2dateAuth.py", line 170, in login
server = rhnserver.RhnServer(timeout=timeout)
File "/usr/share/rhn/up2date_client/rhnserver.py", line 154, in __init__
timeout=timeout)
File "/usr/share/rhn/up2date_client/rpcServer.py", line 160, in getServer
timeout=timeout)
File "/usr/lib/python2.6/site-packages/rhn/rpclib.py", line 169, in __init__
self._reset_host_handler_and_type()
File "/usr/lib/python2.6/site-packages/rhn/rpclib.py", line 315, in _reset_host_handler_and_type
raise IOError, "unsupported XML-RPC protocol"
IOError: unsupported XML-RPC protocol
OK, my guess is that you are running against RHN Classic (rhn.redhat.com). There was an erratum fixing this expired certificate, and here is the relevant knowledge base article:
System connection to RHN fails with "The certificate is expired, or certificate verify failed" errors
https://access.redhat.com/solutions/353033
The traceback with IOError: unsupported XML-RPC protocol leads me to guess that you have an incorrect serverURL in /etc/sysconfig/rhn/up2date. It should look like this:
serverURL=https://xmlrpc.rhn.redhat.com/XMLRPC
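A quick, read-only way to check which server URL the client is actually configured with (path as in the answer above); rpclib raises "unsupported XML-RPC protocol" when the value does not start with a protocol it understands (http or https):
# Works on the Python 2.6 that ships with the affected system.
with open("/etc/sysconfig/rhn/up2date") as fh:
    for line in fh:
        if line.startswith("serverURL="):
            url = line.split("=", 1)[1].strip()
            print(url)
            if not url.startswith(("http://", "https://")):
                print("-> rpclib will reject this value as 'unsupported XML-RPC protocol'")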
