Python - catch exception within exception?

I have this exception defined:
class ArgumentsException(Exception):
    """Exception that is raised when incorrect arguments are used."""
    pass
Now I run my test, where my program is executed through the sh package. When the program raises this expected exception, sh catches it and re-raises its own exception instead. Is there a way for me to check whether my original exception was raised?
For example, when I run this code (which is expected to raise that exception):
sh.python3(
    self.main_py_path,
    self.live_cfg_path,
    self.workflow_cfg_path)
I get this exception instead:
Traceback (most recent call last):
File "/home/oerp/src/devops-tools/tests/test_main.py", line 153, in test_full_workflow_1
self.workflow_cfg_path)
File "/usr/local/lib/python3.5/dist-packages/sh.py", line 1427, in __call__
return RunningCommand(cmd, call_args, stdin, stdout, stderr)
File "/usr/local/lib/python3.5/dist-packages/sh.py", line 774, in __init__
self.wait()
File "/usr/local/lib/python3.5/dist-packages/sh.py", line 792, in wait
self.handle_command_exit_code(exit_code)
File "/usr/local/lib/python3.5/dist-packages/sh.py", line 815, in handle_command_exit_code
raise exc
sh.ErrorReturnCode_1:
RAN: /usr/bin/python3 /home/oerp/src/devops-tools/main.py /home/oerp/src/devops-tools/tests/configs/__live__.py /home/oerp/src/devops-tools/tests/configs/__workflow__.py
STDOUT:
STDERR:
Traceback (most recent call last):
File "/home/oerp/src/devops-tools/main.py", line 204, in <module>
state = _get_state(args.state, ignore_state=args.ignore_state)
File "/home/oerp/src/devops-tools/main.py", line 68, in _get_state
"__state__.py file must be provided if --ignore-state flag "
exceptions.ArgumentsException: __state__.py file must be provided if --ignore-state flag is not used.
Well, I can do something like:
self.assertTrue('ArgumentsException' in str(e.stderr))
But maybe there is a more elegant way to check for my exception?
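A slightly more structured version of that same check, sketched on the assumption that the test is a unittest.TestCase method and that the child process prints its traceback to stderr as shown above (the test method name is illustrative). The original exception object never crosses the process boundary, so matching its name in the captured stderr is still the underlying mechanism:
import sh

def test_full_workflow_raises_arguments_exception(self):
    # ErrorReturnCode_1 derives from sh.ErrorReturnCode, so catching the
    # base class keeps the test independent of the exact exit code.
    with self.assertRaises(sh.ErrorReturnCode) as ctx:
        sh.python3(
            self.main_py_path,
            self.live_cfg_path,
            self.workflow_cfg_path)
    # ctx.exception.stderr holds the child's stderr as bytes.
    self.assertIn(b'ArgumentsException', ctx.exception.stderr)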

Related

cellranger: how to convert a gtf file to string

I am using cellranger mkref and ran into a strange Python problem with a GTF file (a custom GTF):
Traceback (most recent call last):
File "/home/user/cellranger-6.0.1/lib/python/cellranger/reference.py", line 750, in validate_gtf
subprocess.check_output(cmd, stderr=subprocess.STDOUT)
File "/home/user/cellranger-6.0.1/external/anaconda/lib/python3.7/subprocess.py", line 411, in check_output
**kwargs).stdout
File "/home/user/cellranger-6.0.1/external/anaconda/lib/python3.7/subprocess.py", line 512, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['gtf_to_gene_index', '/home/user/cellranger-6.0.1/indexes', '/home/user/cellranger-6.0.1/indexes/tmp74f_vsxg.json']' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/user/cellranger-6.0.1/bin/rna/mkref", line 139, in <module>
main()
File "/home/user/cellranger-6.0.1/bin/rna/mkref", line 130, in main
reference_builder.build_gex_reference()
File "/home/user/cellranger-6.0.1/lib/python/cellranger/reference.py", line 613, in build_gex_reference
self.validate_gtf()
File "/home/user/cellranger-6.0.1/lib/python/cellranger/reference.py", line 753, in validate_gtf
raise GexReferenceError("Error detected in GTF file: " + exc.output) from exc
TypeError: can only concatenate str (not "bytes") to str
Also, I have a similar GTF file which cellranger accepts without problems. I compared the two files (moreover, the first one was made from the second one):
file 1: text/plain; charset=us-ascii
file 2: text/plain; charset=us-ascii
I also checked with cat -vE, and the files are the same.
How can I change the file?
Thanks in advance!
I encountered the same issue. The problem was duplicated IDs in my GTF file; removing those duplicates solved it. See the discussion on the Cell Ranger GitHub: https://github.com/10XGenomics/cellranger/issues/125
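If you want to confirm that before re-running mkref, a quick check for repeated gene IDs can be scripted. A minimal sketch, assuming a standard tab-separated GTF that has explicit gene feature rows and gene_id "..." entries in the attribute column (the file name is a placeholder):
import re
from collections import Counter

gene_ids = Counter()
with open("custom.gtf") as gtf:  # placeholder path
    for line in gtf:
        if line.startswith("#"):
            continue  # skip header/comment lines
        fields = line.rstrip("\n").split("\t")
        if len(fields) < 9 or fields[2] != "gene":
            continue  # only count 'gene' feature rows
        m = re.search(r'gene_id "([^"]+)"', fields[8])
        if m:
            gene_ids[m.group(1)] += 1

duplicates = {g: n for g, n in gene_ids.items() if n > 1}
print(duplicates)  # non-empty output means duplicated gene IDs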

py4j.protocol.Py4JError: An error occurred while calling o112.save

I'm running a PySpark job via spark-submit on a university server.
My configuration is:
--master yarn --deploy-mode cluster --num-executors 150 --executor-cores 4 --executor-memory 28g --driver-memory 28g
My first few steps run correctly:
df = spark.read.format('csv') \
    .option('header', True) \
    .option('multiLine', True) \
    .load(data_file)
df.show()

udf_function = udf(stamp, StringType())
new_df = df.withColumn("column_a", udf_function(struct([df[x] for x in df.columns])))
new_df.show()
When I try to run the following commands separately, I get two very similar errors:
Command 1:
new_df.select("column_a").distinct().show(100)
Error:
ERROR:root:Exception while sending command.
Traceback (most recent call last):
File "/hadoop4/yarn/nm/usercache/apps/appcache/application_1593105789029_2249545/container_e01_1593105789029_2249545_02_000002/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1159, in send_command
raise Py4JNetworkError("Answer from Java side is empty")
py4j.protocol.Py4JNetworkError: Answer from Java side is empty
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/hadoop4/yarn/nm/usercache/apps/appcache/application_1593105789029_2249545/container_e01_1593105789029_2249545_02_000002/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 985, in send_command
response = connection.send_command(command)
File "/hadoop4/yarn/nm/usercache/apps/appcache/application_1593105789029_2249545/container_e01_1593105789029_2249545_02_000002/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1164, in send_command
"Error while receiving", e, proto.ERROR_ON_RECEIVE)
py4j.protocol.Py4JNetworkError: Error while receiving
Traceback (most recent call last):
File "python_stamp.py", line 93, in <module>
main()
File "python_stamp.py", line 82, in main
new_df.select("planning_cluster_id").distinct().show(100)
File "/hadoop4/yarn/nm/usercache/apps/appcache/application_1593105789029_2249545/container_e01_1593105789029_2249545_02_000002/pyspark.zip/pyspark/sql/dataframe.py", line 380, in show
Command 2:
new_df.write.mode("overwrite").format("csv").option("delimiter", ",").option("header", "true").save(save_path)
Error:
ERROR:root:Exception while sending command.
Traceback (most recent call last):
File "/hadoop1/yarn/nm/usercache/apps/appcache/application_1593105789029_2249417/container_e01_1593105789029_2249417_02_000002/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1159, in send_command
raise Py4JNetworkError("Answer from Java side is empty")
py4j.protocol.Py4JNetworkError: Answer from Java side is empty
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/hadoop1/yarn/nm/usercache/apps/appcache/application_1593105789029_2249417/container_e01_1593105789029_2249417_02_000002/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 985, in send_command
response = connection.send_command(command)
File "/hadoop1/yarn/nm/usercache/apps/appcache/application_1593105789029_2249417/container_e01_1593105789029_2249417_02_000002/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1164, in send_command
"Error while receiving", e, proto.ERROR_ON_RECEIVE)
py4j.protocol.Py4JNetworkError: Error while receiving
Traceback (most recent call last):
File "python_stamp.py", line 91, in <module>
main()
File "python_stamp.py", line 83, in main
new_df.write.mode("overwrite").format("csv").option("delimiter", ",").option("header", "true").save(save_path)
File "/hadoop1/yarn/nm/usercache/apps/appcache/application_1593105789029_2249417/container_e01_1593105789029_2249417_02_000002/pyspark.zip/pyspark/sql/readwriter.py", line 738, in save
File "/hadoop1/yarn/nm/usercache/apps/appcache/application_1593105789029_2249417/container_e01_1593105789029_2249417_02_000002/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
File "/hadoop1/yarn/nm/usercache/apps/appcache/application_1593105789029_2249417/container_e01_1593105789029_2249417_02_000002/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
File "/hadoop1/yarn/nm/usercache/apps/appcache/application_1593105789029_2249417/container_e01_1593105789029_2249417_02_000002/py4j-0.10.7-src.zip/py4j/protocol.py", line 336, in get_return_value
py4j.protocol.Py4JError: An error occurred while calling o112.save
Does anyone know the reason behind it? I'm pretty confident it's not a memory error, as the previous steps, which load and show the table, all run correctly.
Additional information: when I run all of these commands in the pyspark shell, they run perfectly well.
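One detail worth keeping in mind when interpreting that: df.show() and new_df.show() only evaluate the UDF for the handful of rows being displayed, while distinct() and save() push every row through it, so the earlier steps succeeding does not by itself rule out the UDF or an executor dying on the full data. A hedged diagnostic sketch, reusing new_df from above, that forces full evaluation of the UDF column without the distinct() shuffle or the file write:
# Materializes every row of the UDF column; if this also fails with
# "Answer from Java side is empty", the workers are dying while running
# the UDF over the full dataset rather than during the save itself.
new_df.select("column_a").foreach(lambda _: None)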

Logging as JSON lines with python

I have developed a function to create log files with Python as JSON lines files:
import logging
import sys
from datetime import datetime

def log_start(log_prefix):
    now = datetime.now()
    log_id = str(now).replace(':', '').replace(' ', '').replace('.', '').replace('-', '')[:14]
    log_name = '/mnt/jarvis/logs/{}_{}.txt'.format(log_prefix, log_id)
    root = logging.getLogger()
    if root.handlers:
        root.handlers = []
    logging.basicConfig(level=logging.INFO, filename=log_name, filemode='a+',
                        format='''{{"log_id":"{}", "created_date":"%(asctime)s.%(msecs)03d", "action_text":"%(message)s"}}'''.format(
                            log_id),
                        datefmt="%Y-%m-%dT%H:%M:%S")
    root = logging.getLogger()
    root.setLevel(logging.INFO)
    handler = logging.StreamHandler(sys.stdout)
    handler.setLevel(logging.INFO)
    formatter = logging.Formatter(
        '''{{"log_id":"{}", "created_date":"%(asctime)s.%(msecs)03d", "action_text":"%(message)s"}}'''.format(
            log_id),
        datefmt="%Y-%m-%dT%H:%M:%S")
    handler.setFormatter(formatter)
    root.addHandler(handler)
    return log_name, log_id
And it works just fine. However, I run into issues if the logging message contains things like newline characters or double quotes: the resulting line is no longer valid JSON. Is there a way to make %(message)s a valid JSON string without me having to correct for it every time?
EDIT
An example of this issue: I want to see the traceback in the logs, and a traceback like the one below causes problems because of the quotes and \n characters:
Traceback (most recent call last):
File "/var/task/lego/bricks/tableau/workbook.py", line 65, in refresh_extracts
output = subprocess.check_output(command, shell=True)
File "/var/lang/lib/python3.6/subprocess.py", line 356, in check_output
**kwargs).stdout
File "/var/lang/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '***REDACTED***' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/task/jarvis/event_triggered/refresh_dashboard.py", line 21, in refresh_dashboard
workbook.refresh_extracts()
File "/var/task/lego/bricks/tableau/workbook.py", line 73, in refresh_extracts
traceback.format_exc()))
File "/var/task/lego/power_functions/error_handling/graceful_fail.py", line 16, in graceful_fail
raise RuntimeError('This is a graceful fail, notifications sent based on attributes.')
RuntimeError: This is a graceful fail, notifications sent based on attributes.
You can conditionally test and format your string to/from JSON as needed. Here I am checking for the specific JSONDecodeError that is typically thrown in this case, but you can trap any kind of exception you like, or several of them. Consider building your message string with a check like this:
import json
bad_json = '''
Traceback (most recent call last):
File "/var/task/lego/bricks/tableau/workbook.py", line 65, in refresh_extracts
output = subprocess.check_output(command, shell=True)
File "/var/lang/lib/python3.6/subprocess.py", line 356, in check_output
**kwargs).stdout
File "/var/lang/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '***REDACTED***' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/task/jarvis/event_triggered/refresh_dashboard.py", line 21, in refresh_dashboard
workbook.refresh_extracts()
File "/var/task/lego/bricks/tableau/workbook.py", line 73, in refresh_extracts
traceback.format_exc()))
File "/var/task/lego/power_functions/error_handling/graceful_fail.py", line 16, in graceful_fail
raise RuntimeError('This is a graceful fail, notifications sent based on attributes.')
RuntimeError: This is a graceful fail, notifications sent based on attributes.
'''
try:
    test_message = json.loads(bad_json)  # this fails in the above case
except json.decoder.JSONDecodeError:
    good_json = json.dumps({"message": bad_json})
    test_message = json.loads(good_json)
print(test_message)
Result (this can be dumped into more readable text using json.dumps(your_message_string) if you like):
{'message': '\nTraceback (most recent call last):\n File "/var/task/lego/bricks/tableau/workbook.py", line 65, in refresh_extracts\n output = subprocess.check_output(command, shell=True)\n File "/var/lang/lib/python3.6/subprocess.py", line 356, in check_output\n **kwargs).stdout\n File "/var/lang/lib/python3.6/subprocess.py", line 438, in run\n output=stdout, stderr=stderr)\nsubprocess.CalledProcessError: Command \'***REDACTED***\' returned non-zero exit status 1.\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/var/task/jarvis/event_triggered/refresh_dashboard.py", line 21, in refresh_dashboard\n workbook.refresh_extracts()\n File "/var/task/lego/bricks/tableau/workbook.py", line 73, in refresh_extracts\n traceback.format_exc()))\n File "/var/task/lego/power_functions/error_handling/graceful_fail.py", line 16, in graceful_fail\n raise RuntimeError(\'This is a graceful fail, notifications sent based on attributes.\')\nRuntimeError: This is a graceful fail, notifications sent based on attributes.\n'}
This could be a function, lambda, etc. that you pass through the logging formatter.
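To fold that idea into the logging setup from the question, one option is a small logging.Formatter subclass that builds the record as a dict and lets json.dumps() do the escaping, so newlines and quotes in the message (or an attached traceback) cannot break the JSON line. A minimal sketch, assuming the same log_id and field names as above; the class name is illustrative:
import json
import logging

class JsonLineFormatter(logging.Formatter):
    """Emit each record as one JSON object per line (hypothetical helper)."""

    def __init__(self, log_id):
        super().__init__(datefmt="%Y-%m-%dT%H:%M:%S")
        self.log_id = log_id

    def format(self, record):
        payload = {
            "log_id": self.log_id,
            "created_date": self.formatTime(record, self.datefmt),
            "action_text": record.getMessage(),
        }
        if record.exc_info:
            # Tracebacks become an escaped string inside the JSON object.
            payload["traceback"] = self.formatException(record.exc_info)
        return json.dumps(payload)
A handler would then use handler.setFormatter(JsonLineFormatter(log_id)) in place of the format strings above.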

How do I override Windows permissions for a Python file on Windows 10

This is my current code:
import psutil

count = 0
while count < 3000000000:
    for process in psutil.process_iter():
        if process.name().lower() == 'chrome.exe':
            print(process)
            process.terminate()
            count = count + 1
        else:
            print('no')
            print(count)
It's scanning all of the running processes. It then crashes when it scans:
' WindowsInternal.ComposableShell.Experiences.TextInput.Inpu...'
This is the Windows service for the on-screen keyboard. I tried to stop it from running, but it still runs. I was wondering if I could give my Python file full permissions so this stops happening.
This is the error code from the terminal:
Traceback (most recent call last):
File "C:\Users\benmi\PycharmProjects\HelloWorld\venv\lib\site-packages\psutil_common.py", line 449, in wrapper
ret = self._cache[fun]
AttributeError: _cache
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\benmi\PycharmProjects\HelloWorld\venv\lib\site-packages\psutil_pswindows.py", line 679, in wrapper
return fun(self, *args, **kwargs)
File "C:\Users\benmi\PycharmProjects\HelloWorld\venv\lib\site-packages\psutil_common.py", line 452, in wrapper
return fun(self)
File "C:\Users\benmi\PycharmProjects\HelloWorld\venv\lib\site-packages\psutil_pswindows.py", line 766, in exe
exe = cext.proc_exe(self.pid)
PermissionError: [WinError 24] The program issued a command but the command length is incorrect: '(originated from NtQuerySystemInformation)'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/Users/benmi/PycharmProjects/HelloWorld/TerminateProgram.py", line 5, in
if process.name().lower() == 'chrome.exe':
File "C:\Users\benmi\PycharmProjects\HelloWorld\venv\lib\site-packages\psutil__init__.py", line 630, in name
name = self._proc.name()
File "C:\Users\benmi\PycharmProjects\HelloWorld\venv\lib\site-packages\psutil_pswindows.py", line 750, in name
return os.path.basename(self.exe())
File "C:\Users\benmi\PycharmProjects\HelloWorld\venv\lib\site-packages\psutil_pswindows.py", line 681, in wrapper
raise convert_oserror(err, pid=self.pid, name=self._name)
psutil.AccessDenied: psutil.AccessDenied (pid=8612)
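For what it's worth, rather than granting the script extra permissions, the usual way to keep a psutil.process_iter() loop from crashing on protected system processes is to catch psutil.AccessDenied (and psutil.NoSuchProcess, since processes can disappear mid-iteration). A minimal sketch of the loop body with that guard:
import psutil

for process in psutil.process_iter():
    try:
        if process.name().lower() == 'chrome.exe':
            print(process)
            process.terminate()
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        # Protected processes (such as the on-screen keyboard service)
        # raise AccessDenied; skip them instead of crashing.
        continue
psutil.process_iter(attrs=['name']) with process.info['name'] is another option: it pre-fetches the name and returns None for values the script is not allowed to read.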

twitterImgBot stops working after some hours

I'm trying to get this, https://github.com/joaquinlpereyra/twitterImgBot, to work, and it runs and seems OK at first.
But after some hours it stops working and this error comes up:
python3 twitterbot.py
Traceback (most recent call last):
File "/home/user/.local/lib/python3.7/site-packages/tweepy/binder.py", line 118, in build_path
value = quote(self.session.params[name])
KeyError: 'id'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "twitterbot.py", line 209, in <module>
main()
File "twitterbot.py", line 200, in main
orders()
File "twitterbot.py", line 118, in orders
timeline.delete_tweet_by_id(tweet.in_reply_to_status_id, api)
File "/home/user/Skrivebord/twitterboot/lo/bot/timeline.py", line 12, in delete_tweet_by_id
api.destroy_status(id_to_delete)
File "/home/user/.local/lib/python3.7/site-packages/tweepy/binder.py", line 245, in _call
method = APIMethod(args, kwargs)
File "/home/user/.local/lib/python3.7/site-packages/tweepy/binder.py", line 71, in __init__
self.build_path()
File "/home/user/.local/lib/python3.7/site-packages/tweepy/binder.py", line 120, in build_path
raise TweepError('No parameter value found for path variable: %s' % name)
tweepy.error.TweepError: No parameter value found for path variable: id
It seems like Python has some problem, because if I make a new install on another PC it works for some hours and then stops.
Strange.
This is likely because the tweet is not in reply to a status, so its in_reply_to_status_id attribute is None, and API.destroy_status ends up being called with an id of None.
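A minimal sketch of the guard that follows from that, assuming the loop in twitterbot.py's orders() handles one tweet at a time as the traceback suggests (timeline.delete_tweet_by_id is the bot's own helper):
# Tweets that are not replies have in_reply_to_status_id set to None,
# so only attempt the delete when there is actually a status to delete.
if tweet.in_reply_to_status_id is not None:
    timeline.delete_tweet_by_id(tweet.in_reply_to_status_id, api)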
