SonarQube Python Plugin - scanning Python code: "Fail to decorate"

I am trying to run a scan on the sample project (I am actually trying to scan a much larger project, but the problem is the same; I am working on the sample project because it is much simpler) and it is giving me the following error:
INFO: ------------------------------------------------------------------------
INFO: EXECUTION FAILURE
INFO: ------------------------------------------------------------------------
Total time: 29.790s
Final Memory: 14M/379M
INFO: ------------------------------------------------------------------------
ERROR: Error during Sonar runner execution
ERROR: Unable to execute Sonar
ERROR: Caused by: Fail to decorate 'org.sonar.api.resources.File#56ffc188[key=src/__init__.py,deprecatedKey=__init__.py,path=src/__init__.py,dir=[root],filename=__init__.py,language=Python]'
ERROR: Caused by: 0
ERROR:
ERROR: To see the full stack trace of the errors, re-run SonarQube Runner with the -e switch.
ERROR: Re-run SonarQube Runner using the -X switch to enable full debug logging.
Does anyone have any idea what this means? This is the sample project right out of the box, and my sonar-runner config is fairly simple:
#----- Default SonarQube server
sonar.host.url=http://localhost:9000
#----- MySQL
sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8
#----- Global database settings
sonar.jdbc.username=sonar
sonar.jdbc.password=sonar
#----- Default source code encoding
sonar.sourceEncoding=UTF-8
#----- Security (when 'sonar.forceAuthentication' is set to 'true')
sonar.login=admin
sonar.password=admin
I have enabled all pylint rules within a new profile containing just those rules. With the default profile (Sonar way) set as default, the error does not happen. The curious thing is that I enabled a single random pylint rule and it worked, so maybe one (or more) of the rules is interfering with the analysis.
Pylint integration is essential for me.
The stack trace, although not very helpful, is the following:
ERROR: Error during Sonar runner execution
org.sonar.runner.impl.RunnerException: Unable to execute Sonar
at org.sonar.runner.impl.BatchLauncher$1.delegateExecution(BatchLauncher.java:91)
at org.sonar.runner.impl.BatchLauncher$1.run(BatchLauncher.java:75)
at java.security.AccessController.doPrivileged(Native Method)
at org.sonar.runner.impl.BatchLauncher.doExecute(BatchLauncher.java:69)
at org.sonar.runner.impl.BatchLauncher.execute(BatchLauncher.java:50)
at org.sonar.runner.api.EmbeddedRunner.doExecute(EmbeddedRunner.java:102)
at org.sonar.runner.api.Runner.execute(Runner.java:100)
at org.sonar.runner.Main.executeTask(Main.java:70)
at org.sonar.runner.Main.execute(Main.java:59)
at org.sonar.runner.Main.main(Main.java:53)
Caused by: org.sonar.api.utils.SonarException: Fail to decorate 'org.sonar.api.resources.File#3f3674b2[key=src/__init__.py,deprecatedKey=__init__.py,path=src/__init__.py,dir=[root],filename=__init__.py,language=Python]'
at org.sonar.batch.phases.DecoratorsExecutor.executeDecorator(DecoratorsExecutor.java:103)
at org.sonar.batch.phases.DecoratorsExecutor.decorateResource(DecoratorsExecutor.java:86)
at org.sonar.batch.phases.DecoratorsExecutor.decorateResource(DecoratorsExecutor.java:78)
at org.sonar.batch.phases.DecoratorsExecutor.decorateResource(DecoratorsExecutor.java:78)
at org.sonar.batch.phases.DecoratorsExecutor.execute(DecoratorsExecutor.java:70)
at org.sonar.batch.phases.PhaseExecutor.execute(PhaseExecutor.java:126)
at org.sonar.batch.scan.ModuleScanContainer.doAfterStart(ModuleScanContainer.java:222)
at org.sonar.api.platform.ComponentContainer.startComponents(ComponentContainer.java:93)
at org.sonar.api.platform.ComponentContainer.execute(ComponentContainer.java:78)
at org.sonar.batch.scan.ProjectScanContainer.scan(ProjectScanContainer.java:235)
at org.sonar.batch.scan.ProjectScanContainer.scanRecursively(ProjectScanContainer.java:230)
at org.sonar.batch.scan.ProjectScanContainer.doAfterStart(ProjectScanContainer.java:223)
at org.sonar.api.platform.ComponentContainer.startComponents(ComponentContainer.java:93)
at org.sonar.api.platform.ComponentContainer.execute(ComponentContainer.java:78)
at org.sonar.batch.scan.ScanTask.scan(ScanTask.java:65)
at org.sonar.batch.scan.ScanTask.execute(ScanTask.java:52)
at org.sonar.batch.bootstrap.TaskContainer.doAfterStart(TaskContainer.java:128)
at org.sonar.api.platform.ComponentContainer.startComponents(ComponentContainer.java:93)
at org.sonar.api.platform.ComponentContainer.execute(ComponentContainer.java:78)
at org.sonar.batch.bootstrap.BootstrapContainer.executeTask(BootstrapContainer.java:171)
at org.sonar.batch.bootstrapper.Batch.executeTask(Batch.java:95)
at org.sonar.batch.bootstrapper.Batch.execute(Batch.java:67)
at org.sonar.runner.batch.IsolatedLauncher.execute(IsolatedLauncher.java:48)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.sonar.runner.impl.BatchLauncher$1.delegateExecution(BatchLauncher.java:87)
... 9 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
at org.sonar.plugins.core.issue.tracking.FileHashes.getHash(FileHashes.java:75)
at org.sonar.plugins.core.issue.IssueTracking.setChecksumOnNewIssues(IssueTracking.java:69)
at org.sonar.plugins.core.issue.IssueTracking.track(IssueTracking.java:54)
at org.sonar.plugins.core.issue.IssueTrackingDecorator.doDecorate(IssueTrackingDecorator.java:138)
at org.sonar.plugins.core.issue.IssueTrackingDecorator.decorate(IssueTrackingDecorator.java:112)
at org.sonar.batch.phases.DecoratorsExecutor.executeDecorator(DecoratorsExecutor.java:95)
... 36 more
Thanks for any help!

I encountered the same problem. I had to deactivate two rules to stop this error message:
Missing docstring
docstring should be defined
I also deactivated the deprecated rules, but I don't know whether that had any effect.
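The inner ArrayIndexOutOfBoundsException comes from FileHashes.getHash, which suggests the issue-tracking decorator is trying to read a line hash from a file that has no lines at all, i.e. an empty __init__.py being flagged by the docstring rules. If that guess is right, a possible workaround (untested; written in the same properties format as the runner config above) is to exclude the empty package markers from analysis instead of disabling the rules:
#----- Possible workaround (assumption: the crash is triggered by issues on empty files)
sonar.exclusions=**/__init__.py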

Related

Conditional early exit from full test suite in Pytest

I have a parameterized pytest test suite. Each parameter is a particular website, and the test suite runs using Selenium automation. After accounting for parameters, I have hundreds of tests in total, and they all run sequentially.
Once a week, Selenium will fail for a variety of reasons. Connection lost, could not instantiate chrome instance, etc. If it fails once in the middle of a test run, it'll crash all upcoming tests. Here's an example fail log:
test_example[parameter] failed; it passed 0 out of the required 1 times.
<class 'selenium.common.exceptions.WebDriverException'>
Message: chrome not reachable
(Session info: chrome=91.0.4472.106)
[<TracebackEntry test.py:122>, <TracebackEntry another.py:92>, <TracebackEntry /usr/local/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py:669>, <TracebackEntry /usr/local/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py:321>, <TracebackEntry /usr/local/lib/python3.7/site-packages/selenium/webdriver/remote/errorhandler.py:242>]
Ideally, I'd like to exit the suite as soon as a Selenium failure has occurred, because I know that all the upcoming tests will also fail.
Is there a method of this kind:
def pytest_on_test_fail(err):  # this will be a pytest hook
    if is_selenium(err):       # user defined function
        pytest_earlyexit()     # this will be a pytest function
Or is there some other mechanism that will let me exit the full test suite early based on the detected condition?
After some more testing I got this to work. This uses the pytest_exception_interact hook and the pytest.exit function.
WebDriverException is the parent class of all Selenium issues (see source code).
import pytest
from selenium.common.exceptions import WebDriverException

def pytest_exception_interact(node, call, report):
    # Stop the whole run as soon as any Selenium (WebDriver) error surfaces.
    error_class = call.excinfo.type
    is_selenium_issue = issubclass(error_class, WebDriverException)
    if is_selenium_issue:
        pytest.exit('Selenium error detected, exiting test suite early', 1)
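For this to take effect across the whole run, the hook has to live somewhere pytest loads for every test, typically a conftest.py at the root of the test tree (or a local plugin), with WebDriverException imported from selenium.common.exceptions as shown above.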

Error running PySpark code in Jupyter Notebook on RaspberryPi Cluster with Hadoop/Spark/Yarn

I'm trying to run example code in a Jupyter Notebook with PySpark on a YARN cluster.
I think the cluster itself works fine; I can see all nodes running with
yarn node -list -all
OpenJDK Client VM warning: You have loaded library /opt/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
Total Nodes:3
Node-Id Node-State Node-Http-Address Number-of-Running-Containers
pi2:40045 RUNNING pi2:8042 0
pi1:38067 RUNNING pi1:8042 0
pi3:35139 RUNNING pi3:8042 0
I'm using example notebook available here: https://github.com/dmanning21h/pi-cluster/blob/master/notebooks/picluster.ipynb
I can run all the notebook code except this cell:
# Show label counts by class
df.groupBy("label") \
  .count() \
  .orderBy(col("count").desc()) \
  .show()
Spark gets stuck on this job with the following error:
Exception in thread "map-output-dispatcher-0" java.lang.UnsatisfiedLinkError: /opt/hadoop/lib/native/libzstd-jni.so: /opt/hadoop/lib/native/libzstd-jni.so: wrong ELF class: ELFCLASS64 (Possible cause: architecture word width mismatch)
Unsupported OS/arch, cannot find /linux/arm/libzstd-jni.so or load zstd-jni from system libraries. Please try building from source the jar or providing libzstd-jni in your system.
at java.base/java.lang.ClassLoader$NativeLibrary.load0(Native Method)
at java.base/java.lang.ClassLoader$NativeLibrary.load(ClassLoader.java:2442)
at java.base/java.lang.ClassLoader$NativeLibrary.loadLibrary(ClassLoader.java:2498)
at java.base/java.lang.ClassLoader.loadLibrary0(ClassLoader.java:2694)
at java.base/java.lang.ClassLoader.loadLibrary(ClassLoader.java:2659)
at java.base/java.lang.Runtime.loadLibrary0(Runtime.java:830)
at java.base/java.lang.System.loadLibrary(System.java:1873)
at com.github.luben.zstd.util.Native.load(Native.java:73)
at com.github.luben.zstd.util.Native.load(Native.java:60)
at com.github.luben.zstd.ZstdOutputStream.<clinit>(ZstdOutputStream.java:15)
at org.apache.spark.io.ZStdCompressionCodec.compressedOutputStream(CompressionCodec.scala:224)
at org.apache.spark.MapOutputTracker$.serializeMapStatuses(MapOutputTracker.scala:913)
at org.apache.spark.ShuffleStatus.$anonfun$serializedMapStatus$2(MapOutputTracker.scala:210)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.ShuffleStatus.withWriteLock(MapOutputTracker.scala:72)
at org.apache.spark.ShuffleStatus.serializedMapStatus(MapOutputTracker.scala:207)
at org.apache.spark.MapOutputTrackerMaster$MessageLoop.run(MapOutputTracker.scala:457)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Exception in thread "map-output-dispatcher-1" java.lang.NoClassDefFoundError: Could not initialize class com.github.luben.zstd.ZstdOutputStream
at org.apache.spark.io.ZStdCompressionCodec.compressedOutputStream(CompressionCodec.scala:224)
at org.apache.spark.MapOutputTracker$.serializeMapStatuses(MapOutputTracker.scala:913)
at org.apache.spark.ShuffleStatus.$anonfun$serializedMapStatus$2(MapOutputTracker.scala:210)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.ShuffleStatus.withWriteLock(MapOutputTracker.scala:72)
at org.apache.spark.ShuffleStatus.serializedMapStatus(MapOutputTracker.scala:207)
at org.apache.spark.MapOutputTrackerMaster$MessageLoop.run(MapOutputTracker.scala:457)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
I've also tried the solution from this comment, but it's not working.
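For what it's worth, the trace shows zstd-jni refusing to load its native library on the Pis' 32-bit ARM userland (ELFCLASS64 on a 32-bit JVM), which then breaks the map-status serialization. One direction to try (a sketch only, not verified on this cluster; whether these codec settings cover the failing code path in your Spark version is an assumption) is to steer Spark away from the zstd codec when building the session:
from pyspark.sql import SparkSession

# Sketch: avoid the zstd codec so Spark never has to load libzstd-jni on 32-bit ARM.
spark = (SparkSession.builder
         .appName("picluster")
         .config("spark.io.compression.codec", "lz4")
         # Spark 3.x compresses shuffle map statuses separately (default zstd);
         # assumption: this is the path hit by MapOutputTracker in the trace above.
         .config("spark.shuffle.mapStatus.compression.codec", "lz4")
         .getOrCreate())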

"Solver did not exit normally" - Jupyter / Python3 / Ubuntu

I am trying to get my first Pyomo model running on my Ubuntu VM (Azure). I have Python3 and the COIN-OR solvers installed on this machine. No matter what solver I try, I get the same result.
Edit: after changing the solver to Couenne (it's a nonlinear problem), the Jupyter output looks like this. When I open the log files in the tmp directory, the couenne.log file is empty and the pyomo files contain only the problem formulation. So I assume that Pyomo isn't communicating with the Couenne solver at all?
Solver log file: '/tmp/tmpezw0sov2_couenne.log'
Solver solution file: '/tmp/tmpq6afa7e8.pyomo.sol'
Solver problem files: ('/tmp/tmpq6afa7e8.pyomo.nl',)
ERROR: Solver (asl) returned non-zero return code (-1)
ERROR: See the solver log above for diagnostic information.
---------------------------------------------------------------------------
ApplicationError Traceback (most recent call last)
<ipython-input-6-486e3a9173f4> in <module>()
20 #instance = model.create_instance()
21 opt = SolverFactory('couenne', executable = solverpath_exe)
---> 22 opt.solve(model,tee=True,keepfiles=True)
23 #solver=SolverFactory(solvername,executable=solverpath_exe)
/home/ralphasher/.local/lib/python3.6/site-packages/pyomo/opt/base/solvers.py in solve(self, *args, **kwds)
598 logger.error("Solver log:\n" + str(_status.log))
599 raise pyutilib.common.ApplicationError(
--> 600 "Solver (%s) did not exit normally" % self.name)
601 solve_completion_time = time.time()
602 if self._report_timing:
ApplicationError: Solver (asl) did not exit normally
The "catch-all" exception is raised because the solver runs as a separate, non-Python process, so Python really can't tell what exactly went wrong with it, it just sees that the process has quit abnormally.
As such, solver log is the thing to go to as this is where the solver itself writes its status updates, so the specific error, whatever it is, should be reflected there.
If solver log is empty, this most probably means that the solver has failed to start at all (if solver process is run with stream redirection, the log file is opened -- thus created -- before the solver command is exec'd, so this is a common symptom when there's a problem with a program's startup). Since pyomo is the thing that starts the solver, getting the details of what exactly happens at the time of solver's start is where the answer lies.
According to the "pyomo solve command" page of the Pyomo 5.6.6 documentation, you can use the --info or --verbose command-line options to increase the verbosity of the Pyomo log.
If that still doesn't produce anything revealing, time to bring out the big guns:
run pyomo under pdb (pyomo is just a script, so you can pass it to python -m pdb like any other; make sure to use the same Python executable as in the script's shebang) and step through the Pyomo machinery to see what exactly it does with the solver process (what information it passes, how it invokes it);
this will let you spot defects in that process, if there are any (e.g. no information actually being passed), or repeat the same operations by hand to see the result firsthand; and/or
run the command under strace -f (to also monitor the solver's child process) and see if there are any obvious errors, such as a failure to exec the solver command or errors opening files.
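Before going that deep, a quick sanity check from Python (a minimal sketch reusing the same SolverFactory call as in the question's traceback; the solverpath_exe value is an assumption and must point at your Couenne binary) is to ask Pyomo whether it considers the solver available and which executable it will actually invoke:
from pyomo.environ import SolverFactory

solverpath_exe = '/path/to/couenne'  # same variable as in the question; adjust to your install
opt = SolverFactory('couenne', executable=solverpath_exe)

# False here means Pyomo cannot run the solver at all (bad path, not executable, ...).
print(opt.available(exception_flag=False))
# The path Pyomo will try to exec; running it by hand shows its startup errors directly.
print(opt.executable())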

U-SQL, Python, local execution, "device not found" error

I'm trying to run a U-SQL job with Python extension locally using VS 2017.
I followed these steps:
https://1drv.ms/w/s!AvdZLquGMt47g0NultCKgm38sejs
https://blogs.msdn.microsoft.com/azuredatalake/2017/02/20/enabling-u-sql-advanced-analytics-for-local-execution/
And then I tried to run this:
https://learn.microsoft.com/en-us/azure/data-lake-analytics/data-lake-analytics-u-sql-python-extensions
It works fine if I run it in Azure, but if I try to run it locally, the error I get is "The device is not ready".
Details:
Start : 2017-08-16 14:35:13
Initialize : 2017-08-16 14:35:13
GraphParse : 2017-08-16 14:35:13
Run : 2017-08-16 14:35:13
Start 'Root' : 2017-08-16 14:35:13
End 'Root(Success)' : 2017-08-16 14:35:13
Start '1_SV1_Extract' : 2017-08-16 14:35:13
End '1_SV1_Extract(Error)' : 2017-08-16 14:35:14
Completed with 'Error' : 2017-08-16 14:35:14
Execution failed with error '1_SV1_Extract Error : '{"diagnosticCode":195887147,"severity":"Error","component":"RUNTIME","source":"User","errorId":"E_RUNTIME_USER_UNHANDLED_EXCEPTION_FROM_USER_CODE","message":"An unhandled exception from user code has been reported","description":"Unhandled exception from user code: \"The device is not ready.\r\n\"\nThe details includes more information including any inner exceptions and the stack trace where the exception was raised.","resolution":"Make sure the bug in the user code is fixed.","helpLink":"","details":"==== Caught exception System.IO.IOException\n\n at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)\r\n at System.IO.Directory.InternalCreateDirectory(String fullPath, String path, Object dirSecurityObj, Boolean checkHost)\r\n at System.IO.Directory.InternalCreateDirectoryHelper(String path, Boolean checkHost)\r\n at System.IO.Compression.ZipFileExtensions.ExtractToDirectory(ZipArchive source, String destinationDirectoryName)\r\n at System.IO.Compression.ZipFile.ExtractToDirectory(String sourceArchiveFileName, String destinationDirectoryName, Encoding entryNameEncoding)\r\n at Microsoft.MetaAnalytics.LanguageWorker.UsqlPyExecution.LocatePython(String version) in C:\\Users\\shravan\\Source\\Repos\\VSTS\\USqlExtensions\\lang\\python\\AFx\\Product\\Source\\Modules\\LanguageWorker\\LanguageWorker.Dll\\UsqlExecution.cs:line 146\r\n at Microsoft.MetaAnalytics.LanguageWorker.UsqlPyExecution.InvokeLanguage(String version, String scriptname, IList`1 infiles, IList`1 outfiles, IObserver`1 stringLogger) in C:\\Users\\shravan\\Source\\Repos\\VSTS\\USqlExtensions\\lang\\python\\AFx\\Product\\Source\\Modules\\LanguageWorker\\LanguageWorker.Dll\\UsqlExecution.cs:line 89\r\n at Microsoft.MetaAnalytics.LanguageWorker.UsqlPyExecution.Run(IRowset input, IUpdatableRow output, String script, String version) in C:\\Users\\shravan\\Source\\Repos\\VSTS\\USqlExtensions\\lang\\python\\AFx\\Product\\Source\\Modules\\LanguageWorker\\LanguageWorker.Dll\\UsqlExecution.cs:line 42\r\n at Extension.Python.Reducer.<Reduce>d__6.MoveNext() in C:\\Users\\shravan\\Source\\Repos\\VSTS\\USqlExtensions\\lang\\python\\ExtPy\\PyReducer.cs:line 56\r\n at ScopeEngine.SqlIpReducer<Extract_0_Data0,Process_1_Data0,ScopeEngine::KeyComparePolicy<Extract_0_Data0,3> >.GetNextRow(SqlIpReducer<Extract_0_Data0\\,Process_1_Data0\\,ScopeEngine::KeyComparePolicy<Extract_0_Data0\\,3> >* , Process_1_Data0* output) in c:\\users\\e\\source\\repos\\usqlapplication1\\usqlapplication1\\bin\\debug\\1b720f51a8b3caea\\script_fe316531c87f021f\\sqlmanaged.h:line 2788\r\n at std._Func_class<void>.()(_Func_class<void>* )\r\n at RunAndHandleClrExceptions(function<void __cdecl(void)>* code)","internalDiagnostics":""}
'
'
Execution failed !
I'm aware that the blog post mentions that running Python extensions locally is not officially supported, but they do make it sound like it should at least be possible somehow?
I don't get any errors if I run U-SQL scripts without using the Python extension locally.
Is there anything I'm missing? Is there any logging I could turn on to find out more? Has anyone had success running Python with U-SQL locally?
(Azure Data Lake team here)
There was a recent update in how the Python distribution is located in the Azure Data Lake Analytics service. While the change improved vertex startup times, it also broke some basic assumptions on how local execution of U-SQL scripts works.
The team is working on an alternative solution that will let a locally executing U-SQL script use an existing Python distribution that is installed on the same local machine.

fast-r-cnn: "caffe.LayerParameter" has no field named "roi_pooling_param"

When I try to run ./tools/demo.py of fast-r-cnn on Ubuntu 16.04, I get the following error, although Caffe was installed successfully:
./tools/demo.py
WARNING: Logging before InitGoogleLogging() is written to STDERR
W0823 14:12:46.105280 4444 _caffe.cpp:122] DEPRECATION WARNING - deprecated use of Python interface
W0823 14:12:46.105316 4444 _caffe.cpp:123] Use this instead (with the named "weights" parameter):
W0823 14:12:46.105319 4444 _caffe.cpp:125] Net('/home/hana/Documents/try/fast-rcnn-master/models/VGG16/test.prototxt', 1, weights='/home/hana/Documents/try/fast-rcnn-master/data/fast_rcnn_models/vgg16_fast_rcnn_iter_40000.caffemodel')
[libprotobuf ERROR google/protobuf/text_format.cc:274] Error parsing text-format caffe.NetParameter: 392:21: Message type "caffe.LayerParameter" has no field named "roi_pooling_param".
F0823 14:12:46.106595 4444 upgrade_proto.cpp:79] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: /home/hana/Documents/try/fast-rcnn-master/models/VGG16/test.prototxt
*** Check failure stack trace: ***
Aborted (core dumped)
Can anyone help, please?
Fast R-CNN requires its own branch of Caffe; this branch includes the roi_pooling_layer and its associated parameters.
Follow the Fast R-CNN installation instructions to get and build the correct branch of Caffe.
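In practice (going by the project's README rather than anything in the question), that usually means cloning the repository together with its submodule, e.g. git clone --recursive https://github.com/rbgirshick/fast-rcnn.git, building the bundled caffe-fast-rcnn (make && make pycaffe), and making sure PYTHONPATH picks up that build rather than a system-wide Caffe.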
