Can Coverity be used for scanning a Python code base? If yes, what inputs should be given to the cov-build command? It would be good to have the whole sequence of cov commands for scanning Python code.
Assuming you have a dummy project that looks like:
src/
    file.py
    file2.py
tests/
    test1.py
    test2.py
3rdparty/
    skip.py
setup.py
And you want to analyse everything except the 3rdparty folder, you can execute the following commands:
cov-configure --python

cov-build --dir foo \
    --no-command \
    --fs-capture-search ./ \
    --fs-capture-search-exclude-regex ./3rdparty

cov-analyze --dir foo \
    --all \
    --aggressiveness-level high

cov-format-errors --dir foo \
    --html-output results

cov-commit-defects --dir foo \
    --host coverity.mycompany.com \
    --stream MYSTREAM \
    --auth-key-file mycoverity.key
Explanation:
cov-configure
Inform Coverity that you will be scanning Python code
cov-build
Inform Coverity to build your code. Since Python is not compiled, nothing needs to be built (--no-command), but Coverity still needs to know where to get the sources from (--fs-capture-search). You can pass more than one --fs-capture-search or --fs-capture-search-exclude-regex argument to suit your needs (see the sketch after this list).
cov-analyze
Perform the actual code analysis. There are a ton of switches you can pass, but I think that for Python these are enough.
cov-format-errors
Generate a useful HTML report inside a new results folder. Other output formats are supported, not just HTML.
cov-commit-defects
Commit the scan results to your Coverity Connect central server, into the specified stream. For the commit to work you have to identify yourself using a Coverity key file (you download this key from the Coverity server web UI), and this file needs to be read-only for the user (i.e. chmod 400 mycoverity.key).
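For example, a sketch of cov-build mixing several capture and exclude patterns, using only the flags shown above (the extra exclude regex here is made up for illustration):

cov-build --dir foo \
    --no-command \
    --fs-capture-search ./src \
    --fs-capture-search ./tests \
    --fs-capture-search-exclude-regex ./3rdparty \
    --fs-capture-search-exclude-regex '.*/generated'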
NOTE: All of the above works fine against my company's internal Coverity server (i.e. the paid product). For the free-for-open-source version of Coverity, things might be different (I have not tested it). For the latter case I'd look at some of the open source projects: https://github.com/search?q=--fs-capture-search&type=Code
I have 10001920 images, and their names are train_0, train_1, ....
I tried to copy them like:
!gsutil -m cp -r /content/train/* gs://{my_bucket_name}/data
And it failed because the argument list was too long. So I decided to use a wildcard like:
!gsutil -m cp -r /content/train/train_1????.png gs://{my_bucket_name}/data
I also wanted to upload them in an iterative way. After using a for statement to generate the command lines, I ran:

for script in script_list:
    os.system(script)

and it returned 31512.
I just want to know how I can upload this huge number of files to GCS.
Please give me some ideas.
I don't think * should be used; it's not used that way in the documentation. I'd just try:
!gsutil -m cp -r ./content/train gs://{my_bucket_name}/data
This explains the failure number:
Also, although most commands normally fail upon encountering an error when the -m flag is disabled, all commands continue to try all operations when -m is enabled with multiple threads or processes, and the number of failed operations (if any) are reported as an exception at the end of the command's execution.
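If you still want to drive gsutil from Python in batches, here is a minimal sketch (the bucket name and the batching pattern are placeholders) that uses subprocess instead of os.system, so each batch's exit status is easy to inspect:

import subprocess

bucket = "my_bucket_name"  # placeholder

# Hypothetical batches: one wildcard per block of 10,000 files.
script_list = [
    "gsutil -m cp -r /content/train/train_%d????.png gs://%s/data" % (i, bucket)
    for i in range(1, 1000)
]

for script in script_list:
    # subprocess.run exposes the exit status directly, unlike os.system's encoded value.
    result = subprocess.run(script, shell=True)
    if result.returncode != 0:
        print("batch failed:", script)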
I am working with a Python package that I installed, called bacpypes, for communicating with building automation equipment. Right at the very beginning, while going through the pip install and git clone of the repository, the readthedocs guide calls out to:
Updating the INI File
Now that you know what these values are going to be, you can configure the BACnet portion of your workstation. Change into the samples directory that you checked out earlier, make a copy of the sample configuration file, and edit it for your site:
$ cd bacpypes/samples
$ cp BACpypes~.ini BACpypes.ini
The problem that I have (really just a lack of knowledge) is that there isn't a sample configuration file that I can see in the bacpypes/samples directory. It contains only .py files, nothing with an .ini extension or the name BACpypes.ini.
If I open up the samples directory in a terminal and run cp BACpypes~.ini BACpypes.ini I get an error: cp: cannot stat 'BACpypes~.ini': No such file or directory
Any tips help, thank you...
There's a sample .ini in the documentation, a couple of paragraphs after the commands you copied. It looks like this:
[BACpypes]
objectName: Betelgeuse
address: 192.168.1.2/24
objectIdentifier: 599
maxApduLengthAccepted: 1024
segmentationSupported: segmentedBoth
maxSegmentsAccepted: 1024
vendorIdentifier: 15
foreignPort: 0
foreignBBMD: 128.253.109.254
foreignTTL: 30
I'm not sure why you couldn't copy BACpypes~.ini. The tilde could be expanded by your shell, so you could try escaping it:
cp BACpypes\~.ini BACpypes.ini
Though I assume it isn't needed now that you have a default configuration file.
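If you'd rather create BACpypes.ini directly from the snippet above, a quick sketch with a shell here-document (the values are copied from the sample; adjust the address to your own network):

cd bacpypes/samples
cat > BACpypes.ini <<'EOF'
[BACpypes]
objectName: Betelgeuse
address: 192.168.1.2/24
objectIdentifier: 599
maxApduLengthAccepted: 1024
segmentationSupported: segmentedBoth
maxSegmentsAccepted: 1024
vendorIdentifier: 15
foreignPort: 0
foreignBBMD: 128.253.109.254
foreignTTL: 30
EOF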
I was doing a "Hello World" exercise with Twitter's Pants build tool. After cloning the pants repo (source), I successfully configured pants on my local machine.
First, I created a nested dir in the repo:
$ mkdir -p mark/python/project_test
Then, I created two files in that dir to specify my app and its BUILD file:
$ touch mark/python/project_test/Hello_world.py
$ touch mark/python/project_test/BUILD
Hello_World.py:
print "Hello World!"
BUILD:
python_binary(name="myapp",
source="Hello_world.py"
)
It ran perfectly when I ran it with ./pants:
$ ./pants run mark/python/project_test:myapp
Hello World!
Then, I tried to add a dependency by changing Hello_world.py to:
import utility
print "Hello World!", utility.user(), "!"
I also created utility.py in the same dir:
import os

def user():
    return os.environ['USER']
Since I added a dependency to my original app, I also modified the BUILD file:
python_library(name="app-lib",
source=globs("*py")
)
python_binary(name="myapp",
source="hello_world.py",
dependencies=[pants(':app-lib')]
)
However, when I called ./pants with the same command, it returned an error:
$ ./pants run mark/python/project_test:myapp
Exception caught: (<class 'pants.base.cmd_line_spec_parser.BadSpecError'>)
Exception message: name 'pants' is not defined
while executing BUILD file BuildFile(mark/python/project_test/BUILD,
FileSystemProjectTree(/Users/mli/workspace/source))
Loading addresses from 'mark/python/project_test' failed.
when translating spec mark/python/project_test:myapp
There are currently three files in my dir:
$ ls mark/python/project_test
BUILD  Hello_world.py  utility.py
Why can't my app load the library from utility.py, and what is the right way to arrange the folder tree and BUILD files?
I am sort of new to build tools and would really appreciate it if somebody could provide a bit of context on using pants when answering the question. Thanks! :)
I was able to make your project run with a few small adjustments. Your issues were:
1. There used to be a pants() wrapper for pants shortcuts, but it doesn't exist anymore; even if it did, I think you have the syntax slightly wrong.
2. You used source and sources interchangeably, when they are in fact distinct.
For number 2, it is perhaps a subtle distinction:
python_binary has one source - the entrypoint for the created binary.
python_library has sources - an arbitrary number of files to be imported into other projects.
If you change your BUILD file to match the definition below, you should have success rerunning your invocation. Good luck!
python_library(
    name='app-lib',
    sources=globs('*.py'),
)

python_binary(
    name="myapp",
    source="hello_world.py",
    dependencies=[':app-lib']
)
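With that BUILD file in place, rerunning the original invocation should print the greeting plus your username (the output shown here is assumed, based on the path in your traceback):

$ ./pants run mark/python/project_test:myapp
Hello World! mli !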
My file editor creates temporary files prefixed with a dot (.).
I am running:
watchmedo shell-command -p '*.py' -R -c 'echo "${watch_src_path}"'
I see events for the temporary files as I am editing, then two events on file save (presumably because it does a delete and write).
I would like to see one event -- only when I save a file.
Is there a way for me to do this with just the CLI? I am not interested in creating a python script and using the watchdog API directly.
Use the --ignore-patterns (-i) switch.
watchmedo shell-command \
    -p '*.py' \
    -R \
    -c 'echo "${watch_src_path}"' \
    --ignore-patterns="*/.*"
Note that watchmedo is matching on the full watch_src_path so your ignore pattern can't be as simple as ".*" like you'd think at first. Also all the pitfalls of wildcards are in effect, so if you were doing something silly like working in a hidden directory /path/to/some/.hidden/dir then you'd have to have a fancier pattern.
You also might want the --ignore-directories (-D) switch if the directory-related event is causing you annoyance too (this one is just a boolean flag, no argument needed).
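Putting the two switches together, the full invocation might look like this (same echo command as before):

watchmedo shell-command \
    -p '*.py' \
    -R \
    -D \
    -c 'echo "${watch_src_path}"' \
    --ignore-patterns="*/.*"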
I'm trying to get a response from Nagios by using the following Python code and instructions:
http://skipperkongen.dk/2011/12/06/hello-world-plugin-for-nagios-in-python/
For some reason I never get an OK from Nagios, and it always comes back with the message: Return code 126 is out of bounds - plugin may be missing
I installed nagiosplugin 1.0.0, and still nothing seems to be working.
In parallel I have some other services (not Python files) that work, e.g. HTTP check, current users, and SSH.
What am I doing wrong? I've been trying to solve this for a few days already.
Getting Nagios to utilize your new plug-in is quite easy. You should make changes to three files and restart Nagios — that’s all it takes.
The first file is /etc/nagios/command-plugins.cfg (please leave a comment if you know the path to this file, or its analog, on Ubuntu). This assumes the plugin file is placed in the /usr/lib/nagios/plugins/ directory:
command[check_hello_world]=/usr/lib/nagios/plugins/check_helloworld.py -m 'some message'
Drop down one directory to /etc/nagios/objects/commands.cfg (on Ubuntu, create a .cfg file in /etc/nagios-plugins/config/ instead):
define command {
    command_name    check_hello_world
    command_line    $USER1$/check_hello_world.py -m 'some message'
}
Save the file and open up /etc/nagios/objects/localhost.cfg (on Ubuntu the service definitions are pulled in via /etc/nagios3/nagios.cfg, by default from cfg_dir=/etc/nagios3/conf.d, so create a new .cfg file in that directory instead, for example hello.cfg). Locate this section:
#
# SERVICE DEFINITIONS
#
and add a new entry:
define service {
    use                     local-service    ; Name of service template to use
    host_name               localhost
    service_description     Check using the hello world plugin (always returns OK)
    check_command           check_hello_world
}
All that remains is to restart Nagios and verify that the plug-in is working. Restart Nagios by issuing the following command:
/etc/init.d/nagios restart
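For reference, a minimal check_helloworld.py in the spirit of the linked tutorial might look like the sketch below (an assumption, not the tutorial's exact code). Remember to make it executable with chmod +x; a plugin that Nagios cannot execute is a common cause of the "Return code 126 is out of bounds" error.

#!/usr/bin/env python2.7
# Hypothetical minimal Nagios plugin: always reports OK.
import argparse
import sys

parser = argparse.ArgumentParser()
parser.add_argument('-m', '--message', default='Hello world')
args = parser.parse_args()

print('OK - %s' % args.message)
sys.exit(0)  # Nagios exit codes: 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN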
http://www.linux-mag.com/id/7706/
ubuntuforums.org - Thread: My Notes for Installing Nagios on Ubuntu Server 12.04 LTS
I had to prepend the path to python2.7 even though the shebang in the file specified it.
In the command definition I had this:
command_line /usr/local/bin/python2.7 $USER1$/check_rabbit_queues.py --host $HOSTADDRESS$ --password $ARG1$
This was despite the top of the actual Python file having:
#!/usr/bin/env python2.7
and despite the script executing and returning just fine from the command line without specifying the interpreter.
Nothing else I tried seemed to work.