I am trying to write a Python program using the pyvmomi library to "erase" a virtual hard drive associated with a VM. The way this is done manually is to remove the virtual disk and create a new virtual disk with the same specs. I am expecting that I will need to do the same thing with pyvmomi so I have started down that path. My issue is that I can use ReconfigVM_Task to remove the virtual drive but that leaves the VMDK file itself.
I originally tried using DeleteVStorageObject_Task (since DeleteVirtualDisk_Task is deprecated) to remove the virtual disk file, but that requires the ID of the object (the VMDK file), which I am unable to find anywhere. Theoretically that's available from the VirtualDisk property vDiskId, but that is null. Further research suggests it is only populated for first-class disks.
So instead I am trying to delete the VMDK file directly using DeleteDatastoreFile_Task, but when I do that I end up with an XXXX-flat.vmdk file left behind in the datastore, so it does not actually delete the whole disk.
Any idea where I'm going wrong here, or how to do this better? The VMware SDK documentation for pyvmomi is...lacking.
Thanks!
You'll have to perform a ReconfigVM_Task operation. The key point is that the fileOperation should be destroy. Here's the raw output from performing the operation in the UI:
spec = vim.vm.ConfigSpec()
spec_deviceChange_0 = vim.vm.device.VirtualDeviceSpec()
spec_deviceChange_0.fileOperation = 'destroy'
spec_deviceChange_0.device = vim.vm.device.VirtualDisk()
spec_deviceChange_0.device.shares = vim.SharesInfo()
spec_deviceChange_0.device.shares.shares = 1000
spec_deviceChange_0.device.shares.level = 'normal'
spec_deviceChange_0.device.capacityInBytes = 8589934592
spec_deviceChange_0.device.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo()
spec_deviceChange_0.device.storageIOAllocation.shares = vim.SharesInfo()
spec_deviceChange_0.device.storageIOAllocation.shares.shares = 1000
spec_deviceChange_0.device.storageIOAllocation.shares.level = 'normal'
spec_deviceChange_0.device.storageIOAllocation.limit = -1
spec_deviceChange_0.device.storageIOAllocation.reservation = 0
spec_deviceChange_0.device.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
spec_deviceChange_0.device.backing.backingObjectId = ''
spec_deviceChange_0.device.backing.fileName = '[kruddy_2TB_01] web01/web01_2.vmdk'
spec_deviceChange_0.device.backing.split = False
spec_deviceChange_0.device.backing.writeThrough = False
spec_deviceChange_0.device.backing.datastore = search_index.FindByUuid(None, "datastore-14", True, True)
spec_deviceChange_0.device.backing.eagerlyScrub = True
spec_deviceChange_0.device.backing.contentId = 'e26f44020e7897006bec81b1fffffffe'
spec_deviceChange_0.device.backing.thinProvisioned = False
spec_deviceChange_0.device.backing.diskMode = 'persistent'
spec_deviceChange_0.device.backing.digestEnabled = False
spec_deviceChange_0.device.backing.sharing = 'sharingNone'
spec_deviceChange_0.device.backing.uuid = '6000C292-7895-54ee-a55c-49d0036ef1bb'
spec_deviceChange_0.device.controllerKey = 200
spec_deviceChange_0.device.unitNumber = 0
spec_deviceChange_0.device.nativeUnmanagedLinkedClone = False
spec_deviceChange_0.device.capacityInKB = 8388608
spec_deviceChange_0.device.deviceInfo = vim.Description()
spec_deviceChange_0.device.deviceInfo.summary = '8,388,608 KB'
spec_deviceChange_0.device.deviceInfo.label = 'Hard disk 2'
spec_deviceChange_0.device.diskObjectId = '148-3000'
spec_deviceChange_0.device.key = 3000
spec_deviceChange_0.operation = 'remove'
spec.deviceChange = [spec_deviceChange_0]
spec.cpuFeatureMask = []
managedObject.ReconfigVM_Task(spec)
Kyle Ruddy got me pointed in the right direction. Here's a code snippet showing how I made it work, for future people searching for information on how to do this:
# Assuming dev is already set to the vim.vm.device.VirtualDisk you want to delete...
virtual_hdd_spec = vim.vm.device.VirtualDeviceSpec()
virtual_hdd_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.destroy
virtual_hdd_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.remove
virtual_hdd_spec.device = dev
spec = vim.vm.ConfigSpec()
spec.deviceChange = [virtual_hdd_spec]
WaitForTask(vm.ReconfigVM_Task(spec=spec))
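If it helps anyone, here is one way to locate that dev object before building the spec. This is a sketch, not part of the original answer; 'Hard disk 2' is just an example label.
# Sketch: find the VirtualDisk device to delete by its label ('Hard disk 2' is
# only an example; use whatever label the disk has in your VM).
dev = None
for device in vm.config.hardware.device:
    if isinstance(device, vim.vm.device.VirtualDisk) and device.deviceInfo.label == 'Hard disk 2':
        dev = device
        break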
The API documentation for this is at https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.device.VirtualDeviceSpec.html
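To complete the erase workflow described in the question (re-creating a blank disk with the same specs after the removal above), a minimal sketch follows. It is not from the original answer: controller_key, unit_number and capacity_kb are placeholders assumed to have been saved from the old device before it was deleted, and fileOperation create asks vSphere to create the new backing file.
# Minimal sketch, not from the thread: re-create a blank disk with the same specs
# after the remove/destroy reconfigure above. controller_key, unit_number and
# capacity_kb are assumed to have been copied from the old device before deleting it.
new_disk = vim.vm.device.VirtualDisk()
new_disk.key = -101                      # any negative temporary key
new_disk.controllerKey = controller_key  # e.g. 200 (same controller as before)
new_disk.unitNumber = unit_number        # e.g. 0
new_disk.capacityInKB = capacity_kb      # e.g. 8388608

backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
backing.diskMode = 'persistent'
backing.thinProvisioned = False
new_disk.backing = backing

disk_spec = vim.vm.device.VirtualDeviceSpec()
disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
disk_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
disk_spec.device = new_disk

spec = vim.vm.ConfigSpec()
spec.deviceChange = [disk_spec]
WaitForTask(vm.ReconfigVM_Task(spec=spec))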
I am trying to create a memory scanner, similar to Cheat Engine, but only to extract information.
I know how to get the PID (in this case for "notepad.exe"), but I have no idea how to work out which specific addresses belong to the program I am scanning.
While looking for examples I saw someone scanning every address from one point to another, but that is too slow. So I tried a batch size (scan a block of memory at a time instead of one address at a time). The problem is that if the size is too small it still takes a long time, and if it is too large it is possible to miss many addresses that belong to the program, because ReadProcessMemory returns False for the first address even though the next one might succeed. Here is my example.
import ctypes as c
from ctypes import wintypes as w
import sys
from sys import stdout
import psutil
import numpy as np

write = stdout.write

def get_client_pid(process_name):
    pid = None
    for proc in psutil.process_iter():
        if proc.name() == process_name:
            pid = int(proc.pid)
            print(f"Found '{process_name}' PID = ", pid, f" hex_value = {hex(pid)}")
            break
    if pid is None:
        print('Program Not found')
    return pid

pid = get_client_pid("notepad.exe")
if pid is None:
    sys.exit()

k32 = c.WinDLL('kernel32', use_last_error=True)

OpenProcess = k32.OpenProcess
OpenProcess.argtypes = [w.DWORD, w.BOOL, w.DWORD]
OpenProcess.restype = w.HANDLE

ReadProcessMemory = k32.ReadProcessMemory
ReadProcessMemory.argtypes = [w.HANDLE, w.LPCVOID, w.LPVOID, c.c_size_t, c.POINTER(c.c_size_t)]
ReadProcessMemory.restype = w.BOOL

GetLastError = k32.GetLastError
GetLastError.argtypes = None
GetLastError.restype = w.DWORD

CloseHandle = k32.CloseHandle
CloseHandle.argtypes = [w.HANDLE]
CloseHandle.restype = w.BOOL

processHandle = OpenProcess(0x10, False, int(pid))  # 0x10 = PROCESS_VM_READ
# addr = 0x0FFFFFFFFFFF
data = c.c_ulonglong()
bytesRead = c.c_ulonglong()
start = 0x000000000000
end = 0x7fffffffffff
batch_size = 2**13
MemoryData = np.zeros(batch_size, 'l')
Size = MemoryData.itemsize * MemoryData.size
index = 0
Data_address = []

for c_address in range(start, end, batch_size):
    result = ReadProcessMemory(processHandle, c.c_void_p(c_address), MemoryData.ctypes.data,
                               Size, c.byref(bytesRead))
    if result:  # save the addresses that could be read
        Data_address.extend(list(range(c_address, c_address + batch_size)))

e = GetLastError()
CloseHandle(processHandle)
I decided to go from 0x000000000000 to 0x7fffffffffff because Cheat Engine scans that range. I am still a beginner at this kind of memory scanning; maybe there are things I can do to improve the efficiency.
I suggest you take advantage of existing python libraries that can analyse Windows 10 memory.
I'm no specialist but I've found Volatility. Seems to be pretty useful for your problem.
For running that tool you need Python 2 (Python 3 won't work).
To run Python 2 and 3 on the same Windows 10 machine, follow this tutorial (the screenshots are in Spanish but it can easily be followed).
Then see this cheat sheet with main commands. You can dump the memory and then operate on the file.
Perhaps this leads you to the solution :) At the very least, the most basic command, pslist, dumps all the running processes' addresses.
psutil has proc.memory_maps(). Pass the result as map to the function below (TargetProcess example: 'Calculator.exe'):
def get_memSize(self, TargetProcess, map):
    memSize = 0
    for m in map:
        if TargetProcess in m.path:
            memSize = m.rss
            break
    return memSize
If you use this function, it returns the memory size of your target process. my_pid is the PID of 'Calculator.exe':
import win32api
import win32process

def getBaseAddressWmi(self, my_pid):
    PROCESS_ALL_ACCESS = 0x1F0FFF
    processHandle = win32api.OpenProcess(PROCESS_ALL_ACCESS, False, my_pid)
    modules = win32process.EnumProcessModules(processHandle)
    processHandle.close()
    base_addr = modules[0]  # for me it worked to select the first item in the list...
    return base_addr
This gets the base address of your program, so your search range is from base_addr to base_addr + memSize.
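Not part of the original answer, but here is a small self-contained sketch of how the two helpers above might be combined, assuming pywin32 and psutil are installed; 'notepad.exe' is just an example target.
# Minimal sketch: find the PID, the main module's base address and the process
# memory size, then print the resulting scan range.
import psutil
import win32api
import win32con
import win32process

def get_pid(process_name):
    for proc in psutil.process_iter():
        if proc.name() == process_name:
            return proc.pid
    return None

def get_mem_size(process_name, maps):
    # rss of the mapping whose path contains the executable name
    for m in maps:
        if process_name in m.path:
            return m.rss
    return 0

def get_base_address(pid):
    handle = win32api.OpenProcess(win32con.PROCESS_ALL_ACCESS, False, pid)
    modules = win32process.EnumProcessModules(handle)
    handle.Close()
    return modules[0]  # first module is the main executable

target = "notepad.exe"
pid = get_pid(target)
if pid is None:
    raise SystemExit("process not found")

base = get_base_address(pid)
size = get_mem_size(target, psutil.Process(pid).memory_maps())
print(f"scan range: {hex(base)} .. {hex(base + size)}")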
I am using Windows 10 and running the code in Jupyter Notebook (in Chrome).
This is my code:
if __name__ == '__main__':
    import itertools
    import os
    import fnmatch
    import datetime
    import numpy as np
    from jdcal import gcal2jd          # assuming gcal2jd comes from the jdcal package
    from pyspark.sql import SparkSession

    MOD03_path = r"C:\Users\saviosebastian\MYD03.A2008001.0000.006.2012066122450.hdf"
    MOD06_path = r"C:\Users\saviosebastian\MYD06_L2.A2008001.0000.006.2013341193524.hdf"
    satellite = 'Aqua'

    yr = [2008]
    mn = [1]  # np.arange(1,13)
    dy = [1]

    # latitude and longitude boundaries of level-3 grid
    lat_bnd = np.arange(-90, 91, 1)
    lon_bnd = np.arange(-180, 180, 1)
    nlat = 180
    nlon = 360

    TOT_pix = np.zeros(nlat*nlon)
    CLD_pix = np.zeros(nlat*nlon)

    ### To use Spark in Python
    spark = SparkSession\
        .builder\
        .appName("Aggregation")\
        .getOrCreate()

    filenames0 = ['']*500
    i = 0
    for y, m, d in itertools.product(yr, mn, dy):
        #-------------find the MODIS products--------------#
        date = datetime.datetime(y, m, d)
        JD01, JD02 = gcal2jd(y, 1, 1)
        JD1, JD2 = gcal2jd(y, m, d)
        JD = int((JD2+JD1) - (JD01+JD02) + 1)
        granule_time = datetime.datetime(y, m, d, 0, 0)
        while granule_time <= datetime.datetime(y, m, d, 23, 55):  # 23,55
            print('granule time:', granule_time)
            MOD03_fp = 'MYD03.A{:04d}{:03d}.{:02d}{:02d}.006.?????????????.hdf'.format(y, JD, granule_time.hour, granule_time.minute)
            MOD06_fp = 'MYD06_L2.A{:04d}{:03d}.{:02d}{:02d}.006.?????????????.hdf'.format(y, JD, granule_time.hour, granule_time.minute)
            MOD03_fn, MOD06_fn = [], []
            for MOD06_flist in os.listdir(MOD06_path):
                if fnmatch.fnmatch(MOD06_flist, MOD06_fp):
                    MOD06_fn = MOD06_flist
            for MOD03_flist in os.listdir(MOD03_path):
                if fnmatch.fnmatch(MOD03_flist, MOD03_fp):
                    MOD03_fn = MOD03_flist
            if MOD03_fn and MOD06_fn:  # if both MOD06 and MOD03 products are in the directory
I am getting the following error:
Do you know any solution to this problem?
I can't give you a specific answer without knowledge of the directory system on your computer, but for now it's obvious that there is something wrong with the name of the directory that you are referencing. Use File Explorer to make sure that the directory actually exists, and also make sure that you haven't misspelled the name of the file, which could easily happen given the filename.
You are giving the full path along with the file name. The os.listdir(path) function in Python returns the list of all files and directories in the specified directory. If no directory is specified, the files and directories in the current working directory are returned.
You can just pass "C:/Users/saviosebastian" as the path.
Same goes for os.chdir("C:/Users/saviosebastian").
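As a small illustration of this fix (a sketch, reusing the directory and pattern from the question): keep the directory separate from the file pattern, and give only the directory to os.listdir.
# Sketch: pass a directory (not a full file path) to os.listdir and match
# the granule filenames against the pattern with fnmatch.
import os
import fnmatch

MOD06_dir = r"C:\Users\saviosebastian"                         # directory that holds the .hdf files
MOD06_fp = "MYD06_L2.A2008001.0000.006.?????????????.hdf"      # pattern from the question

MOD06_fn = [f for f in os.listdir(MOD06_dir) if fnmatch.fnmatch(f, MOD06_fp)]
print(MOD06_fn)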
So, I'm trying to build an SNMP polling service to get some inventory data off my network devices. I'm able to connect to the devices using either the netsnmp or easysnmp modules.
The issue comes along when I try to change which MIBs to use for querying some of the more enterprise-specific stuff like the "chStackUnitTable" of a Force10 network device.
Since I can't say which mib to load prior to querying the device's sysObjectId.0 oid, I have to query the device first, then tell the net-snmp bindings (which both netsnmp and easysnmp rely on) to look in a specific directory, by setting the os.environ['MIBDIRS'] variable.
The problem seems to be that the net-snmp bindings ignore changes to the MIBDIRS environment variable after the first method call using those bindings.
Examples
Not working but the order I want
Example using a Force10 S3048-ON switch:
import os
import netsnmp
mib_dir_root = "/opt/project/var/lib/snmp/mibs"
session_options = {'DestHost': "10.0.0.254", 'Version': 2, 'Community': "public"}
s = netsnmp.Session(**session_options)
vl = netsnmp.VarList(netsnmp.Varbind('sysObjectID', 0))
_r = s.get(vl)
obj_id = vl[0].val
print('{:s}.{:s}: {:s}'.format(vl[0].tag, vl[0].iid, vl[0].val))
# output: "sysObjectID.0: .1.3.6.1.4.1.6027.1.3.23"
# We can now determine which MIB to use to get the interesting stuff (serial number,
# service tag, etc) by inspecting the obj_id. In this case we know we want to query
# the chStackUnitTable of the F10-S-SERIES-CHASSIS-MIB mib.
# Let's add the MIB directory to our MIBDIRS environment variable
mib_dir = os.path.join(mib_dir_root, 'Force10')
os.environ['MIBDIRS'] = "+{:s}".format(mib_dir)
# We also have the annoyance here of having another mib (F10-M-SERIES-CHASSIS-MIB)
# that has the same OID name of 'chStackUnitTable' at a different numeric OID. So we
# need to specify the MIB explicitly
mib = 'F10-S-SERIES-CHASSIS-MIB'
oid = 'chStackUnitTable'
vl = netsnmp.VarList(netsnmp.Varbind('{:s}:{:s}'.format(mib, oid)))
s.walk(vl)
# output:
# MIB search path: /home/username/.snmp/mibs;/usr/share/snmp/mibs
# Cannot find module (F10-S-SERIES-CHASSIS-MIB): At line 1 in (none)
# snmp_build: unknown failure
Working but bad
However, if I add the MIBDIRS environment variable prior to calling netsnmp bindings, it works:
import os
import netsnmp
mib_dir_root = "/opt/project/var/lib/snmp/mibs"
mib_dirs = ['Force10', 'Cisco', 'Dell']
mib_dirs = [os.path.join(mib_dir_root, d) for d in mib_dirs if os.path.isdir(os.path.join(mib_dir_root, d))]
os.environ['MIBDIRS'] = "+{:s}".format(";".join(mib_dirs))
print(os.environ['MIBDIRS'])
# output:
# +/opt/project/var/lib/snmp/mibs/Force10;/opt/project/var/lib/snmp/mibs/Cisco;/opt/project/var/lib/snmp/mibs/Dell;
session_options = {'DestHost': "10.0.0.254", 'Version': 2, 'Community': "public"}
s = netsnmp.Session(**session_options)
vl = netsnmp.VarList(netsnmp.Varbind('sysObjectID', 0))
_r = s.get(vl)
obj_id = vl[0].val
print('{:s}.{:s}: {:s}'.format(vl[0].tag, vl[0].iid, vl[0].val))
# output: "sysObjectID.0: .1.3.6.1.4.1.6027.1.3.23"
mib = 'F10-S-SERIES-CHASSIS-MIB'
oid = 'chStackUnitTable'
vl = netsnmp.VarList(netsnmp.Varbind('{:s}:{:s}'.format(mib, oid)))
_r = s.walk(vl)
cols = ['chStackUnitSerialNumber', 'chStackUnitModelID', 'chStackUnitCodeVersion', 'chStackUnitServiceTag']
for v in vl:
if v.tag in cols:
print('{:s}.{:s}: {:s}'.format(v.tag, v.iid, v.val))
# output:
# chStackUnitModelID.1: S3048-ON-01-FE-52T
# chStackUnitCodeVersion.1: 9.8(0.0P2)
# chStackUnitSerialNumber.1: NA
# chStackUnitServiceTag.1: <REDACTED>
The problem I have with this solution is scalability. I plan on supporting a number of different devices and will require a MIB directory for each manufacturer. This means the MIBDIRS value and the MIB search path will become quite unwieldy, not to mention that the net-snmp bindings will probably flake out at some stage, since they have to search through potentially thousands of MIB files.
Is there a way to clear out the bindings after the first SNMP queries are done, set the MIBDIRS variable, then re-import the netsnmp module? I've tried using reload(netsnmp), but that doesn't seem to work.
Desired code-like-text
Ideally, something like this:
...
sess.get(object_id)
# determine which mib dir to point to
os.environ['MIBDIRS'] = "+" + "path_to_mib_dir"
# magic reloading of netsnmp
sess = netsnmp.Session(**session_options)
varlist = netsnmp.VarList(netsnmp.Varbind(mib + ":" + table_oid))
sess.walk(varlist)
...
# Profit!!!
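One possible workaround, sketched here under the assumption that net-snmp cannot be re-initialised in the same interpreter (this is not from the thread): run each MIB-specific walk in a child process whose environment already contains the right MIBDIRS, so the bindings pick it up on first use. The helper script snmp_walk_worker.py is hypothetical; it would simply perform the Session/VarList/walk shown above and print its output.
# Sketch only; snmp_walk_worker.py is a hypothetical helper that performs the
# Session/VarList/walk from the examples above and prints the result.
import os
import subprocess
import sys

def walk_with_mibdir(host, community, mib, oid, mib_dir):
    env = os.environ.copy()
    env['MIBDIRS'] = "+" + mib_dir   # in place before the child ever imports netsnmp
    cmd = [sys.executable, "snmp_walk_worker.py", host, community, mib, oid]
    return subprocess.check_output(cmd, env=env)

output = walk_with_mibdir("10.0.0.254", "public",
                          "F10-S-SERIES-CHASSIS-MIB", "chStackUnitTable",
                          "/opt/project/var/lib/snmp/mibs/Force10")
print(output)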
I'm trying to generate a PDF from an odt file using Python and the OpenOffice UNO bridge.
It works fine so far; the only problem I'm facing is the export options.
By default, OO uses the existing PDF export settings (the ones used last time, or the defaults the first time), but I need to set these settings manually; for example, "UseTaggedPDF" has to be true.
This is the part of the code where I export the PDF:
try:
    properties = []

    p = PropertyValue()
    p.Name = "FilterName"
    p.Value = "writer_pdf_Export"
    properties.append(p)

    p = PropertyValue()
    p.Name = "UseTaggedPDF"
    p.Value = True
    properties.append(p)

    document.storeToURL(outputUrl, tuple(properties))
finally:
    document.close(True)
The PDF is generated but not tagged. What's wrong with this?
Finally found the solution on http://www.oooforum.org/forum/viewtopic.phtml?t=70949. The trick is that filter-specific options such as "UseTaggedPDF" have to be passed inside a "FilterData" property rather than as top-level store properties:
import uno
from com.sun.star.beans import PropertyValue

try:
    # filter data
    fdata = []

    fdata1 = PropertyValue()
    fdata1.Name = "UseTaggedPDF"
    fdata1.Value = True
    fdata.append(fdata1)

    args = []

    arg1 = PropertyValue()
    arg1.Name = "FilterName"
    arg1.Value = "writer_pdf_Export"

    arg2 = PropertyValue()
    arg2.Name = "FilterData"
    arg2.Value = uno.Any("[]com.sun.star.beans.PropertyValue", tuple(fdata))

    args.append(arg1)
    args.append(arg2)

    document.storeToURL(outputUrl, tuple(args))
finally:
    document.close(True)
I have integrated collective.documentviewer on my Plone website. This is used for viewing PDF and other office files online.
One of the optional add-on products is plone.app.async, which in turn uses zc.async. The installation went well without errors, but when I save a file an error is generated that I can't figure out. Below is the error:
2012-08-29T12:52:03 ERROR collective.documentviewer Error using plone.app.async with collective.documentviewer. Converting pdf without plone.app.async...
Traceback (most recent call last):
File "/home/frank/apps/myplonesite/plone/eggs/collective.documentviewer-2.2a1-py2.7.egg/collective/documentviewer/async.py", line 143, in queueJob
runner = JobRunner(object)
File "/home/frank/apps/myplonesite/plone/eggs/collective.documentviewer-2.2a1-py2.7.egg/collective/documentviewer/async.py", line 50, in __init__
self.queue = self.async.getQueues()['']
File "/home/frank/apps/myplonesite/plone/eggs/plone.app.async-1.2-py2.7.egg/plone/app/async/service.py", line 100, in getQueues
return self._conn.root()[KEY]
File "/home/frank/apps/myplonesite/plone/../../python27/lib/python2.7/UserDict.py", line 23, in __getitem__
raise KeyError(key)
KeyError: 'zc.async'
These are the versions that I am using:
plone.app.async = 1.2
zc.async = 1.5.4
How do I do away with the KeyError issue?
UPDATE: Below is my buildout
[buildout]
newest = false
allow-picked-versions = false
index = http://dist.candid.org/candid
extends =
    versions.cfg
parts =
    lxml
    svneggs
    svnproducts
    zeo
    instance
    worker
    paster
    plonesite
versions = versions
find-links =
    http://dist.candid.org/candid
develop =
    ../src/candid.main
    ../src/ploned.ui
    ../src/z3c.traverser
    ../src/repoze.whooze
    ../src/marginalia
    ../src/ore.alchemist
    ../src/alchemist.ui
    ../src/alchemist.catalyst
    ../src/alchemist.traversal
    ../src/alchemist.security
    ../src/portal.auth
eggs =
    Plone
    Products.PloneHelpCenter
    Products.LinguaPlone
    candid
    alchemist.ui
    alchemist.catalyst
    alchemist.traversal
    alchemist.security
    ploned.ui
    candidcms.plonepas
    candidcms.policy
    candidcms.theme
    psycopg2
    Products.Scrawl
    collective.contacts
    collective.tabr
    candidcms.workspaces
    lotr.repository
    archetypes.multifile
    Products.ATVocabularyManager
    collective.dynatree
    collective.portlet.explore
    z3c.json
    collective.js.jqueryui
    python-cjson
    collective.plonetruegallery
    lotr.templates
    portal.auth
    Products.PloneFormGen
    quintagroup.pfg.captcha
    collective.documentviewer
    five.intid
    plone.app.async
zcml =
    candidcms.plonepas
    candidcms.policy
    candidcms.theme
    candid.portal
    candidcms.workspaces
    archetypes.multifile
    lotr.templates
    collective.contacts
    collective.tabr
    collective.portlet.explore

[instance]
recipe = plone.recipe.zope2instance
user = uadmin:uadmin
eggs =
    ${buildout:eggs}
    Products.CMFPlone
    Paste
    PasteScript
    PasteDeploy
    repoze.tm2
    repoze.retry
    repoze.who
zcml =
    ${buildout:zcml}
zcml-additional =
    <include package="plone.app.async" file="single_db_instance.zcml" />
environment-vars =
    ZC_ASYNC_UUID ${buildout:directory}/var/instance-uuid.txt
products =
    ${svnproducts:location}
# !+XAPIAN PATH(mn, apr-2012) hardcoded path to candid xapian installation
# temporary fix because plone uses the 'candid.portal' package which is in the
# candid.main package. Once the candid.portal package is factored out this entry
# should be removed.
extra-paths =
    ../parts/xapian/lib/python

[lxml]
recipe = z3c.recipe.staticlxml
egg = lxml
force = false
build-libxslt = true
build-libxml2 = true
libxslt-url = http://candid-portal.googlecode.com/files/libxslt-1.1.24.tar.gz
libxml2-url = http://candid-portal.googlecode.com/files/libxml2-2.6.32.tar.gz

[svnproducts]
recipe = infrae.subversion
urls =
    http://candid-portal.googlecode.com/svn/plone.products/CandidHelpCenter/branches/plone4 CandidHelpCenter

[svneggs]
recipe = infrae.subversion
as_eggs = true
urls =
    http://candid-portal.googlecode.com/svn/plone.products/candidcms.plonepas/trunk/ candidcms.plonepas
    http://candid-portal.googlecode.com/svn/plone.products/candidcms.policy/trunk/ candidcms.policy
    http://candid-portal.googlecode.com/svn/plone.products/candidcms.theme/trunk/ candidcms.theme
    http://candid-portal.googlecode.com/svn/plone.products/candidcms.workspaces/trunk/ candidcms.workspaces
    http://lotr.googlecode.com/svn/lab/apps/lotr.repository/ lotr.repository
    http://lotr.googlecode.com/svn/trunk/products/lotr.templates/ lotr.templates

[paster]
recipe = zc.recipe.egg
eggs = ${instance:eggs}
# !+XAPIAN PATH(mn, apr-2012) hardcoded path to candid xapian installation
extra-paths =
    ../parts/xapian/lib/python
scripts = paster

[zeo]
recipe = plone.recipe.zeoserver
file-storage = ${buildout:directory}/var/filestorage/Data.fs
blob-storage = ${buildout:directory}/var/blobstorage
eggs = ${instance:eggs}

[worker]
recipe = plone.recipe.zope2instance
user = ${instance:user}
eggs = ${instance:eggs}
zcml = ${instance:zcml}
zserver-threads = 2
debug-mode = on
verbose-security = on
zeo-client = true
blob-storage = ${zeo:blob-storage}
shared-blob = on
zcml-additional =
    <include package="plone.app.async" file="single_db_worker.zcml" />
environment-vars =
    ZC_ASYNC_UUID ${buildout:directory}/var/worker-uuid.txt

[plonesite]
recipe = collective.recipe.plonesite
site-id = plone
admin-user = uadmin
instance = instance
profiles-initial =
    Products.CMFPlone:dependencies
    Products.CMFPlone:plone-content
    lotr.repository:default
    candidcms.policy:default
    candidcms.theme:default
    collective.dynatree:default
    candidcms.workspaces:default
    lotr.templates:default
    Products.FacultyStaffDirectory:default
    Products.PlonePopoll:default
    Products.PloneFormGen:default
    quintagroup.pfg.captcha:default
    collective.documentviewer:default
products-initial =
    Products.CMFPlone
    archetypes.multifile
    candidHelpCenter
    LinguaPlone
    collective.plonetruegallery
    collective.tabr
    Products.PloneFormGen
    quintagroup.pfg.captcha
This would happen if the queues for plone.app.async have not been set up. plone.app.async & zc.async are (over)complicated and actually do require you to read the README ;)
You should have a look at the instructions provided with plone.app.async on their PyPI page, in particular the buildout configuration.
Unless you include the necessary ZCML (for your "normal" as well as your "worker" instance), your queues will not be set up.
This looks like an issue with collective.documentviewer. I am the author and actually think I fixed this at some point. What version are you running?