Proxmoxer: how to create an LXC container specifying disk size - python

I'm running version 5.2 of Proxmox and 1.0.2 of the proxmoxer Python library, the latest as of today.
So far I haven't managed to create an LXC container with a specific disk size; it always defaults to 4G. I didn't find this option in the Proxmox documentation...
I am using:
node.lxc.create(vmid=204,
                ostemplate='local:vztmpl/jessie-TM-v0.3.tar.gz',
                hostname='helloworld',
                storage='raid0',
                memory=2048,
                swap=2048,
                cores=2,
                password='secret',
                net0='name=eth0,bridge=vmbr0,ip=dhcp')
Adding something like rootfs='raid0:204/vm-204-disk-1.raw,size=500G' disables disk image creation and makes Proxmox look for an already existing image.
Anyway, I don't really know where to go next. Am I supposed to create a disk image beforehand? I didn't find how to do this for LXC; no problems with qemu.
Thanks for any help.

Did you try to create your container via the web interface?
Can you access it?

After some time spent on this, looking at the code of the Proxmox Ansible module helped.
To specify a disk size when creating an LXC container through the Proxmox API, one simply needs to HTTP POST:
rootfs=10
for a 10G root disk. Nothing else is required.
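Put together with the question's original call, the create() parameters might look like the following. This is a sketch based on the answer above, not run against a live cluster; the storage and template names are the ones from the question.

```python
# Parameters for POST /nodes/{node}/lxc, with rootfs giving the root disk
# size in GiB (plain integer, no storage prefix needed per the answer above).
params = dict(
    vmid=204,
    ostemplate='local:vztmpl/jessie-TM-v0.3.tar.gz',
    hostname='helloworld',
    storage='raid0',
    memory=2048,
    swap=2048,
    cores=2,
    password='secret',
    net0='name=eth0,bridge=vmbr0,ip=dhcp',
    rootfs=10,  # 10G root disk
)

# With a connected proxmoxer client this would be:
# node.lxc.create(**params)
```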

Related

Configure Anti-Aliasing parameter in ArcGIS Server service in python

I need to set the anti-aliasing parameters for map services via Python, but I'm not having much luck finding out how to configure this.
I have tried looking into the .sddraft file as mentioned in the documentation: https://desktop.arcgis.com/en/arcmap/10.6/analyze/arcpy-mapping/createmapsddraft.htm
However, when I open the .sddraft file as XML, I cannot find the textAntiAliasingMode or AntiAliasingMode parameter in the file, so I cannot use xml.dom.minidom to update it.
Any ideas?
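For reference, the usual minidom pattern for editing .sddraft service properties is to walk the Key elements and rewrite the sibling Value. The sketch below runs against a made-up XML fragment, not a real .sddraft; the key name is taken from the question, and whether that key exists in a given draft is exactly the open problem here.

```python
import xml.dom.minidom

# Hypothetical, minimal stand-in for an .sddraft fragment; real files nest
# these Key/Value pairs inside larger service property arrays.
xml_text = """<SVCManifest>
  <PropertySetProperty>
    <Key>antialiasingMode</Key>
    <Value>None</Value>
  </PropertySetProperty>
</SVCManifest>"""

dom = xml.dom.minidom.parseString(xml_text)

# Find the matching Key element and rewrite the Value next to it.
for key in dom.getElementsByTagName('Key'):
    if key.firstChild.data == 'antialiasingMode':
        value = key.parentNode.getElementsByTagName('Value')[0]
        value.firstChild.data = 'Fast'

updated = dom.toxml()
```

With a real draft you would parse the file instead of a string and write `updated` back out before publishing.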

Running SyntaxNet with designated instance (in Python-level)

Could you please let me know how I designate which instance to use when training/testing SyntaxNet?
In other tensorflow models we can easily change configurations by editing Python code:
e.g. tf.device('/cpu:0') => tf.device('/gpu:0').
I could run the Parsey McParseface model by running demo.sh, and I followed the symbolic links back to find the device configuration.
Maybe I missed it, but I cannot find any GPU configuration Python code in demo.sh, parser_eval.py, or context.proto.
When I search for 'device' in tensorflow/models, I see several C++ files, such as syntaxnet/syntaxnet/unpack_sparse_features.cc, containing the line using tensorflow::DEVICE_CPU;
So, is changing the C++ code in these files the only way to change the device configuration for SyntaxNet?
I hope there is a simpler way to change the setting in Python level.
Thanks in advance.
You can refer to this page for instructions on running SyntaxNet on GPU: https://github.com/tensorflow/models/issues/248
TensorFlow automatically assigns devices, including GPUs, to ops: https://www.tensorflow.org/versions/r0.11/how_tos/using_gpu/index.html. You can also manually specify the device when building the graph.
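The manual placement mentioned above is just the standard tf.device context manager wrapped around graph construction. A sketch, not SyntaxNet-specific; applying it to SyntaxNet would mean wrapping the graph-building code in parser_eval.py, and '/gpu:0' assumes TensorFlow can see a GPU.

```python
device = '/gpu:0'  # switch to '/cpu:0' to force CPU placement

try:
    import tensorflow as tf
    with tf.device(device):
        # Ops created inside this block are pinned to `device`.
        a = tf.constant([1.0, 2.0])
        b = a * 2.0
except ImportError:
    # TensorFlow not installed here; the placement pattern is the point.
    pass
```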

Bokeh Server Files: load_from_config = False

on Bokeh 0.7.1
I've noticed that when I run the bokeh-server, files named bokeh.data, bokeh.server, and bokeh.sets appear in the directory if I use the default backend, or redis.db if I'm using Redis. I'd like to run my server from a clean start each time, because I've found that if the files exist, over time my performance can be severely impacted.
While looking through the API, I found the option to turn "load_from_config" from True to False. However, tinkering with this didn't seem to resolve the situation (it seems to only control log-in information on 0.7.1?). Is there a good way to resolve this and eliminate the need for me to manually remove these files each time? What is the advantage of having these files in the first place?
These files are to store data and plots persistently on a bokeh-server, for instance if you want to publish a plot so that it will always be available. If you are just using the server locally and always want a "clean slate" you can run with --backend=memory to use the in-memory data store for Bokeh objects.

Is it still possible to have standalone Python elements in GStreamer 1.0?

I have an application written with gst-python for GStreamer 0.10 that I am trying to port to GStreamer 1.0.
In my application, I have some custom elements written in Python (subclasses of gst.BaseSrc and gst.BaseTransform). Each python-element has its own file and is placed in /usr/lib/gstreamer-0.10/python so that gst-launch and gst-inspect can pick them up (which they do).
This is very handy, since it makes it possible for me to experiment with different pipelines directly on the command line.
Now that I am trying to port my application (following this guide: https://wiki.ubuntu.com/Novacut/GStreamer1.0), it looks like even though it is still possible to write Python elements with PyGI, the possibility to store them in separate files and have them integrated in GStreamer is gone.
All the examples I have found talk about placing the elements in the program you are writing and then registering them with a call to Gst.Element.register, but if I did that, my custom elements would only be reachable from that program, and I want them to work standalone (with gst-launch) without having to write my filter chains in a program.
So does anyone know if this is still possible with GStreamer 1.0?
In order to help other people struggling with this, I am now answering this myself.
After some deep research I have found out that standalone Python elements were not possible before gst-python 1.4.0, released on 2014-10-20.
For the release notes take a look here:
http://gstreamer.freedesktop.org/releases/gst-python/1.4.0.html
I don't know if you have the same issue that I had, but in the example from https://wiki.ubuntu.com/Novacut/GStreamer1.0 there is a mistake that was causing an error when I tried to register a new plugin. It uses:
__gstdetails__ = (
    'Dmedia File Source',
    'Source/File',
    'Resolves a dmedia ID to a file path, then acts like a filesrc',
    'Jason Gerard DeRose <jderose#novacut.com>',
)
when it has to be:
__gstmetadata__ = (
    'Dmedia File Source',
    'Source/File',
    'Resolves a dmedia ID to a file path, then acts like a filesrc',
    'Jason Gerard DeRose <jderose#novacut.com>',
)

Getting mount type information in python on OSX

Is there a way in which I can get some information about the mounts I have in the folder /Volumes in OSX?
I want to be able to tell the difference between disk images like dmgs and other types, like hard disks or network mounts.
I tried parsing the output of mount -v and checking whether read-only appears in the line, but I doubt that's a particularly accurate or robust way of telling.
Is there any module or method that will give me this information?
Have a look at the diskutil(8) and hdiutil(1) tools.
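diskutil can emit machine-readable plists (`diskutil info -plist <volume>`), which Python's stdlib plistlib parses directly. A sketch: the function only works on OS X, and the Protocol/WritableVolume keys in the offline sample are assumptions about typical diskutil output, so verify them against a real dump.

```python
import plistlib
import subprocess

def volume_info(volume):
    """Return the parsed `diskutil info -plist` dict for a mount point.

    OS X only; `diskutil` does not exist on other platforms.
    """
    raw = subprocess.check_output(['diskutil', 'info', '-plist', volume])
    return plistlib.loads(raw)

# Offline illustration with a made-up plist snippet shaped like diskutil
# output (key names assumed):
sample = b"""<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
 "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Protocol</key>
    <string>Disk Image</string>
    <key>WritableVolume</key>
    <false/>
</dict>
</plist>"""

info = plistlib.loads(sample)
is_disk_image = info.get('Protocol') == 'Disk Image'
```

On a real machine you would call `volume_info('/Volumes/SomeDisk')` and inspect the returned dict the same way.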
