I am using proxmoxer to manipulate machines on Proxmox (create, delete, etc.).
Every time I create a machine, I provide a description, which is written into the "Notes" section of the Proxmox UI.
I am wondering how I can retrieve that information.
Ideally it could be done with proxmoxer, but if there is no way to do it with that Python module, I would also be satisfied with a plain Proxmox API call.
The description parameter is only a message shown in the Proxmox UI; it is not tied to any function.
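That said, the notes do live in the VM's config, so they can be read back. A minimal sketch with proxmoxer (host, credentials, node name and VM id are placeholders), reading the description field of a QEMU VM's config:

```python
from proxmoxer import ProxmoxAPI  # third-party: pip install proxmoxer

# Placeholder connection details
proxmox = ProxmoxAPI('pve.example.com', user='root@pam',
                     password='secret', verify_ssl=False)

# The "Notes" box in the UI is stored under the 'description' key
# of the VM config (GET /nodes/{node}/qemu/{vmid}/config)
config = proxmox.nodes('pve').qemu(100).config.get()
print(config.get('description', ''))
```

The same data is available from a plain API call to /api2/json/nodes/{node}/qemu/{vmid}/config.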
You could use https://github.com/baseblack/Proxmoxia to get started. I asked this very same question on the forum, as I need to generate some reports from a legacy system with dozens of VMs (and descriptions).
Let me know if you still need this, perhaps we can collaborate on it.
I would like to have a script, run by cron or an Anki background job, that automatically reads in a file (e.g. CSV, TSV) containing all my flashcards and updates the flashcard database in Anki, so that I don't have to manually import my flashcards a thousand times a week.
Does anyone have ideas on how this can be achieved?
Some interesting links I've come across, including from answers, that might provide a lead towards solutions:
https://github.com/langfield/ki
https://github.com/patarapolw/ankisync
https://github.com/towercity/anki-cli
The most robust approach so far is to keep your collection under git and use ki to make Anki behave like a remote repository, so it's very easy to synchronise. The only constraint is the format of your collection: each card is kept as a single file, and there is no real way around this.
I'm the maintainer of ki, one of the tools you linked! I really appreciate the shout-out, @BlackBeans.
It's hard to give you perfect advice without more details about your workflow, but it sounds to me like you've got the source-of-truth for your notes in tabular files, and you import these files into Anki when you've made edits or added new notes.
If this is the case, ki may be what you're looking for. As @BlackBeans mentioned, this tool lets you convert Anki notes into markdown files and, more generally, handles moving your data from your collection to a git repository and back.
Basically, if the reason why you've got stuff in tabular files is (1) because you want to version it, (2) because you want to use an external editor, or (3) because your content is generated programmatically, then you might gain some use from using ki.
Feel free to open an issue describing your use case in detail. I'd love to give you some support in figuring out a good workflow if you think it would be helpful. I am in need of more user feedback at the moment, so you'd be helping me out, too!
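For the simpler end of this spectrum (no ki, no git), a small standard-library script can normalize an arbitrary CSV into the tab-separated text that Anki's File > Import dialog accepts. The column names here are assumptions about your file layout:

```python
import csv
import io

def csv_to_anki_tsv(csv_text, front_col="front", back_col="back"):
    """Convert a CSV with 'front'/'back' columns (assumed names) into
    the tab-separated text Anki's import dialog understands."""
    reader = csv.DictReader(io.StringIO(csv_text))
    lines = []
    for row in reader:
        # Anki treats tabs as field separators, so strip any stray tabs
        front = row[front_col].replace("\t", " ")
        back = row[back_col].replace("\t", " ")
        lines.append(f"{front}\t{back}")
    return "\n".join(lines)

sample = "front,back\nbonjour,hello\nchat,cat\n"
print(csv_to_anki_tsv(sample))
```

You would still trigger the import manually (or script Anki via an add-on), but it removes the per-file cleanup work.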
I've built some report tools using Pywikibot. As things grow, it now takes up to 2 hours to finish the reports, so I'm looking to speed things up. Main ideas:
Disable throttling, the script is read-only, so page.get(throttle=False) handles this
Cache
Direct database access
Unfortunately I can't find much documentation about caching and DB access. The only way seems to be diving into the code, and there is limited information about database access in user-config.py. If there is any, where can I find good documentation about Pywikibot caching and direct DB access?
And, are there other ways to speed things up?
Use PreloadingGenerator so that pages are loaded in batches, or MySQLPageGenerator if you use direct DB access.
See examples here.
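A minimal sketch of the batched approach, assuming Pywikibot is configured for your wiki (the site and page titles are placeholders):

```python
import pywikibot
from pywikibot import pagegenerators

site = pywikibot.Site('en', 'wikipedia')
titles = ['Python (programming language)', 'Wiki']
pages = (pywikibot.Page(site, t) for t in titles)

# PreloadingGenerator fetches page text in batches instead of one
# HTTP round-trip per page, which is where most of the time goes
for page in pagegenerators.PreloadingGenerator(pages, groupsize=50):
    print(page.title(), len(page.text))
```

The groupsize argument controls how many pages are requested per API call.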
I'm using "-pt:1" option in the command to make one edit per second.
I'm currently running the command
python pwb.py category add -pt:1 -file:WX350.txt -to:"Taken with Sony DSC-WX350"
https://www.mediawiki.org/wiki/Manual:Pywikibot/Global_Options
It looks like pagegenerators is indeed a good way to speed things up. The best documentation for that is directly in the source.
Even there, it's not immediately clear where to put the MySQL connection details. (I will hopefully update this.)
Disable throttling, the script is read-only, so page.get(throttle=False) handles this
"throttle" parameter of Page.get() is not supported since Pywikibot 2.0 (formerly known as rewrite) and was removed in 5.0.0. Pywikibot 2.0+ has not activated a get throttle by default. Decreasing putthrottle is only for putting a page to the wiki and may be restricted by local policies. Never touch maxlag parameter which is server related.
If you are using multiple sites the first run needs a lot of time until all site objects are cached. PreloadingGenerator can be be used for bulk load of page contents but decreases speed if meta data are required only. In summary speeding up your script depends on you implementation and your need.
Using PreloadingGenerator from pagegenerators is the simplest way to speed up programs that need to read a lot from online wikis, as other answers have already pointed out.
Alternative ways are:
Download a dump of the wiki and read it locally. Wikimedia projects offer dumps updated about once a week.
Create an account on Wikimedia Labs and work from there, enjoying a faster connection to the Wikipedia servers and up-to-date dumps.
Modifying throttle might put you in danger of getting blocked if the target wiki has a policy against it - and I'm afraid Wikipedia has such a policy.
You can download all the data in advance as a dump file from this site:
http://dumps.wikimedia.org
You can then use two passes: the first pass reads the data from the local dump,
and the second pass reads only the remote pages for which the first pass found issues in the local dump.
Example:
import pywikibot
from pywikibot import pagegenerators
from pywikibot.xmlreader import XmlDump

dump_file = 'hewiktionary-latest-pages-articles.xml.bz2'

# First pass: scan the local dump and keep only the titles with problems
all_wiktionary = XmlDump(dump_file).parse()
gen = (pywikibot.Page(site, p.title) for p in all_wiktionary if report_problem(p))

# Second pass: fetch only those pages from the live wiki, in batches
gen = pagegenerators.PreloadingGenerator(gen)
for page in gen:
    report_problem(page)
I know that it is possible to create a Python script that sends data to the Bug Tracking System to create a new ticket.
However, the problem on my side is that some fields in the ticket are mandatory at creation time, and each of these fields has several options to choose from. These values should be chosen by the user. The problem is that with the script from the tutorial on the official Klocwork pages, I can't choose a specific option for a field.
Is there a way to create the ticket with Python scripts in multiple steps (retrieving values for the fields, letting the user choose options, and only then creating the ticket itself), instead of a single button click that does all the work in one step?
Thank you a lot,
Jakub
I work in Klocwork Support and I answered a similar question on the Klocwork support forums as well, which may have also been from you.
The integration method uses a Python script, run on the Klocwork server side, to push the issue to the Bug Tracking system when the user clicks a button. Currently, there is no way to display additional dialogs or UI to the user when they push an issue to the Bug Tracker.
One possible workaround is to have the user specify this information in a comment on the defect, which can then be read by the Python script and used when submitting the issue to the Bug Tracking system. issue.history is an array of StatusHistoryEvent objects that represent each citing status change and/or comment, so you can easily parse the comments by looping through the events:
for event in issue.history:
    text = event.comment
    # parse out the values depending on how you saved them in the comment
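One hedged sketch of that parsing step: if you ask users to write "field=value" lines in the defect comment (the field names below are made up for illustration), a small helper can turn the comment into a dict to feed the ticket creation call:

```python
def parse_comment_fields(comment):
    """Extract 'key=value' pairs, one per line, from a comment string.
    Lines without '=' are ignored."""
    fields = {}
    for line in comment.splitlines():
        if '=' in line:
            key, _, value = line.partition('=')
            fields[key.strip()] = value.strip()
    return fields

# e.g. a user wrote these two lines in the defect comment
print(parse_comment_fields("severity=major\ncomponent=parser"))
```

The script can then fall back to defaults for any mandatory field the user did not mention.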
I'm trying to use autoscaling to create new EC2 instances whenever average CPU load on existing instances goes high. Here's the situation:
I'm setting up autoscaling using this boto script (with keys and image names removed). http://balti.ukcod.org.uk/~francis/tmp/start_scaling_ptdaemon.nokeys.py
I've got min_size set to 2, and the AutoScalingGroup correctly creates an initial 2 instances, which both work fine. I'm pretty sure this means the LaunchConfiguration is right.
When load goes up to nearly 100% on both those two instances, nothing happens.
Some questions / thoughts:
Is there any way of debugging this? I can't find any API calls that give me details of what Autoscaling is doing, or thinks it is doing. Are there any tools that give feedback either on what it is doing, or on whether it has been set up correctly?
It would be awesome if Autoscaling appeared in the AWS Console.
I'm using EU west availability zone. Is there any reason that should cause trouble with Autoscaling?
Is there any documentation of the "dimensions" parameter when creating a trigger? I have no idea what it means, and have just copied its fields from an example. I can't find any documentation about it that doesn't self-referentially say it is a "dimension", without explaining what that means or what the possible values are.
Thanks for any help!
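On the "dimensions" question: in CloudWatch terms, dimensions scope a metric to a particular resource, so for scaling on group-wide CPU you use the AutoScalingGroupName dimension. The linked script uses the old boto library, but with today's boto3 the trigger part would look roughly like this (group name, policy name, region and thresholds are placeholders):

```python
import boto3  # third-party AWS SDK

autoscaling = boto3.client('autoscaling', region_name='eu-west-1')
cloudwatch = boto3.client('cloudwatch', region_name='eu-west-1')

# A policy that adds one instance when triggered
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName='my-asg',        # placeholder group name
    PolicyName='scale-up-on-cpu',
    AdjustmentType='ChangeInCapacity',
    ScalingAdjustment=1,
    Cooldown=300,
)

# The Dimensions entry ties the CPUUtilization metric to the group
cloudwatch.put_metric_alarm(
    AlarmName='my-asg-high-cpu',
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'AutoScalingGroupName', 'Value': 'my-asg'}],
    Statistic='Average',
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=[policy['PolicyARN']],
)
```

For debugging, describe_scaling_activities on the autoscaling client shows what the group has actually done and why.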
I'm sure you've already found these, but it would be good to use the AWS tools first, before the Python tool, to get the idea. :)
http://ec2-downloads.s3.amazonaws.com/AutoScaling-2009-05-15.zip
http://docs.amazonwebservices.com/AutoScaling/latest/DeveloperGuide/
Cheers,
Rodney
Also, take a look at something like http://alestic.com/2011/11/ec2-schedule-instance for a simple example of how to use the tools with a demo script provided.
An artistic project will encourage users to ring a number and leave a voice-mail on an automated service. These voice-mails will be collected and edited into a half-hour radio show.
I want to make a temporary system (with as little as possible programming) which will:
Allow me to establish a public telephone number (preferably in the UK)
Allow members of the public to call in and receive a short pre-recorded message
Allow them to leave a message of their own after the beep.
At the end of the project I'd like to be able to download and convert the recorded audio into a format that I can edit with a free audio-editor.
I do not mind paying to use a service if it means I can get away with doing less programming work. Also it's got to be reliable because once recorded it will be impossible to re-record the audio clips. Once set up the whole thing will run for at most 2 weeks.
I'm a python programmer with some basic familiarity with VOIP, however I'd prefer not to set up a big complex system like Asterisk since I do not ever intend to use the system again once the project is over. Whatever I do has to be really simple and disposable. Also I have access to Linux and FreeBSD systems (no Windows, sorry).
Thanks!
I use twilio, very easy, very fun.
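For a sense of how little programming that implies: you buy a number, point its voice webhook at a URL that returns TwiML, and Twilio plays the greeting, beeps, and records. A hedged sketch of the TwiML (the greeting text and maxLength are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Say>Hello! Please leave your message for the radio show after the beep.</Say>
  <Record maxLength="120" playBeep="true"/>
</Response>
```

Recordings can later be downloaded from Twilio as WAV or MP3, which covers the "edit with a free audio-editor" requirement.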
Skype has a voicemail feature which sounds perfect for this and I suppose you would need a SkypeIn number as well
You may want to check out asterisk. I don't think it will become any easier than using an existing system.
Maybe you can find someone in the asterisk community to help set up such a system.
Take a look at Sipgate.co.uk; they provide a free UK dial-in number and free incoming calls. Not so relevant for you, but they also have a Python API.
They are a SIP provider, and there are many libraries for SIP in Python (e.g. http://trac.pjsip.org/repos/wiki/Python_SIP_Tutorial ), so you could set up a Python application to log in to your Sipgate account, pick up incoming calls and dump the sound to a WAV/MP3 or whatever.