Running SyntaxNet with a designated instance (at the Python level) - python

Could you please let me know how I designate which instance to use when training/testing SyntaxNet?
In other TensorFlow models we can easily change the configuration by editing Python code:
e.g. tf.device('/cpu:0') => tf.device('/gpu:0').
I could run the Parsey McParseface model by running demo.sh, and I followed the symbolic links back to find the device configuration.
Maybe I missed it, but I cannot find any GPU configuration Python code in demo.sh, parser_eval.py, or context.proto.
When I search for 'device' in tensorflow/models, I can see that several C++ files, such as syntaxnet/syntaxnet/unpack_sparse_features.cc, contain the line using tensorflow::DEVICE_CPU;
So... is changing the C++ code in these files the only way to change the device configuration for SyntaxNet?
I hope there is a simpler way to change the setting at the Python level.
Thanks in advance.

You can refer to this page for instructions on running SyntaxNet on a GPU: https://github.com/tensorflow/models/issues/248
TensorFlow automatically assigns devices (including GPUs) to ops: https://www.tensorflow.org/versions/r0.11/how_tos/using_gpu/index.html. You can also manually specify the device when building the graph.
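For example, here is a minimal sketch of manual device placement using the TF1-era graph API (the ops below are illustrative and not taken from SyntaxNet; since SyntaxNet's custom ops are registered for CPU only, soft placement keeps the graph runnable):

import tensorflow as tf

# Pin these ops to the first GPU; use '/cpu:0' to force CPU placement instead.
with tf.device('/gpu:0'):
    a = tf.constant([[1.0, 2.0]])
    b = tf.constant([[3.0], [4.0]])
    c = tf.matmul(a, b)

# allow_soft_placement falls back to CPU for ops without a GPU kernel;
# log_device_placement prints where each op actually ran.
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(c))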


Getting compiler error when trying to verify a contract importing from @uniswap/v3-periphery

I'm trying to perform a simple Swap from DAI to WETH with Uniswap in my own SmartContract on the Kovan Testnet. Unfortunately my transaction keeps getting reverted even after setting the gas limit manually.
I also discovered that I cannot verify the contract on Kovan via the Etherscan API or manually. Instead I keep getting this error for every library I import:
Source "@uniswap/v3-periphery/contracts/interfaces/ISwapRouter.sol" not found: File import callback not supported
Accordingly, I have the feeling something is going wrong during compilation, and I'm stuck without any further ideas for working out my problem.
Here are a couple of notes on what I've tried so far and how to reproduce:
Brownie Version 1.16.4, Tested on Windows 10 and Ubuntu 21.04
I've tried:
Importing libraries with Brownie package manager
Importing libraries with npm and using relative paths
All kinds of different compiler remappings in the brownie-config.yaml
Adding all dependency files to project folders manually
Here's a link to my code for reproducing my error:
https://github.com/MjCage/swap-demo
It'd be fantastic if someone could help.
It's very unlikely that something is "going wrong during compilation". If your contract compiles but what it does does not match the sources, you have found a very serious codegen bug in the compiler and you should report it so that it can be fixed quickly. From experience I'd say that it's much more likely that you have a bug in your contract though.
As for the error during verification - the problem is that to properly compile a multi-file project, you have to provide all the source files and have them in the right directories. This applies to library code as well, so if your contract imports ISwapRouter.sol, you need to also submit that file and all the files it in turn imports.
The next hurdle is that, as far as I can tell, the multi-file verification option at Etherscan only allows you to submit files from a single directory, so it only gets their names, not the whole paths (not sure if it's different via the API). You need Etherscan to see the file as @uniswap/v3-periphery/contracts/interfaces/ISwapRouter.sol but it sees just ISwapRouter.sol instead, and the compiler will not treat them as the same (both could exist after all).
The right solution is to use the Standard JSON verification option - this way you submit the whole JSON input that your framework passes to the compiler and that includes all files in the project (including libraries) and relevant compiler options. The issue is that Brownie does not give you this input directly. You might be able to recreate it from the JSON it stores on disk (Standard JSON input format is documented at Compiler Input and Output JSON Description) but that's a bit of manual work. Unfortunately Brownie does not provide any way to request this on the command line. The only other way to get it that I know of is to use Brownie's API and call compiler.generate_input_json().
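A rough sketch of that API route, assuming a standard Brownie project layout (the contracts/ glob and the output file name are placeholders, and the exact generate_input_json() signature may differ between Brownie versions, so check your version before relying on it):

import json
from pathlib import Path
from brownie.project import compiler

# Collect the project's Solidity sources as {path: source} pairs; for a
# complete input you would also need the library files your contract imports.
sources = {str(p): p.read_text() for p in Path("contracts").rglob("*.sol")}

# Build the solc Standard JSON input that Brownie would pass to the compiler.
input_json = compiler.generate_input_json(sources)

# Save it so it can be submitted via Etherscan's Standard JSON verification.
Path("standard-input.json").write_text(json.dumps(input_json, indent=2))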
Since this is a simple project with just one contract and does not have deep dependencies, it might be easier for you to follow @Jacopo Mosconi's answer and just "flatten" the contract by replacing all imports with the sources pasted directly into the main contract. You might also try copying the file to your project dir and altering the import so that it only contains the file name, without any path component - this might pass the multi-file verification. Flattening is ultimately how Brownie and many other frameworks currently do verification, and Etherscan's check is lax enough to allow sources modified in such a way - it only checks bytecode, so you can still verify even if you completely change the import structure, names, comments or even any code that gets removed by the optimizer.
The compiler can't find ISwapRouter.sol.
You can add the code of ISwapRouter.sol directly to your swap.sol and delete that import line from your code; this is the code: https://github.com/Uniswap/v3-periphery/blob/main/contracts/interfaces/ISwapRouter.sol

How can one download the outputs of historical Azure ML experiment Runs via the Python API

I'm trying to write a script which can download the outputs from an Azure ML experiment Run after the fact.
Essentially, I want to know how I can get a Run by its runId property (or some other identifier).
I am aware that I have access to the Run object when I create it for the purposes of training. What I want is a way to recreate this Run object later in a separate script, possibly from a completely different environment.
What I've found so far is a way to get a list of ScriptRun objects from an experiment via the get_runs() function. But I don't see a way to use one of these ScriptRun objects to create a Run object representing the original Run and allowing me to download the outputs.
Any help appreciated.
I agree that this could probably be better documented, but fortunately, it's a simple implementation.
This is how you get a Run object for an already-submitted run with azureml-sdk>=1.16.0 (for the older approach, see my answer here):
from azureml.core import Workspace

ws = Workspace.from_config()      # connect to the workspace described in config.json
run = ws.get_run('YOUR_RUN_ID')   # fetch the run by its runId
once you have the run object, you can call methods like
.get_file_names() to see what files are available (the logs in azureml-logs/ and logs/azureml/ will also be listed)
.download_file() to download an individual file
.download_files() to download all files that match a given prefix (or all the files)
See the Run object docs for more details.
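For example, a short sketch of pulling a past run's outputs back down (the run ID, prefix, and file names below are placeholders):

from azureml.core import Workspace

ws = Workspace.from_config()
run = ws.get_run('YOUR_RUN_ID')

print(run.get_file_names())                    # inspect what the run stored
run.download_files(prefix='outputs/',          # everything under outputs/
                   output_directory='./downloaded_outputs')
run.download_file('outputs/model.pkl',         # or a single file by name
                  output_file_path='./model.pkl')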

Couldn't open file yolov3_custom_last.weights when trying to run darknet detection

I've been trying to use YOLO (v3) to implement and train a tank object detector on the OpenImages dataset.
I have tried to get help from this tutorial, and my code looks pretty much like it.
Also I'm using Google Colab and Google Drive services.
Everything goes fine throughout my program, but I hit an error at the final step when running darknet to train the detector.
!./darknet detector train "data/obj.data" cfg/yolov3_custom.cfg "darknet53.conv.74" -dont_show
After 100 iterations, when it tries to save the progress to the backup folder I've specified in the obj.data file, I get the following error:
Saving weights to /content/drive/My\Drive/YOLOv3/backup/yolov3_custom_last.weights
Couldn't open file: /content/drive/My\Drive/YOLOv3/backup/yolov3_custom_last.weights
At first, I thought I had made a mistake with the path, so I checked it using the ls command:
!ls /content/drive/My\Drive/YOLOv3/backup/
and the result was an empty folder (however, not an error, meaning I've written the path correctly and that it is accessible in my Google Drive).
Here are the contents of the obj.data file:
classes = 1
train = data/train.txt
valid = data/test.txt
names = data/obj.names
backup = /content/drive/My\ Drive/YOLOv3/backup
I've also made the required changes in the config file, so I don't think the problem is there. But just to make sure, here are the changes I've made in my yolov3.cfg file (a snippet with the resulting values follows this list):
First of all, we comment out lines 3 and 4 (batch, subdivisions) to unset testing mode
We uncomment lines 6 and 7 (batch, subdivisions) to set training mode
We change our max_batches value to 2000 * number_of_classes (if there's one class, like in our case, set it to 4000)
We change our steps tuple-like values to 80% and 90% of our max_batches value. In this case that is 3200, 3600.
For all YOLO layers and the convolutional layers before them, change the classes value to the number of classes (in this case 1) and change the value of filters according to the following formula (in this case, 18)
Formula for the conv layers' filters value: (number_of_classes + 5)*3
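For one class, the relevant keys in each [yolo] section and the [convolutional] section just above it would end up roughly as follows (only the changed keys are shown; the rest of each section is omitted):

[convolutional]
# filters = (number_of_classes + 5) * 3 = 18 for one class
filters=18

[yolo]
classes=1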
I searched the error and found this issue on Github.
However, I tried the following methods recommended there and the problem is still the same:
Removing and recreating the backup folder
Tried adding the line backup = backup to the yolo.data file in the cfg folder, but there was no such file in the cfg folder.
Creating an empty yolov3_custom_last.weights in backup folder
The other solutions mentioned in this issue were about running YOLO on your own PC rather than on Google Colab.
Also, here is the tree structure of the YOLOv3 folder, which is stored in My Drive (the main folder) of my Google Drive.
YOLOv3
darknet53.conv.74
obj.data
obj.names
Tank.zip
yolov3.weights
yolov3_custom.cfg
yolov3_custom1.cfg.txt
So, I'm kinda stuck and I have no idea what could fix this. I'd appreciate some help.
I have solved the problem by remapping the drive path with the ln command.
The problem wasn't in my code but rather in the way the YOLOv3/darknet developers handle spaces in directory paths! As far as I could figure out from their docs, spaces in paths are not handled well.
So I created a virtual path that does not contain a space, unlike My Drive.
P.S.: As you know, the My Drive folder is already there in your Google Drive, so you can't actually rename it.
Here is the code you can use to achieve this:
!ln -s /content/drive/My\ Drive/ /mydrive
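With that symlink in place, the backup line in obj.data can point at the space-free path instead (assuming the same folder layout as above):

backup = /mydrive/YOLOv3/backup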
I was getting the same error on Windows 10, and I think I managed to fix it. All I had to do was move my "weights" folder (with my yolov3.weights inside) closer to the .exe file, which was in the "x64" folder in my case. After that, the error stopped appearing and the app was able to predict the test image normally.

Proxmoxer: how to create an LXC container specifying disk size

I'm running version 5.2 of Proxmox and 1.0.2 of the proxmoxer Python library, the latest as of today.
So far I haven't managed to create an LXC container with a specified disk size; it always defaults to 4G. I didn't find this option in the Proxmox documentation...
I am using:
node.lxc.create(vmid=204,
                ostemplate='local:vztmpl/jessie-TM-v0.3.tar.gz',
                hostname='helloworld',
                storage='raid0',
                memory=2048,
                swap=2048,
                cores=2,
                password='secret',
                net0='name=eth0,bridge=vmbr0,ip=dhcp')
Adding something like rootfs='raid0:204/vm-204-disk-1.raw,size=500G' disables disk image creation and makes it look for an already existing image.
Anyway, I don't really know where to go next. Am I supposed to create a disk image beforehand? I didn't find how to do this for LXC. No problems with QEMU.
Thanks for any help.
Did you try to create your container via the web interface?
Can you access it?
After some time spent on this, looking at the code of the Proxmox Ansible module helped.
So, to specify a disk size when creating an LXC container using the Proxmox API, one simply needs to HTTP POST:
rootfs=10
for a 10G root disk, without anything else.
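Applied to the proxmoxer call from the question, that amounts to passing rootfs as a plain size (a sketch reusing the question's own parameters; verify the rootfs syntax against your Proxmox version, since newer releases prefer the storage:size form):

node.lxc.create(vmid=204,
                ostemplate='local:vztmpl/jessie-TM-v0.3.tar.gz',
                hostname='helloworld',
                storage='raid0',
                rootfs=10,   # 10G root disk instead of the 4G default
                memory=2048,
                swap=2048,
                cores=2,
                password='secret',
                net0='name=eth0,bridge=vmbr0,ip=dhcp')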

Automate multiple dependent Python programs

I have multiple Python scripts. Each script depends on another, i.e. the first script uses the output of the second, the second uses the output of the third, and so on. Is there any way I can link up the scripts so that I can automate the whole process? I came across the Talend Data Integration tool but I can't figure out how to use it. Any reference or help would be highly useful.
You did not state what operating system/platform you are using, but the problem seems like a good fit for make.
You specify dependencies between files in your Makefile, along with rules on how to generate one file from the others.
Example:
# file-1 depends on input-file, and is generated via f1-from-input.py
# (note: recipe lines below must be indented with a tab, not spaces)
file-1: input-file
	f1-from-input.py --input input-file --output file-1

# file-2 depends on file-1, and is generated via f2-from-f1.py
file-2: file-1
	f2-from-f1.py < file-1 > file-2

# And so on
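With those rules saved in a Makefile, asking for the last file rebuilds only the out-of-date steps (a small usage example, assuming the file names above):

make file-2     # runs f1-from-input.py and then f2-from-f1.py only where outputs are stale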
For documentation, check out the GNU Make Manual, or one of the million tutorials on the internet.
I found this link; it shows how to call a Python script from Talend and use its output (not sure if it waits for the script to finish).
The main concept is to run the Python script from Talend Studio by using the tSystem component.
