I'm collecting data from the iPad's accelerometer in only one direction, and it comes out quite noisy. I have looked around for noise-reducing filters, but haven't found one that I understood (case in point, the Kalman filter). I guess I have two questions: is there actually significant noise associated with the accelerometer, as there appears to be, and if so, how can I reduce it? Even a link to a noise filter with an explanation would be very welcome.
My app itself is written in Swift, and my data analysis is written in Python, if that matters.
I've used some simple easing that smooths out any spikes in the values. It adds a bit of latency, but you can tune the balance of latency vs. smoothness to suit your application by adjusting the easing property.
import UIKit
import CoreMotion

class MyViewController: UIViewController {
    var displayLink: CADisplayLink?
    let motionQueue = NSOperationQueue()
    // Keep the motion manager as a property: a local would be deallocated
    // when viewDidLoad returns and the updates would stop.
    let coreMotionManager = CMMotionManager()
    var acceleration = CMAcceleration()
    var smoothAcceleration = CMAcceleration() {
        didSet {
            // Update whatever needs acceleration data
        }
    }
    var easing: Double = 10.0

    override func viewDidLoad() {
        super.viewDidLoad()

        self.displayLink = CADisplayLink(target: self, selector: "updateDisplay:")
        self.displayLink?.addToRunLoop(NSRunLoop.currentRunLoop(), forMode: NSDefaultRunLoopMode)

        self.coreMotionManager.startAccelerometerUpdatesToQueue(self.motionQueue) { (data: CMAccelerometerData!, error: NSError!) in
            self.acceleration = data.acceleration
        }
    }

    func updateDisplay(displayLink: CADisplayLink) {
        // Move a fraction (1/easing) of the way toward the latest raw sample
        // each frame: a simple exponential low-pass filter.
        var newAcceleration = self.smoothAcceleration
        newAcceleration.x += (self.acceleration.x - self.smoothAcceleration.x) / self.easing
        newAcceleration.y += (self.acceleration.y - self.smoothAcceleration.y) / self.easing
        newAcceleration.z += (self.acceleration.z - self.smoothAcceleration.z) / self.easing
        self.smoothAcceleration = newAcceleration
    }
}
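And to answer the first question: yes, the raw accelerometer signal really is noisy, so some filtering is expected. Since your data analysis is in Python, here is the same exponential smoothing applied offline — a minimal sketch, assuming your samples are already in a plain list (easing=10 mirrors the Swift property above):
import random

def smooth(samples, easing=10.0):
    # One-pole low-pass / exponential moving average, same idea as the Swift code:
    # each step moves 1/easing of the way toward the latest raw sample.
    smoothed = []
    current = samples[0] if samples else 0.0
    for sample in samples:
        current += (sample - current) / easing
        smoothed.append(current)
    return smoothed

# Example: a noisy constant signal settles near 1.0.
raw = [1.0 + random.gauss(0.0, 0.05) for _ in range(200)]
print(smooth(raw)[-1])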
Introduction:
For educational purposes I developed a Java class that lets students load TensorFlow models in the TensorFlow SavedModel format and use them for classification in Java. For example, they can create a model online with Google's Teachable Machine, download it, and use that model right in Java. This also works with many image classification models on tfhub.dev. I tried to use the new but not well documented Java API rather than the deprecated old libtensorflow API (if I understood everything correctly). As I use BlueJ, everything is based on pure Java code, linking the required libraries directly in BlueJ's preferences after downloading them. The documentation in the Java code shows where to download the libraries.
Note: I know that the normal way today is to use Gradle, Maven, or similar, but the students do not work with these tools. Another note: in the following I use only a few code excerpts in order to keep this a minimal example.
Problem:
The results of all my loaded models in Java are OK, but not as good as in Python or in the online demonstrations linked on the TensorFlow website (mostly Jupyter notebooks). So there seems to be one step wrong in my code.
As a representative test I will now compare the results of the MoveNet model when using Python and Java. The MoveNet model "Thunder" detects 17 keypoints of a body in an image with 256x256 pixels. I use exactly the same image (the same file, without touching or resizing it) in both setups (I uploaded it to my webspace; this step was done when updating this text, but there are no differences in the results).
Python:
The MoveNet Model comes with a nice online Python demo in a Jupyter notebook:
https://www.tensorflow.org/hub/tutorials/movenet
The code can be found here (note: I linked to the same image as in my Java project by uploading it to my webspace), and the classification result of the image looks like this:
Java:
My Java-based approach ends up in an image like this:
I think that this is not bad, but it isn't perfect. With other models, e.g. Google's imagenet_mobilenet model, I get similar results that are OK, but I suppose they are always a bit better when running the online demos in Jupyter notebooks. I do not have more evidence, only a feeling. In some cases the same image from the online demo is recognized as a different class, but not always. I might provide more data on that later.
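For comparison, here is a condensed sketch of what the Python notebook does. The model URL, the "serving_default" signature, and the "output_0" key are taken from the TF Hub MoveNet tutorial, so treat them as assumptions if your setup differs:
import tensorflow as tf
import tensorflow_hub as hub

model = hub.load("https://tfhub.dev/google/movenet/singlepose/thunder/4")
movenet = model.signatures["serving_default"]

# Prepare the image as the tutorial does: [1, height, width, 3], int32.
image = tf.io.read_file("test.jpeg")
image = tf.image.decode_jpeg(image)
input_image = tf.expand_dims(image, axis=0)
input_image = tf.image.resize_with_pad(input_image, 256, 256)
input_image = tf.cast(input_image, dtype=tf.int32)

outputs = movenet(input_image)
# Output shape is [1, 1, 17, 3]: per keypoint, y comes first, then x, then the score.
keypoints = outputs["output_0"].numpy()[0, 0]
for y, x, score in keypoints:
    print("y=%.1f x=%.1f score=%.2f" % (y * 256, x * 256, score))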
Assumptions and work done so far:
There might be an error in the data structures or in the algorithms operating on them in my Java code. I have searched the web for some weeks now, but I am unsure whether my code is really correct, mainly because there are too few examples out there. E.g., I tried changing the order of RGB and the way it is calculated in the method that converts an image into an NdArray, but I saw no significant changes. Maybe the error is somewhere else, or perhaps it is just the way it is. If my code is correct as written, that is fine with me too, but I am still wondering why there are differences. Thanks for any answers!
Code:
Here is a fully working example with two classes (I know the frame with the panel drawing is ugly; I coded it quickly just for this example):
/**
* 1. TensorFlow Core API Library: org.tensorflow -> tensorflow-core-api
* https://mvnrepository.com/artifact/org.tensorflow/tensorflow-core-api
* -> tensorflow-core-api-0.4.0.jar
*
* 2. additionally click "View All" and open:
* https://repo1.maven.org/maven2/org/tensorflow/tensorflow-core-api/0.4.0/
* Download the correct native library for your OS
* -> tensorflow-core-api-0.4.0-macosx-x86_64.jar
* -> tensorflow-core-api-0.4.0-windows-x86_64.jar
* -> tensorflow-core-api-0.4.0-linux-x86_64.jar
*
* 3. TensorFlow Framework Library: org.tensorflow -> tensorflow-framework
* https://mvnrepository.com/artifact/org.tensorflow/tensorflow-framework/0.4.0
* -> tensorflow-framework-0.4.0.jar
*
* 4. Protocol Buffers [Core]: com.google.protobuf -> protobuf-java
* https://mvnrepository.com/artifact/com.google.protobuf/protobuf-java
* -> protobuf-java-4.0.0-rc-2.jar
*
* 5. JavaCPP: org.bytedeco -> javacpp
* https://mvnrepository.com/artifact/org.bytedeco/javacpp
* -> javacpp-1.5.7.jar
*
* 6. TensorFlow NdArray Library: org.tensorflow -> ndarray
* https://mvnrepository.com/artifact/org.tensorflow/ndarray
* -> ndarray-0.3.3.jar
*/
import org.tensorflow.SavedModelBundle;
import org.tensorflow.Tensor;
import org.tensorflow.ndarray.IntNdArray;
import org.tensorflow.ndarray.NdArrays;
import org.tensorflow.ndarray.Shape;
import org.tensorflow.types.TInt32;
import java.util.HashMap;
import java.util.Map;
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;
import java.awt.Color;
import java.io.File;
import javax.swing.JFrame;
import javax.swing.JButton;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.awt.BorderLayout;
public class MoveNetDemo {
    private SavedModelBundle model;
    private String inputLayerName;
    private String outputLayerName;
    private String keyName;
    private BufferedImage image;
    private float[][] output;
    private int width;
    private int height;

    public MoveNetDemo(String pFoldername, int pImageWidth, int pImageHeight) {
        width = pImageWidth;
        height = pImageHeight;
        model = SavedModelBundle.load(pFoldername, "serve");
        // Read input and output layer names from the model signature
        inputLayerName = model.signatures().get(0).getInputs().keySet().toString();
        outputLayerName = model.signatures().get(0).getOutputs().keySet().toString();
        inputLayerName = inputLayerName.substring(1, inputLayerName.length() - 1);
        outputLayerName = outputLayerName.substring(1, outputLayerName.length() - 1);
        keyName = model.signatures().get(0).key();
    }

    // not necessary here
    public String getModelInformation() {
        String infos = "";
        for (int i = 0; i < model.signatures().size(); i++) {
            infos += model.signatures().get(i).toString();
        }
        return infos;
    }

    public void setData(String pFilename) {
        image = null;
        try {
            image = ImageIO.read(new File(pFilename));
        }
        catch (Exception e) {
        }
    }

    public BufferedImage getData() {
        return image;
    }

    private IntNdArray fillIntNdArray(IntNdArray pMatrix, BufferedImage pImage) {
        try {
            int w = pImage.getWidth();
            int h = pImage.getHeight();
            for (int i = 0; i < h; i++) {
                for (int j = 0; j < w; j++) {
                    Color mycolor = new Color(pImage.getRGB(j, i));
                    int red = mycolor.getRed();
                    int green = mycolor.getGreen();
                    int blue = mycolor.getBlue();
                    pMatrix.setInt(red, 0, j, i, 0);
                    pMatrix.setInt(green, 0, j, i, 1);
                    pMatrix.setInt(blue, 0, j, i, 2);
                }
            }
        }
        catch (Exception e) {
        }
        return pMatrix;
    }

    public void run() {
        Map<String, Tensor> feed_dict = null;
        IntNdArray input_matrix = NdArrays.ofInts(Shape.of(1, width, height, 3));
        input_matrix = fillIntNdArray(input_matrix, image);
        Tensor input_tensor = TInt32.tensorOf(input_matrix);
        feed_dict = new HashMap<>();
        feed_dict.put(inputLayerName, input_tensor);
        Map<String, Tensor> res = model.function(keyName).call(feed_dict);
        Tensor output_tensor = res.get(outputLayerName);
        output = new float[17][3];
        for (int i = 0; i < 17; i++) {
            output[i][0] = output_tensor.asRawTensor().data().asFloats().getFloat(i * 3) * 256;
            output[i][1] = output_tensor.asRawTensor().data().asFloats().getFloat(i * 3 + 1) * 256;
            output[i][2] = output_tensor.asRawTensor().data().asFloats().getFloat(i * 3 + 2);
        }
    }

    public float[][] getOutputArray() {
        return output;
    }

    public static void main(String[] args) {
        MoveNetDemo im = new MoveNetDemo("/Users/myname/Downloads/Code/TF_Test_04_NEW/movenet_singlepose_thunder_4", 256, 256);
        im.setData("/Users/myname/Downloads/Code/TF_Test_04_NEW/test.jpeg");

        JFrame jf = new JFrame("TEST");
        jf.setSize(300, 300);
        jf.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        ImagePanel ip = new ImagePanel(im.getData());
        jf.add(ip, BorderLayout.CENTER);
        JButton st = new JButton("RUN");
        st.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                im.run();
                ip.update(im.getOutputArray());
            }
        });
        jf.add(st, BorderLayout.NORTH);
        jf.setVisible(true);
    }
}
and the ImagePanel class:
import javax.swing.JPanel;
import java.awt.image.BufferedImage;
import java.awt.Graphics;
import java.awt.Color;
public class ImagePanel extends JPanel {
    private BufferedImage image;
    private float[][] points;

    public ImagePanel(BufferedImage pImage) {
        image = pImage;
    }

    public void update(float[][] pPoints) {
        points = pPoints;
        repaint();
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        g.drawImage(image, 0, 0, null);
        g.setColor(Color.GREEN);
        if (points != null) {
            for (int j = 0; j < 17; j++) {
                g.fillOval((int) points[j][0], (int) points[j][1], 5, 5);
            }
        }
    }
}
I found the answer: I had mixed up height and width twice! No idea why it behaved so strangely (nearly correct but not perfect), but it works now.
In the Jupyter notebook it says:
input_image: A [1, height, width, 3]
so I changed the method fillIntNdArray to:
private IntNdArray fillIntNdArray(IntNdArray pMatrix, BufferedImage pImage) {
    try {
        int w = pImage.getWidth();
        int h = pImage.getHeight();
        for (int i = 0; i < h; i++) {
            for (int j = 0; j < w; j++) {
                Color mycolor = new Color(pImage.getRGB(j, i));
                int red = mycolor.getRed();
                int green = mycolor.getGreen();
                int blue = mycolor.getBlue();
                pMatrix.setInt(red, 0, i, j, 0);   // switched j and i
                pMatrix.setInt(green, 0, i, j, 1); // switched j and i
                pMatrix.setInt(blue, 0, i, j, 2);  // switched j and i
            }
        }
    }
    catch (Exception e) {
    }
    return pMatrix;
}
and accordingly in the run() method:
IntNdArray input_matrix = NdArrays.ofInts(Shape.of(1, height, width, 3));
In the Jupyter notebook you can toggle the helper functions for visualization and see that the y coordinates are taken first, then the x coordinates: height first, then width. Changing this in the ImagePanel class too solves the problem, and the classification is as expected, with the same quality as the online demonstration!
if (points != null) {
    for (int j = 0; j < 17; j++) {
        // switched 0 and 1
        g.fillOval((int) points[j][1], (int) points[j][0], 5, 5);
    }
}
I am building an Android app that allows the user to get a picture either by taking it in real time or by uploading it from their saved images. The picture then goes through a machine learning script in Python to determine the user's location. Before I completely connect the algorithm, I am trying a test program that just returns a double.
from os.path import dirname, join
import csv
import random
filename = join(dirname(__file__), "new.csv")
def testlat():
return 30.0
def testlong():
return 30.0
These returned values are used in a Kotlin file, which then sends them to the Google Maps activity in the app so the location can be plotted.
class MainActivity : AppCompatActivity() {
    var lat = 0.0
    var long = 0.0
    var dynamic = false
    private val cameraRequest = 1888
    lateinit var imageView: ImageView
    lateinit var button: Button
    private val pickImage = 100
    private var imageUri: Uri? = null
    var active = false

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        // Accesses the info image button
        val clickMe = findViewById<ImageButton>(R.id.imageButton)

        // Runs this function when the info icon is pressed by the user
        // It will display the text in the variable infoText
        clickMe.setOnClickListener {
            Toast.makeText(this, infoText, Toast.LENGTH_LONG).show()
        }

        if (ContextCompat.checkSelfPermission(applicationContext, Manifest.permission.CAMERA)
            == PackageManager.PERMISSION_DENIED) {
            ActivityCompat.requestPermissions(
                this,
                arrayOf(Manifest.permission.CAMERA),
                cameraRequest
            )
        }

        imageView = findViewById(R.id.imageView)
        val photoButton: Button = findViewById(R.id.button2)
        photoButton.setOnClickListener {
            val cameraIntent = Intent(MediaStore.ACTION_IMAGE_CAPTURE)
            startActivityForResult(cameraIntent, cameraRequest)
            dynamic = true
        }

        /*
        The below will move to external photo storage once button2 is clicked
        */
        button = findViewById(R.id.button)
        button.setOnClickListener {
            val gallery = Intent(Intent.ACTION_PICK, MediaStore.Images.Media.INTERNAL_CONTENT_URI)
            startActivityForResult(gallery, pickImage)
        }

        // PYTHON HERE
        if (!Python.isStarted()) {
            Python.start(AndroidPlatform(this))
        }
    }

    override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
        super.onActivityResult(requestCode, resultCode, data)
        if (resultCode == RESULT_OK && requestCode == pickImage) {
            imageUri = data?.data
            imageView.setImageURI(imageUri)

            // PYTHON HERE
            val py = Python.getInstance()
            val pyobj = py.getModule("main")
            this.lat = pyobj.callAttr("testlat").toDouble()
            this.long = pyobj.callAttr("testlong").toDouble()

            /* Open the map after image has been received from user
            This will be changed later to instead call the external object recognition/pathfinding
            scripts and then pull up the map after those finish running
            */
            val mapsIntent = Intent(this, MapsActivity::class.java)
            startActivity(mapsIntent)
        }
    }
}
I set up Chaquopy and the Gradle build succeeds, but every time I get to the Python part when running the app in the emulator, it crashes. I'm not quite sure why that is; I thought maybe the program was too much for the phone to handle, but it is a very basic Python script, so I doubt that's the issue.
If your app crashes, you can find the stack trace in the Logcat.
In this case, it's probably caused by the line return = 30.0. The correct syntax is return 30.0.
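For completeness, this is what the difference looks like (the first version is a guess at the typo; a SyntaxError like this makes the whole module fail to import, which would make py.getModule("main") throw and crash the app if uncaught):
# Broken: "return" is a statement, not a variable, so this is a SyntaxError
# and the module fails to import.
def testlat():
    return = 30.0

# Fixed:
def testlat():
    return 30.0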
Thanks to this great answer I was able to figure out how to run a preflight check on my documents using Python and the InDesign scripting API. Now I want to automatically adjust the text size of the overflowing text boxes, but I was unable to figure out how to retrieve a TextBox object from the Preflight object.
I referred to the API specification, but all the properties only seem to yield strings which do not uniquely identify the text boxes, as in this example:
Errors Found (1):
Text Frame (R=2)
Is there any way to retrieve the violating objects from the Preflight, in order to operate on them later on? I'd be very thankful for additional input on this matter, as I am stuck!
If all you need is to find and fix the overset errors, I'd propose this solution:
Here is a simple ExtendScript that fixes the text overset errors. It decreases the font size in all overflowing text frames in the active document:
var doc = app.activeDocument;
var frames = doc.textFrames.everyItem().getElements();
var f = frames.length;

while (f--) {
    var frame = frames[f];
    if (frame.overflows) resize_font(frame);
}

function resize_font(frame) {
    app.scriptPreferences.enableRedraw = false;
    while (frame.overflows) {
        var texts = frame.parentStory.texts.everyItem().getElements();
        var t = texts.length;
        while (t--) {
            var characters = texts[t].characters.everyItem().getElements();
            var c = characters.length;
            while (c--) characters[c].pointSize = characters[c].pointSize * .99;
        }
    }
    app.scriptPreferences.enableRedraw = true;
}
You can save it in any folder and run it from a Python script like this:
import win32com.client

app = win32com.client.Dispatch('InDesign.Application.CS6')
doc = app.Open(r'd:\temp\test.indd')

profile = app.PreflightProfiles.Item('Stackoverflow Profile')
print('Profile name:', profile.name)

process = app.PreflightProcesses.Add(doc, profile)
process.WaitForProcess()
errors = process.processResults
print('Errors:', errors)

if errors[:4] != 'None':
    script = r'd:\temp\fix_overset.jsx'  # <-- here is the script to fix overset
    print('Run script', script)
    app.DoScript(script, 1246973031)  # run the jsx script
    # 1246973031 --> ScriptLanguage.JAVASCRIPT
    # https://www.indesignjs.de/extendscriptAPI/indesign-latest/#ScriptLanguage.html

    process = app.PreflightProcesses.Add(doc, profile)
    process.WaitForProcess()
    errors = process.processResults
    print('Errors:', errors)  # it should print 'None'

if errors[:4] == 'None':
    doc.Save()
    doc.Close()

input('\nDone... Press <ENTER> to close the window')
Thanks to the excellent answer from Yuri I was able to solve my problem, although there are still some shortcomings.
In Python, I load my document and check whether any problems were detected during the preflight. If so, I move on to adjusting the text frames.
myDoc = app.Open(input_file_path)
profile = app.PreflightProfiles.Item(1)
process = app.PreflightProcesses.Add(myDoc, profile)
process.WaitForProcess()
results = process.processResults

if "None" not in results:
    # Fix errors
    script = open("data/script.jsx")
    app.DoScript(script.read(), 1246973031, variables.resize_array)
    process.WaitForProcess()
    results = process.processResults

    # Check if problems were resolved
    if "None" not in results:
        info_fail(card.name, "Error while running preflight")
        myDoc.Close(1852776480)
        return FLAG_PREFLIGHT_FAIL
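For context, the third argument to DoScript becomes the arguments array inside the JSX (arguments[0] and arguments[1] below). Purely as a hypothetical illustration of the structure — the real frame names live in my variables module and come from my document:
# Hypothetical illustration only; names are made up.
resize_array = [
    [["title", "subtitle"], ["body", "footer"]],  # arguments[0]: resize groups
    [["flavor_text", "reminder_text"]],           # arguments[1]: condense groups
]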
I load the JavaScript file stored in script.jsx, which consists of several components. I start by extracting the arguments and loading all the pages, since I want to handle them individually. I then collect all text frames on the page in an array.
var doc = app.activeDocument;
var pages = doc.pages;
var resizeGroup = arguments[0];
var condenseGroup = arguments[1];

// Loop over all available pages separately
for (var pageIndex = 0; pageIndex < pages.length; pageIndex++) {
    var page = pages[pageIndex];
    var pageItems = page.allPageItems;
    var textFrames = [];

    // Collect all TextFrames in an array
    for (var pageItemIndex = 0; pageItemIndex < pageItems.length; pageItemIndex++) {
        var candidate = pageItems[pageItemIndex];
        if (candidate instanceof TextFrame) {
            textFrames.push(candidate);
        }
    }
What I wanted to achieve was a setup where, if one text frame in a group overflows, the text size of all text frames in that group is adjusted as well. E.g., text frame 1 overflows when set to size 8 but no longer when set to size 6. Since text frame 1 is in the same group as text frame 2, both of them are adjusted to size 6 (assuming the second frame does not overflow at this size).
To handle this, I pass in an array containing the groups. I then check whether the text frame is contained in one of these groups (which is rather tedious; I had to write my own helper methods, since InDesign's ExtendScript does not support modern array functions like filter() as far as I can tell...).
// Check if TextFrame overflows, if so add all TextFrames that should be the same size
for (var textFrameIndex = 0; textFrameIndex < textFrames.length; textFrameIndex++) {
    var textFrame = textFrames[textFrameIndex];

    // If text frame overflows, adjust it and all the frames that are supposed to be of the same size
    if (textFrame.overflows) {
        var foundResizeGroup = filterArrayWithString(resizeGroup, textFrame.name);
        var foundCondenseGroup = filterArrayWithString(condenseGroup, textFrame.name);
        var process = false;
        var chosenGroup, type;

        if (foundResizeGroup.length > 0) {
            chosenGroup = foundResizeGroup;
            type = "resize";
            process = true;
        } else if (foundCondenseGroup.length > 0) {
            chosenGroup = foundCondenseGroup;
            type = "condense";
            process = true;
        }

        if (process) {
            var foundFrames = findTextFramesFromNames(textFrames, chosenGroup);
            adjustTextFrameGroup(foundFrames, type);
        }
    }
}
If it is, I adjust either the text size or the second design axis of the text (which condenses my variable font). This is done using the following functions:
function adjustTextFrameGroup(resizeGroup, type) {
    // Check if some text frames are overflowing
    if (!someOverflowing(resizeGroup)) {
        return;
    }
    app.scriptPreferences.enableRedraw = false;
    while (someOverflowing(resizeGroup)) {
        for (var textFrameIndex = 0; textFrameIndex < resizeGroup.length; textFrameIndex++) {
            var textFrame = resizeGroup[textFrameIndex];
            if (type === "resize") decreaseFontSize(textFrame);
            else if (type === "condense") condenseFont(textFrame);
            else alert("Unknown operation");
        }
    }
    app.scriptPreferences.enableRedraw = true;
}

function someOverflowing(textFrames) {
    for (var textFrameIndex = 0; textFrameIndex < textFrames.length; textFrameIndex++) {
        var textFrame = textFrames[textFrameIndex];
        if (textFrame.overflows) {
            return true;
        }
    }
    return false;
}

function decreaseFontSize(frame) {
    var texts = frame.parentStory.texts.everyItem().getElements();
    for (var textIndex = 0; textIndex < texts.length; textIndex++) {
        var characters = texts[textIndex].characters.everyItem().getElements();
        for (var characterIndex = 0; characterIndex < characters.length; characterIndex++) {
            characters[characterIndex].pointSize = characters[characterIndex].pointSize - 0.25;
        }
    }
}

function condenseFont(frame) {
    var texts = frame.parentStory.texts.everyItem().getElements();
    for (var textIndex = 0; textIndex < texts.length; textIndex++) {
        var characters = texts[textIndex].characters.everyItem().getElements();
        for (var characterIndex = 0; characterIndex < characters.length; characterIndex++) {
            characters[characterIndex].setNthDesignAxis(1, characters[characterIndex].designAxes[1] - 5);
        }
    }
}
I know that this code can be improved (and I am open to feedback); for example, if a group consists of multiple text frames, the procedure runs for each of them even though it only needs to run once. But I was getting pretty frustrated with the old JavaScript, and the performance impact is negligible. The remaining functions are only helpers, which I would also like to replace with more modern versions; sadly, as already stated, I think they are simply not available.
Thanks once again to Yuri, who helped me immensely!
I have a graph of about 5000 nodes and 5000 links that I can visualize in Chrome thanks to the vivagraph JavaScript library (WebGL is faster than SVG, in d3 for example).
My workflow is:
Build the graph with the networkx Python library and output the result as a JSON file.
Load the JSON and construct the graph with the vivagraph JavaScript library.
Node positions are computed by the JS library.
The problem is that it takes time to render a layout with well-positioned nodes.
My approach is to pre-compute the node positions, in networkx for example. The really good point of this approach is that it minimizes client-side work in the browser. But I can't achieve good positions on the webpage, and I need help with this step.
The relevant Python code for the node position computation is:
## positioning
try:
    # Position nodes using the Fruchterman-Reingold force-directed algorithm.
    pos = nx.spring_layout(G)
    for k, v in pos.items():
        # scaling attempt:
        # from small floats like 0.5555 to higher values;
        # casting to int because precision is not important
        pos[k] = [int(i * 1000) for i in v.tolist()]
except Exception as e:
    print("positioning failed")
    raise

## setting positions
try:
    # set the position of each node as a node attribute
    # that will be used by the js library
    nx.set_node_attributes(G, 'pos', pos)
except Exception as e:
    print("getting positions failed")
    raise e

# output all the stuff
d = json_graph.node_link_data(G)
with open(args.output, 'w') as f:
    json.dump(d, f)
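One detail the JavaScript below relies on: with the networkx version used here, node_link_data writes links whose source and target are indices into the nodes list rather than node ids (hence data.nodes[l.source].id). A minimal sketch of the resulting shape (values illustrative; newer networkx versions emit node ids instead):
import networkx as nx
from networkx.readwrite import json_graph

G = nx.Graph()
G.add_node("root", pos=[523, -118])
G.add_node("a1", pos=[40, 7])
G.add_edge("root", "a1")

d = json_graph.node_link_data(G)
# d looks roughly like:
# {"nodes": [{"id": "root", "pos": [523, -118]}, {"id": "a1", "pos": [40, 7]}],
#  "links": [{"source": 0, "target": 1}], ...}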
Then in my page, in JavaScript:
/*global Viva*/
function graph(file) {
    var file = file;
    $.getJSON(file, function (data) {
        var graphGenerator = Viva.Graph.generator();
        graph = Viva.Graph.graph();

        // building the graph with the json data:
        data.nodes.forEach(function (n, i) {
            var node = graph.addNode(n.id, {d: n.d});
            // node position is defined in the json element attribute 'pos'
            node.position = {
                x: n.pos[0],
                y: n.pos[1]
            };
        });

        // adding links between nodes
        data.links.forEach(function (l, i) {
            graph.addLink(data.nodes[l.source].id, data.nodes[l.target].id);
        });

        var max_link = 55;
        var min_link = 1;
        var colors = d3.scale.linear().domain([min_link, max_link]).range(['#F0F0F0', '#252525']);

        var layout = Viva.Graph.Layout.forceDirected(graph, {
            springLength: 80,
            springCoeff: 0.0008,
            dragCoeff: 0.001,
            gravity: -5.0,
            theta: 0.8
        });

        var graphics = Viva.Graph.View.webglGraphics();
        graphics
            .node(function (node) {
                // color and size of nodes
                color = colors(node.links.length);
                if (node.id == "root") {
                    // pin node on canvas, so no position update
                    node.isPinned = true;
                    size = 60;
                } else {
                    size = 20 + (7 - node.id.length) * (7 - node.id.length);
                }
                return Viva.Graph.View.webglSquare(size, color);
            })
            .link(function (link) {
                // color on links
                fromId = link.fromId;
                toId = link.toId;
                if (toId == "root" || fromId == "root") {
                    return Viva.Graph.View.webglLine("#252525");
                } else {
                    if (fromId[0] == toId[0]) {
                        linkcolor = linkcolors(fromId[0]);
                        return Viva.Graph.View.webglLine(linkcolor);
                    } else {
                        linkcolor = averageRGB(linkcolors(fromId[0]), linkcolors(toId[0]));
                        return Viva.Graph.View.webglLine('#' + linkcolor);
                    }
                }
            });

        renderer = Viva.Graph.View.renderer(graph, {
            layout: layout,
            graphics: graphics,
            enableBlending: false,
            renderLinks: true,
            prerender: true
        });
        renderer.run();
    });
}
I am now trying Gephi, but I don't want to use the Gephi toolkit, as I am not used to Java.
If somebody has some hints on this, please spare me hundreds of trials and maybe failures ;)
The spring layout assumes that the edge weights uphold the metric property, i.e. Weight(A,B) + Weight(A,C) > Weight(B,C). If this is not the case, then networkx tries to place the nodes as realistically as possible.
You could try to adjust this with:
pos = nx.spring_layout(G, k=alpha, iterations=beta)
# where 0.0 < alpha < 1.0 and beta > 0
# k is the optimal distance between nodes
# iterations specifies the number of layout iterations
# This works only on networkx 1.8, not earlier versions
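A concrete sketch of the pre-computation with those knobs turned (the k and iterations values here are arbitrary starting points, not tuned for a 5000-node graph):
import networkx as nx

G = nx.random_geometric_graph(5000, 0.01)  # stand-in for the real graph

# Larger k spreads nodes further apart; more iterations lets the layout settle.
pos = nx.spring_layout(G, k=0.3, iterations=200)

# Scale to integer coordinates for the JS side, as in the question.
pos = {node: [int(c * 1000) for c in xy] for node, xy in pos.items()}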
I have a field in my admin page that I'd like to display in Scientific Notation.
Right now it displays something ugly like this. How can I display this as 4.08E+13?
Right now I'm using a standard Decimal field in the model.
Any advice is greatly appreciated.
I'm on Django 1.2.
You have to use %e to get the scientific notation format:
Basic Example:
x = 374.534
print("%e" % x)
# 3.745340e+02
Precision of 2
x = 374.534
print("{0:.2E}".format(x))
# 3.75E+02
x = 12345678901234567890.534
print("{0:.2E}".format(x))
# 1.23E+19
Precision of 3
print("{0:.3E}".format(x))
# 1.235E+19
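If the goal is just the admin change list, another option is a computed column on the ModelAdmin that applies this formatting server-side — a sketch with made-up model and field names (Measurement, name, value):
from django.contrib import admin
from myapp.models import Measurement  # hypothetical model with a Decimal field "value"

class MeasurementAdmin(admin.ModelAdmin):
    list_display = ('name', 'value_sci')

    def value_sci(self, obj):
        # Render the Decimal field in scientific notation, e.g. 4.08E+13
        return "{0:.2E}".format(obj.value)
    value_sci.short_description = 'Value'

admin.site.register(Measurement, MeasurementAdmin)
Note that this only affects the change list; the edit form still shows the raw value, which is what the JavaScript workaround below tackles.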
Well, here's a workaround, since I couldn't figure out how to do this within the Django Python code: I have the admin pages run some custom JavaScript to do the conversion after the page is loaded.
Details:
Create this javascript file called "decimal_to_sci_not.js" and place it in your media directory:
/*
Finds and converts decimal fields > N into scientific notation.
*/
THRESHOLD = 100000;
PRECISION = 3;

function convert(val) {
    // ex. 100000 -> 1.00e+5
    return parseFloat(val).toPrecision(PRECISION);
}

function convert_input_fields() {
    var f_inputs = django.jQuery('input');
    f_inputs.each(function (index, domEl) {
        var jEl = django.jQuery(this);
        var old_val = parseFloat(jEl.val());
        if (old_val >= THRESHOLD) {
            jEl.val(convert(old_val));
        }
    });
}

function convert_table_cells() {
    // Look through all td elements and replace the first n.n number found inside
    // if greater than or equal to THRESHOLD
    var cells = django.jQuery('td');
    var re_num = /\d+\.\d+/m; // match any numbers w decimal
    cells.each(function (index, domEl) {
        var jEl = django.jQuery(this);
        var old_val_str = jEl.html().match(re_num);
        var old_val = parseFloat(old_val_str);
        if (old_val >= THRESHOLD) {
            jEl.html(jEl.html().replace(old_val_str, convert(old_val)));
        }
    });
}

django.jQuery(document).ready(function () {
    convert_input_fields();
    convert_table_cells();
});
Then update your admin.py classes to include the JavaScript file:
class MyModel1Admin(admin.ModelAdmin):
    class Media:
        js = ['/media/decimal_to_sci_not.js']

admin.site.register(MyModel1, MyModel1Admin)