Generate texture from equation as a function of time - python

I want to generate the texture (only color) of a mesh object in Blender dynamically, such that the texture depends on time (or frame) in an animation. For example:
color(x,y,t) = cos(x+t)*sin(y+t)
I already found the video texture module in the API (bge.texture), but it is part of the Blender game engine. As far as I know, this cannot be used to render animations.

By right-clicking on a colour swatch you have the option to Add Driver; you can then use Python expressions to calculate the value to use. Drivers resemble keyframes, but the values are calculated instead of fixed.
Once you have added the driver, you use the Graph Editor to adjust it; in the Graph Editor header there is a menu to choose between the F-Curve Editor and Drivers. To see the available functions you can use the Python console's autocomplete to list them, and you also have the option of adding your own functions to the driver namespace. You can also define variables that extract values from other objects for use in your expressions.
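As a concrete starting point, here is a minimal sketch of adding such a driver from Python. It assumes a material named "Material" exists and drives the red channel of its diffuse colour from the scene's current frame:
import bpy

# Minimal sketch (assumes a material named "Material" exists): drive the red
# channel of the diffuse colour with the scene's current frame.
mat = bpy.data.materials["Material"]
fcurve = mat.driver_add("diffuse_color", 0)   # index 0 = red channel
driver = fcurve.driver
driver.type = 'SCRIPTED'

# Expose the scene's current frame as a driver variable named "t".
var = driver.variables.new()
var.name = "t"
var.targets[0].id_type = 'SCENE'
var.targets[0].id = bpy.context.scene
var.targets[0].data_path = "frame_current"

# Math functions like cos() are available in the driver namespace by default.
driver.expression = "0.5 + 0.5 * cos(t / 10)"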

Related

Select vertex from a mesh to parse as an argument to a script

I have made a Python script which takes a mesh and the index of a vertex to apply an operation. Unfortunately, Trimesh (the library I use to process the mesh) does not have an option to select a vertex by clicking on it.
Is it possible to visually select a vertex on the mesh, get the identifier of that vertex, and pass it to another script? Is it worth using a third-party program like MeshLab or Blender to select that input, or am I missing a more obvious route?
Trimesh does not have a GUI, just some rendering tools. However, it has tons of features to programmatically find vertices of interest if you are able to come up with rules or characteristics for them. The docs, source code examples, and issues all have useful implementations to browse through.
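For instance, a minimal sketch of the programmatic route (the file name and point of interest are made up here): find the vertex nearest a known location instead of picking it by eye:
import numpy as np
import trimesh

# Hypothetical sketch: find the index of the vertex closest to a known point.
mesh = trimesh.load('./my.obj', process=False)
point_of_interest = np.array([0.0, 0.0, 1.0])
distances = np.linalg.norm(mesh.vertices - point_of_interest, axis=1)
vertex_index = int(np.argmin(distances))
print(vertex_index, mesh.vertices[vertex_index])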
This borders on not answering the question, but you do ask about third-party tools within your explanation, so...
I didn't get to implementing this in Trimesh / Python. I used MeshLab to select the vertices I wanted and manually copied the indices over to Python variables. Trimesh will re-order vertices as part of its optimization process, so if you go this route, be sure to use process=False to maintain vertex order. E.g.:
import trimesh
mesh = trimesh.load('./my.obj', process=False)  # keeps the original vertex order

A way to extrude 2d text in The Foundry Nuke via Python

I'd like to know if there's a way to turn Nuke's text (contained in a Text node) into a polygonal object and then extrude it along the Z axis. It's possible in Blackmagic Fusion; it's even possible in Apple Motion 5. Who knows how to do it in Nuke via Python?
import nuke

logoPlate = nuke.nodes.Text(name="forExtrusion")
logoPlate['font'].setValue("~/Library/Fonts/Cuprum-Bold.ttf")
logoPlate['xjustify'].setValue("center")
logoPlate['yjustify'].setValue("center")
logoPlate['box'].setValue([0,0,512,256])
logoPlate['translate'].setValue([-20, 50])
logoPlate['size'].setValue(48)
logoPlate['message'].setValue("TV Channel logo")
logoPlate.setInput(0,nuke.selectedNode())
I am not interested in using exported obj, fbx or abc from 3D packages or any third party plugins.
The only method to extrude text at the moment (as of NUKE 10.5) is to trace the text logo with the Polygon shape tool using a ModelBuilder node.
import nuke
import nukescripts

modelBuilder = nuke.createNode('ModelBuilder')
camera = nuke.createNode('Camera2')
nuke.toNode('ModelBuilder1').setSelected(True)
nuke.toNode('Camera1').setSelected(True)
nuke.connectNodes(2, camera)
nukescripts.connect_selected_to_viewer(0)
n = nuke.toNode('ModelBuilder1')
k = n.knob('shape')
k.setValue(6)  # select the 'Polygon' tool in the 'shape' dropdown menu
After tracing the logo I used Extrude from ModelBuilder's context menu and then baked out the geometry. Note that you can only use straight lines, due to the nature of polygonal modeling in NUKE: there is no NURBS geometry.
script = nuke.thisNode()['bakeMenu'].value()
eval(script)
Typically you would use a 3D modelling program like Modo, Maya, Cinema 4D, etc.: create and output your text as a model and import it into Nuke. To create 3D text directly in Nuke, you need the Geometry Tools plugin. Then simply use the PolyText node.
Documentation on PolyText
Download site for Geometry Tools
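If the plugin is installed, the node can presumably be created from Python like any built-in node. This one-liner is an assumption based on the node name above; the PolyText knob names would need to be checked in its documentation:
import nuke

# Hypothetical: Geometry Tools must be installed for 'PolyText' to exist.
polyText = nuke.createNode('PolyText')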

How to drag objects around in a Gtk.Fixed widget using mouse events?

I have a custom object (a small circle) placed at some point inside a Gtk.Fixed() widget. Is there a way to drag this object around using the mouse? I am not able to map the mouse press/release/motion events to make this work.
I would prefer a solution in Python using PyGObject, but any other language will also be fine if an explanation is provided.
More Details:
I am trying to make a font editor where the objects I mentioned above will be the control points of the Bézier curves in the glyph outlines.
Here is an image of the concept design:
https://github.com/sugarlabs/edit-fonts-activity/blob/gh-pages/files/img/wireframe_concept_01_first_prototype.svg
I need to be able to move the points shown to edit the outline of the letter shown.
GtkFixed is not designed for drawing work. It's made to position widgets (such as buttons) on a fixed grid (à la Windows).
If you would like to move elements of a drawing, have a look at e.g. GooCanvas. Each element on a GooCanvas can have events connected, which can then be used to move it around. You can even use a CanvasGroup to group primitives (circle, rectangle, etc.) and move them together (and even change other properties such as colour and line width). The toolkit actually contains curves etc. It's easy to create a 'handle' using a small rectangle.
Here's an example of a simple GooCanvas program, and you can find download links, reference manuals and other useful stuff here.
I don't know if this is a tool you need, or just a learning exercise. If the former, then do have a look at FontForge, an open-source and incredibly versatile font editor.
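As a starting point, here is a minimal PyGObject sketch of the drag pattern (press stores the grab point, motion translates the item, release ends the drag); names and sizes are placeholders:
import gi
gi.require_version('Gtk', '3.0')
gi.require_version('GooCanvas', '2.0')
from gi.repository import Gtk, GooCanvas

drag = {"active": False, "x": 0.0, "y": 0.0}

def on_press(item, target, event):
    drag["active"] = True
    drag["x"], drag["y"] = event.x, event.y
    return True

def on_motion(item, target, event):
    if drag["active"]:
        # Event coordinates arrive in the item's space, so move by the delta.
        item.translate(event.x - drag["x"], event.y - drag["y"])
    return True

def on_release(item, target, event):
    drag["active"] = False
    return True

win = Gtk.Window(title="drag demo")
win.connect("destroy", Gtk.main_quit)
canvas = GooCanvas.Canvas()
root = canvas.get_root_item()
handle = GooCanvas.CanvasEllipse(parent=root, center_x=60, center_y=60,
                                 radius_x=8, radius_y=8, fill_color="red")
handle.connect("button-press-event", on_press)
handle.connect("motion-notify-event", on_motion)
handle.connect("button-release-event", on_release)
win.add(canvas)
win.show_all()
Gtk.main()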

How to get an ROI-based tooltip in Python + any imaging library

Running Python, I have an image and some data calculated for different ROIs (regions of interest).
I would like to display that image, and have a tooltip pop up whenever I am over one of those regions of interest.
This is mainly for debugging purposes - so I don't care that things will be very pretty, or integrate into any other sort of GUI - just that I can easily understand what value I calculated for each part of the image.
Also - I don't mind which imaging/display library to use for that purpose. I normally work with PIL, or directly with numpy arrays - but other libraries are just as good for me.
Thanks!
If it's for debugging, you can simply get the position of mouse clicks and print the value for the corresponding ROI. I would use OpenCV, as it has setMouseCallback() and you can define ROIs as polygons and then test which polygon gets the click; see this example. If you've never used OpenCV before, then maybe this is not the best option.
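A minimal sketch of that idea (the file name, ROI polygons and values are made up): print the precomputed value whenever a click lands inside a region:
import cv2
import numpy as np

# Hypothetical ROIs: polygon vertices mapped to the value computed for them.
rois = {
    "roi_a": (np.array([[10, 10], [100, 10], [100, 80], [10, 80]]), 0.42),
    "roi_b": (np.array([[120, 20], [200, 40], [150, 100]]), 1.37),
}

img = cv2.imread("image.png")

def on_mouse(event, x, y, flags, param):
    if event == cv2.EVENT_LBUTTONDOWN:
        for name, (polygon, value) in rois.items():
            # pointPolygonTest returns >= 0 when the point is inside or on the edge.
            if cv2.pointPolygonTest(polygon, (float(x), float(y)), False) >= 0:
                print(name, value)

cv2.namedWindow("image")
cv2.setMouseCallback("image", on_mouse)
cv2.imshow("image", img)
cv2.waitKey(0)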

Game map from Code

It's a long one so you might want to get that cup of tea/coffee you've been holding off on ;)
I run a game called World of Arl, a turn-based strategy game akin to Risk or Diplomacy. Each player has a set of cities, armies and whatnot. The question revolves around the display of these things. Currently the map is created using a background image with CSS positioning of team icons on top of that to represent cities. You can see how it looks here: WoA Map
The background image for the map is located here: Map background. It was created in OmniGraffle, which is not designed to draw maps, but I'm hopelessly incompetent with Photoshop and this works just fine for my purposes.
The problem is that I want to perform such fun things as pathfinding, and for that I need to have the map somehow stored in code. I have tried using PIL, I have looked at incorporating it with Blender, I tried going "old school" and creating tiles as in many older games, and finally I tried to use SVG. I say this so you can see clearly that it's not through lack of trying that I have this problem ;)
I want to be able to store the map layout in code and both create an image from it and use it for things such as pathfinding. I'm using Python, but I suspect that most answers will be generic. The cities and other such things are stored already and easily drawn on top; I want to store the layout of the landmass and the features on the landmass.
As for pathfinding, each type of terrain has a movement cost, and when the map is stored as just an image I can't access the terrain of a given area. In addition to pathfinding, I wish to know the terrain for various game-related things: cities in mountains produce stone, for example.
Is there a good way to do this, and what terms should I have used in Google? The terms I tried all came up with unrelated results ("mapping" being something completely different most of the time).
Edit 2:
Armies can be placed anywhere on the map, as can cities; well, anywhere but in the water, where they'd sink, drown and probably complain (in that order).
After chatting to somebody on MSN who made me go over the really minute details, and who has a better understanding of the game (owing to the fact that he's played it), it's occurring to me that tiles are the way to go, but not the way I had initially thought. I keep the bitmap as it is now but also have a data layer of tiles; each tile has a given terrain type, so pathfinding and suchlike can be done on it, yet at the same time I still render using OmniGraffle, which works pretty well.
I will be making an editor for this, as suggested by Adam Smith. I don't know that graphs will be relevant, Xynth, but I've not had a chance to look into them fully yet.
I really appreciate all those that answered my question, thanks.
I'd store a game map in code as a graph.
Each node would represent a country/city and each edge would represent adjacency. Once you have a map like that, I'm sure you can find many resources on AI (pathfinding, strategy, etc.) online.
If you want to be able to build an image of the map programmatically, consider adding an (x, y) coordinate and an image for each node. That way you can display all of the images at the given coordinates to build up a map view.
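A sketch of what that might look like (region names, coordinates and costs are invented), with Dijkstra's algorithm as the pathfinding piece:
import heapq

# Hypothetical map graph: nodes carry a position and an image for drawing,
# edges carry movement costs for pathfinding.
NODES = {
    "citadel": {"pos": (120, 40), "image": "citadel.png"},
    "port":    {"pos": (60, 100), "image": "port.png"},
    "hills":   {"pos": (160, 90), "image": "hills.png"},
}
EDGES = {
    "citadel": {"port": 3, "hills": 2},
    "port":    {"citadel": 3},
    "hills":   {"citadel": 2},
}

def shortest_path(start, goal):
    """Dijkstra over the adjacency graph; returns (cost, path) or None."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, step in EDGES[node].items():
            if neighbour not in seen:
                heapq.heappush(queue, (cost + step, neighbour, path + [neighbour]))
    return None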
The key thing to realize here is that you don't have to use just one map. You can use two maps:
The one you already have which is drawn on screen
A hidden map which isn't drawn but which is used for pathfinding, collision detection, etc.
The natural next question then is: where does this second map come from? Easy: you create your own tool which can load your first map and display it. Your tool then lets you draw boundaries around your islands and place markers at your cities. These markers and boundaries (simple polygons, e.g.) are stored as your second map and used in your code to do pathfinding etc.
In fact, you can have your tool emit Python code which creates the graphs and polygons so that you don't have to load any data yourself.
I am basically telling you to make a level editor. It isn't very hard to do. You just need some buttons to define what you are adding; e.g., if you are adding a polygon, you can append each mouse coordinate to an array on every click while your "add polygon" button is toggled. You can have another button for adding cities, so that each time you click on the map you record that coordinate for the city, along with a name provided in a text box.
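Once the editor has produced those polygons, hit-testing a click against an island can be a plain ray-casting test; a minimal sketch:
def point_in_polygon(x, y, polygon):
    """Ray casting: count how many edges a horizontal ray from (x, y) crosses."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # the edge spans the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# e.g. point_in_polygon(5, 5, [(0, 0), (10, 0), (10, 10), (0, 10)]) -> True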
You're going to have to translate your map into an abstract representation of some kind: either a grid (hex or square) or a graph, as Xynth suggests. That's the only way you're going to be able to apply things like pathfinding algorithms to it.
IMO, the map should be rendered in the first place instead of being a bitmap. What you should do is have separate objects, each knowing its own dimensions, such as a generic Area class and classes like City and Town derived from it. Your objects should have all the information about their location, their terrain, etc., and should render/paint themselves. This way you will have exact knowledge of where everything lies.
Another option is to keep the bitmap as it is and keep this information in your objects as their data. By doing this, the objects won't have a draw function, but they will have precise information about their placement. This is sort of duplicating the data, but if you want to go with the bitmap option, I can't think of any other way.
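A sketch of how those objects might look (class and attribute names are invented):
class Area:
    """A map feature that knows its own geometry and terrain."""
    def __init__(self, name, polygon, terrain):
        self.name = name
        self.polygon = polygon   # list of (x, y) vertices
        self.terrain = terrain   # e.g. "mountains", "plains", "water"

    def draw(self, surface):
        raise NotImplementedError  # each subclass paints itself

class City(Area):
    def __init__(self, name, polygon, terrain, owner):
        super().__init__(name, polygon, terrain)
        self.owner = owner

    def produces(self):
        # e.g. cities in mountains produce stone
        return "stone" if self.terrain == "mountains" else "food"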
If you just want to do e.g. 2D hit-testing on the map, then storing it yourself is fine. There are a few possibilities for how you can store the information:
A polygon per island
Representing each island as a union of a list of rectangles (commonly used by windowing systems)
Creating a special (maybe greyscale) bitmap of the map which uses a unique solid colour for each island
Something more complex (perhaps whatever OmniGraffle's internal representation is)
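The third option is particularly cheap to query; a minimal sketch using PIL (the file name and terrain ids are invented):
from PIL import Image

# Hypothetical index bitmap: each pixel's greyscale value is a terrain id.
TERRAIN = {0: "water", 1: "plains", 2: "mountains"}

index_map = Image.open("terrain_index.png").convert("L")

def terrain_at(x, y):
    return TERRAIN.get(index_map.getpixel((x, y)), "unknown")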
Assuming the map is fixed (not created on the fly), it's correct to use a bitmap as the graphical representation; you want to make it as pretty as possible.
For any game-related features such as pathfinding or whatever fancy stuff you want to add, you should add adequate data structures, even if that means some data is redundant.
E.g., describe the boundaries of the isles as polygon splines (either manually or automatically created from the bitmap; that's up to you, and depends on how much effort you want to spend and what functionality you need).
To sum it up: create data structures matching the problems you have to solve. The bitmap is fine for looks, but avoid doing pathfinding or other logic on it.
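If you'd rather derive the polygons from the bitmap automatically, here is a sketch using OpenCV (it assumes land is light and water is dark in the image; note that findContours' return value differs between OpenCV 3.x and 4.x):
import cv2

# Threshold the land/water bitmap and extract each island's outline polygon.
bitmap = cv2.imread("map_background.png", cv2.IMREAD_GRAYSCALE)
_, land = cv2.threshold(bitmap, 128, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(land, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x
island_polygons = [c.reshape(-1, 2).tolist() for c in contours]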
