How to render blocks using columns for a game? - python

I am using a website called pixelpad.io to make my game, which is in Python. I am trying to make a simple platformer where the player can move horizontally and jump on blocks. Because of how this website works, I have to specify the coordinates of each block for every level I make. Since this all runs in a browser and I spawn all of the blocks at the start of the game, the FPS has been low. A friend of mine suggested figuring out the x coordinate of the left side of the screen relative to the player, along with the coordinate of the right side, then doing some math to work out which columns are visible and generating only the blocks in those columns. I somewhat understand his explanation, but I am still confused about how to code it. How should I store and use all the block information for each column, given that the level is preset? I also have a couple of types of sprites for different blocks, so I'm not sure how to store that information either. And when the player moves outside of a column that was rendered, does it destroy itself? I would appreciate an explanation, pseudocode, or an easier alternative.

I'm not sure which API you are using, but I'm guessing it's not pygame. Still, this pygame tutorial about optimization might help; I found its basic theory very useful when learning how to optimize my own pygame projects.
https://youtu.be/s3aYw54R5KY
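To make your friend's suggestion concrete, here is a rough sketch of the idea. I don't know pixelpad's actual API, so spawn_block() and block.destroy() below are hypothetical stand-ins for whatever calls it really provides:

```python
# Sketch only: spawn_block() and block.destroy() are hypothetical stand-ins
# for pixelpad's real spawn/remove calls.
BLOCK_W = 32  # width of one block (and therefore one column) in pixels

# Preset level stored as: column index -> list of (row, sprite_type) pairs.
level = {
    0: [(10, 'grass'), (11, 'dirt')],
    1: [(10, 'grass'), (11, 'dirt')],
    7: [(8, 'brick'), (9, 'brick')],
}

active = {}  # column index -> list of block objects currently spawned

def update_columns(player_x, screen_w):
    # Which columns overlap the screen, assuming the camera centers on the player?
    left_col = int((player_x - screen_w / 2) // BLOCK_W)
    right_col = int((player_x + screen_w / 2) // BLOCK_W)
    visible = set(range(left_col, right_col + 1))

    # Spawn blocks for columns that just scrolled into view.
    for col in visible - active.keys():
        active[col] = [spawn_block(kind, col * BLOCK_W, row * BLOCK_W)
                       for row, kind in level.get(col, [])]

    # Destroy blocks whose column scrolled out of view.
    for col in active.keys() - visible:
        for block in active.pop(col):
            block.destroy()
```

On your last question: the blocks don't destroy themselves. Your update loop destroys a whole column when it leaves the visible range and respawns it from the stored level data when it comes back, and the dict of (row, sprite_type) pairs also covers how to store the different block types.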

Related

Is there a way to use an f-curve to interactively retime keys in Maya?

WHAT: I'm looking for a way to create an f-curve to dynamically retime an arbitrary number of keys in Maya. Something like an Animation Layer, but for timing on a per-asset basis.
WHAT NOT: I've got a great linear retime-from-pivot script already. Used it for years, love it, not looking for that.
WHY: Let's say I've animated several goblins racing toward an enemy. Maybe they start the shot behind a gate or something. I don't want to change the timing of the start of the shot, I don't want to change the end of the shot because I've already animated the hero striking them, and I can't change the length of the shot because it is locked.
The director wants one of the goblins to get a little ahead while they are running, and then to slow back down into the current overall shot timing. This director is more of an "I'll know it when I see it" kind of guy, so I expect several rounds of revisions, and he might throw in a stumble request later for all I know. So I want to be able to mute this retime (so I'm working on whole-numbered keys) and have it be non-destructive (no baking).
SUMMARY: So, I want to scale an arbitrary number of keys on a selected object by a gradient with an arbitrary start and stop for the retime that can be muted, removed, or adjusted non-destructively.
I'm thinking that setting an f-curve that will affect the timing of selected keys would be perfect. Exactly like how animation layers work, but for timing.
Bonus points if a single retime curve can affect keys on several animation layers, as well.
Is this idea possible? Can you point me in some good direction for getting started, or any tools that have already been written? I'm very new to learning programming, and am just starting to learn python for maya.
This should be possible. Have a look at how a scene timewarp works. Every animation curve is connected to a time node. Usually you have only one time node, which is connected to all anim curves, and a scene timewarp is connected to this time node. What you can try is to create a time node for every bunch of anim curves you want to control separately, then create an animCurveTT and connect it to the corresponding time node. If this procedure works, you can script it to make it easier to control.
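As one untested variant of that idea (the locator, attribute, and node names below are just illustrative): keying a time-typed attribute produces an animCurveTT, whose output you can then route into the .input plug of the targets' anim curves instead of a separate time node:

```python
import maya.cmds as cmds

def make_timewarp(targets, name='goblinRetime'):
    """Sketch: per-object timewarp via an animCurveTT. Names are illustrative."""
    warp = cmds.spaceLocator(name=name)[0]
    # Keying a time-typed attribute over time produces an animCurveTT.
    cmds.addAttr(warp, longName='warpedTime', attributeType='time', keyable=True)

    # Start with an identity warp across the shot; editing these keys later
    # (e.g. bulging the middle) retimes the targets non-destructively.
    start = cmds.playbackOptions(query=True, min=True)
    end = cmds.playbackOptions(query=True, max=True)
    cmds.setKeyframe(warp, attribute='warpedTime', time=start, value=start)
    cmds.setKeyframe(warp, attribute='warpedTime', time=end, value=end)

    curve = cmds.listConnections(warp + '.warpedTime', type='animCurve')[0]

    # Drive every anim curve on the targets through the warp curve
    # instead of letting them evaluate at scene time.
    for node in targets:
        anims = cmds.listConnections(node, source=True, destination=False,
                                     type='animCurve') or []
        for anim in anims:
            cmds.connectAttr(curve + '.output', anim + '.input', force=True)
    return warp
```

Muting would then just be a matter of disconnecting the .input plugs (or flattening the warp curve back to identity), which keeps the whole setup non-destructive.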

How to automate curved mouse movements in Python (with pyautogui or any other similar module)

I'm trying to automate mouse movements in a way that looks somewhat human-like. I need this because I'm currently working on promotional videos for a web app and I want to show how it's used. I don't want to just record my own movements, because what I mean by "somewhat" human-like is that I want it to feel organic, yet still be perfect.
I've been using pyautogui and time for now and it works great. I can get the cursor to move at whatever speed I want. I can make it slow down with tweening functions. It looks almost human, but it's all in straight lines. I'd like to have the cursor move in curves.
Is that possible? Whether that means using pyautogui with another module or different modules entirely, that'd be fine. Also, I've found only one question here about this subject, and I'm not sure the poster had the same objective as I do; they were talking about a bezier curve. I'm not particularly familiar with those, so ideally, if it's possible of course, I'd like a simpler solution.
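For what it's worth, a bezier curve is less intimidating than it sounds: it is just a weighted blend between your start point, your end point, and a control point that pulls the path sideways. A minimal sketch with pyautogui (the bulge factor and easing are my own choices, not anything pyautogui provides):

```python
import time
import pyautogui

def curved_move(x2, y2, duration=1.0, steps=60, bulge=0.25):
    """Move the cursor along a quadratic bezier from its current position."""
    x0, y0 = pyautogui.position()
    # Put the control point off to one side of the straight line's midpoint;
    # a larger bulge gives a more pronounced arc.
    mx, my = (x0 + x2) / 2, (y0 + y2) / 2
    cx, cy = mx - (y2 - y0) * bulge, my + (x2 - x0) * bulge
    for i in range(1, steps + 1):
        t = i / steps
        t = t * t * (3 - 2 * t)  # smoothstep easing: slow start, slow stop
        x = (1 - t) ** 2 * x0 + 2 * (1 - t) * t * cx + t ** 2 * x2
        y = (1 - t) ** 2 * y0 + 2 * (1 - t) * t * cy + t ** 2 * y2
        pyautogui.moveTo(x, y, _pause=False)  # skip pyautogui's per-call pause
        time.sleep(duration / steps)

curved_move(800, 450)  # example target: the middle of a 1600x900 screen
```

Varying bulge (including its sign) per move keeps the motion from looking mechanical.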

How to optimize tile rendering in pygame?

I am making a tile-based game, and the map needs to be rendered every frame. Right now each tile is 32x32, and the visible map is 28x28 tiles. The performance is dreadful. I recently made it render only the visible tiles, but this still did not improve the FPS much. I'm now looking for a way to speed up the rendering. I attribute the slowness to the way I am rendering: every tile is individually blitted to the screen. What would be a more effective way of doing this?
In pygame (afaik), updating the screen is always one hell of a bottleneck. Since I could not see your code, I don't know how you are updating the screen. Blitting only the sprites that changed is a start, but you also need to update only those parts of the screen that changed.
Basically it is the difference between using pygame.display.flip() and calling pygame.display.update() with only the changed rects. I know that does not help at all when you are scrolling the map.
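In code the difference is just this (the rect list here is illustrative):

```python
# Push the entire frame to the display:
pygame.display.flip()

# Or push only the regions that actually changed this frame:
dirty = [old_player_rect, new_player_rect]
pygame.display.update(dirty)
```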
Take a look at this question: Why is this small (155 lines-long) Pacman game on Python running so slow?; it has a similar topic.
One thing I tried when I had a map composed of tiles with some sprites on it: I kept a precompiled image of the map covering the currently displayed area plus some 200 pixels around it, so that I could blit the prepared "ground" (still only in updated parts) without having to blit every tile it contains. That does take quite some thought, especially if you have multiple layers and parts of the map that can sit above your active sprites. It is interesting to think and work through, but I cannot tell you how much you will gain by it.
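A bare-bones sketch of that precompile idea (tile size, level layout, and names are assumptions):

```python
import pygame

TILE = 32

def build_map_surface(level, tile_images):
    """Blit every tile once onto a single big surface at load time."""
    rows, cols = len(level), len(level[0])
    surface = pygame.Surface((cols * TILE, rows * TILE))
    for row_i, row in enumerate(level):
        for col_i, tile_id in enumerate(row):
            surface.blit(tile_images[tile_id], (col_i * TILE, row_i * TILE))
    return surface

# Per frame: one blit of the visible window instead of 28 x 28 tile blits.
# screen.blit(map_surface, (0, 0),
#             area=pygame.Rect(cam_x, cam_y, 28 * TILE, 28 * TILE))
```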
One totally different possible solution: I began with pygame once (having done SDL in C++ before that), and was recently directed to another Python gaming library: pyglet. It does not suffer from the cost of updating the whole screen as much as pygame (I think because it uses OpenGL acceleration; it still works on my not-at-all-accelerated eee netbook). If you are not bound to pygame in any way, pyglet might be worth a look.

How to create a picture with animated aspects programmatically

Background
I have been asked by a client to create a picture of the world which has animated arrows/rays that come from one part of the world to another.
The rays will be randomized, will represent a transaction, will fade out after they happen and will increase in frequency as time goes on. The rays will start in one country's boundary and end in another's. As each animated transaction happens a continuously updating sum of the amounts of all the transactions will be shown at the bottom of the image. The amounts of the individual transactions will be randomized. There will also be a year showing on the image that will increment every n seconds.
The randomization, summation and incrementing are not a problem for me, but I am at a loss as to how to approach the animation of the arrows/rays.
My question is what is the best way to do this? What frameworks/libraries are best suited for this job?
I am most fluent in python so python suggestions are most easy for me, but I am open to any elegant way to do this.
The client will present this as a slide in a presentation on a Windows machine.
"The client will present this as a slide in a presentation on a Windows machine."
I think this is the key to your answer. Before going to a 3D implementation and writing all the code in the world to create this feature, you need to look at the presentation software. Chances are, your options will boil down to two things:
Animated Gif
Custom Presentation Scripts
Obviously, an animated GIF is not ideal, because it repeats once it is done rendering, and making it last a long time would make for a large file.
Custom presentation scripts would probably be the other way to let him bring it up in a presentation without running any side programs or doing anything strange. I'm not sure which presentation application is the target, but that could be valuable information.
He sounds fairly non-technical and is requesting something he doesn't realize will be difficult. I think you should come up with some options, explain the difficulty of implementing them, and suggest a solution that falls into the 'bang for your buck' range.
If you are adventurous use OpenGL :)
You can draw bezier curves in 3D space on top of a textured plane (an earth map), specify a thickness for them, and draw a point (a small cone) at the end. It's easy and it looks nice. The problem is learning the basics of OpenGL if you haven't used it before, but that would be fun and probably useful if you're into graphics programming.
You can use OpenGL from python either with pyopengl or pyglet.
If you make the animation this way, you can capture it to an AVI file (using Camtasia or something similar) that can be put onto a presentation slide.
It depends largely on the effort you want to expend on this, but the basic outline of an easy way would be to load an image of an arrow and use a drawing library to color it and rotate it in the direction you want it to point (or draw it using shapes/curves). Then, to actually animate it, interpolate between the coordinates based on time.
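A small sketch of that outline using pygame (the arrow image, coordinates, and fade handling are placeholders):

```python
import math
import pygame

def draw_arrow(screen, arrow_img, start, end, t):
    """Draw arrow_img at fraction t (0..1) along start -> end, rotated to match.

    Assumes the source image points to the right at angle 0.
    """
    x = start[0] + (end[0] - start[0]) * t
    y = start[1] + (end[1] - start[1]) * t
    # Screen y grows downward, so negate dy to get the on-screen angle.
    angle = math.degrees(math.atan2(start[1] - end[1], end[0] - start[0]))
    rotated = pygame.transform.rotate(arrow_img, angle)
    screen.blit(rotated, rotated.get_rect(center=(int(x), int(y))))
```

Advance t from elapsed time each frame; the fade-out could be done with arrow_img.set_alpha() once a transaction completes.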
If it's just for a presentation, though, I would use Macromedia Flash or a similar animation program (it would do the same as above, but you don't need to program anything).

How do I track an animated object in Python?

I want to automate playing a video game with Python. I want to write a script that can grab the screen image, diff it with the next frame and track an object to click on. What libraries would be useful for this other than PIL?
There are a few options here. The brute-force diffing approach will lead to a lot of frustration unless what you're tracking is very consistent. You could use any number of genetic approaches to train your program what to follow; after enough generations it would do the right thing reliably. If the thing you want to track is visually obvious (like a red ball on a white screen), then you could detect it yourself through simple brute-force scanning of the bitmap.
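As an illustration of that brute-force scan using PIL (the colour thresholds and the 4-pixel stride are arbitrary):

```python
from PIL import ImageGrab  # Pillow

def find_red_ball(threshold=200):
    """Return (x, y) of the first strongly red pixel on screen, or None."""
    frame = ImageGrab.grab()
    pixels = frame.load()
    for y in range(0, frame.height, 4):      # coarse stride for speed
        for x in range(0, frame.width, 4):
            r, g, b = pixels[x, y][:3]
            if r > threshold and g < 80 and b < 80:
                return x, y
    return None
```

Diffing two consecutive grabs instead of matching a colour works the same way, just comparing pixels[x, y] across frames.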
Another approach would be to look at the memory of the running app and figure out which area controls the position of your object. For some more info and ideas on this, see how Mumble got 3D positional audio working in various games.
http://mumble.sourceforge.net/HackPositionalAudio
The answer would also depend on the platform and the game. For example, I once did something similar for a helicopter Flash game; it was a very simple 2D game with a well-defined colored maze. It was on Windows, using copy-to-clipboard and Win32 key events through the win32api bindings for Python.
