We got to the point where we wanted to start moving the mothership around, and again, using a setTranslation model meant we had to derive equations to move it to a specific location on screen. We did learn a lot about splines, and I was happy that Shiva accommodated us on our fool's errand by providing functions to evaluate BSpline, Bezier and CatmullRom curves. I call this translation model the Infinite Momentum Model. We were trying to use the dynamics system for collisions, but as you can guess, setting a translation every frame completely breaks the collision system. For collisions to work, the dynamics system needs the opportunity to let the results of a collision take effect. By setting the translation every frame, you override the result of the collision as if your object had infinite momentum.
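To make the failure concrete, here is a toy 1D sketch in Python (not Shiva code; the names, numbers, and the `separate` solver are all made up for illustration) of what happens when a script dictates the translation each frame: the solver's correction to the drone is thrown away on the next frame, so all of the separation goes to the other body.

```python
def separate(a, b, min_dist=1.0):
    """Stand-in collision response: push overlapping bodies apart equally."""
    overlap = min_dist - (b - a)
    if overlap > 0:
        a -= overlap / 2
        b += overlap / 2
    return a, b

planet = 5.0
drone = 0.0
for frame in range(1, 60):
    drone = 0.2 * frame                      # script re-imposes the translation...
    drone, planet = separate(drone, planet)  # ...so only the planet can yield

print(planet)  # the planet has been shoved well past its starting position
```

Because the drone's translation is re-imposed every frame, the only body the solver can effectively move is the planet, which is exactly the "planets pushed off the screen" behavior we saw.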
In game, this manifested as drone ships colliding with planets and pushing them off the screen... Our model was clearly broken for a few reasons:
We discovered the new implementation through a question on the game development Stack Exchange site. One of the answers linked to a paper by Craig W. Reynolds titled Steering Behaviors For Autonomous Characters. For anyone looking to implement AI steering, this is a gold mine. The AI behaviors we implemented from this paper were:
We also implemented some higher-level behaviors that simply mix the lower-level behaviors together, like OffsetPursueFireAvoid to do a shooting flyby. In this video you can see an example of the Arrive and Pursuit behaviors. The red ship is being controlled by mouse clicks while the green ship is pursuing.
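For anyone who hasn't read the Reynolds paper yet, the core idea is tiny: steer by the difference between your desired velocity and your current velocity, clamped to a maximum force. Here is a minimal 2D Python sketch (not our Shiva script; all names and parameters are illustrative) of Seek and the Pursuit behavior shown in the video:

```python
import math

def seek(pos, vel, target, max_speed, max_force):
    """Steer toward a static target (Reynolds: steering = desired - velocity)."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy) or 1.0
    desired = (dx / dist * max_speed, dy / dist * max_speed)
    steer = (desired[0] - vel[0], desired[1] - vel[1])
    mag = math.hypot(*steer)
    if mag > max_force:                      # clamp to the maximum turning force
        steer = (steer[0] / mag * max_force, steer[1] / mag * max_force)
    return steer

def pursue(pos, vel, target_pos, target_vel, max_speed, max_force):
    """Seek the target's predicted future position instead of where it is now."""
    dist = math.hypot(target_pos[0] - pos[0], target_pos[1] - pos[1])
    t = dist / max_speed                     # crude look-ahead time
    future = (target_pos[0] + target_vel[0] * t,
              target_pos[1] + target_vel[1] * t)
    return seek(pos, vel, future, max_speed, max_force)
```

The mixed behaviors like OffsetPursueFireAvoid are just weighted sums of steering vectors like these, applied as a force each frame instead of a translation.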
This video is an example of OffsetPursueFireAvoid. Also at the end, you can see an example of OffsetPursuit being used to create a formation of drones.
I wanted all AI behaviors to have a self-preservation priority, meaning they don't just crash directly into the planet if it lies in their path. To accomplish this, we used the following basic flowchart.
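The flowchart boils down to a simple priority check each frame. A sketch of that arbitration in Python (illustrative names, not our actual script):

```python
def choose_steering(sensor_hit, active_behavior, avoid_behavior):
    """Pick the steering to apply this frame: self-preservation wins."""
    if sensor_hit:                 # navigation sensor reports impending collision
        return avoid_behavior()
    return active_behavior()       # otherwise run the assigned AI behavior
```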
One disadvantage of this behavioral flow is that you get bouncies: when close to an obstacle, the drone will sometimes bounce between Avoid and another AI behavior quite rapidly. I suspect there might be an advantage to building some hysteresis into this flowchart, but that is on the back-burner.
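One way that hysteresis could work (a hypothetical sketch, not something we've implemented): once Avoid triggers, latch it on for a minimum number of frames so the drone can't flip back on the very next frame.

```python
class AvoidArbiter:
    """Hypothetical hysteresis: hold Avoid active for a minimum frame count."""

    def __init__(self, hold_frames=15):
        self.hold_frames = hold_frames
        self.frames_left = 0

    def choose(self, sensor_hit):
        if sensor_hit:
            self.frames_left = self.hold_frames  # (re)arm the latch
        if self.frames_left > 0:
            self.frames_left -= 1
            return "avoid"
        return "behavior"                        # resume the normal AI behavior
```

Tuning `hold_frames` would trade responsiveness for smoothness; too large and the drone overshoots its avoidance turn.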
You'll notice in the first video above that I begin in debug mode, so you can see the sensors attached to the ship. Our drone ship has 2 sensors, each with a unique sensor ID.
The navigation sensor is what protrudes from the front of the ship and alerts the ship to an impending collision. Right now, the drones avoid only the planet, but we could easily add other things for them to avoid (like each other). The length of the navigation sensor is supposed to be a function of the object's maximum velocity. We didn't implement this dynamic sizing because modifying the sensor size and offset programmatically sounded problematic. Instead, to determine the required length of the sensor, we followed a 3-step process.
Before looking at the process, it is important that your object's mass is set correctly in the dynamics controller. The default mass is something like 75. Because we are dealing with planets and motherships, we set up our mass scale like this:
The process to determine the required sensor length is then:

* Choose the maximum velocity of the drone. This is based on what feels right: too fast, too slow, just right.
* Choose the maximum turning force of the drone. Again, this is based on feel; the drone was either turning in arcs that were too large or turning far too quickly.
* Set the sensor length so that the drone has time to avoid a collision when approaching a planet dead center at maximum velocity.
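Step 3 can also be estimated with back-of-the-envelope math instead of pure trial and error. A sketch, assuming the drone dodges by building up one planet-radius of lateral offset under its maximum turning force (all numbers here are made up for illustration, not our actual tuning values):

```python
import math

def required_sensor_length(max_speed, max_turn_force, mass, planet_radius,
                           margin=1.2):
    """Distance needed to shift sideways by planet_radius while closing at
    max_speed, padded by a safety margin."""
    a_lat = max_turn_force / mass             # lateral acceleration (F = m*a)
    t = math.sqrt(2 * planet_radius / a_lat)  # time to offset by planet_radius
    return margin * max_speed * t             # distance covered in that time

length = required_sensor_length(max_speed=20.0, max_turn_force=50.0,
                                mass=10.0, planet_radius=8.0)
print(length)
```

This is also the core of the dynamic sizing model: the length scales linearly with max velocity and with the square root of the mass-to-turning-force ratio.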
If I ever have to go back and tweak this, I might just try the dynamic sizing model.
I am incredibly pleased with how simple this AI model is to use. I have overloaded handlers in the model so that I can simply say onSeek(this.someTarget) or onSeek(x, z).
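For flavor, here is roughly what that overloading looks like, sketched in Python rather than our Shiva script (the `Drone` class, attribute names, and dispatch-by-arity are illustrative): one entry point accepts either a target object or raw coordinates.

```python
class Drone:
    """Illustrative handler overloading: one onSeek-style entry point."""

    def on_seek(self, target_or_x, z=None):
        if z is None:
            # Called as on_seek(target): read the target's position
            x, z = target_or_x.x, target_or_x.z
        else:
            # Called as on_seek(x, z): raw coordinates
            x = target_or_x
        self.goal = (x, z)
```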