
ATAVISM DEV LOG 5 – AI Groundwork

Week 5: January 22 – 28

The actual meat of this week’s blog is fairly technical, with talk about the AI framework I’ve been implementing. If you’re into that, read on; if you aren’t, you still get a neat video and a progress plan at the end. Let’s jump right in.

Decisions, Decisions, Decisions

This is the “FoundNextZone” decision for the hare, where he checks for any waypoints lined up in his queue.

If you’ve never heard of it, there’s a good pattern for building an AI framework in Unity: using ScriptableObjects instead of MonoBehaviours to create inheritable, instantiable script assets. A simple AI MonoBehaviour on the entity then runs custom state objects filled with instances of these custom-coded pieces.

If you didn’t follow any of that, it means an AI framework made of lots of small behavior pieces that, once written, can be neatly rearranged without diving back into code.

In the Unity-hosted tutorial on this system, the pieces are broken down into Decisions, Transitions, and Actions, all of which execute constantly via a State object called from your AI controller’s update function. The states are created like any other asset, such as materials or meshes, and can be made, customized, and filled with Actions that execute every update and Decisions that are checked every update before moving to one state or another. This “generic” kind of framework is incredibly open to customization, and combined with a one-time Action type I added on top of it, it’s a versatile way of writing a ton of behaviors and managing their terms of execution from within the Inspector itself.
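For the curious, the pattern above can be sketched roughly like this. This is a minimal sketch, not the post’s actual code: the class names follow Unity’s pluggable-AI tutorial pattern, and the `OneShotAction` flag is my guess at how the “one-time Action” type might work.

```csharp
using UnityEngine;

// Base pieces: Decisions return a verdict, Actions do work each update.
public abstract class Action : ScriptableObject
{
    public abstract void Act(StateController controller);
}

public abstract class Decision : ScriptableObject
{
    public abstract bool Decide(StateController controller);
}

[System.Serializable]
public class Transition
{
    public Decision decision;   // checked every update
    public State trueState;    // switch here if the decision passes
    public State falseState;   // otherwise switch (or stay) here
}

// A State is just an asset: fill its arrays in the Inspector.
[CreateAssetMenu(menuName = "AI/State")]
public class State : ScriptableObject
{
    public Action[] actions;
    public Transition[] transitions;

    public void UpdateState(StateController controller)
    {
        foreach (var action in actions)
            action.Act(controller);

        foreach (var t in transitions)
            controller.TransitionToState(
                t.decision.Decide(controller) ? t.trueState : t.falseState);
    }
}

// The single MonoBehaviour that drives everything from Update().
public class StateController : MonoBehaviour
{
    public State currentState;

    void Update() => currentState.UpdateState(this);

    public void TransitionToState(State next)
    {
        if (next != null && next != currentState)
            currentState = next;
    }
}
```

Because every Action, Decision, and State is a ScriptableObject asset, new behaviors are arranged by dragging assets into arrays in the Inspector rather than editing code.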

The circles around the hares indicate their current state: brown means they’re going somewhere, and blue means they’re taking a moment to scan for danger and food.

While I spent most of my time this week implementing this framework and learning to use it effectively, what I was actually building was some AI for the hare to run between plants. So far I’ve only activated and debugged running between waypoints (seen in the video here), but almost all of the inert code is in place for waking up, scanning for and running from danger, smelling out plants and eating them if they’re not currently occupied, and hiding until danger is out of the vicinity, all while changing sightline based on the current state. You can see the state colors in the sphere gizmos above: brown means the hare is making its way to a waypoint, and blue means it’s scanning for plants and predators.
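Within that framework, the waypoint-running piece could look something like the sketch below. To be clear, this is my illustration, not the post’s code: “FoundNextZone” is the decision named in the screenshot caption, but the `WaypointQueue` helper and its members are hypothetical, and I’m assuming a `Decision` base class like the tutorial’s.

```csharp
using System.Collections.Generic;
using UnityEngine;

// A guess at a "FoundNextZone"-style decision: true while the hare
// still has waypoints queued up, so the brown "go somewhere" state holds.
[CreateAssetMenu(menuName = "AI/Decisions/FoundNextZone")]
public class FoundNextZoneDecision : Decision
{
    public override bool Decide(StateController controller)
    {
        // WaypointQueue is a hypothetical component on the hare.
        var queue = controller.GetComponent<WaypointQueue>();
        return queue != null && queue.HasNext;
    }
}

// Hypothetical helper holding the hare's queued waypoints.
public class WaypointQueue : MonoBehaviour
{
    public Queue<Transform> waypoints = new Queue<Transform>();

    public bool HasNext => waypoints.Count > 0;
    public Transform Next() => waypoints.Dequeue();
}
```

When the queue runs dry the decision fails, which is the natural point to transition into the blue scanning state.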

Next Week, and further still


I hope to have the above paragraph’s AI done by next week, but I’m not too sure how long it’ll take to bug-squash. Afterwards, I’ll continue to mop up a few aforementioned bugbears, such as ADS FOV, switching weapons while reloading, cartridge ejection when firing the rifle, and an Inspector tool for choosing an initial weapon.

My current plan of implementing just this and that until I’ve thought of something better to do is coming to a close. While there’s plenty of random stuff left to implement, the end of the work above is going to mark an end to the kind of “pre-prototype” time I’m in and morph into a more structured model of development as I try to hit a deadline at the end of the semester. I’ll talk more about this, and the contents of that plan, in next week’s blog, but it could be a few weeks yet due to an upcoming game jam on the second weekend of February. I’ll cover that more next week specifically.

Until then, cheers.
