20110710

Artificial Intelligence Software Class @ AHA Hackerspace

  Yesterday, I had the pleasure of attending a free introductory class on artificial intelligence presented by All Hands Active, a hackerspace located in downtown Ann Arbor. The experience taught me a few things. Thinking about how to think is not easy, but boy oh boy does it have potential. And artificial intelligences are certainly fun to think about.

When the general public thinks of AI, it tends to think of the things Hollywood has thrown at us over the decades. Some examples would be HAL from 2001: A Space Odyssey, or Gerty from the more recent film Moon. Science fiction junkies like myself tend to imagine a general artificial intelligence that may lead to a technological singularity, like the ones depicted in Accelerando by Charles Stross or The Metamorphosis of Prime Intellect by Roger Williams.

Well, the AI we were working with in our two-hour introductory class was nothing like the ones depicted by Hollywood or the sci-fi stories I've read. Instead, it was what one might call "weak AI", meaning the AI has a very narrow task in mind and is not capable of thinking in a general or abstract way like a human would.

Our class revolved around cognitive architecture software called Soar, which was developed at Carnegie Mellon University in 1982 and is now maintained by the University of Michigan. This software allows us to write the code, or rules, for the way our AI thinks about its environment and the objects within that environment.

A cognitive architecture is, simply put, a model of human cognition. Defense companies like the ones who developed the Predator drones in operation over Afghanistan use cognitive architectures like Soar to create the autonomous vehicle behavior that has now become a reality on the battlefield. Soar has also been used to develop the game AI of the popular video games America's Army and Quake.

Our instructor, Jonathon Beard, told us that the overall research goal of the Soar project is human-level intelligence. For that to be achieved, our AI must meet several criteria: it must be autonomous, adaptive, persistent, real-time, general, and robust, and it must be able to assess its surroundings and then make decisions.

Now, the AI systems in place at present in various industries have some of these capabilities to varying degrees. But these are still systems that are not, in any meaningful sense, "conscious", even though in very specific cases they are able to assess their environment and make decisions based on it.


The simple AI we were working with involves something called the problem space computational model (PSCM). This is the model that gives an AI the ability to interact with an environment. The PSCM involves the following:
-input from the environment
-state: what do I know right now?
-operator: transitions between states
    -elaborate states: characterize and make generalizations about objects (alive or dead? green or blue?)
    -propose, select, then apply operators
    -output
Each step from the input to the output is important when it comes to making decisions. And when our AI is making decisions, it's not working through them one by one; it works in parallel. This fact is what particularly amazed me about this technology. While our AI was taking in data about its environment, it was making decisions on a top level called a "state" and was working out other decisions concurrently on lower levels called "elaborate states".
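To make the propose/select/apply loop a little more concrete, here's a minimal sketch of one PSCM-style decision cycle in Python. The environment (a traffic light and a fuel gauge), the operator names, and the preference ordering are all my own hypothetical illustrations, not actual Soar syntax or rules from the class:

```python
# Hypothetical sketch of one PSCM decision cycle:
# input -> state -> propose operators -> select one -> apply it -> output.

def propose_operators(state):
    """Propose every operator whose conditions match the current state."""
    proposals = []
    if state["light"] == "green":
        proposals.append("go")
    if state["light"] == "red":
        proposals.append("stop")
    if state["fuel"] < 10:
        proposals.append("refuel")
    return proposals

def select_operator(proposals):
    """Pick one operator; real architectures use preferences, not a fixed order."""
    for preferred in ("refuel", "stop", "go"):
        if preferred in proposals:
            return preferred
    return None

def apply_operator(state, operator):
    """Apply the selected operator, producing a new state (the 'output' step)."""
    state = dict(state)
    if operator == "go":
        state["moving"] = True
    elif operator == "stop":
        state["moving"] = False
    elif operator == "refuel":
        state["fuel"] = 100
    return state

# One pass through the cycle from a sample input.
state = {"light": "green", "fuel": 5, "moving": False}
op = select_operator(propose_operators(state))
state = apply_operator(state, op)
print(op, state)  # refuel wins because fuel is low, despite the green light
```

Notice that proposal is separate from selection: all matching operators are gathered first, then one is chosen, which is what lets the architecture weigh competing options in parallel before acting.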

All of this was enabled by a very important factor: memory. Both short- and long-term memory are critical in helping an artificial intelligence make decisions based on things that have happened in the past and are happening in the present.
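One rough way to picture that split: short-term memory is the current state the agent is reasoning about right now, while long-term memory persists across decision cycles. The class structure and names below are my own illustration, not how Soar actually organizes its memories:

```python
# Hypothetical sketch: working (short-term) memory holds the current
# situation, while an episodic (long-term) store keeps past situations
# that later decisions can consult.

class Agent:
    def __init__(self):
        self.working_memory = {}   # short-term: what do I know right now?
        self.episodic_memory = []  # long-term: what has happened before?

    def perceive(self, percept):
        # Archive the old situation before overwriting it with new input.
        if self.working_memory:
            self.episodic_memory.append(dict(self.working_memory))
        self.working_memory.update(percept)

    def seen_before(self, key, value):
        # A decision can draw on the past as well as the present.
        return any(ep.get(key) == value for ep in self.episodic_memory)

agent = Agent()
agent.perceive({"light": "red"})
agent.perceive({"light": "green"})
print(agent.seen_before("light", "red"))  # True: the red light is remembered
```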

This class was extremely informative for an introductory class, and I'd recommend looking into the Soar architecture, which is free to download, if you have an interest in working with AI.