How does AI work? An introduction to intelligent agents for non-computer scientists

Otto Lang
6 min read · Oct 21, 2019
Artificial Intelligence episode 2

How does an AI system perceive its environment, and how does it act in it? This article covers the concept of Artificial Intelligence most common among computer scientists: that of intelligent agents. Moreover, we will discuss the nature of the environments in which AI systems, better called intelligent agents, operate, and hence how these agents are designed to execute their tasks successfully.

If you want to dive into the concept of AI from the early beginnings of research, learn more about the four schools of thought behind AI, and explore the six major fields of AI, I recommend reading my last article (episode 1).

What is an Intelligent Agent?

An agent is simply anything that acts. The word agent comes from the Latin verb agere, which means "to do" or "to make".

An intelligent agent is anything that:

  1. perceives its environment through sensors and
  2. acts upon that environment through actuators.
Concept of an intelligent agent

Let me give you an example: a human agent senses the world (its environment) through its eyes, ears, and other organs. Right now you are reading this sentence and trying to make sense of it through your central processing unit, the brain, and you may later act on this article by giving it a clap or sharing it. A robotic agent, like an autonomous vehicle, perceives its environment through cameras, LIDAR, infrared cameras, and other sensors. The vehicle's actuators are the engine, brakes, air conditioning, and other motors. The environment in which the vehicle operates consists of roads, highways, traffic lights and signs, other traffic participants, and so on.

An autonomous vehicle as an intelligent agent

An AI-based software agent such as Grammarly receives keystrokes from the user, file inputs, and browser information, and displays its responses on the screen or writes them to a file.

To act, the agent relies on its sensor inputs. In computer science, the term percept sequence describes the agent's complete history of perceptual inputs. If an autonomous car needs to change lanes, it will check with its rear and side sensors whether the target area is safe to drive into, and then act. Before Grammarly can correct a word the user has typed, it needs some input; only then can it process the query and return a reasonable result to the user.

A software-based intelligent agent like Grammarly

The infographics above describe this repeating cycle: the agent perceives changes in the environment through its sensors, processes the input, and finally acts through its actuators, before the process starts all over again.
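For the programmers among you, here is a toy sketch of this cycle in Python. The sensor and decision functions are made-up stand-ins, not a real robot API:

```python
# Toy sketch of the perceive-process-act cycle (all names are stand-ins).

def read_sensors(step):
    """Fake sensor: reports whether an obstacle is ahead."""
    return {"step": step, "obstacle_ahead": step % 3 == 0}

def choose_action(percept_sequence):
    """Fake agent function: decides based on the latest percept."""
    return "brake" if percept_sequence[-1]["obstacle_ahead"] else "drive"

percept_sequence = []  # the agent's complete history of perceptual inputs

for step in range(5):                         # five rounds of the cycle
    percept = read_sensors(step)              # 1. perceive
    percept_sequence.append(percept)          # 2. remember
    action = choose_action(percept_sequence)  # 3. decide
    print(step, action)                       # 4. act (here: just print)
```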

So far so good, but how does this agent work in more detail?

Each intelligent agent needs a so-called agent function that maps any given percept sequence to a reasonable action. The agent function is an abstract mathematical description; the agent program is its concrete implementation, running within some physical system.

To illustrate this abstract idea of an agent function, consider this simplified example of a robotic lawnmower.

The simplified lawnmower example

It shall act according to the following rules:

If the grass is high and the battery is full, the lawnmower will move out and mow the grass.
If the battery is about to run out of energy, the lawnmower will return to its base to charge.
If an actuator breaks down, the lawnmower will display an error message and inform the owner.

As you can see, an agent function is nothing more than a set of inputs, rules, and desired outputs. If we implement such an agent function in a machine, we refer to it as an agent program. But how do we ensure that the intelligent agent performs well on its given tasks?
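As a minimal sketch, the three rules above could be turned into an agent program like this. All percept and action names are hypothetical, chosen only for illustration:

```python
# A minimal sketch of the lawnmower's agent program (illustrative only).
# Percepts and action names are hypothetical, not a real robot API.

def lawnmower_agent(grass_high, battery_level, actuators_ok):
    """Map the current percept to an action, following the three rules."""
    if not actuators_ok:
        return "display_error_and_notify_owner"
    if battery_level < 0.1:                    # battery about to run out
        return "return_to_base_and_charge"
    if grass_high and battery_level >= 0.99:   # grass high, battery full
        return "move_out_and_mow"
    return "wait_at_base"                      # no rule fired: stay put

# Example: high grass, full battery, healthy actuators -> mow
print(lawnmower_agent(grass_high=True, battery_level=1.0, actuators_ok=True))
```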

To answer this question, we introduce the concept of rationality. A rational agent is one that does the right thing. But how do you explain doing the right thing to a machine? The agent needs some performance measure to evaluate which action to take given a percept sequence. You probably wouldn't want your lawnmower to wander around all day, but rather to fulfill its duty as quickly and as energy-efficiently as possible. Consequently, what a rational intelligent agent does depends on four things:

  1. The performance measure that defines the criterion of success.
  2. The agent’s prior knowledge of the environment.
  3. The actions that the agent can perform.
  4. The agent’s percept sequence to date.

And this leads to the definition of a rational agent:

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
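Expressed as a purely illustrative sketch, this definition boils down to picking the highest-scoring action. The expected_performance function below is a made-up stand-in for the agent's built-in knowledge and the evidence from its percept sequence:

```python
# Illustrative sketch of rational action selection (not a real system).

def rational_action(percept_sequence, actions, expected_performance):
    """Return the action with the highest expected performance."""
    return max(actions, key=lambda a: expected_performance(percept_sequence, a))

# Made-up stand-in for the agent's knowledge: prefer mowing on a full battery.
def expected_performance(percepts, action):
    battery_full = percepts[-1]["battery_full"]
    scores = {"mow": 10 if battery_full else -5, "charge": 3, "wait": 0}
    return scores[action]

print(rational_action([{"battery_full": True}],
                      ["mow", "charge", "wait"],
                      expected_performance))  # -> "mow"
```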

Let's illustrate this with our lawnmower example:

  1. The lawnmower should mow the lawn as fast as possible.
  2. The lawnmower might have a map of the garden, with boundaries, obstacles, and the charging station.
  3. The lawnmower can accelerate and brake. It can charge, mow, and move. It can move left, right, up, and down on the garden grid.
  4. Its current position is the charging station, and its battery is full.
The environment of our lawnmower

Given this information, the lawnmower will move out and start mowing. We might consider an implementation where the agent gets a point for every patch of the garden it mows and a negative point whenever it revisits an already mowed patch. Over time, the negative points on a patch should vanish because the grass has grown back; if a patch kept punishing the agent forever, the agent would never return to it, and the grass there would grow into an ever-taller meadow. Still, if you look at the graphic above, you will notice that the top left patch is surrounded by negative rewards. A rational lawnmower that wants to maximize its rewards would therefore not reach the top left patch any time soon. Hence, in practice, it is more common to define the performance measure in terms of the desired goal state and let the agent work out the best solution itself. What the lawn owner cares about most is that the machine moves as little as possible and that the lawn is always neatly mown.

To the extent that an agent relies on the prior knowledge of its designer rather than on its own percepts, we say that the agent lacks autonomy. We prefer an autonomous agent that can adapt to a changing environment. Dealing with complex and changing environments is a real challenge for intelligent agents. One approach is to program agents to be as flexible as possible, as described above. Another is to make agents able to learn from their perceptions, a topic to which I will dedicate a whole post.
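To make the reward idea concrete, here is a toy sketch of such a reward grid in Python. All values and function names are made up for illustration; a real robotic mower would not be built this way:

```python
# Toy reward grid for the garden (hypothetical values).
# +1: unmowed patch, worth visiting; -1: freshly mowed, revisiting is penalized.
# Note how the top left patch (+1) is walled in by -1 cells, as in the graphic.

garden = [
    [+1, -1, -1],
    [-1, -1, +1],
    [+1, +1, +1],
]

def reward(row, col):
    """Reward the agent collects when it enters patch (row, col)."""
    return garden[row][col]

def regrow(row, col):
    """As the grass grows back, the penalty on a patch vanishes."""
    garden[row][col] = +1

print(reward(0, 0))  # +1: unmowed, attractive
print(reward(0, 1))  # -1: already mowed, the agent avoids it
regrow(0, 1)
print(reward(0, 1))  # +1 again after the grass has regrown
```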

So much for the basics. In the next blog post, I will be more specific about how to design an intelligent agent and distinguish between different types of intelligent agents. If you liked this blog post, please leave me a clap, share it, and get in touch with me on Medium, LinkedIn, or tendex.net.

About the author: Otto Lang studied Information Systems at TUM and is the founder of Tendex, an IT solutions company based in Munich. Tendex focuses on building web-based software solutions for clients, as well as stand-alone software products like instalics.com.

Sources:

»Grundlagen der Künstlichen Intelligenz« [Foundations of Artificial Intelligence] (IN2062), lectured by Professor Dr. Matthias Althoff at the Technical University of Munich
"Artificial Intelligence: A Modern Approach" by Stuart J. Russell and Peter Norvig
Icons from fontawesome.com
