The rate at which an animal responds on a schedule is one measure of the strength of the association it makes between response and reinforcement. For example, the interesting thing about FI schedules is that, in these terms, responses become more strongly associated with reinforcement as the time since SD onset increases. The animal does not learn the FI contingency precisely, but it does learn an approximation which keeps the effort it expends to obtain reinforcement relatively low while maintaining a good chance of obtaining that reinforcement almost as soon as it becomes available.

We can study the way animals choose between reinforcers by presenting them with the opportunity to respond on two schedules simultaneously, using a Skinner box with two levers and two SD lights. One lever might, for example, operate on a VR20 schedule while the other operates on a VR10. In a series of experiments with different combinations of schedules, we can discover how animals allot their responses between reinforcers of different values. It turns out that animals do not distribute their responses ideally - that is, they do not make all of their responses to the richer schedule - but again distribute their responses in a way which serves to minimise responding given only approximate information about the different contingencies. Animals allot responses between schedules in proportion to the numbers of reinforcers they obtain on each schedule. This is known as the matching law. It has been studied not only by psychologists but increasingly by economists, since this behaviour of rats often corresponds to economic behaviour in humans.
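The matching relation can be sketched numerically. A minimal illustration, with hypothetical reinforcer counts chosen only to show the proportionality (not data from any real experiment):

```python
def matching_share(reinforcers_a, reinforcers_b):
    """Matching law: the proportion of responses an animal allots to
    schedule A is predicted to equal the proportion of the total
    reinforcers it obtains on schedule A."""
    return reinforcers_a / (reinforcers_a + reinforcers_b)

# Hypothetical session totals: 40 reinforcers obtained on lever A,
# 20 on lever B, so matching predicts 2/3 of responses on A.
share_a = matching_share(40, 20)

# Predicted allocation of 900 total responses under strict matching.
responses_a = round(900 * share_a)   # responses on lever A
responses_b = 900 - responses_a      # responses on lever B
print(f"predicted share on A: {share_a:.3f}")
print(f"predicted responses: A={responses_a}, B={responses_b}")
```

Note that the prediction depends on reinforcers actually obtained, not on the programmed schedule values, which is why animals need only approximate information about the contingencies to behave this way.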

This document has been restructured from a lecture kindly provided by R.W.Kentridge.