Q1.4: What's a Classifier System (CFS)?

The name of the Game

First, a word on naming conventions is due, for no other paradigm of EC has undergone more changes to its name space than this one. Initially, Holland called his cognitive models "Classifier Systems", abbreviated CS, and sometimes CFS, as can be found in [GOLD89]. When Riolo came into play in 1986 and Holland added a reinforcement component to the overall design of a CFS that emphasized its ability to learn, the word "learning" was prepended to the name, giving "Learning Classifier Systems" (abbreviated LCS). On October 6-9, 1992 the "1st Inter'l Workshop on Learning Classifier Systems" took place at the NASA Johnson Space Center, Houston, TX. A summary of this "summit" of all leading researchers in LCS can be found on ENCORE (see Q15.3) in file CFS/papers/lcs92.ps.gz

Today the story continues: LCSs are sometimes subsumed under a "new" machine learning paradigm called "Evolutionary Reinforcement Learning", or ERL for short, incorporating LCSs, "Q-Learning" as devised by Watkins (1989), and a paradigm of the same name devised by Ackley and Littman [ALIFEIII]. This latter statement is somewhat confusing, as Marco Dorigo has pointed out in a letter to the editors of this guide, since Q-Learning has no evolutionary component. So please let the Senior Editor explain: when I wrote this part of the guide, I simply had in mind that Q-Learning would make a pretty good replacement for Holland's bucket-brigade algorithm, so I used this little speculation to see what would come of it; in early December 95, almost two years later, it finally caught Marco's attention. But meanwhile, I have been proven right: Wilson has suggested using Q-Learning in a CLASSIFIER SYSTEM (Wilson 1994), and Dorigo & Bersini (1994) have shown that Q-Learning and the bucket brigade are truly equivalent concepts.

We would therefore be allowed to call a CFS that uses Q-Learning for credit assignment, rather than a bucket brigade, a Q-CFS, Q-LCS, or Q-CS; in any case the result would be subsumed under the term ERL, even though Q-Learning itself is not an EVOLUTIONARY ALGORITHM! Interestingly, Wilson called his system ZCS (apparently no "Q" inside), while Dorigo & Bersini called their system D-Max-VSCS, or "discounted max very simple classifier system" (and if you know Q-Learning, at least the "D-Max" part of the name will remind you of that algorithm). The latter paper can be found on Encore (see Q15.3) in file CFS/papers/sab94.ps.gz

And by the way, in [HOLLAND95] the term "classifier system" is not used anymore; you cannot find it in the index. It's gone! Holland now stresses the adaptive component of his invention, and simply calls the resulting systems adaptive agents. These agents are then implemented within the framework of his recent invention called ECHO. (See http://www.santafe.edu/projects/echo for more.)

Alwyn Barry's LCS Web has a great deal of information on LCS. See: http://www.csm.uwe.ac.uk/~ambarry/LCSWEB/

On Schema Processors and ANIMATS

So, to get back to the question above, "What are CFSs?", we first might answer, "Well, there are many interpretations of Holland's ideas... what would you like to know in particular?"
And then we'd start with a recitation from [HOLLAND75] and [HOLLAND92], and explain all the SCHEMA processors, the broadcast language, etc. But we will take a more comprehensible and intuitive way to understand what CLASSIFIER SYSTEMs are all about. The easiest road to explore the very nature of classifier systems is to take the animat (ANIMAl + ROBOT = ANIMAT) "lane" devised by Booker (1982) and later studied extensively by Wilson (1985), who also coined the term for this approach. Work continues on animats, but it is often regarded as ARTIFICIAL LIFE rather than EVOLUTIONARY COMPUTATION. This thread of research even has its own conference series, "Simulation of Adaptive Behavior (SAB)" (cf Q12), which also includes other notions from machine learning, connectionist learning, evolutionary robotics, etc. [NB: the latter is obvious: if an animat lives in a digital microcosm, it can be put into the real world by implanting it into an autonomous robot vehicle that has sensors/detectors (camera eyes, whiskers, etc.) and effectors (wheels, robot arms, etc.); all that's needed is to use our algorithm as the "brain" of this vehicle, connecting the hardware parts with the software learning component. For a fascinating intro to the field see, e.g., Braitenberg (1984).]

Classifier systems, however, are yet another offspring of John Holland's aforementioned book, and can be seen as one of the early applications of GAs, for CFSs use this EVOLUTIONARY ALGORITHM to adapt their behavior to a changing ENVIRONMENT, as is explained below in greater detail.

Holland envisioned a cognitive system capable of classifying the goings-on in its environment, and then reacting to these goings-on appropriately. So what is needed to build such a system? Obviously, we need (1) an environment; (2) receptors that tell our system about the goings-on; (3) effectors that let our system manipulate its environment; and (4) the system itself, conveniently a "black box" in this first approach, that has (2) and (3) attached to it, and "lives" in (1).

In the animat approach, (1) is usually an artificially created digital world; e.g., in Booker's Gofer system, a two-dimensional grid that contains "food" and "poison". The Gofer itself walks across this grid and tries (a) to learn to distinguish between these two items, and (b) to survive well fed. Much the same holds for Wilson's animat, called "*". Yes, it's just an asterisk, and a "Kafka-esque naming policy" is one of the signposts of the whole field; e.g., the first implementation by Holland and Reitman (1978) was called CS-1 (cognitive system 1); Smith's poker player LS-1 (1980) followed this "convention". Riolo's 1988 LCS implementations on top of his CFS-C library (cf Q20) were dubbed FSW-1 (Finite State World 1) and LETSEQ-1 (LETter SEQuence predictor 1). From the latter paragraph we can conclude that "environment" can also mean something completely different (e.g. an infinite stream of letters, time series, etc.) from the animat approach; but anyway, we'll stick to the animat view and go on.

Imagine a very simple animat, e.g. a simplified model of a frog. Now, we know that frogs live in (a) Muppet Shows or (b) little ponds; so we choose the latter as our demo environment (1), and the former for a non-Kafka-esque name for our model: let's dub it "Kermit". Kermit has eyes, i.e. sensorial input detectors (2); hands and legs, i.e. environment-manipulating effectors (3); and is a spicy-fly-detecting-and-eating device, i.e. a frog (4); so we have all four pieces needed.
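To make this four-part decomposition concrete before we open the black box, here is a minimal Python sketch. All class and method names are hypothetical illustrations (not taken from any CFS implementation), and the black box is left as a stub to be filled in below.

# Minimal sketch of the four-component animat set-up (hypothetical names).
# The "black box" is deliberately a placeholder; the classifier machinery
# that belongs inside it is described in the next section.
import random

class PondEnvironment:                      # (1) the environment
    """Toy world that emits simple observations and accepts actions."""
    def observe(self):
        return random.choice([("small", "flying", "left"),
                              ("small", "flying", "right"),
                              ("large", "looming", None)])
    def act(self, action):
        print("environment receives action:", action)

class Detectors:                            # (2) receptors
    """Turn raw observations into messages the black box understands."""
    def sense(self, env):
        return env.observe()

class Effectors:                            # (3) environment-manipulating outputs
    """Turn the black box's decisions into actions on the environment."""
    def execute(self, env, action):
        env.act(action)

class BlackBox:                             # (4) the system itself (stub)
    """Placeholder decision maker; a classifier system will go here."""
    def decide(self, message):
        return "jump away" if message[0] == "large" else "eat it"

# One sense-decide-act cycle of the animat:
env, detectors, effectors, agent = PondEnvironment(), Detectors(), Effectors(), BlackBox()
effectors.execute(env, agent.decide(detectors.sense(env)))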
Inside the Black Box

The most primitive "black box" we can think of is a computer. It has inputs (2) and outputs (3), and a message passing system in between that converts (i.e., computes) certain input messages into output messages, according to a set of rules, usually called the "program" of that computer. From the theory of computer science we now borrow the simplest of all program structures, something called a "production system", or PS for short. A PS was shown to be computationally complete by Post (1943), which is why it is sometimes called a "Post system", and later by Minsky (1967). Although it merely consists of a set of if-then rules, it is still as powerful as a full-fledged computer.

We now term a single "if-then" rule a "classifier", and choose a representation that makes it easy to manipulate these rules, for example by encoding them into binary strings. We then term the set of classifiers a "classifier population", and immediately know how to breed new rules for our system: just use a GA to generate new rules/classifiers from the current POPULATION! All that's left are the messages floating through the black box. They should also be simple strings of zeroes and ones, and are kept in a data structure we call "the message list".

With all this given, we can imagine the goings-on inside the black box as follows: The input interface (2) generates messages, i.e. 0/1 strings, that are written onto the message list. Then these messages are matched against the condition parts of all classifiers, to find out which actions are to be triggered. The message list is then emptied, and the encoded actions, themselves just messages, are posted to the message list. Then the output interface (3) checks the message list for messages concerning the effectors, and the cycle restarts. Note that it is possible in this set-up to have "internal messages", because the message list is not emptied after (3) has checked it; thus, in the next cycle the input interface messages are added to a list that is not initially empty. (cf Algorithm CFS, LCS below)

The general idea of the CFS is to start from scratch, i.e. from a tabula rasa (without any knowledge), using a randomly generated classifier population, and to let the system learn its program by induction (cf Holland et al. 1986). This presupposes that the input stream can be reduced to recurrent input patterns that occur over and over again, enabling the animat to classify its current situation/context and react to the goings-on appropriately.

What does it take to be a frog? Let's take a look at the behavior emitted by Kermit. It lives in its digital microwilderness, where it moves around randomly. [NB: This seemingly "random" behavior is not that random at all; for more on instinctive, i.e. innate, behavior of non-artificial animals see, e.g., Tinbergen (1951).] Whenever a small flying object appears that has no stripes, Kermit should eat it, because it's very likely a spicy fly or some other flying insect. If it has stripes, the insect is better left alone, because Kermit had better not bother with wasps, hornets, or bees. If Kermit encounters a large, looming object, it immediately uses its effectors to jump away, as far as possible.
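Before writing down Kermit's rules, the match cycle just described can be sketched in a few lines of Python. This is only an illustrative fragment with assumed names (it is not CFS-C code); messages and conditions are strings over 0, 1 and the don't-care symbol `#' that appears in the encoding below.

# Minimal sketch of one non-learning CFS major cycle (hypothetical names).
# Classifiers are (condition, action) pairs of strings over {'0', '1', '#'};
# '#' in a condition matches either bit of a message.

def matches(condition, message):
    """True if the message satisfies the condition position by position."""
    return len(condition) == len(message) and all(
        c in ('#', m) for c, m in zip(condition, message))

def cfs_cycle(classifiers, detector_messages):
    """Read detectors, match conditions, and post the triggered actions."""
    message_list = list(detector_messages)            # 1. read detectors
    actions = [act for cond, act in classifiers       # 2. match classifiers
               if any(matches(cond, m) for m in message_list)]
    return actions                                    # 3. new message list for the effectors

# Example: a classifier whose condition ignores its last bit.
print(cfs_cycle([("00#", "11")], ["001"]))            # -> ['11']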
So, part of these behavior patterns within the "pond world" (in AI sometimes called a "frame", a notion from traditional knowledge representation techniques; Rich 1983) can be expressed in a set of "if <condition> then <action>" rules, as follows:

  IF small, flying object to the left    THEN send @
  IF small, flying object to the right   THEN send %
  IF small, flying object centered       THEN send $
  IF large, looming object               THEN send !
  IF no large, looming object            THEN send *
  IF * and @                             THEN move head 15 degrees left
  IF * and %                             THEN move head 15 degrees right
  IF * and $                             THEN move in direction head pointing
  IF !                                   THEN move rapidly away from direction head pointing

Now, this set of rules has to be encoded for use within a CLASSIFIER SYSTEM. A possible encoding of the above rule set in CFS-C (Riolo) classifier terminology is given below. The condition part consists of two conditions that are combined by a logical AND; both must be met to trigger the associated action. This structure also needs a NOT operation (so we get NAND, and we know from hardware design that any computer can be built solely with NANDs); in CFS-C this is denoted by the `~' prefix character (rule #5).

      IF                THEN
   0000,  00 00      00 00
   0000,  00 01      00 01
   0000,  00 10      00 10
   1111,  01 ##      11 11
  ~1111,  01 ##      10 00
   1000,  00 00      01 00
   1000,  00 01      01 01
   1000,  00 10      01 10
   1111,  ## ##      01 11

Obviously, string `0000' denotes "small", and `00' in the first part of the second column denotes "flying". The last two bits of column #2 encode the direction of the approaching object, where `00' means left, `01' means right, etc. In rule #4 the "don't care" symbol `#' is used, which matches both `1' and `0', i.e. the position of the large, looming object is completely arbitrary. A simple fact that can save Kermit's (artificial) life.

PSEUDO CODE (Non-Learning CFS)

Algorithm CFS is

  // start with an initial time
  t := 0;

  // an initially empty message list
  initMessageList ML (t);

  // and a randomly generated population of classifiers
  initClassifierPopulation P (t);

  // test for cycle termination criterion (time, fitness, etc.)
  while not done do

     // increase the time counter
     t := t + 1;

     // 1. detectors check whether input messages are present
     ML := readDetectors (t);

     // 2. compare ML to the classifiers and save matches
     ML' := matchClassifiers ML,P (t);

     // 3. process new messages through output interface
     ML := sendEffectors ML' (t);
  od

end CFS.

Converting the previous, non-learning CFS into a learning CLASSIFIER SYSTEM (LCS), as proposed in Holland (1986), takes two steps: (1) the major cycle has to be changed such that the activation of each classifier depends on some additional parameter that can be modified as a result of experience, i.e. reinforcement from the ENVIRONMENT; (2) and/or the contents of the classifier list have to be changed, i.e. new classifiers/rules are generated by removing, adding, or combining condition/action parts of existing classifiers. The algorithm thus changes accordingly:

PSEUDO CODE (Learning CFS)

Algorithm LCS is

  // start with an initial time
  t := 0;

  // an initially empty message list
  initMessageList ML (t);

  // and a randomly generated population of classifiers
  initClassifierPopulation P (t);

  // test for cycle termination criterion (time, fitness, etc.)
  while not done do

     // increase the time counter
     t := t + 1;

     // 1. detectors check whether input messages are present
     ML := readDetectors (t);

     // 2. compare ML to the classifiers and save matches
     ML' := matchClassifiers ML,P (t);
     // 3. highest bidding classifier(s) collected in ML' wins the
     //    "race" and post the(ir) message(s)
     ML' := selectMatchingClassifiers ML',P (t);

     // 4. tax bidding classifiers, reduce their strength
     ML' := taxPostingClassifiers ML',P (t);

     // 5. effectors check new message list for output msgs
     ML := sendEffectors ML' (t);

     // 6. receive payoff from environment (REINFORCEMENT)
     C := receivePayoff (t);

     // 7. distribute payoff/credit to classifiers (e.g. BBA)
     P' := distributeCredit C,P (t);

     // 8. Eventually (depending on t), an EA (usually a GA) is
     //    applied to the classifier population
     if criterion
     then P := generateNewRules P' (t);
     else P := P'
  od

end LCS.
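The strength and bidding machinery hidden behind steps 3, 4, 6 and 7 can be illustrated with a small Python sketch. All names, the bid fraction, and the tax rate below are assumptions for illustration only (bid formulas vary across the literature, and this is not Holland's or Riolo's code); the point is merely that matching classifiers bid part of their strength, the winner pays its bid to the classifiers that were active on the previous step (the bucket-brigade idea), and external payoff tops up the winner's strength.

# Illustrative sketch of strength-based bidding and bucket-brigade credit
# assignment (hypothetical names and constants).

BID_FRACTION = 0.1      # fraction of its strength a matching classifier bids
TAX = 0.01              # small tax on bidding classifiers

class Classifier:
    def __init__(self, condition, action, strength=10.0):
        self.condition, self.action, self.strength = condition, action, strength

def matches(condition, message):
    return len(condition) == len(message) and all(
        c in ('#', m) for c, m in zip(condition, message))

def lcs_step(classifiers, message, previous_winners, payoff):
    """One learning cycle: match, bid, tax, pay the bucket brigade, reward."""
    matching = [cl for cl in classifiers if matches(cl.condition, message)]
    if not matching:
        return []
    bids = {cl: BID_FRACTION * cl.strength for cl in matching}
    for cl in matching:                       # 4. tax bidding classifiers
        cl.strength -= TAX * cl.strength
    winner = max(bids, key=bids.get)          # 3. highest bid wins the "race"
    winner.strength -= bids[winner]           # winner pays its bid ...
    for prev in previous_winners:             # 7. ... to the previously active rules
        prev.strength += bids[winner] / len(previous_winners)
    winner.strength += payoff                 # 6. external reinforcement
    return [winner]

# Usage: feed messages and payoffs cycle by cycle, carrying winners forward.
population = [Classifier("00#", "eat"), Classifier("1##", "flee")]
winners = []
for msg, reward in [("001", 0.0), ("111", 1.0)]:
    winners = lcs_step(population, msg, winners, reward)

As discussed in the naming section above, replacing this bid-passing scheme with a Q-Learning style update is essentially the move explored in ZCS (Wilson 1994) and in D-Max-VSCS (Dorigo & Bersini 1994).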
(1995) "Hidden Order: How adaptation builds complexity" Reading, MA: Addison-Wesley. [HOLLAND95]: Holland, J.H. & Reitman, J.S. (1978) "Cognitive Systems based on Adaptive Algorithms" In D.A. Waterman & F.Hayes-Roth, (eds) Pattern- directed inference systems. NY: Academic Press. Minsky, M.L. (1961) "Steps toward Artificial Intelligence" Proceedings IRE, 49, 8-30. Reprinted in E.A. Feigenbaum & J. Feldman (eds) Computers and Thought, 406-450, NY: McGraw-Hill, 1963. Minsky, M.L. (1967) "Computation: Finite and Infinite Machines" Englewood Cliffs, NJ: Prentice-Hall. Post, Emil L. (1943) "Formal reductions of the general combinatorial decision problem" American Journal of Mathematics, 65, 197-215. Rich, E. (1983) "Artificial Intelligence" NY: McGraw-Hill. Tinbergen, N. (1951) "The Study of Instinct" NY: Oxford Univ. Press. Watkins, C. (1989) "Learning from Delayed Rewards" PhD Dissertation, Department of Psychology, Cambridge Univ., UK. Wilson, S.W. (1985) "Knowledge growth in an artificial animal" in [ICGA85], 16-23. Wilson, S.W. (1994) "ZCS: a zeroth level classifier system" in EC 2(1), 1-18. User Contributions:Comment about this article, ask questions, or add new information about this topic:Top Document: FAQ: comp.ai.genetic part 2/6 (A Guide to Frequently Asked Questions) Previous Document: Q1.3: What's an Evolution Strategy (ES)? Next Document: Q1.5: What's Genetic Programming (GP)? Part1 - Part2 - Part3 - Part4 - Part5 - Part6 - Single Page [ Usenet FAQs | Web FAQs | Documents | RFC Index ] Send corrections/additions to the FAQ Maintainer: David.Beasley@cs.cf.ac.uk (David Beasley)