NNUGA
Neural Network Using Genetic Algorithms

By Omri Weisman ( http://www.cs.bgu.ac.il/~omri ) and Ziv Pollack ( http://www.cs.bgu.ac.il/~pollack )

A final project in the course Robotics 95
Using Genetic Algorithms in Computer Learning

Essentially, Genetic Algorithms (GA) are a method of "breeding" computer programs and solutions to optimization or search problems by means of simulated evolution. Processes loosely based on natural selection, crossover, and mutation are repeatedly applied to a population of binary strings which represent potential solutions. Over time, the number of above-average individuals increases, and fitter individuals are created, until a good solution to the problem at hand is found.
In NNUGA, we are using GA to find the solution to a classification problem with a neural network (NN). The neural network is a structure which is able to respond with True (1) or False (0) to a given input vector. We are trying to "teach" our neural network to correctly classify a set of input vectors, which can be thought of as learning a concept. We then expect that when the neural network is presented with a vector P not from this set, it will tend to exhibit generalization by responding with an output similar to the target vectors for input vectors close to the previously unseen input vector P.
Our genetic algorithm works with not one but multiple populations, all of which evolve separately most of the time, except that once every several generations we allow the different populations to mix with each other.
The Neural Network

Our neural network has 2 inputs and 1 output. It is constructed from 6 neurons in the first layer and 1 neuron in a second layer.

In the first layer we have 2 neurons with atan transfer functions:

        atan( P * W + b )

2 with linear transfer functions:

        P * W + b

and 2 with hardlim transfer functions:

        P * W + b > 0

where P is the input vector presented to the network, W is the vector of weights and b is the bias of the neuron.

The output function of the neuron in the second layer is:

        A * W + b > 0

where A is the vector of the outputs of the neurons in the first layer, W is the vector of weights and b is its bias. The output of this neuron is also the output of the network.
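The forward pass described above is easy to state in C. The following is only a sketch of that computation under an assumed struct layout; the type and function names are ours, not those used in nnuga.c:

        #include <math.h>

        #define N_INPUTS 2    /* dimension of the input vector P          */
        #define N_HIDDEN 6    /* first layer: 2 atan, 2 linear, 2 hardlim */

        typedef struct {
            double w1[N_HIDDEN][N_INPUTS]; /* first-layer weights  */
            double b1[N_HIDDEN];           /* first-layer biases   */
            double w2[N_HIDDEN];           /* second-layer weights */
            double b2;                     /* second-layer bias    */
        } Net;

        /* Returns 1 (True) or 0 (False) for input vector p. */
        static int net_output(const Net *net, const double p[N_INPUTS])
        {
            double a[N_HIDDEN], s;
            int i, j;

            for (i = 0; i < N_HIDDEN; i++) {
                s = net->b1[i];                   /* s = P * W + b    */
                for (j = 0; j < N_INPUTS; j++)
                    s += p[j] * net->w1[i][j];
                if (i < 2)      a[i] = atan(s);   /* 2 atan neurons    */
                else if (i < 4) a[i] = s;         /* 2 linear neurons  */
                else            a[i] = (s > 0.0); /* 2 hardlim neurons */
            }

            /* Second layer: single hardlim neuron, A * W + b > 0. */
            s = net->b2;
            for (i = 0; i < N_HIDDEN; i++)
                s += a[i] * net->w2[i];
            return s > 0.0;
        }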
The Learning Method

As stated earlier, the neural network learns using genetic algorithms. This means that the weights and biases of all the neurons are joined to create a single vector. A certain set of such vectors constitutes the correct solutions to the classification problem at hand, and we hope to find one of these vectors using an evolutionary algorithm.
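As a sketch of that encoding (using the Net struct from the sketch above; the gene ordering is our assumption, not taken from nnuga.c), decoding the flat vector back into a network looks like this, and joining the weights into the vector is simply the mirror image:

        /* Number of genes: first-layer weights and biases, plus the
         * second-layer weights and bias (6*2 + 6 + 6 + 1 = 25). */
        #define GENOME_LEN (N_HIDDEN * N_INPUTS + N_HIDDEN + N_HIDDEN + 1)

        /* Rebuild a Net from the flat vector that the GA evolves. */
        static void genome_to_net(const double g[GENOME_LEN], Net *net)
        {
            int i, j, k = 0;
            for (i = 0; i < N_HIDDEN; i++)
                for (j = 0; j < N_INPUTS; j++)
                    net->w1[i][j] = g[k++];
            for (i = 0; i < N_HIDDEN; i++)
                net->b1[i] = g[k++];
            for (i = 0; i < N_HIDDEN; i++)
                net->w2[i] = g[k++];
            net->b2 = g[k];
        }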
The Canonical GA (pseudo code):

        choose initial population
        evaluate each individual's fitness
        repeat
                select individuals to reproduce
                mate pairs at random
                apply crossover operator
                apply mutation operator
                evaluate each individual's fitness
        until terminating condition

The learning loop can terminate either when a satisfactory solution is found, or when the number of generations passes a preset limit, suggesting that a complete solution will not be found with this set of individuals.
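The fitness evaluation step above needs a concrete scoring rule. One plausible measure for this classification task (an assumption on our part; nnuga.c may score individuals differently) is the fraction of training points the decoded network classifies correctly, reusing net_output, genome_to_net, and GENOME_LEN from the sketches above:

        /* Fitness of one individual: fraction of the n training points
         * (pts, with 0/1 targets) that its decoded network gets right. */
        static double fitness(const double g[GENOME_LEN],
                              const double pts[][N_INPUTS],
                              const int targets[], int n)
        {
            Net net;
            int i, correct = 0;
            genome_to_net(g, &net);  /* decode genes into weights/biases */
            for (i = 0; i < n; i++)
                if (net_output(&net, pts[i]) == targets[i])
                    correct++;
            return (double)correct / n;
        }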

In our case, we have several separate populations, all of which evolve separately. Once every several generations we allow crossover between individuals from different populations.
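A minimal sketch of that occasional mixing step follows, using rand from <stdlib.h> as in the earlier sketches. The ring pairing, the constants, and the choice of random partners are all our assumptions, not details of nnuga.c:

        #define N_POPS    4   /* number of separate populations (illustrative) */
        #define POP_SIZE  50  /* individuals per population (illustrative)     */
        #define MIX_EVERY 10  /* generations between mixings (illustrative)    */

        /* Every MIX_EVERY generations, cross a random individual from
         * each population with a random one from the next population. */
        static void cross_populations(double pops[N_POPS][POP_SIZE][GENOME_LEN],
                                      int generation)
        {
            int p, i;
            if (generation % MIX_EVERY != 0)
                return;
            for (p = 0; p < N_POPS; p++) {
                double *a = pops[p][rand() % POP_SIZE];
                double *b = pops[(p + 1) % N_POPS][rand() % POP_SIZE];
                int cut = rand() % GENOME_LEN;
                for (i = cut; i < GENOME_LEN; i++) { /* one-point crossover */
                    double t = a[i];
                    a[i] = b[i];
                    b[i] = t;
                }
            }
        }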
Limitations

Sometimes it could happen that, although the NN could theoretically solve a certain classification problem, our system will not return a correct solution. This is because of the random nature of the algorithm and its reliance on natural selection, mutation and crossover: a particular flow of events that would lead to a correct solution may simply never occur, and thus a solution will not be found. However, by using several unrelated populations we have decreased the probability of this happening, since if one population has poor individuals the solution may still be found in another.
Our Implementation

We implemented a network with 2 inputs. The input for the neural network can be taken from a graphical user interface, by clicking on points on a board. A click with the left mouse button generates a '+' sign on the board, marking a point where the NN should respond with 'True'. A click with the right mouse button generates a '-' sign, marking a point where the NN should respond with 'False'. When enough points have been entered, the user can click on 'Start', which introduces these points as inputs to the NN, has it learn these input vectors, and shows a group of lines corresponding to the division of the plane into regions of opposite neuron response.

While the learning takes place, a textual indication of the learning process is printed on the standard output; this includes the fitness of the best individual in each population in each generation, and a schematic textual division of the plane once every 50 generations, to allow the user to inspect the progress.
Here's an example of a screen shot:

        [screen shot: http://www.cs.bgu.ac.il/~omri/NNUGA/nnuga.gif]
Examples

Here are more examples of screen shots.

  * Example 1 ( GIF, 10K ): http://www.cs.bgu.ac.il/~omri/NNUGA/nnuga1.gif
  * Example 2 ( GIF, 10K ): http://www.cs.bgu.ac.il/~omri/NNUGA/nnuga2.gif
  * Example 3 ( GIF, 10K ): http://www.cs.bgu.ac.il/~omri/NNUGA/nnuga3.gif
  * Example 4 ( GIF, 10K ): http://www.cs.bgu.ac.il/~omri/NNUGA/nnuga4.gif
  * Example 5 ( GIF, 10K ): http://www.cs.bgu.ac.il/~omri/NNUGA/nnuga5.gif
Source Code

In order to be able to run the program, save all these files in a directory, type 'make', and use 'nnuga' to start the system. The program should run under all platforms which support Tk, though we tested it on UNIX SunOS only.
  * nnuga - The Tk interface (8K): http://www.cs.bgu.ac.il/~omri/NNUGA/nnuga
  * nnuga.c - The actual Neural Network, C code (18K): http://www.cs.bgu.ac.il/~omri/NNUGA/nnuga.c
  * readme - The readme file (very much like this file) (6K): http://www.cs.bgu.ac.il/~omri/NNUGA/README
  * makefile - The (trivial) makefile (<1K): http://www.cs.bgu.ac.il/~omri/NNUGA/makefile
You can also grab all these files in one tarred gzipped file (8K): http://www.cs.bgu.ac.il/~omri/NNUGA/nnuga.tar.gz

Note: This code is free and is in the public domain. You may copy, use, and change it as much as you like. However, the writers take no responsibility for the results of using this program.
See Also

  * The Perceptron: http://www.cs.bgu.ac.il/~omri/Perceptron/

----------------------------------------------------------------------

Last updated August 13th 1995
Created by Omri Weisman ( http://www.cs.bgu.ac.il/~omri )