Liberty BASIC Programming Discussions >> Liberty BASIC Code >>

Post by

This is my attempt at a perceptron, done the way it is supposed to work.

The error:

I only get white circles.

WindowWidth = DisplayWidth
WindowHeight = DisplayHeight
global winx , winy , c , n , last , count
winx = WindowWidth
winy = WindowHeight
c = 0.01      ' learning rate
n = 2         ' number of inputs ( bias is added as input n )
count = 0
gosub [perceptron]
nomainwin
open "Perceptron" for graphics as #m
#m "trapclose [quit]"
#m "when characterInput [key]"
#m "backcolor black"    ' filled figures use the backcolor ; the default white made circlefilled look empty
#m "setfocus"
timer 100 , [timer]
wait

[timer]
  count = count + 1
  x = rnd( 0 ) * winx   ' rnd( 0 ) keeps points on screen ; rnd2() gave negative coordinates
  y = rnd( 0 ) * winy
  in( 0 ) = x
  in( 1 ) = y
  answer = f( x , y )   ' the true class of this point
  call perceptron.train answer
  #m "goto " ; x ; " " ; y
  #m "down"
  if answer = 1 then
    #m "circlefilled 8"
  else
    #m "circle 8"
  end if
  #m "up"
wait

[key]
  timer 0
  notice chr$( 13 ) _
    + "times : " + str$( count ) + chr$( 13 ) _
    + "w 0 : " + str$( w( 0 ) ) + chr$( 13 ) _
    + "w 1 : " + str$( w( 1 ) ) + chr$( 13 ) _
    + "w 2 : " + str$( w( 2 ) )

[quit]
  close #m
end

function f( x , y )
  uit = -1
  if x < y then uit = 1
  f = uit
end function

[perceptron]
  last = n
  dim w( last ) , in( last )
  for i = 0 to last
    w( i ) = rnd2()
  next i
  in( last ) = 1    ' constant bias input
return

function perceptron.ff()
  sum = 0
  for i = 0 to last
    sum = sum + w( i ) * in( i )
  next i
  perceptron.ff = perceptron.activate( sum )    ' was: ff = ... ; must assign to the full function name
end function

function perceptron.activate( x )
  uit = -1
  if x > 0 then uit = 1
  perceptron.activate = uit    ' was: activate = uit , which always returned 0 , hence only white circles
end function

sub perceptron.train uit
  gues = perceptron.ff()
  fout = uit - gues
  for i = 0 to last
    w( i ) = w( i ) + c * fout * in( i )
  next i
end sub

function rnd2()
  rnd2 = rnd( 0 ) - rnd( 0 )
end function
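For readers who want to experiment outside Liberty BASIC, here is a minimal Python sketch of the same update rule, w(i) = w(i) + c * (target - guess) * in(i), learning the x < y boundary. The function names and sample counts here are my own choices, not from the post:

```python
import random

def activate(s):
    # step activation: +1 / -1, as in the BASIC version
    return 1 if s > 0 else -1

def f(x, y):
    # the target concept from the post: +1 when x < y, else -1
    return 1 if x < y else -1

def train(samples, c=0.01, epochs=50):
    w = [random.uniform(-1, 1) for _ in range(3)]    # w[2] weights the constant bias input
    for _ in range(epochs):
        for x, y in samples:
            inp = (x, y, 1.0)                        # third input is the bias
            guess = activate(sum(wi * ii for wi, ii in zip(w, inp)))
            err = f(x, y) - guess                    # 0, +2 or -2
            for i in range(3):
                w[i] += c * err * inp[i]
    return w

random.seed(1)
samples = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
w = train(samples)
correct = sum(activate(w[0]*x + w[1]*y + w[2]) == f(x, y) for x, y in samples)
print(correct / len(samples))
```

Because x < y is linearly separable, the rule converges and the trained weights classify nearly every sample correctly.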

Post by

My last look at perceptrons was when they were electromechanical: CdS photocells and electric motor-driven potentiometers. Then their limits were realised and we forgot about them for a generation, the 'AI Winter'.

Not that it helps with your code, bluatigro, but I've lashed up code that may help people searching for 'Perceptron' on the LB site, and I will do a proper page on my site soon. Meanwhile, the image below shows what happens when you run the code. I present an image of a character (e.g. 'A') and the code learns how much weight to give each pixel. If it is then presented with a different image (e.g. 'C'), it uses these weightings to score its acceptability.
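The per-pixel weighting idea can be sketched in a few lines of Python. This is my own toy version, not the code behind the image: the two 5x5 glyphs below are made-up stand-ins for 'A' and 'C', and the perceptron rule learns weights that score one pattern positive and the other negative:

```python
# Two toy 5x5 glyphs (hypothetical patterns, not a real font), flattened to 25 pixels.
A = [0,1,1,1,0,
     1,0,0,0,1,
     1,1,1,1,1,
     1,0,0,0,1,
     1,0,0,0,1]
C = [0,1,1,1,1,
     1,0,0,0,0,
     1,0,0,0,0,
     1,0,0,0,0,
     0,1,1,1,1]

def score(w, img):
    # weighted sum of pixels: the image's "acceptability" score
    return sum(wi * p for wi, p in zip(w, img))

w = [0.0] * 25
c = 0.1
for _ in range(20):                      # a few passes are plenty for two patterns
    for img, target in ((A, 1), (C, -1)):
        guess = 1 if score(w, img) > 0 else -1
        for i in range(25):
            w[i] += c * (target - guess) * img[i]

print(score(w, A) > 0, score(w, C) < 0)
```

The trained weights end up positive on pixels unique to 'A' and negative on pixels unique to 'C', with the shared pixels carrying no weight.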

At present I train with just one font/style, always centred. The next step is to train it to recognise a range of fonts with some positional jitter. Perhaps...

Post by

I want this working with a genetic algorithm as well.

I get strange results.

dim w( 200 , 2 ) , fout( 200 ) , ry( 200 )
for i = 0 to 200
  for j = 0 to 2
    w( i , j ) = range( -1 , 1 , 1 )
  next j
  ry( i ) = i
next i

for gen = 0 to 100
  ' score every individual on a fresh random sample
  for p = 0 to 200
    fout( p ) = 0
    for i = 0 to 100
      x = range( -10 , 10 , 1 )
      y = range( -10 , 10 , 1 )
      sum = w( p , 0 ) * x + w( p , 1 ) * y + w( p , 2 )
      fout( p ) = fout( p ) + abs( is( sum ) - f( x , y ) )
    next i
  next p
  ' sort ry() so ry( 0 ) indexes the fittest individual
  for h = 1 to 200
    for l = 0 to h - 1
      if fout( ry( h ) ) < fout( ry( l ) ) then
        q = ry( l )
        ry( l ) = ry( h )
        ry( h ) = q
      end if
    next l
  next h
  print gen , fout( ry( 0 ) )
  ' breed : replace the lower-ranked individuals , indexing through ry()
  for i = 20 to 200
    a = int( range( 0 , 200 , 2 ) )
    b = int( range( 0 , 200 , 2 ) )
    call mix ry( a ) , ry( b ) , ry( i )    ' was: , i - overwrote random individuals , elite included
    if rnd( 0 ) < 0.5 then
      call mutate ry( i )                   ' was: mutate i - same indexing bug
    end if
  next i
next gen

input "[ press return to continue ]" ; in$

' test the best individual on fresh data
fout = 0
for i = 0 to 100
  x = range( -10 , 10 , 1 )
  y = range( -10 , 10 , 1 )
  sum = w( ry( 0 ) , 0 ) * x + w( ry( 0 ) , 1 ) * y + w( ry( 0 ) , 2 )
  fout = fout + abs( is( sum ) - f( x , y ) )
next i
print "test error = " ; fout
end

sub mix a , b , uit
  for i = 0 to 2
    if rnd( 0 ) < .5 then
      w( uit , i ) = w( a , i )
    else
      w( uit , i ) = w( b , i )
    end if
  next i
end sub

sub mutate a
  i = int( range( 0 , 3 , 1 ) )
  w( a , i ) = range( -1 , 1 , 1 )
end sub

function f( x , y )
  uit = -1
  if x < y then uit = 1
  f = uit
end function

function range( l , h , m )
  range = ( rnd( 0 ) ^ m ) * ( h - l ) + l
end function

function is( x )
  uit = -1
  if x >= 0 then uit = 1
  is = uit
end function
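The strange results may come from the breeding loop: children were written to raw index i (`call mix ry( a ) , ry( b ) , i`) instead of the ranked slot ry( i ), so new children could overwrite any individual, including the current best. Here is a rank-indexed version of the same GA sketched in Python; the names (`POP`, `KEEP`), the seed, and the population sizes are my own choices:

```python
import random

def f(x, y):
    # target concept from the post: +1 when x < y, else -1
    return 1 if x < y else -1

def is_(s):
    # threshold, matching the BASIC is() function
    return 1 if s >= 0 else -1

def error(w, samples):
    # sum of |prediction - target| over the samples (0 or 2 per point)
    return sum(abs(is_(w[0]*x + w[1]*y + w[2]) - f(x, y)) for x, y in samples)

random.seed(2)
POP, KEEP = 50, 10          # population size and number of elite survivors (arbitrary)
pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(POP)]
samples = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(100)]

for gen in range(100):
    ranked = sorted(range(POP), key=lambda p: error(pop[p], samples))
    # children overwrite only the worst-ranked slots, so the elite survive intact
    for slot in ranked[KEEP:]:
        a, b = random.sample(ranked[:KEEP], 2)       # two distinct elite parents
        child = [pop[a][i] if random.random() < 0.5 else pop[b][i] for i in range(3)]
        if random.random() < 0.5:
            child[random.randrange(3)] = random.uniform(-1, 1)   # mutate one gene
        pop[slot] = child

best = min(pop, key=lambda w: error(w, samples))
print(error(best, samples))
```

With the elite protected, the best error can never get worse from one generation to the next, and it drops quickly on this linearly separable problem.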

Post by

Keep us posted as you make progress. Interesting stuff.

BUT it would help if you put 'scan' in your loops; sometimes you have four nested layers, and on Linux I can't exit cleanly.

I've got as far as teaching single-layer perceptrons to do logic (AND and OR) and demonstrating how they fail on XOR. See PerceptronPage.
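The AND/OR/XOR result is easy to reproduce. Below is a minimal Python sketch (my own code, not from PerceptronPage): the same single-layer rule learns AND and OR perfectly but can never reach 100% on XOR, because no single line separates XOR's classes:

```python
import random

def activate(s):
    return 1 if s > 0 else -1

def train(table, c=0.1, epochs=100):
    """Train a single-layer perceptron on a 2-input truth table (inputs/outputs are +-1)."""
    random.seed(0)
    w = [random.uniform(-1, 1) for _ in range(3)]   # two inputs plus bias
    for _ in range(epochs):
        for (a, b), target in table.items():
            inp = (a, b, 1.0)
            guess = activate(sum(wi * ii for wi, ii in zip(w, inp)))
            for i in range(3):
                w[i] += c * (target - guess) * inp[i]
    return w

def accuracy(w, table):
    return sum(activate(w[0]*a + w[1]*b + w[2]) == t
               for (a, b), t in table.items()) / len(table)

AND = {(-1,-1): -1, (-1,1): -1, (1,-1): -1, (1,1): 1}
OR  = {(-1,-1): -1, (-1,1): 1, (1,-1): 1, (1,1): 1}
XOR = {(-1,-1): -1, (-1,1): 1, (1,-1): 1, (1,1): -1}

print(accuracy(train(AND), AND))
print(accuracy(train(OR), OR))
print(accuracy(train(XOR), XOR))
```

For any choice of weights, a single threshold unit gets at most three of XOR's four cases right, so the last accuracy never reaches 1.0 no matter how long you train.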

That was a useful reference on back-propagation. I too am trying to understand it!