Liberty BASIC Community Forum
« neural net : xor »

bluatigro
Guru
neural net : xor
« Thread started on: Jan 18th, 2017, 08:00am »

error :
JB reports a syntax error
but I don't know where

Code:
'' Author:        John McCullock
'' Date:        12-11-2005
'' Description:    Backpropagation XOR Example 2.
'' Sources: Dr Phil Brierley, www.philbrierley.com
'' translated from c++ to JB/LB by bluatigro

global numInputs : numInputs = 3       '' Input nodes, plus the bias input.
global numPatterns : numPatterns = 4     '' Input patterns for XOR experiment.

global numHidden : numHidden = 4
global numEpochs : numEpochs = 200
global LR.IH : LR.IH = 0.7       '' Learning rate, input to hidden weights.
global LR.HO : LR.HO = 0.07      '' Learning rate, hidden to output weights.

global patNum : patNum = 0
global errThisPat : errThisPat = 0.0
global outPred : outPred = 0.0                  '' "Expected" output values.
global RMSerror : RMSerror = 0.0                 '' Root Mean Squared error.

dim hiddenVal( numHidden )         '' Hidden node outputs.

dim weightsIH( numInputs , numHidden )  '' Input to Hidden weights.
dim weightsHO( numHidden )          '' Hidden to Output weights.

dim trainInputs( numPatterns , numInputs )
dim trainOutput( numPatterns )         '' "Actual" output values.



    randomize timer  '' Seed the generator with system time.

    call initWeights

    call initData

    '' Train the network
    for j = 0 to numEpochs

        for i = 0 to numPatterns

            ''Select a pattern at random.
            patNum = rnd(0) * numPatterns

            ''Calculate the output and error for this pattern.
            call calcNet

            ''Adjust network weights.
            call WeightChangesHO
            call WeightChangesIH
        next i

        call calcOverallError

        ''Display the overall network error after each epoch
        print "epoch = " + str$(j) + " RMS Error = " + str(RMSerror)

    next j
    ''Training has finished.

    call displayResults

input "[ pres return ]" ; in$
end

function tanh( x as double ) as double
  return ( 1 - exp( -x * 2 ) ) / ( 1 + exp( -x * 2 ) )
end function

sub initWeights
'' Initialize weights to random values.

    for j = 0 to numHidden

        weightsHO(j) = ( rnd(0) - 0.5 ) / 2
        for i = 0 to numInputs

            weightsIH(i,j) = ( rnd(0) - 0.5 ) / 5
            print "Weight = " + str$( weightsIH(i,j) )
        next i
    next j
end sub

sub initData
    '' The data here is the XOR data which has been rescaled to
    '' the range -1 to 1.

    '' An extra input value of 1 is also added to act as the bias.

    '' The output must lie in the range -1 to 1.

    trainInputs(0,0)   =  1
    trainInputs(0,1)   = -1
    trainInputs(0,2)   =  1   '' Bias
    trainOutput(0)     =  1

    trainInputs(1,0)   = -1
    trainInputs(1,1)   =  1
    trainInputs(1,2)   =  1  '' Bias
    trainOutput(1)     =  1

    trainInputs(2,0)   =  1
    trainInputs(2,1)   =  1
    trainInputs(2,2)   =  1  '' Bias
    trainOutput(2)     = -1

    trainInputs(3,0)   = -1
    trainInputs(3,1)   = -1
    trainInputs(3,2)   =  1  '' Bias
    trainOutput(3)     = -1
end sub

sub calcNet
'' Calculates values for Hidden and Output nodes.

    for i = 0 to numHidden
      hiddenVal(i) = 0.0

        for j = 0 to numInputs
            hiddenVal(i) = hiddenVal(i)+(trainInputs(patNum,j) * weightsIH(j,i) )
        next j

        hiddenVal(i) = tanh( hiddenVal( i ) )
    next i

    outPred = 0.0

    for i = 0 to numHidden
        outPred = outPred + hiddenVal(i) * weightsHO(i)
    next i
    ''Calculate the error: "Expected" - "Actual"
    errThisPat = outPred - trainOutput( patNum )
end sub

sub WeightChangesHO
''Adjust the Hidden to Output weights.
    for k = 0 to numHidden
        dim as double weightChange = LR.HO * errThisPat * hiddenVal(k)
        weightsHO(k) = weightsHO(k) - weightChange

        '' Regularization of the output weights.
        if (weightsHO(k) < -5) then
            weightsHO(k) = -5
        end if
        if (weightsHO(k) > 5) then
            weightsHO(k) = 5
        end if
    next k
end sub

sub WeightChangesIH
'' Adjust the Input to Hidden weights.
    for i = 0 to numHidden
        for k = 0 to numInputs
            x = 1 - (hiddenVal(i) * hiddenVal(i))
            x = x * weightsHO(i) * errThisPat * LR.IH
            x = x * trainInputs(patNum,k)
            weightChange = x
            weightsIH(k,i) = weightsIH(k,i) - weightChange
        next k
    next i
end sub

sub calcOverallError
    RMSerror = 0.0
    for i = 0 to numPatterns
         patNum = i
         call calcNet
         RMSerror = RMSerror + (errThisPat * errThisPat)
    next i
    RMSerror = RMSerror / numPatterns
    RMSerror = sqr(RMSerror)
end sub

sub displayResults
    for i = 0 to numPatterns
        patNum = i
        call calcNet
        print "pat = " + str$( patNum + 1 ) _
        + " actual = " + str$( trainOutput(patNum) ) _
        + " neural model = " + str$( outPred )
    next i
end sub

 
tenochtitlanuk
Moderator
Re: neural net : xor
« Reply #1 on: Jan 18th, 2017, 08:41am »

Debugger not being very helpful here.
Several errors I see...
Code:
randomize timer  ' Seed the generator with system time.
 

Randomize needs a number between 0 and 1
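
One way to seed it would be to derive a value between 0 and 1 from the clock (a minimal sketch, not from the original post, assuming time$("seconds") returns seconds since midnight as a number):
Code:
' hedged sketch: build a seed strictly between 0 and 1 from the clock
randomize ( ( time$( "seconds" ) mod 997 ) + 1 ) / 1000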

Code:
function tanh( x as double ) as double '     <<<<<<<<<<< syntax error
  return ( 1 - exp( -x * 2 ) ) / ( 1 + exp( -x * 2 ) )
end function
 

Not sure how you want these doubles to be used
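
In JB/LB a function takes untyped parameters and returns whatever is assigned to the function name, so the untyped equivalent would look like:
Code:
function tanh( x )
  tanh = ( 1 - exp( 0 - x * 2 ) ) / ( 1 + exp( 0 - x * 2 ) )
end function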

Code:
        'Display the overall network error after each epoch
         print "epoch = " + str$(j) + " RMS Error = " + str(RMSerror)
 

You have a missing $ in the second term.
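
In other words:
Code:
print "epoch = " + str$(j) + " RMS Error = " + str$(RMSerror)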

Code:
sub WeightChangesHO
'Adjust the Hidden to Output weights.
    for k = 0 to numHidden
        'dim as double weightChange = LR.HO * errThisPat * hiddenVal(k)  '   <<<<<<<<<<<<<<<<<<<
        weightsHO(k) = weightsHO(k) - weightChange

        ' Regularization of the output weights.
        if (weightsHO(k) < -5) then
            weightsHO(k) = -5
        end if
        if (weightsHO(k) > 5) then
            weightsHO(k) = 5
        end if
    next k
end sub
 

Not sure what 'dim as double' implies.
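
Every numeric variable in JB/LB is already a double, so a plain assignment should do the same job:
Code:
weightChange = LR.HO * errThisPat * hiddenVal(k)
weightsHO(k) = weightsHO(k) - weightChange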

Hope to see it working soon!

tenochtitlanuk
Moderator
Re: neural net : xor
« Reply #2 on: Jan 18th, 2017, 11:48am »

Personally I'd prefer you did not put the same posts on JB and LB forums. All the regulars check BOTH forums and will chip in with help.
As a result of the double posting, two of us spent some time finding your errors.
I enjoy a challenge, but not if I find I've spent time duplicating someone else's helpful efforts....

bluatigro
Guru
Re: neural net : xor
« Reply #3 on: Jan 19th, 2017, 03:36am »

@ tenochtitlanuk :
thanks for the help

the errors crept in because I translated to FreeBASIC first

update :
it works !!

Code:
'' Author:        John McCullock
'' Date:        12-11-2005
'' Description:    Backpropagation XOR Example 2.
'' Sources: Dr Phil Brierley, www.philbrierley.com
'' translated from c++ to JB/LB by bluatigro
global numInputs : numInputs = 3       '' Input nodes, plus the bias input.
global numPatterns : numPatterns = 4     '' Input patterns for XOR experiment.
global numHidden : numHidden = 4
global numEpochs : numEpochs = 200
global LR.IH : LR.IH = 0.7       '' Learning rate, input to hidden weights.
global LR.HO : LR.HO = 0.07      '' Learning rate, hidden to output weights.
global patNum : patNum = 0
global errThisPat : errThisPat = 0.0
global outPred : outPred = 0.0                  '' "Expected" output values.
global RMSerror : RMSerror = 0.0                 '' Root Mean Squared error.
dim hiddenVal( numHidden )         '' Hidden node outputs.
dim weightsIH( numInputs , numHidden )  '' Input to Hidden weights.
dim weightsHO( numHidden )          '' Hidden to Output weights.
dim trainInputs( numPatterns , numInputs )
dim trainOutput( numPatterns )         '' "Actual" output values.

call initWeights
call initData
'' Train the network
for j = 0 to numEpochs
  for i = 0 to numPatterns
    ''Select a pattern at random.
    patNum = i ''rnd(0) * numPatterns
    ''Calculate the output and error for this pattern.
    call calcNet
    ''Adjust network weights.
    call WeightChangesHO
    call WeightChangesIH
  next i
  call calcOverallError
  ''Display the overall network error after each epoch
  print "epoch = " + str$(j) + " RMS Error = " + str$(RMSerror)
next j
''Training has finished.
call displayResults
input "[ pres return ]" ; in$
end

function tanh( x )
  tanh = ( 1 - exp( 0-x * 2 ) ) / ( 1 + exp( 0-x * 2 ) )
end function

sub initWeights
'' Initialize weights to random values.
    for j = 0 to numHidden
        weightsHO(j) = ( rnd(0) - 0.5 ) / 2
        for i = 0 to numInputs
            weightsIH(i,j) = ( rnd(0) - 0.5 ) / 5
            print "Weight = " + str$( weightsIH(i,j) )
        next i
    next j
end sub

sub initData
    '' The data here is the XOR data which has been rescaled to
    '' the range -1 to 1.
    '' An extra input value of 1 is also added to act as the bias.
    '' The output must lie in the range -1 to 1.
    trainInputs(0,0)   =  1
    trainInputs(0,1)   = -1
    trainInputs(0,2)   =  1   '' Bias
    trainOutput(0)     =  1
    trainInputs(1,0)   = -1
    trainInputs(1,1)   =  1
    trainInputs(1,2)   =  1  '' Bias
    trainOutput(1)     =  1
    trainInputs(2,0)   =  1
    trainInputs(2,1)   =  1
    trainInputs(2,2)   =  1  '' Bias
    trainOutput(2)     = -1
    trainInputs(3,0)   = -1
    trainInputs(3,1)   = -1
    trainInputs(3,2)   =  1  '' Bias
    trainOutput(3)     = -1
end sub

sub calcNet
'' Calculates values for Hidden and Output nodes.
    for i = 0 to numHidden
        hiddenVal(i) = 0.0
        for j = 0 to numInputs
            hiddenVal(i) = hiddenVal(i)+(trainInputs(patNum,j) * weightsIH(j,i) )
        next j
        hiddenVal(i) = tanh( hiddenVal( i ) )
    next i
    outPred = 0.0
    for i = 0 to numHidden
        outPred = outPred + hiddenVal(i) * weightsHO(i)
    next i
    ''Calculate the error: "Expected" - "Actual"
    errThisPat = outPred - trainOutput( patNum )
end sub

sub WeightChangesHO
''Adjust the Hidden to Output weights.
    for k = 0 to numHidden
        weightChange = LR.HO * errThisPat * hiddenVal(k)
        weightsHO(k) = weightsHO(k) - weightChange
        '' Regularization of the output weights.
        if (weightsHO(k) < -5) then
            weightsHO(k) = -5
        end if
        if (weightsHO(k) > 5) then
            weightsHO(k) = 5
        end if
    next k
end sub

sub WeightChangesIH
'' Adjust the Input to Hidden weights.
    for i = 0 to numHidden
        for k = 0 to numInputs
            x = 1 - (hiddenVal(i) * hiddenVal(i))
            x = x * weightsHO(i) * errThisPat * LR.IH
            x = x * trainInputs(patNum,k)
            weightChange = x
            weightsIH(k,i) = weightsIH(k,i) - weightChange
        next k
    next i
end sub

sub calcOverallError
    RMSerror = 0.0
    for i = 0 to numPatterns
         patNum = i
         call calcNet
         RMSerror = RMSerror + (errThisPat * errThisPat)
    next i
    RMSerror = RMSerror / numPatterns
    RMSerror = sqr(RMSerror)
end sub

sub displayResults
    for i = 0 to numPatterns
        patNum = i
        call calcNet
        print "pat = " + str$( patNum + 1 ) _
        + " actual = " + str$( trainOutput(patNum) ) _
        + " neural model = " + str$( outPred )
    next i
end sub

 
bluatigro
Guru
Re: neural net : xor
« Reply #4 on: Feb 1st, 2017, 05:04am »

update :
another one translated
more outputs are possible

this one learns sqr()

Code:
' input I[i] = any real numbers ("doubles" in C++)
' y[j]
' network output y[k] = sigmoid continuous 0 to 1
' correct output O[k] = continuous 0 to 1

' assumes throughout that all i are linked to all j, and that all j are linked to all k
' if want some NOT to be connected, will need to introduce:
'   Boolean connected [ TOTAL ] [ TOTAL ];
' initialise it, and then keep checking:
'   if (connected[i][j])
' don't really need to do this,
' since we can LEARN a weight of 0 on this link
' translated to JB/LB by bluatigro 1 feb 2017

global NOINPUT : NOINPUT  = 1
global NOHIDDEN : NOHIDDEN = 30
global NOOUTPUT : NOOUTPUT = 1

global RATE : RATE = 0.3

global C : C = 0.1


' I = x = double lox to hix
GLOBAL lox : lox = 0
global hix : hix = 9

'/ want it to store f(x) = double lof to hif
global lof : lof = -2.5 ' approximate bounds
global hif : hif =  3.2
TOTAL = NOINPUT+NOHIDDEN+NOOUTPUT

' units all unique ids - so no ambiguity about which we refer to:

global loi : loi = 0
global hii : hii = NOINPUT-1
global loj : loj = NOINPUT
global hij : hij = NOINPUT+NOHIDDEN-1
global lok : lok = NOINPUT+NOHIDDEN
global hik : hik = NOINPUT+NOHIDDEN+NOOUTPUT-1

dim II( TOTAL - 1)
dim y( TOTAL - 1)
dim O( TOTAL - 1)
dim w( TOTAL - 1, TOTAL  - 1)        ' w[i][j]
dim wt( TOTAL - 1)                   ' bias weights wt[i]
dim dx( TOTAL - 1)                   ' dE/dx[i]
dim dy( TOTAL - 1)                   ' dE/dy[i]

CALL net.init
CALL net.learn 1000
CALL net.exploit
PRINT "[ END PROGRAM ]"
END


function sigmoid( x )
  signoid = 1.0 / ( 1 + exp( 0 - x ) )
end function

function randomAtoB( l , h )
  randomAtoB = l + rnd(0) * ( h - l )
end function

function f( x )
  f = sqr(x)
  ' return sin(x)
  '  return sin(x)+sin(2*x)+sin(5*x)+cos(x)
end function

' O = f(x) normalised to range 0 to 1
function normalise( t )
  normalise = ( t - lof ) / ( hif - lof )
end function

function expand( t ) ' goes the other way
  expand = lof + t *( hif - lof )
end function

' going to do w++ and w--
' so might think should start with all w=0
' (all y[k]=0.5 - halfway between possibles 0 and 1)
' in fact, if all w same they tend to march in step together
' need *asymmetry* if want them to specialise (form a representation scheme)
' best to start with diverse w
'
' also, large positive or negative w -> slow learning
' so start with small absolute w -> fast learning

function initw()
  initw = ( rnd(0) - rnd(0) ) * C
end function

sub net.init
  visits = 0
  for i = loi to hii
    for i = loj to hij
      w(i,j) = initw()
    next
  next
  for j = loj to hij
    for k = lok to hik
      w(j,k) = initw()
    next
  next
  for j = loj to hij
    wt(j) = initw()
  next
  for k = lok to hik
    wt(k) = initw()
  next
end sub



sub net.backpropagate
  dw = 0 ' temporary variable - dE/dw[i][j]
  '----- backpropagate O[k] -> dy[k] -> dx[k] -> w[j][k],wt[k] ---------------------------------
  for k = lok to hik
    dy(k) = y(k) - O(k)
    dx(k) = ( dy(k) ) * y(k) * (1-y(k))
  next
  '----- backpropagate dx(k),w[j](k) -> dy[j] -> dx[j] -> w[i][j],wt[j] ------------------------
  '----- use OLD w values here (that's what the equations refer to) .. -------------------------
  for j = loj to hij
    t = 0
    for k = lok to hik
      t = t + ( dx(k) * w(j,k) )
    next
    dy(j) = t
    dx(j) = ( dy(j) ) * y(j) * (1-y(j))
  next
  '----- .. do all w changes together at end ---------------------------------------------------
  for j = loj to hij
    for k = lok to hik
      dw = dx(k) * y(j)
      w(j,k) = w(j,k) - ( RATE * dw )
    next
  next
  for i = loi to hii
    for j = loj to hij
     dw = dx(j) * II(i)
     w(i,j) = w(i,j) - ( RATE * dw )
    next
  next

  for k = lok to hik
    dw = dx(k) * (-1)
    wt(k) = wt(k) - ( RATE * dw )
  next

  for j = loj to hij
    dw = dx(j) * (-1)
    wt(j) = wt(j) - ( RATE * dw )
  next
end sub


sub net.newIO
  x = randomAtoB( lox , hix )
  ' there is only one, just don't want to remember number:
  for i = loi to hii
    II(i) = x
  next
  ' there is only one, just don't want to remember number:
  for k = lok to hik
    O(k) = normalise( f( x ) )
  next
end sub

' Note it never even sees the same exemplar twice!
sub net.reportIO
  for i = loi to hii
    x = II( i )
  next
  for k = lok to hik
    y = expand( y( k ) )
  next
  print "x    " ; x
  print "y    " ; y
  print "f(x) " ; f( x )
end sub

sub net.forwardpass
  x = 0 'temporary variable - x[i]
  '----- forwardpass I[i] -> y[j] ------------------------------------------------
  for j = loj to hij
    x = 0
    for i = loi to hii
      x = x + ( II(i) * w(i,j) )
    next
    y(j) = sigmoid( x - wt(j) )
  next
  '----- forwardpass y[j] -> y[k] ------------------------------------------------
  for i = lok to hik
    x = 0
    for i = loj to hij
      x = x + ( y(j) * w(j,k) )
    next
    y(k) = sigmoid( x - wt(k) )
  next
end sub


sub net.report ' report on the forwardpass we just did
  print "II[i] "
  for i = loi to hii
    print II(i) ; " " ;
  next
  print ""
  print "y[j] "
  for i = loj to hij
    print y(j) ; " ";
  next
  print ""

  print "y[k] "
  for i = lok to hik
    print y(k) ; " ";
  next
  print ""

  print "O[k] "
  for i = lok to hik
    print O(k) ; " ";
  next
  print ""

  E = 0
  for i = lok to hik
    E = E + (y(k)-O(k))*(y(k)-O(k))
  next
   E = E/2
  print E ; " E"
end sub


sub net.learn CEILING
  for ic = 1 to CEILING
    call net.newIO
    ' new I/O pair
    ' put I into I[i]
    ' put O into O(k)
    call net.forwardpass
    call net.backpropagate
 next
end sub

sub net.exploit
  for ic = 1 to 10
    call net.newIO
    call net.forwardpass
    call net.reportIO
  next
end sub
 
bluatigro
Guru
Re: neural net : xor
« Reply #5 on: Feb 2nd, 2017, 03:45am »

update :
image recognition !!!

WARNING :
this will take minutes

the code learns to recognise squares from circles

ERROR :
if I don't let it learn it still gets 100% right
that can't be right
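
( just a guess at the cause : the score in the code below is taken from y(0) and O(0), which are input-side slots that never get written, so both stay 0 and every trial counts as correct. Testing the output unit instead would look like : )
Code:
if y(hik) >= 0.5 and O(hik) >= 0.5 then goed = goed + 1
if y(hik) <  0.5 and O(hik) <  0.5 then goed = goed + 1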

Code:
' input I[i] = any real numbers ("doubles" in C++)
' y[j]
' network output y[k] = sigmoid continuous 0 to 1
' correct output O[k] = continuous 0 to 1

' assumes throughout that all i are linked to all j, and that all j are linked to all k
' if want some NOT to be connected, will need to introduce:
'   Boolean connected [ TOTAL ] [ TOTAL ];
' initialise it, and then keep checking:
'   if (connected[i][j])
' don't really need to do this,
' since we can LEARN a weight of 0 on this link
' translated to JB/LB by bluatigro 1 feb 2017

global bmpsize : bmpsize = 63
global false : false = 0
global true : true = 1

global NOINPUT : NOINPUT  = bmpsize * bmpsize
global NOHIDDEN : NOHIDDEN = 30
global NOOUTPUT : NOOUTPUT = 1

global RATE : RATE = 0.3

global C : C = 0.1

TOTAL = NOINPUT+NOHIDDEN+NOOUTPUT

' units all unique ids - so no ambiguity about which we refer to:

global loi : loi = 0
global hii : hii = NOINPUT-1
global loj : loj = NOINPUT
global hij : hij = NOINPUT+NOHIDDEN-1
global lok : lok = NOINPUT+NOHIDDEN
global hik : hik = NOINPUT+NOHIDDEN+NOOUTPUT-1

dim II( TOTAL - 1)
dim y( TOTAL - 1)
dim O( TOTAL - 1)
dim w( TOTAL - 1, TOTAL  - 1)        ' w[i][j]
dim wt( TOTAL - 1)                   ' bias weights wt[i]
dim dx( TOTAL - 1)                   ' dE/dx[i]
dim dy( TOTAL - 1)                   ' dE/dy[i]

global handle
global device
open "neural net" for graphics as #m
  #m "trapclose [quit]"
  handle = hwnd( #m )
  calldll #User32 , "GetDC" _
  , handle as ulong , device as ulong
  CALL net.init
  CALL net.learn 1000
  CALL net.exploit 
  notice "[ END PROGRAM ]"
wait
[quit]
  calldll #User32 , "ReleseDC" _
  , handle as ulong _
  , device as ulong _
  , r as ushort
  close #m 
END

function getpixel( x , y )
scan
  y = y + 64
  calldll #Gdi32 , "GetPixel" _
  , device as ulong _
  , x as long _
  , y as long _
  , kl as long
  r = kl and 255
  g = int( kl / 256 ) and 255
  b = int( kl / 256 ^ 2 ) and 255
  uit = false
  if r + g + b > 127 * 3 then uit = true
  getpixel = uit
end function

function sigmoid( x )
  signoid = 1.0 / ( 1 + exp( 0 - x ) )
end function

function randomAtoB( l , h )
  randomAtoB = l + rnd(0) * ( h - l )
end function

' going to do w++ and w--
' so might think should start with all w=0
' (all y[k]=0.5 - halfway between possibles 0 and 1)
' in fact, if all w same they tend to march in step together
' need *asymmetry* if want them to specialise (form a representation scheme)
' best to start with diverse w
'
' also, large positive or negative w -> slow learning
' so start with small absolute w -> fast learning

function initw()
  initw = ( rnd(0) - rnd(0) ) * C
end function

sub net.init
scan
  visits = 0
  for i = loi to hii
    for i = loj to hij
      w(i,j) = initw()
    next
  next
  for j = loj to hij
    for k = lok to hik
      w(j,k) = initw()
    next
  next
  for j = loj to hij
    wt(j) = initw()
  next
  for k = lok to hik
    wt(k) = initw()
  next
end sub

sub net.backpropagate
scan
  dw = 0 ' temporary variable - dE/dw[i][j]
  '----- backpropagate O[k] -> dy[k] -> dx[k] -> w[j][k],wt[k] ---------------------------------
  for k = lok to hik
    dy(k) = y(k) - O(k)
    dx(k) = ( dy(k) ) * y(k) * (1-y(k))
  next
  '----- backpropagate dx(k),w[j](k) -> dy[j] -> dx[j] -> w[i][j],wt[j] ------------------------
  '----- use OLD w values here (that's what the equations refer to) .. -------------------------
  for j = loj to hij
    t = 0
    for k = lok to hik
      t = t + ( dx(k) * w(j,k) )
    next
    dy(j) = t
    dx(j) = ( dy(j) ) * y(j) * (1-y(j))
  next
  '----- .. do all w changes together at end ---------------------------------------------------
  for j = loj to hij
    for k = lok to hik
      dw = dx(k) * y(j)
      w(j,k) = w(j,k) - ( RATE * dw )
    next
  next
  for i = loi to hii
    for j = loj to hij
     dw = dx(j) * II(i)
     w(i,j) = w(i,j) - ( RATE * dw )
    next
  next

  for k = lok to hik
    dw = dx(k) * (-1)
    wt(k) = wt(k) - ( RATE * dw )
  next

  for j = loj to hij
    dw = dx(j) * (-1)
    wt(j) = wt(j) - ( RATE * dw )
  next
end sub

sub shape q
scan
  #m "fill white"
  #m "color black"
  #m "backcolor black"
  #m "goto 0 64"
  #m "down"
  #m "boxfilled 63 127"
  #m "up"
  r = randomAtoB( 10 , 20 )
  x = randomAtoB( r , bmpsize - r )
  y = randomAtoB( r , bmpsise - r + 64 )
  #m "color white"
  #m "backcolor white"
  if q then
    #m "goto " ; x ; " " ; y + 64
    #m "down"
    #m "circlefilled "; r
    #m "up"
  else     
    #m "goto " ; x - r ; " " ; y - r + 64
    #m "down"
    #m "boxfilled " ; x + r ; " " ; y + r + 64
    #m "up"
  end if
end sub

sub net.newIO q
scan
  call shape q
  for i = loi to hii
    II(i) = getpixel( i and 63 , int( i / 64 )  )
  next
  for k = lok to hik
    O(k) = q
  next
end sub

sub net.forwardpass
scan
  x = 0 'temporary variable - x[i]
  '----- forwardpass I[i] -> y[j] ------------------------------------------------
  for j = loj to hij
    x = 0
    for i = loi to hii
      x = x + ( II(i) * w(i,j) )
    next
    y(j) = sigmoid( x - wt(j) )
  next
  '----- forwardpass y[j] -> y[k] ------------------------------------------------
  for i = lok to hik
    x = 0
    for i = loj to hij
      x = x + ( y(j) * w(j,k) )
    next
    y(k) = sigmoid( x - wt(k) )
  next
end sub

sub net.learn CEILING
  for ic = 1 to CEILING
  scan
    call net.newIO ic and 1
    ' new I/O pair
    ' put I into I[i]
    ' put O into O(k)
    call net.forwardpass
    call net.backpropagate
 next
end sub

sub net.exploit 
scan
  goed = 0 
  for ic = 1 to 100
    call net.newIO ic and 1
    call net.forwardpass
    if y(0) >= 0.5 and O(0) >= 0.5 then
      goed = goed + 1
    end if
    if y(0) < 0.5 and O(0) < 0.5 then
      goed = goed + 1
    end if
  next     
  notice chr$( 13 ) _
  ; goed ; " % corect ." 
end sub
 
bluatigro
Guru
Re: neural net : xor
« Reply #6 on: Feb 2nd, 2017, 06:38am »


update :
removed some possible errors

WARNING :
the code runs for several minutes

error :
it does not learn
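
( a guess at one culprit : in the code below the sigmoid function assigns its result to signoid instead of sigmoid, so it always returns 0 and nothing can learn. The repaired version would read : )
Code:
function sigmoid( x )
  sigmoid = 1.0 / ( 1 + exp( 0 - x ) )
end function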

Code:
' input I[i] = any real numbers ("doubles" in C++)
' y[j]
' network output y[k] = sigmoid continuous 0 to 1
' correct output O[k] = continuous 0 to 1

' assumes throughout that all i are linked to all j, and that all j are linked to all k
' if want some NOT to be connected, will need to introduce:
'   Boolean connected [ TOTAL ] [ TOTAL ];
' initialise it, and then keep checking:
'   if (connected[i][j])
' don't really need to do this,
' since we can LEARN a weight of 0 on this link
' translated to JB/LB by bluatigro 1 feb 2017

global bmpsize : bmpsize = 63
global false : false = 0
global true : true = 1

global NOINPUT : NOINPUT  = bmpsize * bmpsize
global NOHIDDEN : NOHIDDEN = 30
global NOOUTPUT : NOOUTPUT = 1

global RATE : RATE = 0.3

global C : C = 0.1

TOTAL = NOINPUT+NOHIDDEN+NOOUTPUT

' units all unique ids - so no ambiguity about which we refer to:

global loi : loi = 0
global hii : hii = NOINPUT-1
global loj : loj = NOINPUT
global hij : hij = NOINPUT+NOHIDDEN-1
global lok : lok = NOINPUT+NOHIDDEN
global hik : hik = NOINPUT+NOHIDDEN+NOOUTPUT-1

dim II( TOTAL - 1)
dim y( TOTAL - 1)
dim O( TOTAL - 1)
dim w( TOTAL - 1, TOTAL  - 1)        ' w[i][j]
dim wt( TOTAL - 1)                   ' bias weights wt[i]
dim dx( TOTAL - 1)                   ' dE/dx[i]
dim dy( TOTAL - 1)                   ' dE/dy[i]

global handle
global device
nomainwin 
open "neural net" for graphics as #m
  #m "trapclose [quit]"
  handle = hwnd( #m )
  calldll #User32 , "GetDC" _
  , handle as ulong , device as ulong
  CALL net.init
  CALL net.learn 1000
  CALL net.exploit 
wait
[quit]
  calldll #User32 , "ReleaseDC" _
  , handle as ulong _
  , device as ulong _
  , r as ushort
  close #m 
END

function getpixel( x , y )
scan
  y = y + 64
  calldll #Gdi32 , "GetPixel" _
  , device as ulong _
  , x as long _
  , y as long _
  , kl as long
  r = kl and 255
  g = int( kl / 256 ) and 255
  b = int( kl / 256 ^ 2 ) and 255
  uit = false
  if r + g + b > 127 * 3 then uit = true
  getpixel = uit
end function

function sigmoid( x )
  signoid = 1.0 / ( 1 + exp( 0 - x ) )
end function

function randomAtoB( l , h )
  randomAtoB = l + rnd(0) * ( h - l )
end function

' going to do w++ and w--
' so might think should start with all w=0
' (all y[k]=0.5 - halfway between possibles 0 and 1)
' in fact, if all w same they tend to march in step together
' need *asymmetry* if want them to specialise (form a representation scheme)
' best to start with diverse w
'
' also, large positive or negative w -> slow learning
' so start with small absolute w -> fast learning

function initw()
  initw = ( rnd(0) - rnd(0) ) * C
end function

sub net.init
scan
  visits = 0
  for i = loi to hii
    for j = loj to hij
      w(i,j) = initw()
    next
  next
  for j = loj to hij
    for k = lok to hik
      w(j,k) = initw()
    next
  next
  for j = loj to hij
    wt(j) = initw()
  next
  for k = lok to hik
    wt(k) = initw()
  next
end sub

sub net.forwardpass
scan
  x = 0 'temporary variable - x[i]
  '----- forwardpass I[i] -> y[j] ------------------------------------------------
  for j = loj to hij
    x = 0
    for i = loi to hii
      x = x + ( II(i) * w(i,j) )
    next
    y(j) = sigmoid( x - wt(j) )
  next
  '----- forwardpass y[j] -> y[k] ------------------------------------------------
  for k = lok to hik
    x = 0
    for j = loj to hij
      x = x + ( y(j) * w(j,k) )
    next
    y(k) = sigmoid( x - wt(k) )
  next
end sub

sub net.backpropagate
scan
  dw = 0 ' temporary variable - dE/dw[i][j]
  '----- backpropagate O[k] -> dy[k] -> dx[k] -> w[j][k],wt[k] ---------------------------------
  for k = lok to hik
    dy(k) = y(k) - O(k)
    dx(k) = ( dy(k) ) * y(k) * (1-y(k))
  next
  '----- backpropagate dx(k),w[j](k) -> dy[j] -> dx[j] -> w[i][j],wt[j] ------------------------
  '----- use OLD w values here (that's what the equations refer to) .. -------------------------
  for j = loj to hij
    t = 0
    for k = lok to hik
      t = t + ( dx(k) * w(j,k) )
    next
    dy(j) = t
    dx(j) = ( dy(j) ) * y(j) * (1-y(j))
  next
  '----- .. do all w changes together at end ---------------------------------------------------
  for j = loj to hij
    for k = lok to hik
      dw = dx(k) * y(j)
      w(j,k) = w(j,k) - ( RATE * dw )
    next
  next
  for i = loi to hii
    for j = loj to hij
     dw = dx(j) * II(i)
     w(i,j) = w(i,j) - ( RATE * dw )
    next
  next

  for k = lok to hik
    dw = dx(k) * (-1)
    wt(k) = wt(k) - ( RATE * dw )
  next

  for j = loj to hij
    dw = dx(j) * (-1)
    wt(j) = wt(j) - ( RATE * dw )
  next
end sub

sub shape q
scan
  #m "fill white"
  #m "color black"
  #m "backcolor black"
  #m "goto 0 64"
  #m "down"
  #m "boxfilled 63 127"
  #m "up"
  r = randomAtoB( 1 , 25 )
  x = randomAtoB( r , bmpsize - r )
  y = randomAtoB( r , bmpsise - r + 64 )
  #m "color white"
  #m "backcolor white"
  if q then
    #m "goto " ; x ; " " ; y + 64
    #m "down"
    #m "circlefilled "; r
    #m "up"
  else     
    #m "goto " ; x - r ; " " ; y - r + 64
    #m "down"
    #m "boxfilled " ; x + r ; " " ; y + r + 64
    #m "up"
  end if
end sub

sub net.newIO q
scan
  call shape q
  for i = loi to hii
    II(i) = getpixel( i and 63 , int( i / 64 )  )
  next
  for k = lok to hik
    O(k) = q
  next
end sub

sub net.learn CEILING
  for ic = 1 to CEILING
  scan
    call net.newIO ic and 1
    ' new I/O pair
    ' put I into I[i]
    ' put O into O(k)
    call net.forwardpass
    call net.backpropagate
 next
end sub

sub net.exploit 
scan
  goed = 0 
  for ic = 1 to 1000
    call net.newIO ic and 1
    call net.forwardpass
    if y(hik) >= 0.5 and O(hik) >= 0.5 then
      goed = goed + 1
    end if
    if y(hik) < 0.5 and O(hik) < 0.5 then
      goed = goed + 1
    end if
  next     
  notice chr$( 13 ) _
  ; "[ GAME OVER ]" ; chr$( 13 ) _
  ; goed/10 ; " % corect ." 
end sub

 
bluatigro
Guru
Re: neural net : xor
« Reply #7 on: Jul 5th, 2017, 08:43am »

found another example

trying to translate it into LB

got an 'isnumerictype not understood' error

some help would be nice

Code:
''bluatigro 5 jun 2017
''neural net translation from :
''http://www.learnartificialneuralnetworks.com/neural-network-software/backpropagation-source-code/

global inputmax , outputmax , hiddenmax , celperlayer
inputmax = 1
outputmax = 0
hiddenmax = 2
celperlayer = 4
dim cel(hiddenmax,celperlayer)
dim g(hiddenmax,celperlayer)
dim wg(hiddenmax,celperlayer)
dim w(in(hiddenmax,celperlayer,celperlayer))
dim d(in(hiddenmax,celperlayer,celperlayer))
dim wish(outputmax)


call create

for t = 0 to 2000
  f = 0
  for a = 0 to 1`
    for b = 0 to 1
      for c = 0 to 1
        cel(0,0)=a
        cel(0,1)=b
        cel(0,2)=c
        wish(0)=a xor b xor c
        f = f + train( .1 , .2 )
      next c
    next b
  next a
  print t , f
next t
end
sub create
  for i = 0 to hiddenmax
    for j = 0 to celperlayer
      for k = 0 to hiddenlayer
        w(in(i,j,k))=rnd(0)
        d(in(i,j,k))=rnd(0)
      next k
      g(i,j)=rnd(0)
      wg(i,j)=rnd(0)
    next j
  next i
end sub
function train( alpha , momentum )
  call probpegate
  for i = 0 to outputmax
    uit = cel(hiddenmax,i)
    errorc = ( wish(i) - uit ) * uit * ( 1 - uit )
    errorg = errorg + ( wish(i) - uit ) * ( wish(i) - uit )
    for j = 0 to celperlayer
      delta = d(in(hiddenmax,j,i))
      udelta = alpha * errorc * cel(hiddenmax-1,j) + delta * momentum
      w(in(hiddenmax,j,i))=w(in(hiddenmax,j,i))+udelta
      d(in(hiddenmax,j,i))=udelta
      sum=sum+w((in(hiddenmax,j,i))
    next j
    wg(hiddenmax,i)=wg(hiddenmax,i)+alpha*errorc*g(hiddenmax,i)
  next i
  for i = hiddenmax - 1 to 1 step -1
    for j = 0 to celperlayer
      uit = cel(i,j)
      errorc = uit * ( 1 - uit ) * sum
      for k = 0 to celperlayer
        delta = d(in(i,j,k))
        udelta = alpha * errorc * cel(i-1,k) + delta * momentum
        w(in(i,j,k))=w(in(i,j,k))+udelta
        d(in(i,j,k))=udelta
        csum = w(in(i,j,k))*errorc
      next k
      wg(i,j)=wg(i,j)*alpha*errorc
    next j
    sum = csum
    csum = 0
  next i
  train = errorg / 2
end function
sub probpegate
  for layer = 1 to hiddenmax
    for celnr = 0 to celperlayer
      sum = 0
      for i = 0 to celperlayer
        sum = sum + cel(layer-1,celnr)*w(in(layer,celnr,i))
      next i
      cel(layer,celnr)=signoid(sum)
    next celnr
  next layer
end sub
function in(l,c,i)
  in=l*celperlayer^2+c*celperlayer+i
end function
 
Rod
Global Moderator
Re: neural net : xor
« Reply #8 on: Jul 5th, 2017, 10:13am »

There were a couple of characters that prevented the code compiling: a stray ` character and an extra ( .

Fixing that got me a float denormalised error. This is where the code tries to create a number too small to hold in Liberty's float format.

So an ugly fix was to limit how small the number can get; once that limit is reached I just zero the result. It runs to the end, but I have no idea if this is the correct solution for you.

Code:
function train( alpha , momentum )
  call probpegate
  for i = 0 to outputmax
    uit = cel(hiddenmax,i)
    errorc = ( wish(i) - uit ) * uit * ( 1 - uit )
    errorg = errorg + ( wish(i) - uit ) * ( wish(i) - uit )
    for j = 0 to celperlayer
      delta = d(in(hiddenmax,j,i))
      if delta > .8e-306 then
      udelta = alpha * errorc * cel(hiddenmax-1,j) + delta * momentum
      else
      udelta=0
      end if
      w(in(hiddenmax,j,i))=w(in(hiddenmax,j,i))+udelta
      d(in(hiddenmax,j,i))=udelta
      sum=sum+w(in(hiddenmax,j,i))
    next j
    wg(hiddenmax,i)=wg(hiddenmax,i)+alpha*errorc*g(hiddenmax,i)
  next i
  for i = hiddenmax - 1 to 1 step -1
    for j = 0 to celperlayer
      uit = cel(i,j)
      errorc = uit * ( 1 - uit ) * sum
      for k = 0 to celperlayer
        delta = d(in(i,j,k))
        'print delta
        if delta > .8e-306 then
        udelta = alpha * errorc * cel(i-1,k) + delta * momentum
        else
        udelta=0
        end if
        w(in(i,j,k))=w(in(i,j,k))+udelta
        d(in(i,j,k))=udelta
        csum = w(in(i,j,k))*errorc
      next k
      wg(i,j)=wg(i,j)*alpha*errorc
    next j
    sum = csum
    csum = 0
  next i
  train = errorg / 2
end function
 

bluatigro
Guru
Re: neural net : xor
« Reply #9 on: Jul 6th, 2017, 03:35am »

@ rod :
thanks for the help

update :
test added

i now have this :
Code:
''bluatigro 5 jun 2017
''neural net translation from :
''http://www.learnartificialneuralnetworks.com/neural-network-software/backpropagation-source-code/

global inputmax , outputmax , hiddenmax , celperlayer
global tosmall
tosmall = 1e-306
inputmax = 1
outputmax = 0
hiddenmax = 2
celperlayer = 4
dim cel(hiddenmax,celperlayer)
dim g(hiddenmax,celperlayer)
dim wg(hiddenmax,celperlayer)
dim w(in(hiddenmax,celperlayer,celperlayer))
dim d(in(hiddenmax,celperlayer,celperlayer))
dim wish(outputmax)


call create

for t = 0 to 2000
  f = 0
  for a = 0 to 1
    for b = 0 to 1
      for c = 0 to 1
        cel(0,0)=a
        cel(0,1)=b
        cel(0,2)=c
        wish(0)=a xor b xor c
        f = f + train( .1 , .2 )
      next c
    next b
  next a
  print t , f
next t

for a = 0 to 1
  for b = 0 to 1
    for c = 0 to 1
      cel(0,0)=a
      cel(0,1)=b
      cel(0,2)=c
      call probpegate
      print a;b;c , a xor b xor c , cel(hiddenmax,0)
    next c
  next b
next a
print "[ end sim ]"
end
sub create
  for i = 0 to hiddenmax
    for j = 0 to celperlayer
      for k = 0 to hiddenlayer
        w(in(i,j,k))=rnd(0)
        d(in(i,j,k))=rnd(0)
      next k
      g(i,j)=rnd(0)
      wg(i,j)=rnd(0)
    next j
  next i
end sub
function train( alpha , momentum )
  call probpegate
  for i = 0 to outputmax
    uit = cel(hiddenmax,i)
    errorc = ( wish(i) - uit ) * uit * ( 1 - uit )
    errorg = errorg + ( wish(i) - uit ) * ( wish(i) - uit )
    for j = 0 to celperlayer
      delta = d(in(hiddenmax,j,i))
      if delta > tosmall then
        udelta = alpha * errorc * cel(hiddenmax-1,j) _
        + delta * momentum
      else
        udelta=0
      end if
      w(in(hiddenmax,j,i))=w(in(hiddenmax,j,i))+udelta
      d(in(hiddenmax,j,i))=udelta
      sum=sum+w(in(hiddenmax,j,i))
    next j
    wg(hiddenmax,i)=wg(hiddenmax,i)+alpha*errorc*g(hiddenmax,i)
  next i
  for i = hiddenmax - 1 to 1 step -1
    for j = 0 to celperlayer
      uit = cel(i,j)
      errorc = uit * ( 1 - uit ) * sum
      for k = 0 to celperlayer
        delta = d(in(i,j,k))
        'print delta
        if delta > tosmall then
          udelta = alpha * errorc * cel(i-1,k) _
          + delta * momentum
        else
          udelta=0
        end if
        w(in(i,j,k))=w(in(i,j,k))+udelta
        d(in(i,j,k))=udelta
        csum = w(in(i,j,k))*errorc
      next k
      wg(i,j)=wg(i,j)*alpha*errorc
    next j
    sum = csum
    csum = 0
  next i
  train = errorg / 2
end function

sub probpegate
  for layer = 1 to hiddenmax
    for celnr = 0 to celperlayer
      sum = 0
      for i = 0 to celperlayer
        sum = sum + cel(layer-1,celnr)*w(in(layer,celnr,i))
      next i
      cel(layer,celnr)=signoid(sum)
    next celnr
  next layer
end sub
function in(l,c,i)
  in=l*celperlayer^2+c*celperlayer+i
end function
 


there is 1 problem :
the net doesn't learn
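
( a guess at one suspect : in probpegate the sum uses cel(layer-1,celnr), which does not vary with i, so each cell only ever sees one cell of the previous layer. To match the weight indexing used in train() I would expect something like : )
Code:
sum = sum + cel( layer-1 , i ) * w( in( layer , i , celnr ) )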

bluatigro
Guru
Re: neural net : xor
« Reply #10 on: Jul 6th, 2017, 03:50am »

i forgot the signoid function

[ the IDE doesn't flag that . why ? ]

i now get an 'underflow exception'

and the train() returns a growing number

Code:
function signoid( x )
  signoid = 1 / ( 1 + exp( 0-x ) )
end function
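
If the underflow comes from exp() hitting a denormal for large x, one crude workaround (my own guess, not from the original source) is to clamp the argument first:
Code:
function signoid( x )
  ' clamp so exp( 0-x ) can neither overflow nor denormalise
  if x < -500 then x = -500
  if x >  500 then x =  500
  signoid = 1 / ( 1 + exp( 0-x ) )
end function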
 