Discussion:
Request for Clarification on Output Differences Between Runs
Othman Soufan
2012-03-18 23:07:08 UTC
Dear Group,

After installing FANN, I tried to run the example from the
"Getting Started" page.
However, each time I run the program on xor.data, I get a different
output.
I should mention that I have combined the training and testing
programs into one program.

Some output samples are:
Max epochs 500000. Desired error: 0.0010000000.
Epochs 1. Current error: 0.2505536079. Bit fail 4.
Epochs 26. Current error: 0.0007957527. Bit fail 0.
xor test (-1.000000,1.000000) -> -319531589830587711488.000000

Max epochs 500000. Desired error: 0.0010000000.
Epochs 1. Current error: 0.2500049174. Bit fail 4.
Epochs 23. Current error: 0.0009584196. Bit fail 0.
xor test (-1.000000,1.000000) -> -0.000002

Max epochs 500000. Desired error: 0.0010000000.
Epochs 1. Current error: 0.2502186596. Bit fail 4.
Epochs 30. Current error: 0.0009348781. Bit fail 0.
xor test (-1.000000,1.000000) -> 0.000000

I would like to know why I am receiving such strange outputs that differ
from run to run.

Regards,
Othman
Steffen Nissen
2012-03-19 01:06:23 UTC
Weights are initialized randomly, so different results are expected. The very
large number you observed in the first run is, however, not expected.
--
Best Regards,
Steffen Nissen, MSc
http://www.linkedin.com/in/steffennissen
Fábio Blessa
2012-03-19 01:25:16 UTC
Send us the code, please.
It surely has a small mistake.

BR,

Fábio
_______________________________________________
Fann-general mailing list
https://lists.sourceforge.net/lists/listinfo/fann-general
Othman Soufan
2012-03-19 01:58:30 UTC
Thanks for the immediate response.

The code is as follows:

#include <stdio.h>
#include <stdlib.h>
#include <fann.h>
#include "floatfann.h"
#include "libMSVM.h"      // Generic structure and function declarations
#include "libtrainMSVM.h" // Training functions (not required for predictions only)
#include "libevalMSVM.h"  // Evaluation functions (also used during training)
#include "memory.c"
#include <math.h>

// 1. Consider having a matrix for the output layer when we have multiple
//    classes, i.e. a one-vs-all representation

int main()
{
    const unsigned int num_input = 2;
    const unsigned int num_output = 1;
    const unsigned int num_layers = 3;
    const unsigned int num_neurons_hidden = 3;
    const float desired_error = (const float) 0.001;
    const unsigned int max_epochs = 500000;
    const unsigned int epochs_between_reports = 1000;

    // Training the ANN
    struct fann *ann = fann_create_standard(num_layers, num_input,
                                            num_neurons_hidden, num_output);

    fann_set_activation_function_hidden(ann, FANN_SIGMOID_SYMMETRIC);
    fann_set_activation_function_output(ann, FANN_SIGMOID_SYMMETRIC);

    fann_train_on_file(ann, "xor.data", max_epochs, epochs_between_reports,
                       desired_error);

    // Testing the ANN
    fann_type *calc_out;
    fann_type input[2];

    input[0] = -1;
    input[1] = 1;
    calc_out = fann_run(ann, input);
    printf("xor test (%f,%f) -> %f\n", input[0], input[1], calc_out[0]);

    fann_destroy(ann);

    return 0;
}
--
MS Candidate, Class of 2010
Mathematical and Computer Sciences and Engineering
King Abdullah University of Science and Technology
Tuwal, Jeddah, KSA.
Mobile: +966506134003
Othman Soufan
2012-03-19 12:29:13 UTC
An update: setting

const unsigned int num_output = 2;

solves the problem.

So, instead of setting num_output = 1 as listed on the Getting Started page,
num_output = 2 seems to overcome the problem of the large output differences.

Currently, whenever I execute the program I get the following output:
xor test (-1.000000,1.000000) -> 0.000000

So, would you kindly confirm whether this is the proper solution, or am I
missing something?

Regards,
Othman
Steffen Nissen
2012-03-19 13:12:29 UTC
That should not fix anything, as the XOR problem only has one output.

Please let me know how you compile the source; perhaps you are linking with
doublefann instead of floatfann.

You can also try to run some of the other examples in:
http://fann.git.sourceforge.net/git/gitweb.cgi?p=fann/fann;a=tree;f=examples;hb=HEAD

Like:
http://fann.git.sourceforge.net/git/gitweb.cgi?p=fann/fann;a=blob;f=examples/cascade_train.c;h=35bae4b9a0e022770c97995d6301aab6e08680e8;hb=HEAD

Steffen
Othman Soufan
2012-03-19 13:32:03 UTC
Indeed, that was the problem:
I was linking against doublefann instead of floatfann.

Now I am getting the right output, which is:
xor test (-1.000000,1.000000) -> 1.000000

Since the example includes floatfann.h, only floatfann
should be linked.

Thanks for the support; I appreciate your efforts.
Steffen Nissen
2012-03-19 14:42:22 UTC
Please note that whenever you include floatfann.h (or just fann.h) instead of
doublefann.h, you should link with floatfann.