Discussion: Current error: inf on function approximation problem
Dirk Gorissen
2007-07-30 14:48:43 UTC
Permalink
Hello,

I've been using ANNs for a long time, but always in a Matlab environment (I know next to nothing of C). Since I need to speed things up I wanted to try FANN. I started off with a simple data fitting problem (that's what I need ANNs for), but unfortunately I can't get it to work, though my code seems OK (see below and attached). The output when run is:

Code:

Dataset: 100 patterns, 1 inputs, 1 outputs
Max epochs 5000. Desired error: 0.0010000000.
Epochs 1. Current error: inf. Bit fail 47.
Epochs 1000. Current error: inf. Bit fail 100.
Epochs 2000. Current error: inf. Bit fail 100.
Epochs 3000. Current error: inf. Bit fail 100.
Epochs 4000. Current error: inf. Bit fail 100.
Epochs 5000. Current error: inf. Bit fail 100.
The network output for 0.500000 is: 0.000000
The network output for 0.830000 is: 0.000000

So, for some reason training is not doing anything... What am I doing wrong? I've searched through Google, the forum and the mailing list archives but came up with nothing, so it seems it must be something trivial :/ I'm using the latest version, 2.1beta.

Many thanks,

Dirk

PS: I got the "read_from_array" function from a different forum post; I'm assuming it is correct.

Code (full code attached):

#include "doublefann.h"
#include "fann.h"
#include "math.h"

/*
* Reads training data from a double array
*/
struct fann_train_data *read_from_array(double *din, double *dout, unsigned int num_data,
unsigned int num_input, unsigned int num_output) {
unsigned int i, j;
fann_type *data_input, *data_output;
//...taken from: http://leenissen.dk/fann/forum/viewtopic.php?p=719&sid=1661ac359e28908e704231faa6310518
return data;
}


int main()
{
//init
const float desired_error = (const float) 0.001;
const unsigned int max_epochs = 5000;
const unsigned int epochs_between_reports = 1000;
struct fann_train_data *train_data;

//create a simple network
unsigned int layers[3] = {1, 4, 1};
struct fann *ann = fann_create_standard_array(3, layers);

fann_randomize_weights(ann, -1, 1);
fann_set_activation_function_hidden(ann, FANN_SIGMOID_SYMMETRIC);
fann_set_activation_function_output(ann, FANN_LINEAR);
fann_set_training_algorithm(ann, FANN_TRAIN_RPROP);
fann_set_train_error_function(ann, FANN_ERRORFUNC_LINEAR);

//generate some artificial data
int n = 100;
double din[n];
double dout[n];
double x = 0;
int i = 0;
int k = 0;
while(i < n){
x = x + (5*3.14)/n;
din[i] = x;
dout[i] = sin(x);
//printf("%i - dataset is: sin(%f) = %f\n",k,x,dout[i]);
++i;
++k;
}

train_data = read_from_array(din, dout, n, 1, 1);
int num = fann_length_train_data(train_data);
int numIn = fann_num_input_train_data(train_data);
int numOut = fann_num_output_train_data(train_data);
printf("Dataset: %i patterns, %i inputs, %i outputs\n",num,numIn,numOut);

//train the network
fann_train_on_data(ann, train_data, max_epochs, epochs_between_reports, desired_error);

//evaluate the trained network in a few places
//do we have to specify one value at a time??
double in = 0.5;
fann_type* out = fann_run(ann, &in);
printf("The network output for %f is: %f\n",in,out);

in = 0.83;
out = fann_run(ann, &in);
printf("The network output for %f is: %f\n",in,out);

//cleanup
fann_destroy_train(train_data);
fann_destroy(ann);

return 0;
}
Amaresh Kumar
2007-07-30 18:47:43 UTC
Permalink
I think you have to check the training data. I also faced this problem and fixed it; the fix was to check the training data, i.e. the way it is fed to the program. I hope you understand me.

amar
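One concrete way to do that check (a small sketch added here for illustration, not code from the thread; it assumes the train_data built by the read_from_array() shown above): print the dimensions FANN reports and dump the data in FANN's own text format so it can be inspected or reloaded.

    /* Report what FANN thinks it received. */
    printf("patterns=%u inputs=%u outputs=%u\n",
           fann_length_train_data(train_data),
           fann_num_input_train_data(train_data),
           fann_num_output_train_data(train_data));

    /* Dump the loaded data to a file in FANN's training-data format; it can be
     * checked by eye or read back with fann_read_train_from_file("check.data"). */
    fann_save_train(train_data, "check.data");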

Dirk Gorissen <dgorissen-***@public.gmane.org> wrote:

/*
Fast Artificial Neural Network Library (fann)
Copyright (C) 2003 Steffen Nissen (lukesky-***@public.gmane.org)

This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.

This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.

You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/

#include "doublefann.h"
#include "fann.h"
#include "math.h"

/*
 * Reads training data from a double array
 */
struct fann_train_data *read_from_array(double *din, double *dout, unsigned int num_data,
                                        unsigned int num_input, unsigned int num_output)
{
    unsigned int i, j;
    fann_type *data_input, *data_output;
    struct fann_train_data *data =
        (struct fann_train_data *) malloc(sizeof(struct fann_train_data));
    if(data == NULL) {
        fann_error(NULL, FANN_E_CANT_ALLOCATE_MEM);
        return NULL;
    }

    fann_init_error_data((struct fann_error *) data);

    data->num_data = num_data;
    data->num_input = num_input;
    data->num_output = num_output;
    data->input = (double **) calloc(num_data, sizeof(double *));
    if(data->input == NULL) {
        fann_error(NULL, FANN_E_CANT_ALLOCATE_MEM);
        fann_destroy_train(data);
        return NULL;
    }

    data->output = (double **) calloc(num_data, sizeof(double *));
    if(data->output == NULL) {
        fann_error(NULL, FANN_E_CANT_ALLOCATE_MEM);
        fann_destroy_train(data);
        return NULL;
    }

    data_input = (double *) calloc(num_input * num_data, sizeof(double));
    if(data_input == NULL) {
        fann_error(NULL, FANN_E_CANT_ALLOCATE_MEM);
        fann_destroy_train(data);
        return NULL;
    }

    data_output = (double *) calloc(num_output * num_data, sizeof(double));
    if(data_output == NULL) {
        fann_error(NULL, FANN_E_CANT_ALLOCATE_MEM);
        fann_destroy_train(data);
        return NULL;
    }

    for(i = 0; i != num_data; i++) {
        data->input[i] = data_input;
        data_input += num_input;

        for(j = 0; j != num_input; j++) {
            data->input[i][j] = din[i*num_input+j];
        }

        data->output[i] = data_output;
        data_output += num_output;

        for(j = 0; j != num_output; j++) {
            data->output[i][j] = dout[i*num_output+j];
        }
    }
    return data;
}


int main()
{
    //init
    const float desired_error = (const float) 0.001;
    const unsigned int max_epochs = 5000;
    const unsigned int epochs_between_reports = 1000;
    struct fann_train_data *train_data;

    //create a simple network
    unsigned int layers[3] = {1, 4, 1};
    struct fann *ann = fann_create_standard_array(3, layers);

    fann_randomize_weights(ann, -1, 1);
    fann_set_activation_function_hidden(ann, FANN_SIGMOID_SYMMETRIC);
    fann_set_activation_function_output(ann, FANN_LINEAR);
    fann_set_training_algorithm(ann, FANN_TRAIN_RPROP);
    fann_set_train_error_function(ann, FANN_ERRORFUNC_LINEAR);
    fann_set_train_stop_function(ann, FANN_STOPFUNC_MSE);

    //generate some artificial data
    int n = 100;
    double din[n];
    double dout[n];
    double x = 0;
    int i = 0;
    int k = 0;
    while(i < n){
        x = x + (5*3.14)/n;
        din[i] = x;
        dout[i] = sin(x);
        //printf("%i - dataset is: sin(%f) = %f\n",k,x,dout[i]);
        ++i;
        ++k;
    }

    train_data = read_from_array(din, dout, n, 1, 1);
    int num = fann_length_train_data(train_data);
    int numIn = fann_num_input_train_data(train_data);
    int numOut = fann_num_output_train_data(train_data);
    printf("Dataset: %i patterns, %i inputs, %i outputs\n", num, numIn, numOut);

    //train the network
    fann_train_on_data(ann, train_data, max_epochs, epochs_between_reports, desired_error);

    //evaluate the trained network in a few places
    //do we have to specify one value at a time??
    double in = 0.5;
    fann_type *out = fann_run(ann, &in);   /* fann_run returns the output array */
    printf("The network output for %f is: %f\n", in, out[0]);

    in = 0.83;
    out = fann_run(ann, &in);
    printf("The network output for %f is: %f\n", in, out[0]);

    //cleanup
    fann_destroy_train(train_data);
    fann_destroy(ann);

    return 0;
}
Dirk Gorissen
2007-07-31 12:53:58 UTC
Permalink
Post by Amaresh Kumar
I think you have to check the training data. I also faced this problem and fixed it; the fix was to check the training data, i.e. the way it is fed to the program. I hope you understand me.
amar
Thanks for the tip. I removed some stuff, loaded the data from file and now
the output is:

Code:

Dataset: 30 patterns, 1 inputs, 1 outputs
Max epochs 5000. Desired error: 0.0001000000.
Epochs 1. Current error: 0.6193455458. Bit fail 26.
Epochs 500. Current error: 0.0197773371. Bit fail 1.
Epochs 1000. Current error: 0.0197770242. Bit fail 1.
Epochs 1500. Current error: 0.0197773371. Bit fail 1.
Epochs 2000. Current error: 0.0197778232. Bit fail 1.
Epochs 2500. Current error: 0.0197787844. Bit fail 1.
Epochs 3000. Current error: 0.0197784621. Bit fail 1.
Epochs 3500. Current error: 0.0197775420. Bit fail 1.
Epochs 4000. Current error: 0.0197772477. Bit fail 1.
Epochs 4500. Current error: 0.0197778028. Bit fail 1.
Epochs 5000. Current error: 0.0197775140. Bit fail 1.

The network output for 0.330000 is: 5.248911e-315

So now at least training seems to do something. But trying to evaluate the network always gives 0, _no matter what value I use as input_...

This is getting very frustrating for such a simple test case :s

Cheers
Dirk

(files attached)
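A separate detail worth noting when reading the value reported above (an editorial observation, not something raised in the thread): fann_run() returns a pointer to the network's output array, so the pointer itself must not be passed to printf's %f; the first element has to be dereferenced, roughly like this:

    fann_type *out = fann_run(ann, &in);
    /* out points to an array with one entry per output neuron; print the first */
    printf("The network output for %f is: %f\n", in, (double) out[0]);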
--
Dirk Gorissen
PhD Student
COMS Research Group
Antwerp University
Belgium

http://www.coms.ua.ac.be
Steven Levis
2007-07-31 17:45:57 UTC
Permalink
Hi Dirk,
You may have discovered a bug with doublefann. When I replace #include "doublefann.h" with #include "fann.h", the program appears to work as expected.

-Steve
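A possible reason the two headers behave differently (offered as an assumption, not something confirmed in this thread): the header chosen decides what fann_type is in the user's code, and that has to agree with the precision of the libfann variant linked in. The library and file names in this self-contained check are those of a typical FANN install, not taken from the thread:

    #include <stdio.h>

    /* Pick exactly one of these, and link the matching library:
     *   doublefann.h  -> fann_type is double  ->  gcc fanntest.c -ldoublefann -lm
     *   fann.h        -> fann_type defaults to float -> gcc fanntest.c -lfann -lm
     */
    #include "doublefann.h"

    int main(void)
    {
        /* If this size disagrees with the precision of the library actually
         * linked, the training data will be silently misread. */
        printf("sizeof(fann_type) = %u bytes\n", (unsigned) sizeof(fann_type));
        return 0;
    }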
Dirk Gorissen
2007-08-01 09:13:45 UTC
Permalink
Hi Steve,

Indeed, that works! A big thank you!
I have posted this on the forum topic I opened
(http://leenissen.dk/fann/forum/viewtopic.php?p=792#792)

Should I file a bug report?

Cheers
Dirk
--
Dirk Gorissen
PhD Student
COMS Research Group
Antwerp University
Belgium

http://www.coms.ua.ac.be


Steven Levis
2007-08-01 16:04:49 UTC
Permalink
Hi Dirk,
That sounds like a good idea, especially since it is so easily reproducible. Be sure to include the source code and data files.

I uncommented the loop you had in the source code that printed
out the training data, and found that the training data is incorrect
when using doublefann. I suggest you uncomment that section
in the code before submitting the report to give the developers a
head start.

-Steve
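The loop being referred to is the commented-out printf in the data-generation code; uncommenting it and adding a second loop over the loaded structure (a small sketch using the variable names from the posted program) shows side by side what was generated and what FANN will actually train on:

    /* Inside the data-generation while loop, uncommented: */
    printf("%i - dataset is: sin(%f) = %f\n", k, x, dout[i]);

    /* After read_from_array(), print what ended up in the fann_train_data: */
    for(i = 0; i < n; i++) {
        printf("loaded pattern %i: %f -> %f\n", i,
               (double) train_data->input[i][0],
               (double) train_data->output[i][0]);
    }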
Dirk Gorissen
2007-08-01 15:46:18 UTC
Permalink
Hello,

I have written a simple mex file that allows me to interface FANN with Matlab.

Everything works, but I've run into a practical problem when I want to do
something like this in matlab:

ann = createFann(layers, samples, values);
..do stuff....
values = testFann(ann, testSamples);

The problem is finding a suitable form for the matlab ann object. My
intuitive idea was to do the following:

- createFann: creates a C fann neural network struct, trains it, gets the
weights (extract them from the connections structure), returns the weight
array to matlab (so ann is a weight matrix)

- testFann: creates an empty C fann neural network (with the same layer
structure), set the passed weights, run the network on given test samples,
return the results

However this seems ugly and time consuming, and I'm not sure if it will work all the time (e.g., when using sparse ANNs, creating a new empty fann will not necessarily have the same connections, I guess).

Better ideas would be appreciated (i.e. how do the Octave bindings solve this? I will try to look at the code).

Many thanks,

Cheers
Dirk

PS: if I get everything running I will happily post the bindings somewhere.
--
Dirk Gorissen
PhD Student
COMS Research Group
Antwerp University
Belgium

http://www.coms.ua.ac.be


Søren Hauberg
2007-08-02 11:05:26 UTC
Permalink
Post by Dirk Gorissen
Better ideas would be appreciated (i.e. how do the Octave bindings solve this? I will try to look at the code).
I guess I should answer that question as I'm the author of the Octave
binding :-)
I just create a new Octave data type that contains the fann data structure. Then I can create an API that's fairly similar to the C API.
Last time I checked no API existed for getting/setting individual
weights, so the Octave binding doesn't support this :-(

Søren
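For a Matlab MEX wrapper, the rough equivalent of that approach is to keep the struct fann * alive inside the MEX file and hand Matlab only an opaque handle, instead of copying weights back and forth. The sketch below is purely illustrative: the createFann naming, the uint64 handle encoding and the omitted training code are assumptions, not code from this thread or from the Octave binding.

    #include <stdint.h>
    #include "mex.h"
    #include "doublefann.h"

    /* createFann-style entry point: build (and normally train) a network,
     * then return the pointer to Matlab packed into a uint64 scalar handle. */
    void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
    {
        unsigned int layers[3] = {1, 4, 1};
        struct fann *ann = fann_create_standard_array(3, layers);

        /* ...training driven by the data passed in prhs would go here... */

        /* The fann struct lives in malloc'ed memory, so it survives between
         * MEX calls; a matching "destroyFann" MEX file must eventually call
         * fann_destroy() on the handle. */
        plhs[0] = mxCreateNumericMatrix(1, 1, mxUINT64_CLASS, mxREAL);
        *(uint64_t *) mxGetData(plhs[0]) = (uint64_t) (uintptr_t) ann;
    }

    /* A testFann-style MEX file would unpack the handle and run the network:
     *     struct fann *ann =
     *         (struct fann *) (uintptr_t) *(uint64_t *) mxGetData(prhs[0]);
     *     fann_type *out = fann_run(ann, input);
     */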

Dirk Gorissen
2007-08-02 12:40:02 UTC
Permalink
Hello,

In the end I decided to return a struct to matlab containing 3 arrays
(weights, from_neuron connections, to_neuron connections).

This works fine, except in the situation where the connectivity is < 1 (sparse
networks). The reason for this is that fann_set_weight_array ignores the
connections:

Only the weights can be changed, connections and weights are ignored
if they do not already exist in the network.

So if the developers would provide a fann_set_connection_array method, that would be perfect... :)

I will post the code when I've cleaned it up a bit.

Cheers
Dirk
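For the fully connected case, the existing getter/setter pair is enough to round-trip the weights in the way described above. A minimal sketch, assuming the FANN build in use provides fann_get_total_connections(), fann_get_connection_array() and fann_set_weight_array() (worth checking in fann.h for the exact version):

    #include <stdlib.h>
    #include "doublefann.h"

    /* Copy all weights from a trained network into a freshly created network
     * with the same topology. */
    void copy_weights(struct fann *trained, struct fann *fresh)
    {
        unsigned int n = fann_get_total_connections(trained);
        struct fann_connection *conns = malloc(n * sizeof(*conns));

        /* Fills conns with (from_neuron, to_neuron, weight) triples. */
        fann_get_connection_array(trained, conns);

        /* As the documentation quoted above says, this only updates weights
         * for connections that already exist in `fresh`, which is why it
         * breaks down for sparse networks whose connections differ. */
        fann_set_weight_array(fresh, conns, n);

        free(conns);
    }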
--
Dirk Gorissen
PhD Student
COMS Research Group
Antwerp University
Belgium

http://www.coms.ua.ac.be

