Caroline Simpson and Pruning Cascade Correlation Networks

For some time now Hanbin and I have been interested in the cascade correlation (pdf) neural network architecture. The two of us played with it a bit using the code that its inventor, Scott Fahlman, made available online in Lisp (and, again, isn't it amazing to use a programming language that runs out of the box forty years after it was written?).

As an offshoot of a class I teach at the University of Waterloo, a group of undergraduates and I met every month or two to talk about the architecture and to report on our efforts to get it working on a particular problem (e.g., replicating the work of Kharratzadeh and Shultz) or to try it in a new programming language (like C++).

One of the students, Caroline Simpson, decided to do an independent study project porting the code to Python, with the goal of exploring how robust the network is to pruning. One of the nice things about cascade correlation is that it grows the network architecture as needed to solve a particular problem, but that can also lead to very deep networks with many connections, perhaps not all of them useful. This struck Caroline as a lot like the overgrowth and pruning observed in human neural development, so she decided to use the cascade correlation architecture as a test bed for exploring how pruning affects learning and generalization. Needless to say, this is pretty impressive work for a one-term independent study project.
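To give a flavor of the algorithm Caroline worked with, here is a toy numpy sketch of the cascade correlation loop followed by a crude magnitude-pruning step. Everything in it, from the function names (train_output, recruit_hidden) to the pruning threshold, is my own illustrative simplification, not code from Fahlman's release or from Caroline's repo; a real implementation trains a pool of candidate units with quickprop and uses proper stopping criteria.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_output(H, y, epochs=3000, lr=0.5):
    # fit a single sigmoid output unit on the current feature matrix H
    w = rng.normal(scale=0.1, size=H.shape[1])
    for _ in range(epochs):
        p = sigmoid(H @ w)
        w -= lr * H.T @ ((p - y) * p * (1 - p)) / len(y)
    return w

def recruit_hidden(H, residual, epochs=3000, lr=0.5):
    # train a candidate unit to covary with the residual error
    # (a simplified stand-in for Fahlman's candidate phase)
    v = rng.normal(scale=0.1, size=H.shape[1])
    for _ in range(epochs):
        a = np.tanh(H @ v)
        v += lr * H.T @ (residual * (1 - a ** 2)) / len(residual)
    return v

# XOR, the classic cascade correlation demonstration problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])
H = np.hstack([X, np.ones((4, 1))])  # raw inputs plus a bias column

for _ in range(3):  # grow at most three hidden units
    w = train_output(H, y)
    residual = y - sigmoid(H @ w)
    if np.abs(residual).max() < 0.1:
        break
    v = recruit_hidden(H, residual)
    # freeze the new unit and cascade: its activation becomes an
    # extra input for everything trained afterwards
    H = np.hstack([H, np.tanh(H @ v)[:, None]])

w = train_output(H, y)
print("outputs before pruning:", np.round(sigmoid(H @ w), 2))

# crude magnitude pruning: zero the smallest output weights, retest
w_pruned = np.where(np.abs(w) < 0.5 * np.abs(w).mean(), 0.0, w)
print("outputs after pruning: ", np.round(sigmoid(H @ w_pruned), 2))

The point of the toy is just to show why pruning is natural here: each recruited unit adds a whole column of frozen connections, and comparing the network's outputs before and after zeroing small weights is the simplest version of the question Caroline's project asks.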

Caroline is presenting her work as a talk at our department's Discovery Day research conference, and she has also made all her code available on her GitLab page for this project. If you would like a copy of her presentation slides, you can raise an issue on her GitLab repo or email me.

Date: 2024-04-08 Mon 00:00

Author: Britt Anderson

