// Connection table between the six subsampling maps (rows) and the
// sixteen convolutional maps (columns): the classic LeNet-5 scheme.
bool[] mapCombinations = new bool[16 * 6]
{
    true,  false, false, false, true,  true,  true,  false, false, true,  true,  true,  true,  false, true,  true,
    true,  true,  false, false, false, true,  true,  true,  false, false, true,  true,  true,  true,  false, true,
    true,  true,  true,  false, false, false, true,  true,  true,  false, false, true,  false, true,  true,  true,
    false, true,  true,  true,  false, false, true,  true,  true,  true,  false, false, true,  false, true,  true,
    false, false, true,  true,  true,  false, false, true,  true,  true,  true,  false, true,  true,  false, true,
    false, false, false, true,  true,  true,  false, false, true,  true,  true,  true,  false, true,  true,  true
};

// Convolutional layer: 16 maps of 10x10 with 5x5 receptive fields, partially
// connected to the previous layer through the mapping table above.
network.Layers.Add(new Layer(network, LayerTypes.Convolutional, ActivationFunctions.Tanh, 16, 10, 10, 5, 5, new Mappings(network, 2, mapCombinations)));
// Subsampling layer: 16 maps of 5x5 with 2x2 average pooling.
network.Layers.Add(new Layer(network, LayerTypes.Subsampling, ActivationFunctions.AveragePoolingTanh, 16, 5, 5, 2, 2));
// Convolutional layer: 120 maps of 1x1 with 5x5 receptive fields.
network.Layers.Add(new Layer(network, LayerTypes.Convolutional, ActivationFunctions.Tanh, 120, 1, 1, 5, 5));
// Fully connected output layer with 10 units, one per class.
network.Layers.Add(new Layer(network, LayerTypes.FullyConnected, ActivationFunctions.Tanh, 10));
network.InitWeights();

Design View

This is Design view, where you can see how the network is defined and inspect the weights of all the layers. When you hover the mouse over a single weight, a tooltip shows the corresponding weight or bias value. If you have changed the block size, you can always refresh the weights graphic so you can see it at the preferred size.

Training View

This is Training view, where you train the network. The 'Play' button opens the 'Select Training Parameters' dialog, where you can define the basic training parameters. The 'Training Scheme Editor' button gives you the possibility to fully define your own training schemes to experiment with. At any time, training can be aborted by pressing the 'Stop' button. The 'Star' button resets (forgets) all the weight values.

Testing View

In Testing view, you can test your network and get a graphical confusion matrix that represents all the misses.

Calculate View

In Calculate view, you can test a single digit or object with the desired properties, fire it through the network, and get a graphical view of all the output values in every layer.

Final Words

I would love to see a DirectCompute 5.0 integration for offloading the highly parallel task of training the neural network to a DirectX 11 compliant GPU if one is available. But I've never programmed with DirectX or any other shader-based language before, so if there's anyone out there with more experience in this area, any help is very welcome.

I made an attempt to use a simple MVVM structure in this WPF application. In the Model folder, you can find the files for the neural network class and also a DataProvider class, which deals with loading and providing the necessary MNIST and CIFAR-10 training and testing samples. There is also a NeuralNetworkDataSet class that is used by the project to load and save neural network definitions, weights, or both (full) from or to a file on disk. The View folder contains the four different PageViews in the project and a global PageView which acts as a container for the different views (Design, Training, Testing, and Calculate). In the ViewModel folder, you will find a PageViewModelBase class from which the corresponding four ViewModels are derived. All the rest is found in the MainViewWindows.xaml.cs class.
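To make this layering concrete, here is a minimal sketch of how the ViewModel side could be wired together. Only PageViewModelBase is an actual class in the project; the derived class names and members shown here are illustrative assumptions, not the project's exact code.

using System.ComponentModel;

// Common base for the four page ViewModels (sketch; the real
// PageViewModelBase in the project carries more state than this).
public abstract class PageViewModelBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    // Raise a change notification so the WPF bindings refresh.
    protected void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}

// One derived ViewModel per PageView (names are hypothetical).
public class DesignPageViewModel : PageViewModelBase { /* weights graphic, block size, ... */ }
public class TrainingPageViewModel : PageViewModelBase { /* training parameters and schemes, ... */ }
public class TestingPageViewModel : PageViewModelBase { /* confusion matrix data, ... */ }
public class CalculatePageViewModel : PageViewModelBase { /* per-layer output values, ... */ }

The global PageView can then simply bind to whichever of these ViewModels is active, which keeps the neural network code in the Model folder free of any WPF dependencies.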
Hope there's someone out there who can actually use this code and improve on it. Extend it with an unsupervised learning stage, for example an encoder/decoder construction; implement a better loss function (negative log-likelihood instead of MSE; a sketch follows the version history below); extend it to more test databases; make use of more advanced squashing functions; etc.

History

1.0.2.5: (05-27-2012)
- Now you can download the MyNet-16 (42 errors) weights file
- Code cleaning and spelling corrections

1.0.2.4: (05-14-2012)
- Fix: Download of the MNIST dataset now works for everybody. If you had problems with downloading, it's best to delete the CNNWB folder under My Documents and then run the latest version

1.0.2.3: (05-10-2012)
- Fix: The Pattern Index value in Calculate View isn't set to zero anymore when changing to a different View

1.0.2.2: (05-03-2012)
- Several important fixes for functionality of the previous version

1.0.2.1: (04-30-2012)
- Added the possibility to switch every dataset from float to double and vice versa. Using a dataset in float reduces memory consumption quite a bit on big sets. If you have plenty of memory, you can use a double dataset; the benefit of a double dataset is a slight speed advantage in training the network
- Added a global setting for the default MNIST distortion parameters used
- Better garbage collection

1.0.2.0: (04-17-2012)
- Better garbage collection when switching between networks
- PageViewModelBase.cs and the classes which derive from it are cleaned of some unnecessary code
- Refactoring & small fixes

1.0.1.9: (04-10-2012)
- Bugfixes

1.0.1.8: (04-07-2012)
- Reduced memory usage for every dataset
- Bugfixes

1.0.1.7: (03-17-2012)
- Fixed: Download of MNIST dataset
- Fixed: Training Scheme Editor works now for the CIFAR-10 dataset

1.0.1.6: (03-13-2012)
- Speed improvements in training the CNN
- Speed improvements in creating the Design & Calculate graphic

1.0.1.5: (02-26-2012)
- Memory consumption reduced

1.0.1.4:
- Loading the CIFAR-10 dataset is now much faster
- The performance of Design View is now better optimised for bigger networks
- It's now possible to adjust the block size of the weight and output values graphic
- In Design View you can refresh the weights graphic to the current block size

1.0.1.3:
- Performance improvements in training networks
- Performance improvement in displaying Design View (still too slow for big networks)
- Minor GUI changes

1.0.1.2:
- Now all the fully connected layers are displayed in Calculate View
- Changing the background color now works properly

1.0.1.1:
- Now you can easily reset the weight values in Training View
- By using Max-Pooling with the CIFAR-10 dataset, the results are much better. I've also horizontally flipped each training pattern to double the size of the training set
- Some minor fixes

1.0.1.0:
- The CIFAR-10 dataset of 10 natural objects in color is now fully supported
- The weights in Design View are now correctly displayed (still slow on big networks)
- The file format used to save and load weights, definitions, etc. has changed and is incompatible with previous versions

1.0.0.1:
- Now you can see all the weight and bias values in every layer
- Renamed some items so they make more sense (KernelTypes.Sigmoid => ActivationFunctions.Tanh)
- As a last layer, you can use LeCun's RBF layer with fixed weights
- Now it is possible to use ActivationFunctions.AbsTanh to have a rectified convolutional layer

1.0.0.0:
- Initial release
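As a starting point for the loss-function suggestion in the Final Words, here is a minimal sketch of the two criteria side by side. The method names, the softmax over the raw outputs, and the one-hot/true-class setup are my own illustration and not part of the workbench's code.

using System;

static class LossFunctions
{
    // Mean squared error: 0.5 * sum((output - target)^2),
    // the criterion the workbench currently minimises.
    public static double Mse(double[] output, double[] target)
    {
        double sum = 0.0;
        for (int i = 0; i < output.Length; i++)
        {
            double d = output[i] - target[i];
            sum += 0.5 * d * d;
        }
        return sum;
    }

    // Negative log-likelihood over a softmax of the raw outputs:
    // -log(softmax(output)[trueClass]).
    public static double NegativeLogLikelihood(double[] output, int trueClass)
    {
        // Subtract the maximum for numerical stability before exponentiating.
        double max = double.MinValue;
        for (int i = 0; i < output.Length; i++)
            if (output[i] > max) max = output[i];

        double sumExp = 0.0;
        for (int i = 0; i < output.Length; i++)
            sumExp += Math.Exp(output[i] - max);

        return -(output[trueClass] - max - Math.Log(sumExp));
    }
}

Compared with MSE on Tanh outputs, the softmax/NLL combination keeps a useful gradient even when the network is confidently wrong, which is the usual argument for switching to it in classification tasks.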
License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).

About the Author

Filip D'haene
Software Developer
Belgium