
Meeting Notes from September 22


Internal meeting held on September 22, 2014

Algorithm issues discussed

We reviewed the algorithm and went over a number of possible issues with the current implementation.

  1. Too many predictions. Suppose you have learned ABCD and DCBA. If you are currently at B, then moving to the right will generate predictions for both C and A (see the sketch after this list). As you add more worlds, you end up with more and more extra predictions. This is because we currently don’t have lateral connections in Layer 4. We discussed adding lateral connections in Layer 4 to disambiguate. Now that we have winner cell hysteresis, this may work.

  2. Temporal pooling (TP) capacity. With pooling representations tied to columns, we have a pretty severe limit on pooling capacity. We discussed (a) moving to a cell-based representation with a small number of columns participating, and (b) a cell-based representation with all columns in the world participating. Both require shifting to a temporal memory implementation of pooling, which is a big code change.

  3. If you have a single world with repeated sensory elements, you can’t distinguish between them (you will lose track of where you are). We didn’t know if this was a real problem since it would probably be solved in a hierarchy. We decided to do nothing right now.

  4. Resets. We didn’t get into this too much, but this is something to keep an eye on.
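To make the ambiguity in item 1 concrete, here is a toy Python sketch. It is not NuPIC code; the transition table, the "left"/"right" motor commands, and the world names are illustrative assumptions. Pooling transitions across worlds with no sequence context yields two predictions from B, while keeping the world identity in the state (a stand-in for what lateral connections in Layer 4 would provide) disambiguates.

```python
from collections import defaultdict

def learn_transitions(worlds, use_context=False):
    """Map (state, move) -> set of predicted next elements, pooled across worlds.

    With use_context=True the world identity is part of the state, which is a
    rough stand-in for the sequence context lateral connections would provide.
    """
    transitions = defaultdict(set)
    for name, world in worlds.items():
        for i, element in enumerate(world):
            state = (name, element) if use_context else element
            if i + 1 < len(world):
                transitions[(state, "right")].add(world[i + 1])
            if i > 0:
                transitions[(state, "left")].add(world[i - 1])
    return transitions

worlds = {"w1": "ABCD", "w2": "DCBA"}

no_context = learn_transitions(worlds)
print(no_context[("B", "right")])            # {'C', 'A'}: two predictions

with_context = learn_transitions(worlds, use_context=True)
print(with_context[(("w1", "B"), "right")])  # {'C'}: unambiguous
```

Adding more worlds only grows the uncontexted table's entries per (element, move) pair, which matches the "more and more extra predictions" concern above.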

Next steps

We feel pretty good about the basic behavior of sensorimotor inference and temporal pooling. It is hard to make some of the above decisions in the simplistic scenarios we are working with, so we decided to expand beyond simple artificial datasets. Our goal now is to demonstrate increasing stability and capacity within a richer domain. We will attempt an MNIST-like dataset.

Specific goals: we want to show that adding Layer 3 pooling improves stability. After that we want to add a second level and demonstrate additional stability. For now, Layer 3 will not do high-order sequences. We will test with a small set of images first and then move to a larger set. We expect these tests to primarily demonstrate translation invariance.
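As a rough illustration of what an MNIST-like sensorimotor test could involve: a small sensor patch moves over a digit image, producing (sensory patch, motor command) pairs, and translated views of the same digit should pool to a stable representation. The sketch below is an assumption about the setup, not the planned code; the function name, patch size, random saccade policy, and (dy, dx) motor encoding are all hypothetical.

```python
import numpy as np

def sensorimotor_sequence(image, patch_size=8, steps=10, rng=None):
    """Yield (patch, motor_delta) pairs from random saccades over one image.

    `image` is a 2-D array (e.g. a 28x28 MNIST digit). The saccade policy and
    the (dy, dx) motor encoding are illustrative choices, not NuPIC's.
    """
    rng = rng or np.random.default_rng(42)
    h, w = image.shape
    y = int(rng.integers(0, h - patch_size))
    x = int(rng.integers(0, w - patch_size))
    for _ in range(steps):
        dy, dx = rng.integers(-3, 4, size=2)
        ny = int(np.clip(y + dy, 0, h - patch_size))
        nx = int(np.clip(x + dx, 0, w - patch_size))
        motor = (ny - y, nx - x)  # actual movement after clipping at the border
        y, x = ny, nx
        yield image[y:y + patch_size, x:x + patch_size], motor

# Example with a fake 28x28 "digit"; translated copies of the same digit
# should yield overlapping sets of (patch, motor) pairs.
digit = np.zeros((28, 28))
digit[10:18, 12:16] = 1.0
for patch, motor in sensorimotor_sequence(digit, steps=3):
    print(patch.shape, motor)
```

Whether Layer 3 pooling stays stable across such translated views is the first thing these tests would check.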