Results
Assumptions underlying the model
We consider a feedforward face-processing hierarchy as a model for how the ventral stream rapidly computes invariant representations. Invariant information can be decoded from inferotemporal cortex, and the face areas within it, roughly 100 ms after stimulus presentation [Hung2005, meyers2015intelligent]. This timescale is too fast for feedback to play a large role [Hung2005, Thorpe1996, isik2014dynamics]. Thus, while the actual face-processing system might operate in other modes as well, all indications are that the fundamental properties of shape selectivity and invariance need to be explained as properties of feedforward processing.
The population of neurons in ML/MF is highly face selective [Tsao2006], so incoming information can be thought of as passing through a face-likeness filter. We thus assume the existence of a functional gate that routes only images of face-like objects to the input of the face system. The existence of large "face-like" templates or filters explains many of the so-called holistic effects of face perception, including the face inversion and composite face effects [young1987configurational, Tan2016, farzmahdi2016specialized]. This property has one further computational implication: it provides an automatic face-specific gating mechanism for the face-processing system.
We make the standard assumption that a neuron's basic operation is a dot product between its input $I$ and a vector of synaptic weights, followed by a nonlinearity and pooling, yielding complex-like cells as

(1)  $\mu_k(I) = \sum_{i} \eta\big( \langle I,\, g_i t_k \rangle \big)$

where $\eta$ is a nonlinear function, e.g., squaring as in [adelson1985spatiotemporal]. We suppose that the $g_i t_k$ are image-plane transformations of the template $t_k$ corresponding to rotations in depth of the face. Note that this is a set of transformations but it is not a group (see Appendix section 1.1). We call $\mu(I) = (\mu_1(I), \ldots, \mu_K(I))$ the signature of image $I$.
Approximate view invariance
The model of Eq. (1) encodes a novel face by its similarity to a set of stored template faces. For example, the templates $t_k$ could correspond to views of each of a set of well-known individuals from an early developmental period, e.g., parents, caretakers, etc. One could regard the acquisition of this set of familiar faces as the algorithm's (unsupervised) training phase. To see why the algorithm works, consider that whenever a stored view encodes an orientation that does not match that of the input, its dot product with the input will be very low. Among the units tuned to the correct orientation, there will be a range of response values, since different template faces will have different levels of similarity to the novel face. When the novel face appears at a different orientation, the only effect is to change which specific view-tuned units carry its signature. Since the pooled neural response is computed by summing over these, the large responses carrying the signature will dominate. Thus the pooled neural response will be approximately unchanged by rotation (see the Appendix section 1). Since these models are based on stored associations of frames, they can be interpreted as taking advantage of temporal continuity to learn the simple-to-complex wiring from their view-specific to view-tolerant layers. They associate temporally adjacent frames from the video of visual experience as in, e.g., [Isik2012].
The computational insight enabling depth-rotation-tolerant representations to be learned from experience is that, due to properties of how objects move in the world, temporally adjacent frames (the stored views) almost always depict the same object [hinton1990unsupervised, stryker1991temporal, Foldiak1991, wiskott2002slow, berkes2009structured, Isik2012]. Short videos containing a face almost always contain multiple views of the same face. There is considerable evidence from physiology and psychophysics that the brain employs a temporal-association strategy of this sort [miyashita1988neuronal, Wallis2001, Cox2005, Li2008, wallis2009learning, Li2010]. Thus, our assumption here is that in order to get invariance to non-affine transformations (like rotation in depth), it is necessary to have a learning rule that takes advantage of the temporal coherence of object identity.
More formally, this procedure achieves depth-rotation tolerance because the set of rotations in depth locally approximates the group structure of affine transformations in the plane (see Appendix section 1). For the latter case, there are theorems guaranteeing invariance without loss of selectivity by operations resembling the convolution in space performed by simple cells and the pooling done by complex cells [anselmi2015invariance].
Furthermore, [leibo2015invariance] showed that Eq. (1) is approximately invariant to rotations in depth of a face, provided the templates also correspond to images of faces. For each template $t_k$, the rotated views $g_i t_k$ must have been observed and stored. The dot products $\langle I, g_i t_k \rangle$ can be interpreted as the outputs of "simple" cells, each with tuning $g_i t_k$, when stimulated with image $I$. In a similar way $\mu_k(I)$ can be interpreted as the activity of the "complex" cell indexed by $k$.
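The simple-to-complex computation of Eq. (1) can be sketched in a few lines. Since rendering depth-rotations of faces is beyond a toy example, this sketch (all names illustrative) uses circular shifts as the family of stored transformations; shifts form a group, so the invariance of the pooled signature is exact here, whereas for projected depth-rotations it would only be approximate.

```python
import numpy as np

rng = np.random.default_rng(5)
DIM = 16

def transform(v, i):
    return np.roll(v, i)  # stand-in for the view transformations g_i

# Stored templates: all transformed views g_i t_k of K template objects,
# as if memorized during an unsupervised developmental phase.
K = 5
templates = rng.normal(size=(K, DIM))
stored = np.array([[transform(t, i) for i in range(DIM)] for t in templates])

def signature(image, eta=np.square):
    # mu_k(I) = sum_i eta(<I, g_i t_k>): a "complex" cell pools the
    # responses of "simple" cells tuned to the views of template k.
    return np.array([sum(eta(image @ v) for v in views) for views in stored])

# Transforming a novel image only permutes which view-tuned unit
# carries each response, so the pooled signature is unchanged.
I = rng.normal(size=DIM)
print(np.allclose(signature(I), signature(transform(I, 7))))  # True
```

Pooling over the stored orbit makes the identity of the responding view-tuned unit irrelevant, which is exactly the argument in the text.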
Biologically plausible learning
The simple-complex algorithm described above can provide an invariant representation but relies on a biologically implausible learning step: storing a set of discrete views observed during development. Instead we propose a more biologically plausible mechanism: Hebb-like learning [Hebb1949] at the level of simple cells (see Equation (2)). Instead of storing separate frames, cortical neurons exposed to the rotation in depth of a face update their synaptic weights according to a Hebb-like rule, each effectively becoming tuned to one of a set of basis functions corresponding to different combinations of the set of views. Different Hebb-like rules lead to different sets of basis functions, such as Independent Components (IC) or Principal Components (PC). Since each of the neurons becomes tuned to one of these basis functions instead of one of the views, a set of basis functions replaces the stored views (for a given template) in the pooling Equation (1). The question is whether invariance is still present under this new tuning.
The surprising answer is that most unsupervised learning rules will learn approximate invariance to viewpoint when provided with the appropriate training set (see Appendix section 2 for a proof). In fact, unsupervised Hebb-like plasticity rules such as Oja's rule, Foldiak's trace rule, and ICA provide a basis that, when used in the pooling equation, provides invariance. Supervised learning rules such as backpropagation also satisfy the requirement, as long as the training set is appropriate.
In the following we consider as an example a simple Hebbian learning scheme called Oja's rule [oja1982simplified, oja1992principal]. At this point we are concerned only with establishing the model and explaining why it computes a view-tolerant face representation. For this purpose we could use any of the other learning rules (like Foldiak's trace rule or ICA), but we focus on the Oja rule because it will turn out to be of singular relevance for mirror symmetry.
The Oja rule can be derived as the first order expansion of a normalized Hebb rule. The assumption of this normalization is plausible, because normalization mechanisms are widespread in cortex [Turrigiano2004].
For learning rate $\alpha$, Oja's rule is

(2)  $\Delta w = \alpha\, y\, (x - y\, w),$

where $x$ is the input, $w$ the vector of synaptic weights, and $y = \langle w, x \rangle$ the neuron's response.
The original paper of Oja showed that the weights of a neuron updated according to this rule converge to the top principal component (PC) of the neuron's past inputs, that is, to an eigenvector of the input covariance $C = \mathbb{E}[x x^\top]$. Thus the synaptic weights correspond to a solution of the eigenvector-eigenvalue equation $C w = \lambda w$. Plausible modifications of the rule, involving added noise or inhibitory connections with similar neurons, yield additional eigenvectors [sanger1989optimal, oja1992principal]. This generalized Oja rule can be regarded as an online algorithm to compute the principal components of an incoming stream of vectors, in our case, images.

What is learned and how it is stored depends on the choice of a timescale over which learning takes place, since learning is dictated by the underlying covariance of the inputs (see Appendix, section 3). In order for familiar faces to be stored so that the neural response modeled by Eq. (1) tolerates rotations in depth of novel faces, we propose that Oja-type plasticity leads to representations in which the stored templates are given by principal components (PCs) of an image sequence depicting the depth-rotation of a face. Consider an immature functional unit exposed, while in a plastic state, to all depth-rotations of a face. Oja's rule will converge to the eigenvectors corresponding to the top eigenvalues and thus to the subspace spanned by them. The Appendix, section 2 shows that for each template face the signature obtained by pooling over all the PCs, each represented by a different neuron, is invariant. This is analogous to Eq. (1) with the stored views replaced by the PCs. The appendix also shows that other learning rules, whose solutions are not PCs but a different set of basis functions, generate invariance as well; for instance, independent components (see Appendix section 2).
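Oja's convergence to the top principal component can be illustrated with a minimal simulation (variable names and the synthetic Gaussian input stream are illustrative stand-ins for image sequences):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic input stream: zero-mean Gaussian vectors whose covariance
# has a known top principal component (the first coordinate axis).
C = np.diag([5.0, 2.0, 1.0])
X = rng.multivariate_normal(np.zeros(3), C, size=20000)

# Oja's rule: dw = alpha * y * (x - y * w), with response y = <w, x>.
w = rng.normal(size=3)
w /= np.linalg.norm(w)
alpha = 1e-3
for x in X:
    y = w @ x
    w += alpha * y * (x - y * w)

# The weights converge (up to sign) to the top eigenvector of C,
# and the implicit normalization in the rule drives ||w|| to 1.
alignment = abs(w[0]) / np.linalg.norm(w)
print(alignment, np.linalg.norm(w))
```

The decay term $-\alpha y^2 w$ is what distinguishes Oja's rule from plain Hebbian learning: it keeps the weight norm bounded without an explicit renormalization step.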
Empirical evaluation of view-invariant face recognition performance
View-invariance of the two models was assessed by simulating a sequence of same-different pair-matching tasks, each demanding more invariance than the last. In each test, 600 pairs of face images were sampled from the set of faces with orientations in the current testing interval. 300 pairs depicted the same individual and 300 pairs depicted different individuals. Testing intervals were ordered by inclusion and were always symmetric about $0°$, the frontal orientation; i.e., they were of the form $[-r, r]$ for increasing values of $r$. The radius $r$ of the testing interval, dubbed the invariance range, is the abscissa in Fig. 3.
To classify an image pair $(I_a, I_b)$ as depicting the same or a different individual, the cosine similarity of the two representations was compared to a threshold. The threshold was varied systematically in order to compute the area under the ROC curve (AUC), reported on the ordinate of Fig. 3. AUC declines as the range of testing orientations is widened. As long as enough PCs are used, the proposed model performs on par with the view-based model. It even exceeds its performance if the complete set of PCs is used. Both models outperform the baseline HMAX C1 representation (Fig. 3).

Mirror symmetry
Consider the case where, for each of the templates, the developing organism has been exposed to a sequence of images showing a single face rotating from a left profile to a right profile. Faces are approximately bilaterally symmetric. Thus, for each face view in the training set, its reflection over the vertical midline will also be in the training set. It turns out that this property, along with the assumption of Oja plasticity (but not other kinds of plasticity), is sufficient to explain mirror-symmetric tuning curves. The argument is as follows.
Consider a face $I$ and its orbit in 3D w.r.t. the rotation group:

$O_I = \{ R_\theta I,\ \theta \in [-\pi, \pi) \},$

where $R_\theta$ is a rotation matrix in 3D w.r.t., e.g., the vertical axis. Projecting onto 2D with the projection $P$ we have

$P O_I = \{ P R_\theta I,\ \theta \in [-\pi, \pi) \}.$

Note now that, due to the bilateral symmetry, the above set can be written as:

$P O_I = \{ I_\theta,\ r I_\theta,\ \theta \in [0, \pi) \},$

where $I_\theta = P R_\theta I$ and $r$ is the reflection operator. Thus the set consists of a collection of orbits, w.r.t. the group $\{e, r\}$, of the templates $I_\theta$.
This property of the training set is used in the appendix in two ways. First, it is needed in order to show that the signature computed by pooling over the solutions of any equivariant learning rule (e.g., Hebb, Oja, Foldiak, ICA, or supervised backpropagation learning) is approximately invariant to depth-rotation (sections 1 – 2).
Second, in the specific case of the Oja learning rule, it is this same property of the training set that is used to prove that the solutions for the weights (i.e., the PCs) are either even or odd (section 3). This in turn implies that the penultimate stage of the signature computation, the stage where the projections of the input onto the learned weights are computed, will have orientation tuning curves that are either even or odd functions of the view angle.

Finally, to get mirror-symmetric tuning curves like those in AL, we need one final assumption: the nonlinearity before pooling at the level of the "simple" cells in AL must be an even nonlinearity such as squaring, $\eta(x) = x^2$. This is the same assumption as in the "energy model" of [adelson1985spatiotemporal]. This assumption is needed in order to predict mirror-symmetric tuning curves for the neurons corresponding to odd solutions of the Oja equation. The neurons corresponding to even solutions have mirror-symmetric tuning curves regardless of whether $\eta$ is even or odd.
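The role of the even nonlinearity can be illustrated numerically. The curves below are arbitrary even and odd stand-ins for the projections onto even and odd principal components, not model outputs: squaring makes the odd curve, and leaves the even curve, mirror-symmetric about the frontal view.

```python
import numpy as np

# View angles from left profile (-90 deg) to right profile (+90 deg).
theta = np.linspace(-90, 90, 181)

# Toy tuning curves before the nonlinearity: an even and an odd
# function of view angle, standing in for projections onto even and
# odd solutions of the learning rule.
even_response = np.cos(np.deg2rad(theta))
odd_response = np.sin(np.deg2rad(theta))

def is_mirror_symmetric(curve, tol=1e-9):
    # A curve is mirror-symmetric if it is an even function of angle.
    return np.allclose(curve, curve[::-1], atol=tol)

# After an even nonlinearity (squaring, as in the energy model),
# both curves become even, i.e. mirror-symmetric about 0 degrees.
print(is_mirror_symmetric(even_response**2))  # True
print(is_mirror_symmetric(odd_response**2))   # True
print(is_mirror_symmetric(odd_response))      # False
```

An odd projection by itself is anti-symmetric about the frontal view; only after the even nonlinearity does its tuning curve acquire the mirror symmetry observed in AL.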
An orientation tuning curve is obtained by varying the orientation of the test image. Fig. 4A shows example orientation tuning curves for the model based on a raw pixel representation. It plots each unit's response as a function of the test face's orientation for five example units tuned to features with different corresponding eigenvalues. All of these tuning curves are symmetric about $0°$, i.e., the frontal face orientation. Fig. 5A shows how the three populations in the C1-based model represent face view and identity, and Fig. 5B shows the same for populations of neurons recorded in ML/MF, AL, and AM. The model is the same one as in Fig. 3.
In contrast to the Oja/PCA case, we show through a simulation analogous to Fig. 5 that ICA does not yield mirror-symmetric tuning curves (appendix section 4). Though this is an empirical finding for a specific form of ICA, based on our proof technique for the Oja case we do not expect a generic learning rule to predict mirror-symmetric tuning curves.
These results imply that if neurons in AL learn according to a broad class of Hebb-like rules, then there will be invariance to viewpoint. Different AM cells would come to represent components of a view-invariant signature, one per neuron. Each component can correspond to a single face or to a set of faces, different for each component of the signature. Additionally, if the learning rule is of the Oja type and the output nonlinearity is, at least roughly, squaring, then the model predicts that on the way to view invariance, mirror-symmetric tuning emerges as a necessary consequence of the intrinsic bilateral symmetry of faces.
Discussion
The model discussed here provides a computational account of how experience and evolution may wire up the ventral stream circuitry to achieve the computational goal of view-invariant face recognition. Neurons in the top-level face patch AM maintain an explicit representation selective for face identity and tolerant to position, scale, and viewing angle [freiwald2010functional] (along with other units tolerant to identity but selective for other variables such as viewing angle). The approach in this paper explains how this property may arise in a feedforward hierarchy. To the best of our knowledge, it is the first account that provides a computational explanation of why cells in the face network's penultimate processing stage, AL, are tuned symmetrically to head orientation.
Our assumptions about the architecture for invariance conform to i-theory [anselmi2015unsupervised, anselmi2015invariance], a theory of invariant recognition that characterizes and generalizes the convolutional and pooling layers in deep networks. i-theory has recently been shown to predict domain-specific regions in cortex [leibo2015invariance] with the function of achieving invariance to class-specific transformations (e.g., for faces), as well as the specific form of eccentricity-dependent cortical magnification [poggio2014computational]. Our assumption of Hebbian-like plasticity for learning template views is, however, outside the mathematics of i-theory: it links the theory to biological properties of cortical synapses.
The argument of this paper has been made, as nearly as possible, from first principles. It begins with a claim about the computational problem faced by a part of the brain: the need to compute view-tolerant representations for faces. Yet it seeks to explain properties of single neurons in a specific brain region, AL, far from the sensory periphery. The argument proceeds by considering which of the various biologically plausible learning rules satisfy requirements coming from the theory while also yielding nontrivial predictions for AL neurons in qualitative accord with the available data. It seems significant, then, that the argument only works in the case of Oja-like plasticity; this suggests the hypothesis that such plasticity may indeed be driving learning in AL.
The class of learning rules yielding invariance includes those that emerge from principles such as sparsity and the efficient coding hypothesis [attneave1954some, barlow1961possible, Olshausen1996]. However, explaining the mirror symmetric tuning of AL neurons apparently requires the Oja rule. An interesting direction for future work in this area could be to investigate the role of sparsity in the face processing system. Perhaps a learning algorithm derived from the efficient coding perspective that also explains AL’s mirror symmetry could be found.
Our model is designed to account only for the feedforward processing in the ventral stream. Backprojections between visual areas, and of course within each area, are well known to exist in the ventral stream and probably also exist in the face patch network. They are likely to play a major role in visual recognition beyond roughly 100 ms from image onset. Representations computed in the first feedforward sweep are likely used to provide information about a few basic questions such as the identity or pose of a face. Additional processing is likely to require iterations and even top-down computations involving shifts of fixation and generative models. An example for face recognition is recent work [Yildirim2015], which combines a feedforward network like ours (also showing mirror-symmetric tuning of cell populations) with a probabilistic generative model. Thus our feedforward model, which succeeds in explaining the main tuning and invariance properties of the macaque face-processing system, may serve as a building block for future object-recognition models addressing brain areas such as prefrontal cortex, hippocampus and superior colliculus, integrating feedforward processing with subsequent computational steps that involve eye movements and their planning, together with task dependency and interactions with memory.

Materials
Stimuli
40 face models were rendered with perspective projection. Each face was rendered (using Blender [Stichting_Blender_Foundation]) at orientations in equal increments spanning the range from left profile to right profile. The untextured face models were generated using FaceGen [Singular_Inversions]. All faces appeared on a uniform gray background.
View-invariant Same-different Pair Matching Task
For each of the 5 repetitions of the same-different pair matching task, 20 template and 20 test individuals were randomly selected from the full set of 40 individuals. The template and test sets were chosen independently and were always disjoint. 50% of the 600 test pairs sampled from each testing interval depicted the same two individuals. Each testing interval was symmetric about $0°$ (frontal) and testing intervals were ordered by inclusion. The smallest contained only views near $0°$ and the largest extended to $\pm 90°$ (left and right profile views). The classifier compared the cosine similarity of the two zero-mean, unit-standard-deviation representations to a threshold. The threshold was swept to compute the area under the ROC curve (AUC). The abscissa of Fig. 3 is the radius of the testing interval from which test pairs were sampled. The ordinate of Fig. 3 is the mean AUC $\pm$ the standard deviation computed over the 5 repetitions of the experiment.

Each similarity matrix in Figure 5 was obtained by computing Pearson's linear correlation coefficient between each pair of test samples. The same matrix was computed 10 times with different training/test splits and the average was reported. The same procedure was repeated for features from areas ML/MF, AL and AM to obtain the corresponding matrices.
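The same-different evaluation can be sketched as follows, with random vectors standing in for the model signatures (the pair-sampling scheme and noise model are illustrative, not the paper's stimuli). Sweeping the threshold to obtain the ROC area is equivalent to computing the probability that a random "same" pair outscores a random "different" pair:

```python
import numpy as np

rng = np.random.default_rng(1)

def cosine_similarity(a, b):
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy stand-ins for the signatures: a "same" pair is a vector plus a
# noisy copy of it; a "different" pair is two independent vectors.
def sample_pair(same, dim=64, noise=0.5):
    a = rng.normal(size=dim)
    b = a + noise * rng.normal(size=dim) if same else rng.normal(size=dim)
    return a, b

scores = np.array(
    [cosine_similarity(*sample_pair(True)) for _ in range(300)]
    + [cosine_similarity(*sample_pair(False)) for _ in range(300)]
)
labels = np.array([1] * 300 + [0] * 300)

# Area under the ROC curve via the rank statistic: the probability
# that a random "same" pair scores above a random "different" pair.
pos, neg = scores[labels == 1], scores[labels == 0]
auc = (pos[:, None] > neg[None, :]).mean()
print(auc)
```

With informative representations the "same" and "different" score distributions separate and the AUC approaches 1; with uninformative ones it falls to chance (0.5).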
Acknowledgments
This material is based upon work supported by the Center for Brains, Minds, and Machines (CBMM), funded by NSF STC award CCF-1231216. This research was also sponsored by grants from the National Science Foundation (NSF-0640097, NSF-0827427) and AFOSR-THRL (FA8650-05-C-7262). Additional support was provided by the Eugene McDermott Foundation.
References
Appendix
The key results in this appendix can be informally stated as follows:

We prove that a number of learning rules, supervised and unsupervised, are equivariant with respect to the symmetries of the training data. We use this result in the case of training data consisting of images of faces at all view angles, obtaining equivariance of the solutions of the learning rules with respect to the reflection group and the group of rotations. The implications that we use in the paper are

the solutions of all the learning rules can be used as templates in the computation of an invariant signature. The algorithm consists of performing dot products of the input image with each template, transforming the result nonlinearly (for instance using a rectifier nonlinearity or a square), and then pooling over all templates, i.e., over the solutions of the learning rule. The result is approximately invariant to rotation in depth.

in the case of the Oja rule we prove that the solutions are even or odd functions of the view angle; a square nonlinearity then yields even tuning curves, which are mirror-symmetric. We were not able to prove such a property for any of the other learning rules.


in the case of the ICA rule we show empirical evidence that the solutions are neither odd nor even. This suggests that most learning rules do not lead to even or odd solutions.
The appendix is divided into four sections:

In section A we show how recent theorems on invariance under group transformations can be extended to non-group transformations, and under which conditions. We show how an approximately invariant signature can be computed in this setting. In particular we analyze the case of rotation in depth and mirror symmetry transformations of bilaterally symmetric objects such as faces.

In section B we describe how the group symmetry properties of the set of images to which neurons are exposed (the "unsupervised" training set) determine the symmetries of the learned weights. In particular we show how the weight symmetries give a simple way of computing an invariant signature.

In section C we prove that the solutions of the Oja equation, given a training set containing input vectors that are reflections of each other (like a face's view at $\theta$ degrees and its view at $-\theta$ degrees), must be odd or even.

In section D we provide empirical evidence that there are solutions of ICA algorithms—on the same data as above—that do not show any symmetry.
In the following we indicate with $I$ an image, with $w$ a filter or vector of neural weights, and with $G$ a locally compact group.
Appendix A Approximate Invariance for non-group transformations
In this section we analyze the problem of obtaining an approximately invariant signature for image transformations that do not have a group structure. In fact, clearly, not all image transformations have a group structure. However, assuming that the object transformation defines a smooth manifold, we have (by the theory of Lie manifolds) that locally a Lie group is defined by the generators on the tangent space. We illustrate this in a simple example. Let $I \in \mathbb{R}^d$. Let $T_\theta$ be a transformation depending on the parameters $\theta = (\theta_1, \ldots, \theta_n)$. For any fixed $I$ the set $\{T_\theta I\}$ describes a differentiable manifold. If we expand the transformation around, e.g., $\theta = 0$ we have:

(3)  $T_\theta I = I + \sum_{j=1}^{n} \theta_j L_j I + o(\|\theta\|)$

where the $L_j$ are the infinitesimal generators of the transformation in the $j$th direction.

Therefore locally (when the term $o(\|\theta\|)$ can be neglected) the associated group transformation can be expressed by exponentiation as:

$T_\theta = \exp\Big(\sum_{j=1}^{n} \theta_j L_j\Big).$

Note that the above expansion is valid only locally. In other words, instead of a global group structure of the transformation we will have a collection of local transformations that obey a group structure. The results derived in section B will then say that the locally learned weights will be orbits w.r.t. the local group approximating the non-group global transformation.
A.1 Invariance under rotations in depth
The 3D "views" of an object undergoing a 3D rotation are group transformations, but the 2D projections of an object undergoing a 3D rotation are not group transformations. However, for any fixed angle, and for small rotations around it, the projected images approximately follow a group structure. This can easily be seen by making the substitution $I \to P I$ in eq. (3), where $P$ is the 2D projection. Let $\eta$ be a nonlinear function, e.g., squaring or rectification. For small values of $\theta$ we therefore have that the signature

$\mu_t(I) = \int d\theta\; \eta\big(\langle P R_\theta I,\, t \rangle\big),$

or its discrete version

$\mu_t(I) = \sum_{i} \eta\big(\langle P R_{\theta_i} I,\, t \rangle\big),$

is invariant under a 3D rotation of $I$ by an angle $\bar\theta$, up to a term that vanishes with $\bar\theta$. Alternatively, if the following property holds:

(4)  $\big\{\langle P R_{\bar\theta} I,\, P R_{\theta_i} t \rangle\big\}_i = \big\{\langle P I,\, P R_{\theta_i} t \rangle\big\}_i,$

i.e., if rotating the input in depth merely permutes its dot products with the stored projected views, the invariance will be exact (see [Anselmi2013, leibo2015invariance]); this is the case, e.g., when both $I$ and $t$ are faces.
The locality of the group structure (eq. (4)) means that we have invariance of the signature only within each local neighborhood but not over all viewpoints. A reasonable scenario could be that each local neighborhood consists of a few tens of degrees (depending on the universe of distractors). Almost complete view invariance can then be obtained from a single non-frontal view: the view at angle $\theta$, together with the associated virtual view at $-\theta$ generated by mirror symmetry, provides invariance over a correspondingly larger range of viewpoints [poggio19923d].
A.2 Rotation in depth and mirror symmetry
As explained in the previous section, projected rotations in depth are not group transformations. However, in the case of bilaterally symmetric objects, as we will see below, projected rotations in depth are a collection of orbits of the mirror symmetry group. Section B will clarify why this property is important, by proving that it forces the set of solutions of a variety of learning rules to be a collection of orbits w.r.t. the mirror symmetry group.
Consider, e.g., a face $I$, which is a bilaterally symmetric object, and its orbit in 3D w.r.t. the rotation group:

$O_I = \{ R_\theta I,\ \theta \in [-\pi, \pi) \},$

where $R_\theta$ is a rotation matrix in 3D, e.g., w.r.t. the vertical axis. Projecting onto 2D with the projection $P$ we have

$P O_I = \{ P R_\theta I,\ \theta \in [-\pi, \pi) \}.$

Note now that, due to the bilateral symmetry, the above set can be written as:

$P O_I = \{ I_\theta,\ r I_\theta,\ \theta \in [0, \pi) \},$

where $I_\theta = P R_\theta I$ and $r$ is the reflection operator. The set consists of a collection of orbits w.r.t. the group $G = \{e, r\}$. This is due to the relation

$P R_{-\theta} I = r\, P R_\theta I,$

i.e., a face rotated by an angle $-\theta$ and then projected is equal to the reflection of the same face rotated by an angle $\theta$ and projected. The reasoning generalizes to multiple faces. In summary, in the specific case of bilaterally symmetric objects rotating in depth, projection onto a plane parallel to the rotation axis creates images that are transformations w.r.t. the reflection group, thus falling into the group case described in the paragraphs above.
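The relation $P R_{-\theta} I = r\, P R_\theta I$ can be checked numerically on a toy bilaterally symmetric "object" (a random 3D point cloud closed under reflection; all names are illustrative), rotating about the vertical axis and projecting orthographically:

```python
import numpy as np

def rot_y(theta):
    # Rotation about the vertical (y) axis.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

# A toy bilaterally symmetric object: a point set closed under the
# reflection (x, y, z) -> (-x, y, z).
rng = np.random.default_rng(2)
half = rng.normal(size=(50, 3))
obj = np.vstack([half, half * np.array([-1, 1, 1])])

def project(points):      # orthographic projection onto the x-y plane
    return points[:, :2]

def reflect(points2d):    # reflection over the vertical midline
    return points2d * np.array([-1, 1])

theta = 0.7
left = project(obj @ rot_y(-theta).T)            # P R(-theta) I
right = reflect(project(obj @ rot_y(theta).T))   # r P R(theta) I

# The two projected point sets coincide as sets (the reflection only
# permutes which symmetric partner maps to which projected point).
print(np.allclose(np.sort(left, axis=0), np.sort(right, axis=0)))
```

So for a bilaterally symmetric object the projected views at $-\theta$ carry no new information beyond the reflected views at $\theta$, which is exactly why the projected orbit decomposes into reflection-group orbits.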
Appendix B Unsupervised and supervised learning and data symmetries
In the following we show how symmetry properties of the neuronal inputs affect the learned weights. We model different unsupervised (Hebbian, Oja, Foldiak, ICA) and supervised (SGD) learning rules as dynamical systems arising from the minimization of some target function. We show that these dynamical systems are equivariant (in the sense specified below) and how equivariance determines the symmetry properties of their solutions.
This gives a simple way to generate an invariant signature by averaging over all solutions.
B.1 Equivariant dynamical systems and their solutions
We make the general assumption that the dynamical system can be described in terms of the minimization of a nonlinear functional of the synaptic weights:

(5)  $\min_{w} E(w).$

The associated dynamical system reads as:

(6)  $\dot w = F(w) = -\nabla_w E(w).$

A general result holds for equivariant dynamical systems. A dynamical system is called equivariant w.r.t. a group $G$ if $F$ in eq. (6) commutes with any transformation $g \in G$, i.e.

(7)  $F(g w) = g\, F(w), \qquad \forall g \in G.$
In this case we have:
Theorem 1.
If an equivariant dynamical system has a solution $w^*$, then the whole group orbit of $w^*$, $\{g w^*,\ g \in G\}$, will also be a set of solutions (see [golu]).
In the following we are going to analyze different updating rules for the neuronal weights, showing that the dynamical system is equivariant under the hypothesis that the training set is a (scrambled) collection of orbits, i.e., we specialize the training set $\mathcal{T}$ to be of the form:

(8)  $\mathcal{T} = \{ g\, t_k,\ g \in G,\ k = 1, \ldots, K \}.$

We will see that the following variant of equivariance holds for many dynamical systems:

(9)  $F(g w, \mathcal{T}) = g\, F(w, \pi_g \mathcal{T}),$

where $\pi_g$ is a permutation of the set $\mathcal{T}$ that depends on $g$. The derivation rests on the simple observation

$\langle g w, x \rangle = \langle w, g^{-1} x \rangle$ (for unitary $g$),

and the hypothesis that the training set is a collection of orbits. In fact, in this case $g^{-1}$ maps $\mathcal{T}$ onto itself, merely permuting its elements:

$g^{-1} \mathcal{T} = \pi_g \mathcal{T}.$

In general, if the training set is large enough, the dynamical system will be equivalent to the unpermuted one due to the stability of the stochastic gradient descent method [hardt]. Since the dynamical systems associated with the Oja and ICA rules minimize statistical moments, they are clearly independent of training data permutations. The fact that the set of solutions is a collection of orbits,

$\{ g\, w^*,\ g \in G \},$

implies that any average operator over them is invariant. In our case the operator is the signature:

$\mu(I) = \sum_{g \in G} \eta\big( \langle I,\, g\, w^* \rangle \big),$

where $g w^*$ is an element of the orbit and $\eta$ is a nonlinear function.
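A minimal numerical check of this invariance-by-averaging, using the two-element reflection group with coordinate reversal as an illustrative stand-in for the reflection, and a random vector as the solution $w^*$: pooling a squared dot product over the orbit $\{w^*, r w^*\}$ yields a signature that is exactly invariant to reflecting the input.

```python
import numpy as np

rng = np.random.default_rng(4)

def reflect(v):
    return v[::-1]  # stand-in group action r of the group {e, r}

# A set of solutions closed under the group action: an orbit {w, r w}.
w = rng.normal(size=11)
orbit = [w, reflect(w)]

def signature(image, eta=np.square):
    # Pool a nonlinearity of the dot products with every orbit element.
    return sum(eta(image @ t) for t in orbit)

image = rng.normal(size=11)

# Transforming the image only permutes its dot products with the
# orbit, so the pooled signature is invariant: mu(r I) = mu(I).
print(np.isclose(signature(image), signature(reflect(image))))
```

The key step is $\langle r I, w \rangle = \langle I, r w \rangle$: the transformed input "sees" the orbit in a permuted order, and the sum over the orbit erases that permutation.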
In the following we prove equivariance of a few learning rules.

Unsupervised learning rules [hassoun]:

In the following $x \in \mathcal{T}$ denotes a training input and $y = \langle w, x \rangle$ the neuron's response; in the derivations, the substitution $x = g x'$ permutes the elements of the training set, since $\mathcal{T}$ is a collection of orbits and $g$ is unitary.

Hebbian learning. Choosing

(10)  $E(w) = -\frac{1}{2} \sum_{x \in \mathcal{T}} \langle w, x \rangle^2,$

where $y = \langle w, x \rangle$ is the neuron's response, the associated dynamical system is:

(11)  $\dot w = F(w) = \sum_{x \in \mathcal{T}} y\, x.$

The system is equivariant. In fact:

$F(g w) = \sum_{x \in \mathcal{T}} \langle g w, x \rangle\, x = \sum_{x' \in \mathcal{T}} \langle g w, g x' \rangle\, g x' = g \sum_{x' \in \mathcal{T}} \langle w, x' \rangle\, x' = g\, F(w).$

Oja learning. Choosing

(12)  $E(w) = -\frac{1}{2} \sum_{x \in \mathcal{T}} \frac{\langle w, x \rangle^2}{\langle w, w \rangle},$

we obtain by differentiation:

(13)  $\dot w = F(w) = \sum_{x \in \mathcal{T}} \big( y\, x - y^2 w \big).$

The obtained dynamical system is that of Oja for the choice $\langle w, w \rangle = 1$. The system is equivariant (note that $\langle g w, g w \rangle = \langle w, w \rangle$ for unitary $g$). In fact:

$F(g w) = \sum_{x' \in \mathcal{T}} \big( \langle g w, g x' \rangle\, g x' - \langle g w, g x' \rangle^2 g w \big) = g\, F(w).$

ICA. Choosing

(14)  $E(w) = -\frac{1}{4} \sum_{x \in \mathcal{T}} \langle w, x \rangle^4$

(a kurtosis-based contrast), we obtain the dynamical system:

(15)  $\dot w = F(w) = \sum_{x \in \mathcal{T}} y^3\, x,$

which, together with a normalization of $w$, can be shown to extract one ICA component [hyvarinen1998independent]. The system is equivariant. In fact:

$F(g w) = \sum_{x' \in \mathcal{T}} \langle g w, g x' \rangle^3\, g x' = g \sum_{x' \in \mathcal{T}} \langle w, x' \rangle^3\, x' = g\, F(w).$

Foldiak. Choosing:

(16)  $E(w) = -\sum_{t} \bar y_t \langle w, x_t \rangle,$

where $\bar y_t$ is a temporal trace (running average) of the response $y_t$, the associated dynamical system is:

(17)  $\dot w = F(w) = \sum_{t} \bar y_t\, x_t,$

which is the so-called Foldiak updating rule. The system is equivariant. In fact, acting with $g$ on the weights permutes, within each orbit, the sequence of inputs contributing to the trace, and the same substitution as above gives $F(g w) = g\, F(w)$.


Supervised learning in deep convolutional networks. The reasoning above can be extended to supervised problems of the form:

(18)  $\min_{W} E(W), \qquad E(W) = \sum_{x \in \mathcal{T}} \ell\big( \Phi(x; W),\, y_x \big),$

where $W = (W^1, \ldots, W^L)$. The term $E$ is a function defined using the loss $\ell$ of representing a set of observations $x \in \mathcal{T}$, their labels $y_x$, and the set of network weights $W$. The updating rule for each weight is given by the backpropagation algorithm:

(19)  $\dot W^l = -\nabla_{W^l} E(W), \qquad l = 1, \ldots, L.$

If the equation above is equivariant, the same results of the previous section will hold, i.e., if there exists a solution then the whole orbit will be a set of solutions. In the following we analyze the case of deep networks, showing that equivariance holds if the output at each layer $l$, $\Phi^l$, is covariant w.r.t. the transformation, i.e.:

(20)  $\Phi^l(T_g\, x) = T_g\, \Phi^l(x), \qquad \forall g \in G,$

where $T_g$ denotes the action of $g$ at the appropriate layer. We analyze the case of deep convolutional networks with pooling layers between successive convolutional layers. In this case the response at each layer is covariant w.r.t. the input transformation: the output at layer $l$ is of the form:

(21)  $\Phi^l(x) = \mathrm{pool}\big( \eta( W^l \ast \Phi^{l-1}(x) ) \big),$

i.e., it is a local average of a group convolution $\ast$ passed through a pointwise nonlinearity $\eta$, where $\Phi^{l-1}$ is the output of layer $l-1$ and $W^l$ is the collection of weights of layer $l$. Using the property that the group convolution commutes with group shifts, i.e., $W \ast (T_g f) = T_g (W \ast f)$, and that pointwise nonlinearities and local pooling also commute with $T_g$, we have:

$\Phi^l(T_g\, x) = \mathrm{pool}\big( \eta( W^l \ast T_g\, \Phi^{l-1}(x) ) \big) = T_g\, \Phi^l(x),$

where we used the property $\Phi^{l-1}(T_g\, x) = T_g\, \Phi^{l-1}(x)$. This can be seen to hold by inductive reasoning down to the first layer, where:

$\Phi^1(T_g\, x) = \mathrm{pool}\big( \eta( W^1 \ast T_g\, x ) \big) = T_g\, \Phi^1(x).$

In the following we prove that the dynamical systems (updating rules for the weights) associated with a deep convolutional network are equivariant. We consider, e.g., the square loss function (the same reasoning can be extended to many commonly used loss functions):

$E(W) = \frac{1}{2} \sum_{x \in \mathcal{T}} \big\| \Phi(x; W) - y_x \big\|^2,$

where $\Phi = \Phi^L \circ \cdots \circ \Phi^1$, $L$ being the number of layers, and $\{y_x\}$ is a set of labels. The associated dynamical system reads as:

$\dot W^l = -\sum_{x \in \mathcal{T}} \big( \Phi(x) - y_x \big)^\top \frac{\partial \Phi(x)}{\partial W^l}.$

Substituting $x$ with $T_g\, x$ we have, by the covariance property, that the first factor of the r.h.s. of the equation above becomes $T_g \Phi(x) - y_{T_g x}$. We are then left to prove the equivariance of the second factor. Using the chain rule, we have:

$\frac{\partial \Phi(x)}{\partial W^l} = \frac{\partial \Phi^L(x)}{\partial \Phi^{L-1}} \cdots \frac{\partial \Phi^{l+1}(x)}{\partial \Phi^{l}}\, \frac{\partial \Phi^l(x)}{\partial W^l},$

where $\Phi^j(x)$ is the output at layer $j$. Notice that, in the case of covariant layer outputs, we have:

$\frac{\partial \Phi^{j+1}(T_g\, x)}{\partial \Phi^{j}} = T_g\, \frac{\partial \Phi^{j+1}(x)}{\partial \Phi^{j}}\, T_g^{-1},$

where we used the covariance property in eq. (20) and the fact that the training set is a collection of orbits w.r.t. the group $G$. Finally we have:

$\frac{\partial \Phi^l(T_g\, x)}{\partial W^l} = T_g\, \frac{\partial \Phi^l(x)}{\partial W^l},$

where the $T_g$ comes from the derivative of $T_g\, \Phi^{l-1}(x)$ w.r.t. $W^l$.
Summarizing, we have the following result.

Theorem 2.
For $l = 1, \ldots, L$, let the layer output $\Phi^l$ depend on a set of weights $W^l$. Consider a deep convolutional network with output of the form

(22)  $\Phi(x; W) = \Phi^L\big( \cdots \Phi^1(x) \big)$

and a differentiable square loss $E(W) = \frac{1}{2}\sum_{x \in \mathcal{T}} \|\Phi(x; W) - y_x\|^2$, $\{y_x\}$ being a set of labels. If the training set $\mathcal{T}$ is a collection of orbits w.r.t. the group $G$ and each $\Phi^l$ is covariant, then the associated dynamical systems for each layer's weights' evolution in time, $\dot W^l = -\nabla_{W^l} E(W)$, are equivariant w.r.t. the group $G$.
Appendix C Proof that the Oja equation’s solutions are odd or even.
So far we have shown how biologically plausible learning dynamics in conjunction with appropriate training sets lead to solutions capable of supporting the computation of a viewinvariant face signature (Sections A – B). We showed that several different learning rules satisfied these requirements: Hebb, Oja, Foldiak, ICA, and supervised backpropagation (Section B.1). Now we use properties specific to the Oja rule to address the question of why mirror symmetric responses arise in an intermediate step along the brain’s circuit for computing viewinvariant face representations.
We now use the following well-known property of Oja's learning rule: it implements an online algorithm for principal component extraction [oja1992principal]. More specifically, we use the fact that the Oja dynamics converge to an eigenfunction of the training set's covariance $C$.

Recall from section A.2 that in order to guarantee approximate view-invariance for bilaterally symmetric objects like faces, the training set must consist of a collection of orbits of faces w.r.t. the reflection group $G = \{e, r\}$. We now show that this implies that the eigenfunctions of $C$ (equivalently, the principal components (PCs) of the training set) must be odd or even.

Under this hypothesis the covariance matrix can be written as

$C = \sum_{x \in X} \big( x\, x^\top + (r x)(r x)^\top \big),$

where $X$ is the set of the orbit representatives (untransformed vectors). It is immediate to see that the above implies $r C = C r$ (they commute): indeed $r C r = C$, since $r^2 = e$ and $r^\top = r$. Thus $C$ and $r$ must share the same eigenfunctions. Finally, since the eigenfunctions of the reflection operator are odd or even (its eigenvalues are $\pm 1$), the eigenfunctions of $C$ must also be odd or even.
Finally, we note that in the specific case of a frontal view, even basis functions (w.r.t. the zero view) are mirror symmetric.
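This parity prediction is easy to verify numerically: build a training set closed under a reflection (here, coordinate reversal as an illustrative stand-in for reflection over the facial midline), form its covariance, and check that every eigenvector is either even or odd.

```python
import numpy as np

rng = np.random.default_rng(3)
DIM = 21

def reflect(v):
    return v[::-1]  # stand-in reflection operator r, with r^2 = identity

# Training set: a collection of orbits of the reflection group, i.e.
# every vector appears together with its reflection.
half = rng.normal(size=(100, DIM))
data = np.vstack([half, half[:, ::-1]])

# Uncentered covariance; symmetrize exactly under r to remove tiny
# floating-point asymmetries from the summation order.
C = data.T @ data / len(data)
C = (C + C[::-1, ::-1]) / 2

eigvals, eigvecs = np.linalg.eigh(C)

def parity(w, tol=1e-6):
    if np.allclose(w, reflect(w), atol=tol):
        return "even"
    if np.allclose(w, -reflect(w), atol=tol):
        return "odd"
    return "neither"

# Because C commutes with r, every eigenvector must be even or odd.
parities = [parity(eigvecs[:, i]) for i in range(DIM)]
print(all(p in ("even", "odd") for p in parities))
```

With generic data the eigenvalues are simple, so each eigenvector has a definite parity; an exactly degenerate eigenvalue would allow mixtures of even and odd eigenvectors within its eigenspace.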
Appendix D Empirical ICA solutions do not show any symmetry
Fig. 6 shows results from the experiment analogous to that of main text Fig. 4, but with ICA instead of PCA. Note that the ICA result is not mirror symmetric.