
## Boundary operators

Consider the vector space $\mathbb{F}[X]$ of polynomials with coefficients in a field $\mathbb{F},$ with the obvious sum of functions and scalar multiplication. For each $n \in \mathbb{N}$, consider the subspace $\Pi_n$ of polynomials of degree at most $n$,

$\Pi_n = \{ a_0 + a_1 x + \dotsb + a_n x^n : (a_0, a_1, \dotsc, a_n) \in \mathbb{F}^{n+1} \}.$

These subspaces have dimension $n+1.$ Consider now for each $n \in \mathbb{N}$ the maps $\partial_n \colon \Pi_n \to \Pi_{n-1}$ defined in the following way:

$\partial_n \big( a_0 + a_1 x + \dotsb + a_n x^n \big) = \displaystyle{\sum_{k=0}^{n} (-1)^k \sum_{j\neq k} a_j x^{\varphi(j,k)},}$

where $\varphi(j,k) = j$ if $j < k,$ and $\varphi(j,k) = j-1$ otherwise.

Schematically, this can be written as follows

$\begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{pmatrix} \xrightarrow{\partial_n} \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix} - \begin{pmatrix} a_0 \\ a_2 \\ \vdots \\ a_n \end{pmatrix} + \dotsb + (-1)^k \begin{pmatrix} \vdots \\ a_{k-1} \\ a_{k+1} \\ \vdots \end{pmatrix} + \dotsb + (-1)^n \begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_{n-1} \end{pmatrix},$

and it is not hard to prove that these maps are homomorphisms of vector spaces over $\mathbb{F}.$
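For readers who want to experiment, here is a short NumPy sketch (my own code, not part of the original post) that builds the matrix of $\partial_n$ directly from the definition above and reproduces the computations of $\partial_2$ and $\partial_3$ below:

```python
import numpy as np

def boundary(n):
    """Matrix of the map ∂_n : Π_n -> Π_{n-1}, as an n x (n+1) integer array.

    ∂_n is the alternating sum over k of the maps that delete the k-th
    coefficient, re-indexing with φ(j, k) = j if j < k, else j - 1.
    """
    M = np.zeros((n, n + 1), dtype=int)
    for k in range(n + 1):              # k = index of the deleted coefficient
        for j in range(n + 1):
            if j == k:
                continue
            i = j if j < k else j - 1   # φ(j, k)
            M[i, j] += (-1) ** k
    return M

print(boundary(2))   # [[0 1 0]
                     #  [0 1 0]]        i.e. ∂_2(a) = a_1 + a_1 x
print(boundary(3))   # [[-1  1  0  0]
                     #  [ 0  0  0  0]
                     #  [ 0  0 -1  1]]  i.e. (a_1 - a_0) + (a_3 - a_2) x^2
```

Coefficient vectors $(a_0, \dotsc, a_n)$ are read as column vectors, so $\partial_n$ acts by left multiplication.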

Notice this interesting relationship between $\partial_3$ and $\partial_2:$

$\begin{array}{rl} \partial_2(a_0 + a_1 x + a_2 x^2) &= (a_1 + a_2 x) - (a_0 + a_2 x) + (a_0 +a_1 x) \\ &= a_1 + a_1 x \\ \partial_3( a_0 + a_1 x + a_2 x^2 + a_3 x^3) &= ( a_1 + a_2 x + a_3 x^2 ) - ( a_0 + a_2 x + a_3 x^2) \\ &\mbox{} \quad + (a_0 + a_1 x + a_3 x^2) - (a_0 +a_1 x +a_2 x^2) \\ &= (a_1-a_0) + (a_3-a_2)x^2 \end{array}$

The kernel of $\partial_2$ and the image of $\partial_3$ coincide!

$\begin{array}{rl} \ker \partial_2 &= \{ a_0 + a_2 x^2 : (a_0,a_2) \in \mathbb{F}^2 \}. \\ \partial_3\big( \Pi_3 \big) &= \{ b_0 + b_2 x^2 : (b_0, b_2) \in \mathbb{F}^2 \}. \end{array}$

The reader will surely have no trouble showing that this property holds at every level: $\ker \partial_n = \partial_{n+1} \big( \Pi_{n+1} \big).$ As a consequence, $\partial_n \partial_{n+1} = 0$ for all $n \in \mathbb{N}.$
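Both claims can be verified by machine for small $n$. The sketch below (mine, using the same matrix construction as above) checks that $\partial_n \partial_{n+1} = 0$, and that $\operatorname{rank} \partial_{n+1} = (n+1) - \operatorname{rank} \partial_n$, which by rank–nullity is $\dim \ker \partial_n$; together with the inclusion $\partial_{n+1}(\Pi_{n+1}) \subseteq \ker \partial_n$ this forces equality:

```python
import numpy as np

def boundary(n):
    """n x (n+1) matrix of ∂_n (alternating sum of coefficient deletions)."""
    M = np.zeros((n, n + 1), dtype=int)
    for k in range(n + 1):
        for j in range(n + 1):
            if j != k:
                M[(j if j < k else j - 1), j] += (-1) ** k
    return M

for n in range(1, 9):
    Dn, Dn1 = boundary(n), boundary(n + 1)
    # ∂_n ∂_{n+1} = 0, so im ∂_{n+1} ⊆ ker ∂_n ...
    assert not (Dn @ Dn1).any()
    # ... and dim im ∂_{n+1} = dim ker ∂_n, hence the subspaces are equal.
    assert np.linalg.matrix_rank(Dn1) == (n + 1) - np.linalg.matrix_rank(Dn)

print("ker = im checked for n = 1, ..., 8")
```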

We say that a family of homomorphisms $\{ \partial_n \colon \Pi_n \to \Pi_{n-1} \}$ is a family of boundary operators if $\partial_n \partial_{n+1} = 0$ for all $n \in \mathbb{N}.$ If this is the case, then trivially $\partial_{n+1} \big( \Pi_{n+1} \big) \subseteq \ker \partial_n.$ The example above is a bit stronger, because the two subspaces are actually equal.

So this is the question I pose as today’s challenge:

Describe all boundary operators $\big\{ \partial_n\colon \Pi_n \to \Pi_{n-1} \big\},$ that is, all families satisfying $\partial_n\partial_{n+1} = 0.$

Include a precise relationship between kernels and images of consecutive maps.

1. May 13, 2011 at 9:25 am

I probably shouldn’t be answering this (out of ignorance) but …
In the abstract I would just reverse the process using subset operators and equivalence operations: 0 -> f_n/M(x,y) -> f_{n+1}/M(x,y), where M(x,y) is set-membership contraction by an equivalence relation defined by having the same object after the mapping.
A good example is a linear vector space with linear operators. Then the forward operation is reflected into an inverse/pullback operator on the covector space. Then, presuming that the forward map is from R^(n+1)->R^n for all n, we have the inverse mapping on the covector space: 0->C^1->C^2. The C^2 covector space then separates the original R^(n+1) vector space into an orthogonal component and its complement.

2. May 13, 2011 at 10:39 am

Sorry I didn’t define the operator. This will take a few hours/minutes/days ; although I should know it!

3. May 13, 2011 at 11:23 am

Okay, although I should be able to do better. The mappings M_(n+1)^(n) don’t really map R^(n+1)->R^(n) when considered as automorphisms. They map to R^m for some m with 0 < m < n+1;
Then M_(n+1)^n: R^(n+1)->R^m.

Of course the template is antisymmetric products like the wedge product, where the operator is v/\: then v/\v/\(any vector/element of the exterior product algebra) is zero. But successive operations alter the dimensions: (M,n) then (M,n+1), even if the subspaces shrink in dimension.
I hope this makes a modicum of sense! It’s been years since I studied it.

4. May 15, 2011 at 1:35 pm

I think I have a much better set of conclusions.
Given: a set of maps/transforms M^(n-1)_n, M^n_(n+1): R^(n+1)->R^n->0 that extends down to n=2; probably to n=1, but that gets touchy on the boundary case.
1) Up to isomorphism there is only one constructible transform chain. This is made by “pulling back” the canonical n=2 case.
2) Ignoring the isomorphisms, there is a simple wedge/minor test to verify and qualify maps that satisfy the given: the rank of each M^(n-1)_n has to be n-2, and it has to match (in a particular way) the previous and successive transforms.

And in some manner I don’t understand, it matches the differentiation transform. This can’t be quite right; it is probably a transform aligning exterior products or some such.

If you’re interested I will try to work it out formally. There are some holes, but everything seems to fit around the holes.

5. May 15, 2011 at 1:36 pm

I have found a simple way to illustrate the basic construction. No vectors, covectors, or linearity (more or less).
Starting from 2 dimensions:
(x,y)->(x,0) Which can be forced by a rotation
(x,y,z)->(0,y,z)
(x,y,z,w)->(x,0,0,w)
(x,y,z,w,u)->(0,y,z,0,u)
(x,y,z,w,u,r)->(x,0,0,w,0,r)

From which the alternating pattern for each dimension is obvious.
Your (-1)^k, applied to the even/odd cases, does this alternation.
The above of course implicitly carries the requirement that each mapping is maximal.
You could always put in extra zeros, but that trivializes it.

6. May 15, 2011 at 8:17 pm

I should mention that the last entries were a canonical form. In fact only the zeros matter; the x, y, … on the right can be any combination (up to some general restriction) of the letters on the left.
More deeply, one can drop G*G^-1 (G unitary) in between transforms and get “new” transforms that satisfy the requirements. It’s more interesting to see whether all trees are isomorphic via these unitary transforms, pushing and pulling back vectors and covectors.
I presume that untangling the generality takes us into category theory, which I started to read but didn’t get very far into. As one gets more abstract and general, one needs more abstract tools.

7. May 15, 2011 at 10:24 pm

Let us revisit the structure of the first computed maps:

Since all the boundary operators $\partial_n$ are homomorphisms from $\Pi_n \cong \mathbb{F}^{n+1}$ to $\Pi_{n-1} \cong \mathbb{F}^n,$ we represent them as matrices in $\mathbb{F} \big( n \times (n+1) \big).$

Note that $\partial_0$ can only be the zero homomorphism. As for $\partial_1,$ it can be identified with any matrix $\begin{pmatrix} \lambda_0 & \lambda_1 \end{pmatrix} \in \mathbb{F} ( 1 \times 2).$ Since we are looking for non-zero homomorphisms, for any choice of $(\lambda_0, \lambda_1) \neq (0,0)$ we have $\partial_1(a_0+a_1 x) = a_0\lambda_0 + a_1\lambda_1.$

Now, once we have chosen $\partial_1,$ the next homomorphism $\partial_2$ can only be of the form

$\begin{pmatrix} \lambda_1\mu_0 & \lambda_1\mu_1 & \lambda_1\mu_2 \\ -\lambda_0\mu_0 & -\lambda_0\mu_1 & -\lambda_0\mu_2 \end{pmatrix},$

with $(\mu_0, \mu_1, \mu_2) \neq (0,0,0).$ Go from there, if you want.
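A quick numerical sanity check of this claim (my own sketch; the $\lambda$'s and $\mu$'s below are arbitrary nonzero sample values, and any other choices work the same way):

```python
import numpy as np

# Arbitrary sample parameters: any (λ0, λ1) ≠ (0,0) and (μ0, μ1, μ2) ≠ 0 will do.
l0, l1 = 2.0, -3.0
m0, m1, m2 = 1.0, 4.0, -5.0

d1 = np.array([[l0, l1]])                       # ∂_1 as a 1 x 2 matrix
d2 = np.array([[ l1 * m0,  l1 * m1,  l1 * m2],  # ∂_2 in the form claimed above
               [-l0 * m0, -l0 * m1, -l0 * m2]])

# Each entry of ∂_1 ∂_2 is (λ0 λ1 - λ1 λ0) μ_j = 0, whatever the parameters:
print(d1 @ d2)                      # [[0. 0. 0.]]
# The two rows of ∂_2 are proportional, so its image is the line ker ∂_1:
print(np.linalg.matrix_rank(d2))    # 1
```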

• May 20, 2011 at 9:41 am

Yes, but for the record (future readers): the reason I went to canonical form is to make the process more obvious. For instance, we want for the composed transforms, $\forall v \in R^{3}$ (a column vector):

$\left[\begin{array}{ccc} 0 & 0 & 0\end{array}\right]v=\left[\begin{array}{cc} \lambda_{0} & \lambda_{1}\end{array}\right]\left[\begin{array}{ccc} a_{00} & a_{01} & a_{02}\\ a_{10} & a_{11} & a_{12}\end{array}\right]v$

Since every column of $(a_{ij})$ must be orthogonal to $\left[\begin{array}{cc}\lambda_{0} & \lambda_{1}\end{array}\right]$, hence a multiple of $(\lambda_1, -\lambda_0)$, we have: $\left[\begin{array}{ccc} \lambda_{1}\mu_{0} & \lambda_{1}\mu_{1} & \lambda_{1}\mu_{2}\\ -\lambda_{0}\mu_{0} & -\lambda_{0}\mu_{1} & -\lambda_{0}\mu_{2}\end{array}\right]$

The reason I emphasized the canonical form is to avoid a proliferation of constants. Thus I inserted $GG^{-1}$ between the matrices where:

$G=\left[ \begin{array}{cc} \frac{\lambda_0}{\lambda_0^2+\lambda_1^2} & -\lambda_1 \\ \frac{\lambda_{1}}{\lambda_0^2+\lambda_1^2} & \lambda_0\end{array} \right]: G^{-1}= \left[ \begin{array}{cc} \lambda_0 & \lambda_1 \\ -\frac{\lambda_1}{\lambda_0^2+\lambda_1^2} & \frac{\lambda_0}{\lambda_{0}^{2}+\lambda_{1}^{2}}\end{array}\right]$

Which works as long as $\lambda_{0}^{2}+\lambda_{1}^{2}\neq 0$ (automatic for real $(\lambda_0, \lambda_1) \neq (0,0)$).
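Here is a small numerical check of this insertion trick (my own sketch; I normalize the first column of G by $\lambda_0^2+\lambda_1^2$ so that both $G G^{-1} = I$ and $[\lambda_0\ \lambda_1]\,G = [1\ 0]$ hold exactly):

```python
import numpy as np

l0, l1 = 3.0, 4.0                 # any real (λ0, λ1) ≠ (0, 0)
s = l0**2 + l1**2

# G chosen so that [λ0 λ1] G = [1 0]; its first column is (λ0, λ1)/s.
G = np.array([[l0 / s, -l1],
              [l1 / s,  l0]])
G_inv = np.array([[ l0,      l1    ],
                  [-l1 / s,  l0 / s]])

assert np.allclose(G @ G_inv, np.eye(2))                      # G G^{-1} = I
assert np.allclose(np.array([[l0, l1]]) @ G, [[1.0, 0.0]])    # canonical row
print("inserting G G^{-1} turns the row [λ0 λ1] into the canonical [1 0]")
```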

This yields:

$\left[\begin{array}{ccc} 0 & 0 & 0\end{array}\right]v=\left[\begin{array}{cc} 1 & 0\end{array}\right]\left[\begin{array}{ccc} 0 & 0 & 0\\ \mu_{0} & \mu_{1} & \mu_{2}\end{array}\right]v$

Which in turn can be rendered canonical via $HH^{-1}$

$H=\left[\begin{array}{ccc} \frac{\mu_{0}}{\mu_{0}^{2}+\mu_{1}^{2}+\mu_{2}^{2}} & -\mu_{1} & \mu_{0}\mu_{2}\\ \frac{\mu_{1}}{\mu_{0}^{2}+\mu_{1}^{2}+\mu_{2}^{2}} & \mu_{0} & \mu_{2}\mu_{1}\\ \frac{\mu_{2}}{\mu_{0}^{2}+\mu_{1}^{2}+\mu_{2}^{2}} & 0 & -\mu_{0}^{2}-\mu_{1}^{2}\end{array}\right]$

So:

$\left[\begin{array}{ccc} 0 & 0 & 0\end{array}\right]v=\left[\begin{array}{cc} 1 & 0\end{array}\right]\left[\begin{array}{ccc} 0 & 0 & 0\\ 1 & 0 & 0\end{array}\right]H^{-1}v$

Which in terms of my equations would be (using standard basis):

$\left[\begin{array}{ccc} 0 & 0 & 0\end{array}\right]v=\left[\begin{array}{cc} 1 & 0\end{array}\right]\left[\begin{array}{ccc} 0 & 0 & 0\\ 1 & 0 & 0\end{array}\right]\left[\begin{array}{c} x\\ y\\ z\end{array}\right]=\left[\begin{array}{cc} 1 & 0\end{array}\right]\left[\begin{array}{c} 0\\ x\end{array}\right]$

Of course, picking the preserved basis was arbitrary; these could have been chosen instead:

$\left[\begin{array}{ccc} 0 & 0 & 0\\ 0 & 1 & 0\end{array}\right],\left[\begin{array}{ccc} 0 & 0 & 0\\ 0 & 0 & 1\end{array}\right]$

and so on.