\begin{center}
{\bf Positive Definite Completions\\
of\\
Partial Hermitian Matrices}
\end{center}
\begin{description}
\item[~~$\bullet$] $\GG(V,E)$ finite undirected graph
\item[~~$\bullet$] $A(\GG)$ is a $\GG$-partial matrix ($a_{ij}$ defined iff $\{i,j\} \in E$)
\item[~~$\bullet$] $A(\GG)$ is a $\GG$-partial positive matrix if $a_{ij}=\overline{a_{ji}}, \forall \{i,j\} \in E$, and all existing principal minors are positive.
\item[~~$\bullet$] with ${\mathcal J}=(V,\bar{E})$, $E \subset \bar{E}$, a $\mathcal J$-partial matrix $B({\mathcal J})$ extends the $\GG$-partial matrix $A(\GG)$ if $b_{ij}=a_{ij}, \forall \{i,j\} \in E$
\item[~~$\bullet$] $\GG$ is positive completable if every $\GG$-partial positive matrix can be extended to a positive definite matrix.
\end{description}
\end{slide}
\begin{slide}{}
$\GG$ is {\bf chordal} if it has no minimal cycles of length $\geq 4$ (every cycle of length $\geq 4$ has a chord).

{\bf THEOREM} (Grone, Johnson, S\'a, Wolkowicz)\\
$\GG$ is positive completable iff $\GG$ is chordal.
\epr
~~\\
~~\\
equivalently - strict feasibility for the SDP:\\
\[\begin{array}{cl}
\trace E_{ij}P=a_{ij}, & \forall \{i,j\} \in E\\
P \succ 0
\end{array}\]
~~\\
where $E_{ij}=e_ie_j^T+e_je_i^T$
\end{slide}
\begin{slide}{}
\begin{center}
{\bf Approximate Positive Semidefinite Completions}
\end{center}
\begin{flushright}
\small{ (with Charlie Johnson\\
and Brenda Kroschel) }
\end{flushright}
given:\\
~~\\
$H=H^T \geq 0$ a real, (elementwise) nonnegative {\bf symmetric matrix of weights}, with positive diagonal elements $H_{ii} > 0,~ \forall i$;\\
and $A=A^*$ the {\bf given partial Hermitian matrix}
~~\\
(i.e. some elements approximately fixed; others free; for notational purposes, assume free elements are set to 0 if not specified.)
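The chordality theorem above can be seen numerically on the smallest non-chordal graph. The sketch below is my own illustration (the $0.9$ entries are an assumed example, not data from the talk): it builds a $\GG$-partial positive matrix on the 4-cycle $1\!-\!2\!-\!3\!-\!4\!-\!1$ and scans the two free entries; no choice gives a positive semidefinite (hence no positive definite) completion.

```python
import numpy as np

# Illustration (assumed example, not from the slides): the 4-cycle is the
# smallest non-chordal graph. Specified entries: unit diagonal,
# a12 = a23 = a34 = 0.9, a14 = -0.9; every specified principal minor is
# positive, so this is a G-partial positive matrix. Free entries are
# a13 = x and a24 = y.
def completed(x, y, a=0.9):
    return np.array([[1.0,  a,   x,  -a],
                     [a,   1.0,  a,   y],
                     [x,    a,  1.0,  a],
                     [-a,   y,   a, 1.0]])

# Scan the free entries and record the best (largest) attainable
# minimum eigenvalue; it stays strictly negative.
grid = np.linspace(-0.99, 0.99, 41)
best = max(np.linalg.eigvalsh(completed(x, y)).min()
           for x in grid for y in grid)
print(best)  # strictly negative: no PSD completion exists
```

The failure is forced by the spherical triangle inequality: unit vectors with the three specified inner products $0.9$ are pairwise at most $\arccos(0.9)\approx 25.8^\circ$ apart, so $x^1$ and $x^4$ can be at most $\approx 77.5^\circ$ apart, while $a_{14}=-0.9$ demands $\approx 154^\circ$.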
\end{slide}
\begin{slide}{}
$||A||_F = \sqrt{ \tr A^*A}$ {\em Frobenius norm}, $\circ$ denotes {\em Hadamard product}.\\
\[
f(P):=||H \circ (A-P) ||_F^2
\]
~~\\
~~\\
{\bf weighted, best approximate,\\
completion problem}
\[
(AC)~~
\begin{array}{ccc}
\mu^*:=&\min &f(P) \\
& \mbox{~subject to~} & KP=b\\
& & P \succeq 0,
\end{array}
\]
where $K: \hn \rightarrow {\cal C}^m$ is a linear operator
\end{slide}
\begin{slide}{}
Lagrangian:
\[
L(P,y,\Lambda) = f(P) + \langle y, b-KP \rangle - \tr \Lambda P
\]
~~\\
Dual problem:
\[
(DAC)
\begin{array}{ccc}
\max &f(P) + \langle y, b-KP \rangle - \tr \Lambda P \\
\mbox{~subject to~} & \nabla f(P) -K^*y- \Lambda = 0\\
& \Lambda \succeq 0.
\end{array}
\]
\end{slide}
\begin{slide}{}
{\bf THEOREM}
\label{thm:optcond}
The matrix $\bar{P}\succeq 0$, the vector $\bar{y}$, and the matrix $\bar{\Lambda} \succeq 0$ solve (AC) and (DAC) if and only if
\[
\begin{array}{cc}
K\bar{P} = b & \mbox{primal feas.}\\
2H^{(2)} \circ (\bar{P}-A)-K^*\bar{y} -\bar{\Lambda} = 0 & \mbox{dual feas.}\\
\tr \bar{\Lambda} \bar{P} = 0 & \mbox{compl. slack.}\\
\end{array}
\]
\epr
\end{slide}
\begin{slide}{}
For simplicity and sparsity, discard the linear operator $K$ and replace it with appropriate weights in $H$. Use the ({\em square}) perturbed optimality conditions.
\[
\begin{array}{cc}
2H^{(2)} \circ (P-A) -\Lambda = 0 & \mbox{dual feasibility}\\
-P + \mu \Lambda^{-1} = 0 & \mbox{perturbed C.S.}\\
\end{array}
\]
Linearize the second equation and solve for $h$ (or for $l$):
\[
h= \mu \Lambda^{-1} - \mu \Lambda^{-1} l \Lambda^{-1}-P
\]
\[
l= \frac 1{\mu}\left\{ - \Lambda (P+h) \Lambda \right\}+\Lambda
\]
\end{slide}
\begin{slide}{}
Dual Step First:\\
(if many elements of $P$ are free)

We can eliminate the primal step $h$ and solve for the dual step $l$.
\[
\begin{array}{ccl}
l & = & 2 H^{(2)} \circ h + \left( 2H^{(2)} \circ (P-A) -\Lambda \right) \\
& = & 2 H^{(2)} \circ \left(\mu \Lambda^{-1} - \mu \Lambda^{-1} l \Lambda^{-1} -P \right) \\
&& ~~~~~~~ + ( 2H^{(2)} \circ (P-A) -\Lambda).
\end{array}
\]
Equivalently, we get the Newton equation
\[
2 H^{(2)} \circ (\mu \Lambda^{-1} l \Lambda^{-1} ) + l = 2 H^{(2)} \circ (\mu \Lambda^{-1} -A ) -\Lambda.
\]
$l$ and $\Lambda$ have the same sparsity pattern as $H$;\\
the order of this system is half the number of nonzeros in $H$.
\end{slide}
\begin{slide}{}
\begin{flushleft}
\begin{tiny}
\begin{tabular}{|c|c|c|c|c|c|c|c|}\hline
dim& toler&$H$dens./infty& $A$psd &cond(A)&$H$pd&min/max&iters\\ \hline
60 & $10^{-6}$& .01/.001 & yes & 79.7& no & 15/23 & 16.8 \\
65 & $10^{-6}$& .015/.001 & yes & 49.9& yes & 18/24 & 21.3 \\
83 & $10^{-6}$& .007/.001 & no & 235.1 & no & 24/29 & 25.5 \\
85 & $10^{-5}$& .008/.001 & yes & 94.7 & no & 11/17 & 13.1 \\
85 & $10^{-6}$& .0075/.001 & no & 299.9 & no & 23/27 & 25.2 \\
87 & $10^{-6}$& .006/.001 & yes & 74.2 & yes & 14/19 & 16.9 \\
89 & $10^{-6}$& .006/.001 & no & 179.3 & no & 23/28 & 15.2 \\
110 & $10^{-6}$& .007/.001 & yes & 172.3& yes & 15/20 & 17.8 \\
155 & $10^{-6}$& .01/0 & yes &643.9& yes & 14/18 & 15.3 \\
655 & $10^{-6}$& .017/0 & yes &1.4& no & 14/14 & 14. \\
755 & $10^{-6}$& .002/0 & yes &1.5& no & 15/15 & 15.
\\ \hline
\end{tabular}
~~\\
data for dual-step-first (20 problems per test): dimension; tolerance for duality gap;\\
density of nonzeros in $H$ / density of infinite values in $H$;\\
positive semidefiniteness of $A$; condition number of $A$; positive definiteness of $H$;\\
min/max and average iteration counts (only one test each for dimensions 655, 755)
\end{tiny}
\end{flushleft}
\end{slide}
\begin{slide}{}
\begin{center}
{\bf Euclidean Distance Matrix Completion Problem}
\end{center}
\begin{flushright}
\small{ (with Abdo Alfakih) }
\end{flushright}
{\bf What are EDMs?}\\
-----\\
A {\bf pre-distance matrix} (or dissimilarity matrix):\\
$\bullet$ an $n \times n$ symmetric matrix $D=(d_{ij})$ with nonnegative elements and zero diagonal
~~\\
-----\\
A (squared) {\bf Euclidean distance matrix} (EDM):\\
$\bullet$ a pre-distance matrix such that there exist points $x^1,x^2,\ldots,x^n$ in $\Re^r$ with
\[
d_{ij} = {\| x^i- x^j\|}^2, ~~~ i,j=1,2,\ldots,n.
\]
-----\\
The smallest such $r$ is called {\bf the embedding dimension} of $D$. ($r$ is always $\leq n-1$)
\end{slide}
\begin{slide}{}
{\bf EDM problem:}\\
Given a partial symmetric matrix $A$ with certain elements specified, the Euclidean distance matrix completion problem (EDMCP) consists of finding the unspecified elements of $A$ that make $A$ an EDM.

{\bf WHY?}\\
e.g.: \\
$\bullet$ The shape of an enzyme determines its chemical function. Once the shape is known, the proper drug can be designed.\\
~~\\
$\bullet$ distance geometry on molecules: atoms are points in space with pairwise distances; find a set of points which yield those distances.
\end{slide}
\begin{slide}{}
For the approximate EDMCP:\\
$A$ is a pre-distance matrix, $H$ is an $n \times n$ symmetric weight matrix,
\[
f(D) := {\| H \circ (A - D) \|}^2_F,
\]
\[
(CDM_0)
\begin{array}{ccc}
\mu^* := & \min & f(D) \\
& \mbox{ subject to } & D \in {\cal E},
\end{array}
\]
where $\cal E$ denotes the cone of EDMs.
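The EDM definition above can be made concrete with a small numerical sketch (my own illustration on random points, not data from the talk): generate $n$ points in $\Re^r$, form $D$, and recover the embedding dimension via Schoenberg's classical characterization $B = -\frac12 JDJ \succeq 0$ with $J = I - ee^T/n$.

```python
import numpy as np

# Sketch (random data, assumed example): build a squared EDM D from
# n points in R^r, then recover the embedding dimension r as
# rank(-1/2 J D J)  (Schoenberg's characterization).
rng = np.random.default_rng(0)
n, r = 6, 3
X = rng.standard_normal((n, r))                      # points x^1..x^n as rows
G = X @ X.T                                          # Gram matrix
D = np.diag(G)[:, None] + np.diag(G)[None, :] - 2*G  # d_ij = ||x^i - x^j||^2

J = np.eye(n) - np.ones((n, n))/n                    # projection onto {x : x^T e = 0}
B = -0.5 * J @ D @ J                                 # PSD iff D is an EDM
embedding_dim = int(np.sum(np.linalg.eigvalsh(B) > 1e-9))
print(embedding_dim)  # -> 3
```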
\end{slide}
\begin{slide}{}
{\bf DISTANCE GEOMETRY}

{\bf A pre-distance matrix $D$ is an EDM if and only if $D$ is negative semidefinite on
\[
M:=\left\{ x \in \Re^n : x^T e = 0 \right\},
\]
where $e$ is the vector of all ones.}
\end{slide}
\begin{slide}{}
Define the {\bf centered} and {\bf hollow} subspaces
\[
\begin{array}{rcl}
\Sc &:=& \{ B \in \Sn : Be = 0 \}, \\
\Sh& := & \{ D \in \Sn : \diag(D) = 0 \}.
\end{array}
\]
Define two linear operators
\[
\begin{array}{rcl}
\label{KK}
\KK(B)& := & \mbox{diag}(B)\,e^T + e \, \mbox{diag}(B)^T - 2B,
\end{array}
\]
\[
\begin{array}{rcl}
\label{T}
\TT(D)& := & -\frac 12 JDJ.
\end{array}
\]
The operator $- 2 \TT$ is an orthogonal projection onto $\Sc.$

{\bf THEOREM}
The linear operators satisfy
\begin{eqnarray*}
\KK ( \Sc) = \Sh, \\
\TT ( \Sh) = \Sc,
\end{eqnarray*}
and $\KK_{|\Sc}$ and $\TT_{|\Sh}$ are inverses of each other.
\epr
\end{slide}
\begin{slide}{}
A hollow matrix $D$ is an EDM\\
if and only if\\
$B=\TT(D) \succeq 0$ (positive semidefinite)

$D$ is an EDM\\
if and only if\\
$D=\KK(B),$ for some $B$ with $Be=0$ and $B \succeq 0$.

In this case the embedding dimension $r$ is given by the rank of $B$. Moreover, if $B=XX^T$, then the coordinates of the points $x^1,x^2,\ldots,x^n$ that generate $D$ are given by the rows of $X$ and, since $Be=0,$ the origin coincides with the centroid of these points.
\end{slide}
\begin{slide}{}
{\bf For Projection:}
$V$ $n \times (n-1),$ full column rank with $V^Te=0.$
\[
\label{eq:Vmp}
J := V V^{\dagger}= I- \frac{e e^T}{n}
\]
is the orthogonal projection onto $M$, where $V^{\dagger}$ denotes the Moore-Penrose generalized inverse.
\end{slide}
\begin{slide}{}
The cone of EDMs, $\cal E$, has empty interior. This can cause problems for interior-point methods.
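The inverse relationship in the theorem above is easy to check numerically. The following is my own sanity-check sketch (random matrices, not the authors' code): $\TT(\KK(B)) = B$ on the centered subspace and $\KK(\TT(D)) = D$ on the hollow subspace.

```python
import numpy as np

# Numerical check (illustration only) that K and T are mutually inverse
# between S_C = {B : Be = 0} and S_H = {D : diag(D) = 0}.
rng = np.random.default_rng(1)
n = 5
J = np.eye(n) - np.ones((n, n))/n      # orthogonal projection onto e-perp

def K(B):   # K(B) = diag(B) e^T + e diag(B)^T - 2B
    d = np.diag(B)
    return d[:, None] + d[None, :] - 2*B

def T(D):   # T(D) = -1/2 J D J
    return -0.5 * J @ D @ J

S = rng.standard_normal((n, n)); S = S + S.T
B = J @ S @ J                          # a centered symmetric matrix: Be = 0
D = S - np.diag(np.diag(S))            # a hollow symmetric matrix: diag(D) = 0

print(np.allclose(T(K(B)), B), np.allclose(K(T(D)), D))  # -> True True
```

Note also that $\KK(B)$ is hollow for any symmetric $B$ and $\TT(D)e = 0$ for any $D$, matching $\KK(\Sc)=\Sh$ and $\TT(\Sh)=\Sc$.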
\[
V \cdot V : {\cal S}_{n-1} \rightarrow {\cal S}_{n}
\]
\[
V \cdot V : {\cal P}_{n-1} \rightarrow {\cal P}_{n}
\]
(here $V \cdot V$ denotes the congruence $X \mapsto VXV^T$)

Define the composite operators
\[
\begin{array}{rcl}
\label{KV}
\KK_V(X)& := & \KK( V X V^T),
\end{array}
\]
and
\[
\begin{array}{rcl}
\label{TV}
\TT_V(D)& := & V^{\dagger}\TT( D)(V^{\dagger})^T= - \frac 12 V^{\dagger} D (V^{\dagger})^T.
\end{array}
\]
\end{slide}
\begin{slide}{}
{\bf LEMMA}
\begin{eqnarray*}
\KK_V ( \snn) =\Sh, \\
\TT_V ( \Sh) =\snn,
\end{eqnarray*}
and $\KK_V$ and $\TT_V$ are inverses of each other on these two spaces.
\epr

{\bf COROLLARY}
\begin{eqnarray*}
\KK_V(\p) & = & \E , \\
\TT_V(\E) & = &\p.
\end{eqnarray*}
\epr
\end{slide}
\begin{slide}{}
\begin{center}
Summary
\end{center}
(Re)Define the closest EDM problem:
\begin{eqnarray*}
f(X) := {\| H \circ (A - \KK_V ( X)) \|}^2_F\\
= {\| H \circ \KK_V(B - X) \|}^2_F,
\end{eqnarray*}
where $B = \TT_V(A)$.\\
($\KK_V$ and $\TT_V$ are both linear operators)
\[
(CDM)
\begin{tabular}{ccc}
$\mu^*$ := & $\min$ & $f(X)$ \\
& subject to & $X \succeq 0.$
\end{tabular}
\]
\end{slide}
\begin{slide}{}
{\bf Primal-Dual Interior-Point Framework:}

STEPS:
\begin{enumerate}
\item derive a dual program
\item state optimality conditions for the log-barrier problem (perturbed primal-dual optimality conditions)
\item find a search direction for solving the perturbed optimality conditions
\item take a step and backtrack to stay strictly feasible (positive definite)
\item update and go to Step 3 (adaptive update of the log-barrier parameter)
\end{enumerate}
\end{slide}
\begin{slide}{}
{\bf Step 1. derive a dual program:}

$\Lambda \in \snn, \Lambda \succeq 0$ and $y \in \Re^m$; \\
the Lagrangian is
\[
L(X,y,\Lambda) = f(X) + \langle y, b - \A(X) \rangle - \langle \Lambda, X \rangle
\]
the primal program (CDM) is
\[
\min_{X} \max_{\stackrel{y}{\Lambda \succeq 0}} L(X,y,\Lambda).
\]
the dual program is:
\[
\max_{\stackrel{y}{\Lambda \succeq 0}} \min_{X} L(X,y,\Lambda),
\]
\end{slide}
\begin{slide}{}
The inner minimization of the Lagrangian (which is convex in $X$) is unconstrained, so we add the hidden constraint $\nabla_X L(X,y,\Lambda)=0$, which makes the minimization redundant.

dual program (DCDM)
\[
\max_{\stackrel{\nabla f(X) - \A^*y=\Lambda}{\Lambda \succeq 0}}
f(X) + \langle y, b - \A(X) \rangle - \mbox{trace} \Lambda X.
\]
or
\[
\begin{array}{cl}
\max & f(X)+\langle y, b - \A(X) \rangle - \trace \Lambda X \\
\mbox{subject to} & \nabla f(X) - \A^*y- \Lambda = 0 \\
& \Lambda \succeq 0, (X \succeq 0).
\end{array}
\]
\end{slide}
\begin{slide}{}
the duality gap,\\
$ f(X)-\left(f(X)+\langle y, b - \A(X) \rangle - \trace \Lambda X \right),$\\
in the case of primal and dual feasibility, is given by the complementary slackness condition:
\[
\mbox{ trace } X (\KK^*_V( H^{(2)} \circ \KK_V( {X}-B))- \A^*{y}) = 0,
\]
or equivalently
\[
X (\KK^*_V( H^{(2)} \circ \KK_V( {X}-B))- \A^* {y}) = 0 ,
\]
where $H^{(2)}=H \circ H.$
\end{slide}
\begin{slide}{}
{\bf THEOREM}
Suppose that Slater's condition holds. Then $\bar{X} \succeq 0$, and $\bar{y}$, $\bar{ \Lambda} \succeq 0$ solve (CDM) and (DCDM), respectively, if and only if the following three equations hold.
\[
\begin{array}{cc}
\A (\bar{X}) = b & \mbox{prim. feas.} \\
2\KK^*_V( H^{(2)} \circ \KK_V( \bar{X}-B)) - \A^* \bar{y} - \bar{\Lambda} =0 & \mbox{dual feas.} \\
\trace \bar{ \Lambda} \bar{X}=0 & \mbox{C.S.}
\end{array}
\]
\epr

{\bf LEMMA}
Let $H$ be an $n \times n$ symmetric matrix with nonnegative elements and 0 diagonal such that the graph of $H$ is connected. Then
\[
\KK_V^*(H^{(2)} \circ \KK_V(I)) \succ 0 ,
\]
where $I \in \snn$ is the identity matrix.
\epr
\end{slide}
\begin{slide}{}
{\bf Step 2.
state optimality conditions for the log-barrier problem (perturbed primal-dual optimality conditions):}

The log-barrier problem for (CDM) is
\[
\min_{X \succ 0} B_{\mu}(X) := f(X) - \mu \log \det(X),
\]
where $\mu \downarrow 0$. For each $\mu > 0$ we take one Newton step for solving the stationarity condition
\[
\nabla B_{\mu}(X) = 2 \KK^*_V(H^{(2)} \circ \KK_V(X - B)) - \mu X^{-1}=0.
\]
Let
\[
C := 2 \KK^*_V(H^{(2)} \circ \KK_V( B)) = 2 \KK^*_V(H^{(2)} \circ A).
\]
Then the stationarity condition is equivalent to
\[
\nabla B_{\mu}(X) = 2 \KK^*_V\left(H^{(2)} \circ \KK_V(X)\right) - C - \mu X^{-1}=0.
\]
\end{slide}
\begin{slide}{}
Setting $\Lambda = \mu X^{-1}$ and multiplying through by $X$ gives the optimality conditions, $F:=\left( \begin{array}{c} F_d \\ F_c \end{array} \right)=0,$
\[
\begin{array}{llccl}
2 \KK^*_V\left(H^{(2)} \circ \KK_V(X)\right) - C - \Lambda &=&0 & \mbox{dual feas.} \\
\Lambda X - \mu I&=&0 & \mbox{pert. C.S.},
\end{array}
\]
(an OVERDETERMINED nonlinear system, since $\Lambda X$ is not symmetric)

estimate of the barrier parameter
\[
\mu = \frac{1}{n-1} \mbox{ trace }\Lambda X
\]
\end{slide}
\begin{slide}{}
$\sigma_k$ centering parameter \\
${\cal F}^0$ set of strictly feasible primal-dual points\\
$F^\prime$ derivative of $F$
\begin{alg}
(p-d i-p framework:)\\
{\bf Given} $(X^0,\Lambda^0) \in {\cal F}^0$\\
{\bf for} $k=0,1,2 \ldots $\\
\hspace{.5in}{\bf solve} for the search direction\\
\[
~~~~F^{\prime}(X^k,\Lambda^k)
\left( \begin{array}{c}
\delta X^k \\ \delta \Lambda^k
\end{array} \right)
=
\left( \begin{array}{c}
-F_d \\ -\Lambda^k X^k+ \sigma_k \mu_k I
\end{array} \right)
\]
\hspace{.5in}where $\sigma_k$ is the centering parameter, $\mu_k=\frac {\trace X^k\Lambda^k}{(n-1)}$
\[
(X^{k+1},\Lambda^{k+1}) = (X^k,\Lambda^k) + \alpha_k (\delta X^k, \delta \Lambda^k)
\]
\hspace{.5in}so that $(X^{k+1},\Lambda^{k+1}) \succ 0$\\
{\bf end (for)}.
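A small dense-matrix sketch of this framework (my own toy implementation, not the authors' code), specialized to the earlier approximate completion problem with all weights equal to one and no linear constraints: then $\min \|A-P\|_F^2$ over $P \succeq 0$ is the projection of $A$ onto the PSD cone, which gives an independent check of the iterates.

```python
import numpy as np

# Toy primal-dual interior-point loop (illustration, assumed set-up):
# min ||A - P||_F^2 s.t. P PSD, solved via the dual-step-first Newton
# equation  2 H2 o (mu Li l Li) + l = 2 H2 o (mu Li - A) - Lam,
# with H2 = H o H = all ones here.
n = 5
rng = np.random.default_rng(2)
S = rng.standard_normal((n, n)); A = S + S.T        # indefinite data matrix
H2 = np.ones((n, n))

P, Lam = np.eye(n), np.eye(n)                       # strictly positive definite start
for _ in range(200):
    mu = np.trace(Lam @ P) / (2*n)                  # centering sigma = 1/2
    if mu < 1e-9:
        break
    Li = np.linalg.inv(Lam)
    # vec form: (I + 2 mu Diag(vec H2) (Li x Li)) vec(l) = vec(rhs)
    M = np.eye(n*n) + 2*mu*np.diag(H2.ravel()) @ np.kron(Li, Li)
    l = np.linalg.solve(M, (2*H2*(mu*Li - A) - Lam).ravel()).reshape(n, n)
    h = mu*Li - mu*Li @ l @ Li - P                  # recover the primal step
    alpha = 1.0                                     # backtrack to stay PD
    while min(np.linalg.eigvalsh(P + alpha*h).min(),
              np.linalg.eigvalsh(Lam + alpha*l).min()) <= 0:
        alpha *= 0.5
    P, Lam = P + 0.97*alpha*h, Lam + 0.97*alpha*l

w, V = np.linalg.eigh(A)
P_proj = (V * np.maximum(w, 0)) @ V.T               # projection onto PSD cone
print(np.linalg.norm(P - P_proj))                   # small residual
```

The dense Kronecker system is only for illustration; the point of the dual-step-first approach in the slides is that, with sparse $H$, the system order is only half the number of nonzeros of $H$.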
\end{alg}
\end{slide}
\begin{slide}{}
For the EDM: the search direction (Gauss-Newton direction) is the Frobenius-norm least-squares solution of $F^{\prime} s = - F$, i.e.
\[
\begin{array}{rcll}
2 \KK^*_V\left(H^{(2)} \circ \KK_V (h)\right) - l&=&-F_d \\
\Lambda h + l X&=&-F_c.
\end{array}
\]
$t(n)=\frac {(n+1)n}2$ is the dimension of $\Sn.$
\[
F^{\prime} s=
\left[ \begin{array}{cc}
F^{\prime}_{u1}& F^{\prime}_{u2}\\
F^{\prime}_{l1} & F^{\prime}_{l2}
\end{array} \right]
\left( \begin{array}{c}
h \\ l
\end{array} \right)
= rhs=
\left( \begin{array}{c}
rhs_1 \\ rhs_2
\end{array} \right).
\]
$F^{\prime}: \Re^{2(t(n-1))} \rightarrow \Re^{t(n-1)+(n-1)^2}$
\end{slide}
\begin{slide}{}
\begin{center}
{\em Larger} Models
\end{center}
Instead of projecting and reducing the dimension to get Slater's condition, add a variable and increase the dimension.
\begin{lem}
\label{lem:newcharact}
Let
\[
\begin{array}{rcl}
\FF &:=& \{X \in \Sn : v^Te=0 \quad \Rightarrow \quad v^TXv \leq 0 \},\\
\FF_0 &:=&\{ X \in \Sn : X - \alpha ee^T \preceq 0, \quad \mbox{for some } \alpha \geq 0 \},\\
\FF_1 &:=&\{ X \in \Sn : X - \alpha ee^T \preceq 0, \quad \forall ~ \alpha \geq \bar{\alpha},\\
&& ~~~~~~~~~\mbox{ for some } \bar{\alpha} \geq 0 \}.
\end{array}
\]
Then
\beq
\label{eq:subsets}
{\rm ri} \left(\FF \right) \subset \FF_0 = \FF_1 \subset \FF \subset \overline{\FF_0}.
\eeq
\end{lem}
\end{slide}
\begin{slide}{}
\bpr
Suppose that $\bar{X} \in {\rm ri} \left(\FF \right)$ (i.e. $v^Te=0, v \neq 0 \Rightarrow v^T\bar{X}v < 0$) but $\bar{X} \notin \FF_0$.
Then, for each $\alpha \geq 0$, there exists $w_{\alpha}$ with $||w_{\alpha}||=1$ such that
\[
w_{\alpha}^T(\bar{X} - \alpha ee^T)w_{\alpha} > 0, \quad \forall ~ \alpha\geq 0,
\]
i.e.
\[
w_{\alpha}^T\bar{X}w_{\alpha} > \alpha w_{\alpha}^T ee^Tw_{\alpha} , \quad \forall ~ \alpha\geq 0;
\]
passing to a subsequence, we may assume $w_{\alpha} \rightarrow \bar{w}$ as $\alpha \rightarrow \infty$. Since the left-hand side of the above inequality is bounded, we conclude that $e^T \bar{w} = 0$ and $\bar{w}^T\bar{X}\bar{w} \geq 0$, a contradiction (as $\bar{X} \in {\rm ri}(\FF)$ and $\bar{w} \neq 0$). Therefore, ${\rm ri} \left(\FF \right) \subset \FF_0$.

That $\FF_0 = \FF_1$ is clear.

Now suppose that $\bar{X} - \alpha ee^T \preceq 0, ~ \alpha \geq 0$. Let $v^T e = 0$. Then $0 \geq v^T(\bar{X} - \alpha ee^T)v=v^T\bar{X}v$, i.e. $\FF_0 \subset \FF$. The final inclusion follows from the first and the fact that $\FF$ is closed.
\epr
\end{slide}
\begin{slide}{}
\begin{cor}
\label{cor:newcharacthollow}
Let
\[
\begin{array}{rcl}
\EE &:=& \{X \in \Sh : v^Te=0 \quad \Rightarrow \quad v^TXv \leq 0 \},\\
\EE_0 &:=&\{ X \in \Sh : X - \alpha ee^T \preceq 0, \quad \mbox{for some } \alpha \},\\
\EE_1 &:=&\{ X \in \Sh : X - \alpha ee^T \preceq 0, \quad \forall ~ \alpha \geq \bar{\alpha},\\
&& ~~~~~~~~~~~~ \mbox{ for some } \bar{\alpha} \}.
\end{array}
\]
Then
\beq
\label{eq:subsetsE}
\EE = \EE_0 = \EE_1.
\eeq
\end{cor}
\bpr
(Similar to Lemma \ref{lem:newcharact}.) For closure, suppose $0 \neq X_k \in \EE_0$, i.e. $\diag(X_k)=0$ and $X_k \preceq \alpha_k E$ for some $\alpha_k$; and suppose $X_k \rightarrow \bar{X}$. Since $X_k$ is hollow, it has exactly one positive eigenvalue, which must be smaller than $\alpha_k$. However, since $X_k$ converges to $\bar{X}$, $\bar{X} \preceq \lambda_{\max}(\bar{X}) E$, where $\lambda_{\max}(\bar{X})$ is the largest eigenvalue of $\bar{X}$.
\epr
\end{slide}
\begin{slide}{}
let: $E=ee^T$; $f(P) := {\| H \circ (A - P) \|}^2_F$;\\
$\KK$ lin.
operator with constraint $\diag (P)=0$.\\
{\bf primal problem} is:
\[
({\rm CDM})
\begin{array}{ccc}
\mu^* := & \min & f(P) \\
& \mbox{subject to} & \diag(P) = 0\\
& & \alpha E-P \succeq 0
\end{array}
\]
and {\bf dual problem (DCDM)} is
\[
\begin{array}{ccc}
\nu^*:= &\max &f(P) + \langle y, -\diag(P) \rangle - \tr \Lambda (\alpha E-P)\\
& \mbox{subject to} & \nabla_P f(P) -\KK^*y+ \Lambda = 0\\
& & -\trace \Lambda E = 0\\
& & \Lambda \succeq 0.
\end{array}
\]
(Slater's condition holds for the primal but fails for the dual.)
\end{slide}
\begin{slide}{}
(perturbed) Optimality Conditions are:
\[
\begin{array}{cl}
\diag (P) = 0 & \mbox{primal feas.}\\
\left.
\begin{array}{cl}
2H^{(2)} \circ (P-A)-\Diag(y) +\Lambda = 0 \\
~~~ -\trace \Lambda E= 0
\end{array} \right\} &\mbox{dual feas.}\\
-(\alpha E -P) + \mu \Lambda^{-1} = 0, ~ & \mbox{pert. C.S.}\\
\end{array}
\]
\end{slide}
\begin{slide}{}
\[
\begin{array}{rcl}
h & \mbox{denotes the step for}& P\\
w & \mbox{denotes the step for}& \alpha\\
l & \mbox{denotes the step for}& \Lambda\\
s & \mbox{denotes the step for}& y.
\end{array}
\]
maintain
\[
\diag(h)=\diag(P)=0
\]
\end{slide}
\begin{slide}{}
linearization of complementary slackness:
\[
\begin{array}{rcl}
-(\alpha + w)E+(P+h)+ \mu \Lambda^{-1} - \mu \Lambda^{-1} l \Lambda^{-1}&=&0,
\end{array}
\]
solve for $h$:
\[
\begin{array}{rcl}
h &=& -\mu \Lambda^{-1} + \mu \Lambda^{-1} l \Lambda^{-1}-P +(\alpha +w)E.
\end{array}
\]
(or solve for $l$)

linearization of dual feasibility:
\[
\begin{array}{rcl}
2 H^{(2)} \circ h -\Diag(s) +l &=& -( 2H^{(2)} \circ (P-A)\\
&& ~~~ -\Diag(y)+\Lambda)\\
-\trace l E &=& \trace \Lambda E
\end{array}
\]
\end{slide}
\begin{slide}{}
substitute for $h,s$;\\
the Newton equation is
\[
\begin{array}{rcl}
2 H^{(2)} \circ \left( wE +\mu \Lambda^{-1} l \Lambda^{-1} \right) -\Diag\diag (l)+ l \\
~~~~~ = 2 H^{(2)} \circ \left\{\mu \Lambda^{-1} +A -\alpha E \right\} +\Diag(y) -\Lambda\\
\diag\left( \mu \Lambda^{-1} l \Lambda^{-1}\right) + we \\
~~~~~ = \diag\left(\mu \Lambda^{-1}\right) - \alpha e\\
\trace(lE) = -\trace(\Lambda E).
\end{array}
\]
a square system of order $1+nnz$, where $nnz$ is the number of nonzeros in the upper triangular part of $H$ ($\diag(H)=0$).
\end{slide}
\begin{slide}{}
$F$ denotes the $(nnz+n) \times 2$ matrix whose\\
row $p$ contains the indices of the $p$-th nonzero, upper triangular, element of $H+I$, ordered by columns,
\[
\begin{array}{lcc}
\left\{ (F_{p1},F_{p2})_{p=1, \ldots nnz+n} \right\}\\
~~~~~~~~~~ = \left\{ ij : (H+I)_{ij} \neq 0, i \leq j, \mbox{ ordered by columns} \right\}.
\end{array}
\]
$\delta_{ij}$ is the {\em Kronecker delta};\\
$\delta_{(ij)(kl)}$ is 1 if $(ij)=(kl)$, 0 otherwise.

$E_{ij} = \left( e_ie_j^T+e_je_i^T\right)/\sqrt{2}$ is the $ij$ unit matrix in $\Sn$, where $E_{ij} = \left(e_ie_j^T+e_je_i^T\right)/2$ if $i=j$.\\
(orthonormal basis of $\Sn$)
\end{slide}
\begin{slide}{}
operator equation:
\[
\begin{array}{lcl}
{\bf \mbox{$k\neq l, i\neq j$ LHS}} \\
= \tr E_{kl} \left\{ 2 H^{(2)} \circ \left( \mu\Lambda^{-1} E_{ij} \Lambda^{-1} \right) -\Diag \diag (E_{ij}) \right.\\
~~~~~~~~~~~~~~~~~~ \left.
+ E_{ij}\right\} \\
= \mu\tr (e_ke_l^T+e_le_k^T) \left(H^{(2)} \circ \Lambda^{-1} (e_ie_j^T+e_je_i^T)\Lambda^{-1} \right)\\
~~~~~~~~~~~~~~~~ + \delta_{(ij)(kl)}\\
= \mu \left[ 2e_{l}^T \left(H^{(2)} \circ \Lambda^{-1}_{:,i} \Lambda^{-1}_{j:} \right)e_k+ 2e_{k}^T \left(H^{(2)} \circ \Lambda^{-1}_{:,i} \Lambda^{-1}_{j:} \right)e_l \right] \\
~~~~~~~~~~~~ + \delta_{(ij)(kl)}\\
= 2 \mu H^{(2)}_{kl} \left( \Lambda^{-1}_{li} \Lambda^{-1}_{jk}+ \Lambda^{-1}_{ki} \Lambda^{-1}_{jl} \right) + \delta_{(ij)(kl)}
\end{array}
\]
\end{slide}
\begin{slide}{}
\[
\begin{array}{lcl}
{\bf \mbox{$k\neq l, i= j$ LHS}}\\
= \tr E_{kl} \left\{ 2\mu H^{(2)} \circ \left[ \Lambda^{-1} E_{jj} \Lambda^{-1} \right] -\Diag \diag (E_{jj}) + E_{jj}\right\} \\
= 2\sqrt{2}\mu\tr e_ke_l^T \left(H^{(2)} \circ \Lambda^{-1} e_je_j^T \Lambda^{-1} \right) \\
= 2\sqrt{2} \mu H^{(2)}_{kl} \left( \Lambda^{-1}_{lj} \Lambda^{-1}_{jk} \right);\\
{\bf \mbox{$k = l, i\neq j$ LHS}}\\
~~~ = \sqrt{2}\mu \Lambda^{-1}_{ki} \Lambda^{-1}_{jk}, \quad k= 1, \ldots, n;\\
{\bf \mbox{$k = l, i= j$ LHS}}\\
= \mu \Lambda^{-1}_{ki} \Lambda^{-1}_{ik}, \quad k= 1, \ldots, n.
\end{array}
\]
\end{slide}
\begin{slide}{}
last column of LHS, with matrix $l=0$ and $w=1$:
\[
\begin{array}{rcl}
{\bf \mbox{$w=1, k \neq l$ LHS}} =\\
~~~ \tr \left(E_{kl} ( 2H^{(2)} \circ E)\right);\\
{\bf \mbox{$w=1, k=l$ LHS}} = 1.
\end{array}
\]
last row of LHS:
\[
\begin{array}{rcl}
{\bf \mbox{$ i \neq j$ LHS}} &=& \tr \left(E_{ij} E\right) = \sqrt{2};\\
{\bf \mbox{$ i = j$ LHS}} &=& 1.
\end{array}
\]
\end{slide}
\begin{slide}{}
Newton system is:
\[
\sMat\left[ L (\svec (l))\right] = \sMat\left[ \svec (RHS) \right],
\]
$\svec(S)$ is the vector formed from the nonzero elements of the columns of the upper triangular part, where the strict upper triangular part is multiplied by $\sqrt{2}$. ($\trace XY = \svec(X)^T \svec(Y)$, i.e. an isometry.) $\sMat$ is its inverse.

Solve for $\svec(l)$:
\[
L (\svec (l)) = \svec (RHS).
\]
\end{slide}
\begin{slide}{}
$ L_{pq}= $
\[
\left\{
\begin{array}{ll}
2 \mu H^{(2)}_{F_{p_2},F_{p_1}} \left( \Lambda^{-1}_{F_{p_2},F_{q_1}} \Lambda^{-1}_{F_{q_2},F_{p_1}} + \Lambda^{-1}_{F_{p_1},F_{q_1}} \Lambda^{-1}_{F_{q_2},F_{p_2}} \right) \\
~~~~~~~~~~~~~~~~~~~~ \mbox{if } p \neq q, ~ k \neq l, ~ i \neq j;\\
2\sqrt{2} \mu H^{(2)}_{F_{p_2},F_{p_1}} \left(\Lambda^{-1}_{F_{p_2},F_{q_2}} \Lambda^{-1}_{F_{q_2},F_{p_1}} \right) \\
~~~~~~~~~~~~~~~~~~~~~~ \mbox{if } p \neq q, ~ k \neq l, ~ i = j;\\
2\sqrt{2} \mu H^{(2)}_{F_{p_2},F_{p_1}}\left(\Lambda^{-1}_{F_{p_2},F_{q_2}} \Lambda^{-1}_{F_{q_2},F_{p_1}} \right) \\
~~~~~~~~~~~~~~~~~~~~~~ \mbox{if } p = q, ~ k \neq l, ~ i = j;\\
2 \mu H^{(2)}_{F_{p_2},F_{p_1}} \left( \Lambda^{-1}_{F_{p_2},F_{q_1}} \Lambda^{-1}_{F_{q_2},F_{p_1}} + \Lambda^{-1}_{F_{p_1},F_{q_1}} \Lambda^{-1}_{F_{q_2},F_{p_2}} \right) +1\\
~~~~~~~~~~~~~~~~~~~~~~~~~ \mbox{if } p = q, k \neq l, ~ i \neq j;\\
\sqrt{2}\mu \Lambda^{-1}_{F_{p_1},F_{q_1}} \Lambda^{-1}_{F_{q_2},F_{p_1}} \\
~~~~~~~~~~~~~~~~~~~~~~~~~~~ \mbox{if } k=l, ~ i \neq j;
\end{array}
\right.
\]
\end{slide}
\begin{slide}{}
$ = $
\[
\left\{
\begin{array}{ll}
\mu \Lambda^{-1}_{F_{p_1},F_{q_1}} \Lambda^{-1}_{F_{q_1},F_{p_1}} & \mbox{if } k=l, ~ i=j\\
2\sqrt{2}\, H^{(2)}_{F_{p_2},F_{p_1}} & \mbox{if } w=1,~k \neq l\\
1 & \mbox{if } w=1,~k = l.
\end{array}
\right.
\]
The $p$-th row is calculated using the Hadamard product of pairs of columns of $\Lambda^{-1}$,
\[
\Lambda^{-1}_{F_{p_2},F_{:,1}} \circ \Lambda^{-1}_{F_{p_1},F_{:,2}}.
\]
complete vectorization is possible (preliminary numerics are very promising)
\end{slide}
\begin{slide}{}
For $p=kl, k\leq l$, and the last row, the components of the right-hand side of the system are\\
$RHS_p =$
\[
\left\{
\begin{array}{lcl}
\sqrt{2}\left( 2H^{(2)}_{p} \circ \left\{\mu \Lambda^{-1}_p +A_p -\alpha \right\} -\Lambda_p \right), & \mbox{if }k \neq l\\
\mu \Lambda^{-1}_{kk} - \alpha & \mbox{if } k=l\\
-\trace (\Lambda E) & \mbox{last row } \\
\end{array}
\right.
\]
\end{slide}
\begin{slide}{}
\appendix
\section{Outline of Related Papers}
\begin{enumerate}
\item In \cite{johRea:00}, Johnson and Reams provide necessary and sufficient conditions for a real matrix $A$ to have positive or negative semidefinite symmetric part $H(A)=\frac 12(A+A^T)$; namely, that $\rank [H(A)X] \leq \rank[X^TA]$ for all real square matrices $X$. Results on the row and column inclusion properties are included.
\item In \cite{JohSmith:00}, Johnson and Smith discuss the problem of completing a partial real symmetric matrix so that it is positive semidefinite on a subspace. The classical chordal patterns are studied.
\end{enumerate}