Title: Multiplicative Gradient Method: When and Why It Works

Abstract: The multiplicative gradient (MG) method was originally developed for the positron emission tomography problem, where in each iteration one simply multiplies the current iterate with the current gradient to produce the next iterate. Its convergence rate was only recently established by Zhao (2022). Beyond this special case, however, an understanding of when and why MG works in general has been lacking. In this talk, we unravel this mystery by identifying a general class of convex problems that involve optimizing a log-homogeneous function over a slice of a symmetric cone, and we propose a generalization of MG. We show that the generalized MG method has an $O(\ln(n)/t)$ convergence rate on this class of problems, where $n$ is the rank of the symmetric cone. Our theory not only subsumes the existing results as special cases, but also helps to identify new applications where the MG method significantly outperforms the state of the art.
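
To make the iteration concrete, below is a minimal sketch of the MG update on a PET-style instance: maximizing $f(x) = \sum_i w_i \ln(a_i^\top x)$ with $\sum_i w_i = 1$ over the probability simplex. The specific formulation, the function name multiplicative_gradient, and the parameters A, w, num_iters are illustrative assumptions, not the exact setup from the talk. Because $f$ is 1-log-homogeneous, $x^\top \nabla f(x) = 1$, so multiplying the iterate entrywise by the gradient keeps it on the simplex.

```python
import numpy as np

def multiplicative_gradient(A, w, num_iters=500):
    """Sketch of the MG iteration for
        maximize f(x) = sum_i w_i * log(a_i^T x)  over the probability simplex,
    where a_i is the i-th row of A and the weights w sum to one."""
    m, n = A.shape
    x = np.full(n, 1.0 / n)          # start at the center of the simplex
    for _ in range(num_iters):
        grad = A.T @ (w / (A @ x))   # grad f(x) = sum_i w_i * a_i / (a_i^T x)
        x = x * grad                 # MG step: entrywise product of iterate and gradient
    return x

# Toy usage with random nonnegative data and uniform weights.
rng = np.random.default_rng(0)
A = rng.random((50, 10)) + 0.1
w = np.ones(50) / 50
x_star = multiplicative_gradient(A, w)
print(x_star.sum())                  # remains (numerically) equal to 1
```

Note that no projection step is needed in this sketch: by Euler's identity for the 1-log-homogeneous objective, $x^\top \nabla f(x) = \sum_i w_i = 1$, so the update preserves both nonnegativity and the simplex constraint.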