My versions of the Keller-Segel PDEs (somewhat more flexible than the originals, which assumed linear action of the attractant) are:

$$ \frac{\partial U}{\partial t} = -\gamma U + D \nabla^2 U + s\rho \\ \frac{\partial \rho}{\partial t} = \nabla\cdot\left(\rho\nabla V\left(\rho,U\right)\right) + \frac{\sigma^2}{2}\nabla^2\rho $$

$\rho\left(t, \vec{r}\right)$ (worm density) and $U\left(t, \vec{r}\right)$ (attractant concentration) are functions of time and space to be approximated by the numerical solution. $\gamma$, $D$, $s$, and $\sigma$ are positive real parameters. $V$ is a function of $\rho$ and $U$ (and thereby of $t$ and $\vec{r}$).
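For instance, the original models' linear action of the attractant corresponds to taking $V$ proportional to $U$ (here $\chi > 0$ is a chemotactic sensitivity; the symbol is just my notation for this example):

$$ V\left(\rho, U\right) = -\chi U \quad\Longrightarrow\quad \nabla\cdot\left(\rho\nabla V\right) = -\chi\,\nabla\cdot\left(\rho\nabla U\right) $$

which is the classical Keller-Segel chemotaxis term.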

In one dimension (using $\dot\rho$ for the time derivative and $\rho^\prime$ for the spatial derivative), these become:

$$ \dot{U} = -\gamma U + D U'' + s \rho \\ \dot{\rho} = \left(\rho V'\left(\rho,U\right)\right)' + \frac{\sigma^2}{2}\rho'' $$
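As an aside, here is a minimal finite-difference sketch of this 1-D system, solved directly on the periodic domain by the method of lines. The log-sensing placeholder $V = -\beta\log U$ and all parameter values are assumptions chosen for illustration; they are not the $V$ or parameters used in the FEM runs below.

    # Minimal 1-D method-of-lines sketch of the periodic system above.
    # The choice V = -beta*log(U) and all parameter values are placeholders.
    import numpy as np
    from scipy.integrate import solve_ivp

    N = 256                                      # grid points on [0, 1)
    x = np.arange(N) / N
    dx = 1.0 / N
    gamma, D, s, sigma2 = 0.1, 0.01, 1.0, 0.01   # assumed parameters (sigma2 = sigma**2)
    beta = 1.0                                   # assumed sensitivity

    def ddx(f):
        # centered first derivative, periodic wraparound
        return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

    def d2dx2(f):
        # centered second derivative, periodic wraparound
        return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / dx**2

    def V(rho, U):
        # placeholder potential: log sensing in the attractant
        return -beta * np.log(np.maximum(U, 1e-12))

    def rhs(t, y):
        U, rho = y[:N], y[N:]
        dU = -gamma * U + D * d2dx2(U) + s * rho
        drho = ddx(rho * ddx(V(rho, U))) + 0.5 * sigma2 * d2dx2(rho)
        return np.concatenate([dU, drho])

    rho0 = 1.0 + 0.1 * np.cos(2.0 * np.pi * x)   # smooth periodic initial data
    U0 = np.ones(N)
    sol = solve_ivp(rhs, (0.0, 1.0), np.concatenate([U0, rho0]), method="BDF")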

Now I'm going to assume $\rho$ and $U$ are periodic with period 1. I will express each as the sum of an even function and an odd function (which will obviously also be periodic):

$$ \rho(x) = \rho_e(x) + \rho_o(x) \\ U(x) = U_e(x) + U_o(x) $$

$$ \rho_e(x) = \frac{1}{2}\left(\rho(x)+\rho(-x)\right) = \frac{1}{2}\left(\rho(x)+\rho(1-x)\right) \\ \rho_o(x) = \frac{1}{2}\left(\rho(x)-\rho(-x)\right) = \frac{1}{2}\left(\rho(x)-\rho(1-x)\right) \\ U_e(x) = \frac{1}{2}\left(U(x)+U(-x)\right) = \frac{1}{2}\left(U(x)+U(1-x)\right) \\ U_o(x) = \frac{1}{2}\left(U(x)-U(-x)\right) = \frac{1}{2}\left(U(x)-U(1-x)\right) $$

Now I want to derive a system of PDEs in $\rho_e$, $\rho_o$, $U_e$, and $U_o$ defined on $\left[0, \frac{1}{2}\right].$ The even functions are required to satisfy Neumann boundary conditions $f'_e(0) = f'_e\left(\frac{1}{2}\right) = 0$. The odd functions satisfy Dirichlet boundary conditions $f_o(0) = f_o\left(\frac{1}{2}\right) = 0$. Using the symmetries $f_e(1-x) = f_e(x)$ and $f_o(1-x) = -f_o(x)$, $f = f_e + f_o$ can be reconstructed on $[0,1]$ from its values on $\left[0, \frac{1}{2}\right]$. It is straightforward to show that the reconstructed function satisfies the periodic boundary conditions $f(0) = f(1)$ and $f'(0) = f'(1)$.
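A quick numerical sanity check of the decomposition and the boundary behaviour (just a sketch with an arbitrary smooth periodic test function, not part of the solver):

    # Check the even/odd decomposition of a periodic test function and the
    # resulting Neumann/Dirichlet boundary behaviour at x = 0 and x = 1/2.
    import numpy as np

    N = 200
    x = np.arange(N) / N                       # grid on [0, 1)

    def f(x):
        # an arbitrary smooth function with period 1
        return 1.0 + 0.3 * np.sin(2 * np.pi * x) + 0.2 * np.cos(4 * np.pi * x + 0.7)

    fe = 0.5 * (f(x) + f(1.0 - x))             # even part: f(-x) = f(1-x) by periodicity
    fo = 0.5 * (f(x) - f(1.0 - x))             # odd part

    # the parts reconstruct f everywhere
    assert np.allclose(fe + fo, f(x))

    # the odd part vanishes at x = 0 and x = 1/2 (Dirichlet) ...
    assert abs(fo[0]) < 1e-12 and abs(fo[N // 2]) < 1e-12

    # ... and the even part has zero slope there (Neumann), up to roundoff
    dfe = (np.roll(fe, -1) - np.roll(fe, 1)) * (N / 2.0)   # centered, periodic
    assert abs(dfe[0]) < 1e-6 and abs(dfe[N // 2]) < 1e-6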

For $U$, this is easy:

$$ \dot{U}_e = -\gamma U_e + D U''_e + s \rho_e \\ \dot{U}_o = -\gamma U_o + D U''_o + s \rho_o $$
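To spell out why the linear equation splits so cleanly:

$$ \dot{U}_e(x) = \frac{1}{2}\left(\dot{U}(x) + \dot{U}(1-x)\right) = -\gamma U_e(x) + \frac{D}{2}\left(\left.U''\right|_{x} + \left.U''\right|_{1-x}\right) + s\rho_e(x) $$

and $\frac{1}{2}\left(\left.U''\right|_{x} + \left.U''\right|_{1-x}\right) = \frac{d^2}{dx^2}\,\frac{1}{2}\left(U(x) + U(1-x)\right) = U''_e(x)$, because the two sign flips from differentiating $U(1-x)$ twice cancel. The odd equation follows the same way, with minus signs.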

For $\rho$, the nonlinear term $\left(\rho V'\left(\rho,U\right)\right)'$ makes life more difficult, but we can still do it using the definition of $\rho_e$:

$$ \begin{align} \dot{\rho}_e &= \frac{\sigma^2}{2}\rho''_e + \frac{1}{2}\left(\left.\left(\rho V'\left(\rho,U\right)\right)'\right|_x + \left.\left(\rho V'\left(\rho,U\right)\right)'\right|_{1-x}\right) \\ &= \frac{\sigma^2}{2}\rho''_e + \frac{1}{2}\left(\left(\rho(x) V'\left(\rho(x),U(x)\right)\right)' + \left(\rho(1-x) V'\left(\rho(1-x),U(1-x)\right)\right)'\right) \end{align} $$

I'm being sloppy here: I have made no attempt to get the signs of the derivatives right. Obviously that has to be done correctly when you code it. The next step, which leads to a very messy-looking expression I'm not going to attempt to write, is to substitute

$$ \rho(x) = \rho_e(x) + \rho_o(x) \\ \rho(1-x) = \rho_e(x) - \rho_o(x) $$

(and likewise $U(x) = U_e(x) + U_o(x)$, $U(1-x) = U_e(x) - U_o(x)$).

At this point you have a PDE for $\rho_e$ in terms of $\rho_e$, $\rho_o$, $U_e$, and $U_o$. A similar exercise produces a PDE for $\rho_o$. And of course in two or three dimensions (where there are four or eight component functions) the PDE looks much worse.
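If you actually want to look at the messy expression rather than coding the substitution directly, a computer algebra system will generate it for you. Here's a sketch with sympy (the symbol names are mine, and $V$ is left as an unspecified function of two arguments). Note that for this particular term the reflection $x \to 1-x$ produces two derivative sign flips, one for each spatial derivative, and they cancel:

    # Sketch: let sympy produce the substituted (messy) even and odd parts of
    # the nonlinear term.  Symbol names are mine; V is an unspecified function.
    import sympy as sp

    x = sp.symbols('x')
    rho_e, rho_o, U_e, U_o = [sp.Function(n)(x)
                              for n in ('rho_e', 'rho_o', 'U_e', 'U_o')]
    V = sp.Function('V')

    # rho and U at x and at 1-x, written in terms of the even/odd parts
    rho_p, U_p = rho_e + rho_o, U_e + U_o      # rho(x),   U(x)
    rho_m, U_m = rho_e - rho_o, U_e - U_o      # rho(1-x), U(1-x)

    # The nonlinear term (rho * (V(rho,U))')' evaluated at x and at 1-x.
    # Differentiating with respect to x throughout is legitimate because the
    # two sign flips from the reflection cancel in a term with two derivatives.
    term_p = sp.diff(rho_p * sp.diff(V(rho_p, U_p), x), x)
    term_m = sp.diff(rho_m * sp.diff(V(rho_m, U_m), x), x)

    N_e = sp.simplify((term_p + term_m) / 2)   # even part of the nonlinear term
    N_o = sp.simplify((term_p - term_m) / 2)   # odd part
    print(N_e)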

Here's the code that does this. (It's less painful than you might expect, or at least less painful than I expected.)

In [ ]:
            self.Vsds = [self.V(Usd, rhosd) for Usd,rhosd in
                         zip(self.Usds, self.rhosds)]
            #
            # I may need to adjust the signs of the subdomain vs by
            # the symmetries of the combinations
            #
            self.vsds = [-ufl.grad(Vsd) - (
                self.s2*ufl.grad(rhosd)/ufl.max_value(rhosd, self.rho_min)
            ) for Vsd,rhosd in zip(self.Vsds, self.rhosds)]
            self.fluxsds = [vsd * rhosd for vsd,rhosd in
                            zip(self.vsds, self.rhosds)]
            #
            # vn is the positive part of v.n on each facet; the facet
            # flux then combines the outflow from each side (positive
            # normal velocity times density, with the density clipped
            # at zero)
            #
            self.vnsds = [ufl.max_value(ufl.dot(vsd, self.n), 0)
                          for vsd in self.vsds]
            self.facet_fluxsds = [(
                vnsd('+')*ufl.max_value(rhosd('+'), 0.0) -
                vnsd('-')*ufl.max_value(rhosd('-'), 0.0)
            ) for vnsd,rhosd in zip(self.vnsds, self.rhosds)]
            #
            # Now combine the subdomain fluxes to get the fluxes for
            # the symmetrized functions
            #
            self.fluxs = matmul((2.0**-self.dim)*self.eomat,
                                self.fluxsds)
            self.facet_fluxs = matmul((2.0**-self.dim)*self.eomat,
                                      self.facet_fluxsds)
            #
            # Weak-form pieces: the facet-flux jump term over interior
            # facets (dS) and the volume term pairing each flux with
            # grad(wrho) over the cells (dx)
            #
            self.rho_flux_jump = vectotal(
                [-facet_flux*ufl.jump(wrho)*self.dS
                 for facet_flux,wrho in
                 zip(self.facet_fluxs, self.wrhos)]
            )
            self.rho_grad_move = vectotal(
                [ufl.dot(flux, ufl.grad(wrho))*self.dx
                 for flux,wrho in
                 zip(self.fluxs, self.wrhos)]
            )

This also answers Nate's question about error propagation. If you look carefully at the PDEs for the odd and even components, you will convince yourself that the calculation done in this case is exactly the calculation that would be done to solve the original PDEs on $[0,1]$. For the nonlinear term, each of the $\rho_e$ and $\rho_o$ PDEs contains two (or four, or eight) terms that, for any given $x\in\left[0,\frac{1}{2}\right]$, correspond to the calculation that would have been done to solve the original PDE in the appropriate half (or corner) of the full domain $[0,1]$.

It is true that the odd functions can be small in absolute value and may be effectively calculated as the difference between nearly equal large numbers. They may therefore lose relative precision. This doesn't reduce the precision of the reconstructed periodic functions. (It can be a minor annoyance in that the timestepper may think it needs to take small steps to get the desired relative precision in the odd functions.)
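A toy illustration of that last point (not taken from the solver):

    # Toy illustration: the odd part can lose relative precision without
    # hurting the reconstructed periodic function.
    big = 1.0e8                      # large symmetric (even) background
    tiny = 1.0e-8                    # small genuine asymmetry

    f_x = big + tiny                 # f(x)
    f_1mx = big - tiny               # f(1-x)

    f_e = 0.5 * (f_x + f_1mx)        # accurate to full precision
    f_o = 0.5 * (f_x - f_1mx)        # difference of nearly equal numbers:
                                     # few, if any, correct digits
    print(abs(f_o - tiny) / tiny)            # large relative error in the odd part
    print(abs((f_e + f_o) - f_x) / f_x)      # reconstruction is still accurate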