Due 3/13/2025 before midnight via Learning Suite. 25 possible points.
We will use convolutional neural nets to perform super-resolution of coarse MRI data using the methodology in this paper. We'll focus only on the very first case (summarized in Figure 3). Read the Methodology section (Section II) and the Known Boundary Condition case (Section III.B.1), which is the case we'll reproduce. The goal is to reproduce Fig 3c (CNN).
You can download the data in these two files, sr_lfdata.npy and sr_hfdata.npy, and load them as follows:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
# load low resolution data, which serves as input to our model
lfdata = np.load("sr_lfdata.npy")
lfx = lfdata[0, :, :] # size 14 x 9 (height x width)
lfy = lfdata[1, :, :]
lfu = lfdata[4, :, :]
lfv = lfdata[5, :, :]
# plot the low resolution data (like fig 3a except we are using MRI noise here rather than Gaussian noise so it will look a bit different)
plt.figure()
plt.pcolormesh(lfx, lfy, np.sqrt(lfu**2 + lfv**2), cmap=cm.coolwarm, vmin=0.0, vmax=1.0)
plt.colorbar()
# load high resolution grids and mapping from low resolution to high resolution grid
hfdata = np.load("sr_hfdata.npy")
Jinv = hfdata[0, :, :] # size 77 x 49 (height x width)
dxdxi = hfdata[1, :, :]
dxdeta = hfdata[2, :, :]
dydxi = hfdata[3, :, :]
dydeta = hfdata[4, :, :]
hfx = hfdata[5, :, :]
hfy = hfdata[6, :, :]
ny, nx = hfx.shape #(77 x 49)
h = 0.01 # grid spacing in high fidelity (needed for derivatives)
plt.show()
I've also done (part of) the differentiation for you in the code below. The convolution filters shown in Appendix A aren't used here because they don't work at the boundaries of the grid. Instead, we use central finite differences in the interior and one-sided finite differences on the edges. The only difference from finite differencing you've done before is the larger stencil (5 points in the interior, 4 on the edges). Note that the code below takes derivatives in the transformed coordinates (xi, eta); you then need to apply the coordinate transformation in eq 10a/b using the tensors provided above (Jinv, which is written as 1/J in the text, along with dxdxi, dxdeta, dydxi, dydeta). Also note that the code assumes grids ordered as height x width (not 'x' by 'y'), to match the ordering of PyTorch convolution layers.
# see https://en.wikipedia.org/wiki/Finite_difference_coefficient
# or https://web.media.mit.edu/~crtaylor/calculator.html
# f should be a tensor of size: nbatch x nchannels x height (y or eta) x width (x or xi).
# This is written generally in case you had more data. For this problem there is only one
# sample and only a few channels, so it may be clearer to store each channel in its own
# variable, in which case you can drop the first two dimensions from everything below
# (i.e., the input is just height x width).
import torch

def ddxi(f, h):
    # interior: 4th-order central difference (5-pt stencil)
    dfdx_central = (f[:, :, :, 0:-4] - 8*f[:, :, :, 1:-3] + 8*f[:, :, :, 3:-1] - f[:, :, :, 4:]) / (12*h)
    # edges: one-sided difference (4-pt stencil)
    dfdx_left = (-11*f[:, :, :, 0:2] + 18*f[:, :, :, 1:3] - 9*f[:, :, :, 2:4] + 2*f[:, :, :, 3:5]) / (6*h)
    dfdx_right = (-2*f[:, :, :, -5:-3] + 9*f[:, :, :, -4:-2] - 18*f[:, :, :, -3:-1] + 11*f[:, :, :, -2:]) / (6*h)
    return torch.cat((dfdx_left, dfdx_central, dfdx_right), dim=3)

def ddeta(f, h):
    # interior: 4th-order central difference (5-pt stencil)
    dfdy_central = (f[:, :, 0:-4, :] - 8*f[:, :, 1:-3, :] + 8*f[:, :, 3:-1, :] - f[:, :, 4:, :]) / (12*h)
    # edges: one-sided difference (4-pt stencil)
    dfdy_bot = (-11*f[:, :, 0:2, :] + 18*f[:, :, 1:3, :] - 9*f[:, :, 2:4, :] + 2*f[:, :, 3:5, :]) / (6*h)
    dfdy_top = (-2*f[:, :, -5:-3, :] + 9*f[:, :, -4:-2, :] - 18*f[:, :, -3:-1, :] + 11*f[:, :, -2:, :]) / (6*h)
    return torch.cat((dfdy_bot, dfdy_central, dfdy_top), dim=2)
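To get Cartesian derivatives on the physical grid, you combine the xi/eta derivatives with eq 10a/b. Here is a sanity-check sketch in NumPy, with the batch/channel dimensions dropped as the comment above suggests. The form of eq 10a/b assumed here is the standard curvilinear chain rule, so verify it against the paper before using it:

```python
import numpy as np

def ddxi2d(f, h):
    # same stencils as ddxi above, for a plain 2-D (height x width) array
    c = (f[:, 0:-4] - 8*f[:, 1:-3] + 8*f[:, 3:-1] - f[:, 4:]) / (12*h)
    l = (-11*f[:, 0:2] + 18*f[:, 1:3] - 9*f[:, 2:4] + 2*f[:, 3:5]) / (6*h)
    r = (-2*f[:, -5:-3] + 9*f[:, -4:-2] - 18*f[:, -3:-1] + 11*f[:, -2:]) / (6*h)
    return np.concatenate((l, c, r), axis=1)

def ddeta2d(f, h):
    # differentiate along the height (eta) axis by transposing
    return ddxi2d(f.T, h).T

def ddx(f, h, Jinv, dydxi, dydeta):
    # eq 10a (assumed standard form): df/dx = (1/J)(df/dxi * dy/deta - df/deta * dy/dxi)
    return Jinv * (ddxi2d(f, h) * dydeta - ddeta2d(f, h) * dydxi)

def ddy(f, h, Jinv, dxdxi, dxdeta):
    # eq 10b (assumed standard form): df/dy = (1/J)(df/deta * dx/dxi - df/dxi * dx/deta)
    return Jinv * (ddeta2d(f, h) * dxdxi - ddxi2d(f, h) * dxdeta)
```

The same arithmetic works unchanged on torch tensors, which is what you need in the training loop so that autograd can backpropagate through the derivatives.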
Tips:
The bottom edge is the inlet, with conditions \(u = 0, v = 1, dp/d\eta = 0\), which can be enforced directly on the network output:
u[0, :] = 0; v[0, :] = 1; p[0, :] = p[1, :]
The latter forces the pressure gradient to be zero at the inlet (which just means the inlet is at some unknown constant pressure). The left and right edges are walls with conditions \(u = 0, v = 0, dp/d\xi = 0\) (the latter is a result from incompressible boundary layer theory). At the top (outlet) we set \(du/d\eta = 0, dv/d\eta = 0, p = 0\) (the outlet pressure is unknown, but pressure is only defined relative to a reference point, so we arbitrarily choose the outlet as the zero reference).

In the case we are analyzing, the only loss comes from the physics residuals shown in eq (4). I've expanded them in Cartesian coordinates for you below (and also substituted in the density and viscosity for this particular problem).
\[\frac{du}{dx} + \frac{dv}{dy} = 0\\ u\frac{du}{dx} + v\frac{du}{dy} + \frac{dp}{dx} - 0.01 \left(\frac{d^2u}{dx^2} + \frac{d^2u}{dy^2}\right) = 0\\ u\frac{dv}{dx} + v\frac{dv}{dy} + \frac{dp}{dy} - 0.01 \left(\frac{d^2v}{dx^2} + \frac{d^2v}{dy^2}\right) = 0\]
Once trained, plot the super-resolved velocity magnitude on the high-resolution grid (this should look like Fig 3c):
plt.figure()
plt.pcolormesh(hfx, hfy, np.sqrt(u**2 + v**2), cmap=cm.coolwarm, vmin=0.0, vmax=1.3)
plt.colorbar()
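As a check on your residual implementation, here is a hedged sketch of assembling the expanded residuals above into a loss. np.gradient is used here only as a simple stand-in for the Cartesian derivatives; in the actual assignment you should build them from the ddxi/ddeta stencils plus eq 10a/b, and keep everything in torch so gradients flow back to the network:

```python
import numpy as np

def ns_residuals(u, v, p, h, nu=0.01):
    # stand-in Cartesian derivatives; np.gradient returns (d/dy, d/dx) for 2-D arrays
    dudy, dudx = np.gradient(u, h)
    dvdy, dvdx = np.gradient(v, h)
    dpdy, dpdx = np.gradient(p, h)
    d2udy2 = np.gradient(dudy, h, axis=0); d2udx2 = np.gradient(dudx, h, axis=1)
    d2vdy2 = np.gradient(dvdy, h, axis=0); d2vdx2 = np.gradient(dvdx, h, axis=1)
    r_cont = dudx + dvdy                                        # continuity
    r_u = u*dudx + v*dudy + dpdx - nu*(d2udx2 + d2udy2)         # x-momentum
    r_v = u*dvdx + v*dvdy + dpdy - nu*(d2vdx2 + d2vdy2)         # y-momentum
    return r_cont, r_u, r_v

def physics_loss(u, v, p, h):
    # mean-squared residuals; the only loss term in the known-BC case
    r_c, r_u, r_v = ns_residuals(u, v, p, h)
    return np.mean(r_c**2) + np.mean(r_u**2) + np.mean(r_v**2)
```

A quick sanity check: a uniform flow (u = 0, v = 1, p = 0) satisfies all three equations exactly, so its loss should be zero.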