torch/nn: What's the canonical way to multiply by a constant matrix?


nn.MM requires a table argument containing the matrices to be multiplied. In my case, one of the matrices is the output of a defined model (e.g. an nn.Sequential), and the other is a constant matrix. How can I inject a constant into nn's pipeline, and should I be worried that the optimizer will start changing it if I do?

I'm aware that I could solve the injection problem by:

  1. Writing my own nn.Module. This seems heavy-handed.
  2. Breaking the model into two parts and manually injecting the constant. I want the model to be an nn.Module subclass that gets called as :forward(input) and allows consumers to be blissfully ignorant of the constant's existence.
  3. Using nn.ParallelTable, but that exposes the constant to the model's consumers.
  4. Using nn.Linear with no bias and overwriting the weights. I'm not sure how to prevent the optimizer from performing an update on them.

You can create an nn.Linear and override its :accGradParameters with a no-op function:

    m = nn.Linear(100, 200)
    -- copy your constant weights / bias into m.weight / m.bias
    m.accGradParameters = function() end
    -- m is now a constant multiplier
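Putting it together, a minimal sketch of embedding such a constant layer inside an nn.Sequential, so that consumers only ever call :forward(input) (the sizes and the `const` matrix here are illustrative):

    require 'nn'

    -- the constant matrix we want to multiply by (200x100, illustrative)
    local const = torch.randn(200, 100)

    local m = nn.Linear(100, 200)
    m.weight:copy(const)                  -- install the constant
    m.bias:zero()                         -- no additive term
    m.accGradParameters = function() end  -- gradients are never accumulated

    -- consumers see one ordinary module and stay ignorant of the constant
    local model = nn.Sequential()
    model:add(nn.Linear(50, 100))         -- the trainable part
    model:add(m)                          -- the constant multiply

    local out = model:forward(torch.randn(50))

One caveat: m.weight and m.bias still show up in model:getParameters(), so optimizer features that modify parameters without going through accumulated gradients (e.g. weight decay in optim.sgd) may still change them; with plain gradient steps they stay fixed because their gradients remain zero.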