14th March 2017

SNIPPET

Minimal example of defining a custom Torch module on a custom data structure. The example wraps two Torch tensors in a simple data structure and implements a linear nn.Module operating on it. While the backward pass is not implemented, the example illustrates how Torch can be extended for deep learning on custom data structures.

custom_data_structure.lua
-- Small example to test forward and backward passes on custom data structures.
 
require('math')
require('torch')
require('nn')
 
-- (1) CustomDataStructure will wrap two Torch tensors as x1 and x2.
-- We will then define an nn module on this data structure.
-- The below definition of the data structure is straightforward.
--- @class CustomDataStructure
-- This class will be our simple test data structure.
CustomDataStructure = {}
CustomDataStructure.__index = CustomDataStructure
 
--- Creates a new CustomDataStructure with 0-vectors of the given size
-- @param n vector size
function CustomDataStructure.create(n)
  local cds = {}
  setmetatable(cds, CustomDataStructure)
  cds.x1 = torch.Tensor(n):fill(0)
  cds.x2 = torch.Tensor(n):fill(0)
  return cds
end
 
-- (2) CustomLinear will implement a linear, fully connected layer which
-- computes a linear operation on both x1 and x2 simultaneously using
-- the same parameters. It expects a CustomDataStructure as input and
-- also returns one. Note that the module looks like any other Torch module.
-- The backward pass is left unimplemented, but would work analogously to the forward pass!
--- @class CustomLinear
CustomLinear, CustomLinearParent = torch.class('nn.CustomLinear', 'nn.Module')
 
--- Initialize the layer specifying the number of input and output units.
-- @param nInputUnits number of input units
-- @param nOutputUnits number of output units
function CustomLinear:__init(nInputUnits, nOutputUnits)
  CustomLinearParent.__init(self)
  self.nInputUnits = nInputUnits
  self.nOutputUnits = nOutputUnits
  self.weight = torch.Tensor(nOutputUnits, nInputUnits):fill(0)
end
 
--- Compute output.
-- @param input input of type CustomDataStructure
-- @return output of type CustomDataStructure
function CustomLinear:updateOutput(input)
  self.output = CustomDataStructure.create(self.nOutputUnits)
  self.output.x1 = torch.mv(self.weight, input.x1)
  self.output.x2 = torch.mv(self.weight, input.x2)
  return self.output
end
 
--- Avoid the backward pass; it is not implemented in this example.
function CustomLinear:updateGradInput(input, gradOutput)
  assert(false, 'backward pass not implemented')
end
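 
-- A possible implementation is sketched below (commented out, as it is not
-- part of the tested snippet). Assuming gradOutput is also a
-- CustomDataStructure, the gradient with respect to the input of a linear
-- layer is weight^T * gradOutput, applied to x1 and x2 independently:
--
-- function CustomLinear:updateGradInput(input, gradOutput)
--   self.gradInput = CustomDataStructure.create(self.nInputUnits)
--   self.gradInput.x1 = torch.mv(self.weight:t(), gradOutput.x1)
--   self.gradInput.x2 = torch.mv(self.weight:t(), gradOutput.x2)
--   return self.gradInput
-- end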
 
-- (3) To test the module, a one-layer network is created
-- and a simple instantiation of CustomDataStructure is passed
-- through it.
N = 10
x1 = torch.Tensor(N):fill(0)
x2 = torch.Tensor(N):fill(1)
x = CustomDataStructure.create(N)
x.x1 = x1
x.x2 = x2
 
model = nn.Sequential()
module = nn.CustomLinear(10, 1)
module.weight = torch.Tensor(1, 10):fill(1)
model:add(module)
 
y = model:forward(x)
print(y.x1)
print(y.x2)
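
Given the all-ones 1×10 weight matrix, the script should print a one-element tensor containing 0 for y.x1 (since x1 is all zeros) and a one-element tensor containing 10 for y.x2 (since x2 is all ones).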

What is your opinion on the code snippet? Does it work for you? Let me know your thoughts in the comments below!