gnnwr.networks module
- class gnnwr.networks.STNN_SPNN(STNN_insize: int, STNN_outsize, SPNN_insize: int, SPNN_outsize, activate_func=ReLU())[source]
Bases:
Module
STNN_SPNN is a neural network with dense layers, which calculates the spatial proximity and the temporal proximity of two nodes at the same time. Each layer of STNN and SPNN is structured as:
fully connected layer -> activation function
- Parameters:
STNN_insize (int) – input size of STNN (must be positive)
STNN_outsize (int) – output size of STNN (must be positive)
SPNN_insize (int) – input size of SPNN (must be positive)
SPNN_outsize (int) – output size of SPNN (must be positive)
activate_func (torch.nn.functional) – activation function (default: nn.ReLU())
- forward(input1)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
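The per-branch pattern described above (fully connected layer followed by an activation) can be sketched in plain PyTorch. This is an illustrative re-implementation, not the library's actual code; in particular, the way `forward` splits its input between the STNN and SPNN branches is an assumption.

```python
import torch
import torch.nn as nn

class STNN_SPNN_Sketch(nn.Module):
    """Illustrative sketch: two parallel dense branches, one for
    temporal proximity (STNN) and one for spatial proximity (SPNN)."""
    def __init__(self, stnn_insize, stnn_outsize, spnn_insize, spnn_outsize,
                 activate_func=nn.ReLU()):
        super().__init__()
        self.stnn_insize = stnn_insize
        # Each branch is: fully connected layer -> activation function
        self.stnn = nn.Sequential(nn.Linear(stnn_insize, stnn_outsize), activate_func)
        self.spnn = nn.Sequential(nn.Linear(spnn_insize, spnn_outsize), activate_func)

    def forward(self, x):
        # Assumption: the input concatenates temporal then spatial features.
        t, s = x[..., :self.stnn_insize], x[..., self.stnn_insize:]
        return torch.cat([self.stnn(t), self.spnn(s)], dim=-1)

net = STNN_SPNN_Sketch(3, 4, 2, 4)
out = net(torch.randn(8, 5))   # batch of 8: 3 temporal + 2 spatial features
print(out.shape)               # torch.Size([8, 8])
```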
- class gnnwr.networks.STPNN(dense_layer, insize, outsize, drop_out=0.2, activate_func=ReLU(), batch_norm=False)[source]
Bases:
Module
STPNN is a neural network with dense layers, which calculates the spatial and temporal proximity of two nodes. Each layer of STPNN is structured as:
fully connected layer -> batch normalization layer -> activation function -> dropout layer
- Parameters:
dense_layer (list) – a list of dense layers of the neural network
insize (int) – input size of the neural network (must be positive)
outsize (int) – output size of the neural network (must be positive)
drop_out (float) – dropout rate (default: 0.2)
activate_func (torch.nn.functional) – activation function (default: nn.ReLU())
batch_norm (bool) – whether to use batch normalization (default: False)
- forward(x)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
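The layer stack described for STPNN (fully connected -> batch normalization -> activation -> dropout, repeated per entry of `dense_layer`) can be sketched as follows. This is a hedged sketch of the documented pattern, not the library's code, and the final output projection is an assumption.

```python
import torch
import torch.nn as nn

def build_stpnn_stack(dense_layer, insize, outsize, drop_out=0.2,
                      activate_func=nn.ReLU(), batch_norm=False):
    """Illustrative sketch of the STPNN per-layer pattern."""
    layers, prev = [], insize
    for width in dense_layer:
        layers.append(nn.Linear(prev, width))        # fully connected layer
        if batch_norm:
            layers.append(nn.BatchNorm1d(width))     # batch normalization layer
        layers.append(activate_func)                 # activation function
        layers.append(nn.Dropout(drop_out))          # dropout layer
        prev = width
    layers.append(nn.Linear(prev, outsize))          # assumed final projection
    return nn.Sequential(*layers)

stpnn = build_stpnn_stack([16, 8], insize=4, outsize=1, batch_norm=True)
y = stpnn(torch.randn(32, 4))
print(y.shape)    # torch.Size([32, 1])
```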
- class gnnwr.networks.SWNN(dense_layer=None, insize=-1, outsize=-1, drop_out=0.2, activate_func=PReLU(num_parameters=1), batch_norm=True)[source]
Bases:
Module
SWNN is a neural network with dense layers, which calculates the spatial and temporal weights of features. Each layer of SWNN is structured as:
fully connected layer -> batch normalization layer -> activation function -> dropout layer
- Parameters:
dense_layer (list) – a list of dense layers of the neural network
insize (int) – input size of the neural network (must be positive)
outsize (int) – output size of the neural network (must be positive)
drop_out (float) – dropout rate (default: 0.2)
activate_func (torch.nn.functional) – activation function (default: nn.PReLU(init=0.1))
batch_norm (bool) – whether to use batch normalization (default: True)
- forward(x)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
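SWNN follows the same per-layer pattern as STPNN but with different defaults: nn.PReLU(init=0.1) as the activation and batch normalization enabled. A minimal sketch of one such layer with those defaults:

```python
import torch
import torch.nn as nn

# One SWNN-style layer with the documented defaults:
# fully connected -> batch normalization -> PReLU -> dropout
layer = nn.Sequential(
    nn.Linear(8, 8),
    nn.BatchNorm1d(8),
    nn.PReLU(init=0.1),
    nn.Dropout(0.2),
)
w = layer(torch.randn(16, 8))   # spatiotemporal weights for a batch of 16
print(w.shape)                  # torch.Size([16, 8])
```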
- gnnwr.networks.default_dense_layer(insize, outsize)[source]
Generate default dense layers for a neural network.
- Parameters:
insize (int) – input size of the neural network
outsize (int) – output size of the neural network
- Returns:
dense_layer – a list of dense layers of the neural network
- Return type:
list
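One plausible reading of this helper is that it derives a list of hidden-layer widths from the input and output sizes. The sketch below assumes the widths halve from the input size down toward the output size; the library's actual rule may differ, so treat this purely as an illustration of the return type.

```python
def default_dense_layer_sketch(insize, outsize):
    """Hypothetical sketch: halve the width until it would drop to
    (or below) the output size. gnnwr's real rule may differ."""
    dense_layer = []
    size = insize // 2
    while size > outsize:
        dense_layer.append(size)
        size //= 2
    return dense_layer

print(default_dense_layer_sketch(64, 2))   # [32, 16, 8, 4]
```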
- gnnwr.networks.weight_share(model, x, output_size)[source]
weight_share is a function that calculates the output of a neural network with weight sharing.
- Parameters:
model (torch.nn.Module) – neural network with weight sharing
x (torch.Tensor) – input of the neural network
output_size (int) – output size of the neural network
- Returns:
output – output of neural network
- Return type:
torch.Tensor
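Weight sharing here means applying one network, with a single set of parameters, to many sub-inputs (e.g. many node pairs). The sketch below shows what such a helper might do, assuming the input stacks the sub-inputs along an extra dimension; the actual reshaping inside gnnwr is an assumption.

```python
import torch
import torch.nn as nn

def weight_share_sketch(model, x, output_size):
    """Apply the same model to every sub-input, reusing its weights.
    x: (batch, n_pairs, features) -> output: (batch, n_pairs * output_size)"""
    batch, n_pairs, feat = x.shape
    out = model(x.reshape(batch * n_pairs, feat))     # one shared forward pass
    return out.reshape(batch, n_pairs * output_size)

model = nn.Linear(4, 1)                # same weights used for every pair
x = torch.randn(8, 10, 4)              # 8 samples, 10 node pairs, 4 features
y = weight_share_sketch(model, x, output_size=1)
print(y.shape)                         # torch.Size([8, 10])
```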