
[Project] torchfunc: PyTorch functions to improve performance, analyse and make your deep learning life easier.

Hi guys,

I’d like to share another PyTorch-related project that some of you will (hopefully) find helpful and interesting. Here is the GitHub repository and here is the documentation.

Also, if you have any suggestions, improvements, or questions for this project, I will be glad to help. Thanks!

Now, the description (taken from the project’s readme with minor adjustments):

What is it?

torchfunc is a library revolving around PyTorch, with the goal of helping you:

  • Improve and analyse the performance of your neural network (e.g. Tensor Cores compatibility)
  • Record and analyse the internal state of a torch.nn.Module as data passes through it
  • Do the above based on external conditions (using a single Callable to specify them)
  • Handle day-to-day neural network duties (model size, seeding, performance measurements, etc.)
  • Get information about your host operating system, CUDA devices, and more

Quick examples

Get instant performance tips about your module. All of the problems described by the comments below will be reported when you run the snippet:

import torch
import torchfunc

class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.convolution = torch.nn.Sequential(
            torch.nn.Conv2d(1, 32, 3),
            torch.nn.ReLU(inplace=True),  # Inplace may harm kernel fusion
            torch.nn.Conv2d(32, 128, 3, groups=32),  # Depthwise is slower in PyTorch
            torch.nn.ReLU(inplace=True),  # Same as before
            torch.nn.Conv2d(128, 250, 3),  # Wrong output size for TensorCores
        )
        self.classifier = torch.nn.Sequential(
            torch.nn.Linear(250, 64),  # Wrong input size for TensorCores
            torch.nn.ReLU(),  # Fine, no info about this layer
            torch.nn.Linear(64, 10),  # Wrong output size for TensorCores
        )

    def forward(self, inputs):
        convolved = torch.nn.AdaptiveAvgPool2d(1)(self.convolution(inputs)).flatten()
        return self.classifier(convolved)

# All you have to do
print(torchfunc.performance.tips(Model()))
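The sizing comments above come from Tensor Cores preferring channel and feature dimensions that are multiples of 8 (under FP16/mixed-precision training). A minimal sketch of the same architecture with the offending sizes padded; the 250→256 and 10→16 choices are my illustrative assumptions, not torchfunc recommendations:

```python
import torch

# Same architecture, with sizes padded to multiples of 8 so matrix
# shapes can map onto Tensor Cores (relevant when training in FP16/AMP).
class PaddedModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.convolution = torch.nn.Sequential(
            torch.nn.Conv2d(1, 32, 3),
            torch.nn.ReLU(),              # not inplace, keeps fusion options open
            torch.nn.Conv2d(32, 128, 3),  # plain convolution instead of depthwise
            torch.nn.ReLU(),
            torch.nn.Conv2d(128, 256, 3),  # 250 -> 256, multiple of 8
        )
        self.classifier = torch.nn.Sequential(
            torch.nn.Linear(256, 64),
            torch.nn.ReLU(),
            torch.nn.Linear(64, 16),  # 10 -> 16; slice logits[:, :10] for 10 classes
        )

    def forward(self, inputs):
        pooled = torch.nn.AdaptiveAvgPool2d(1)(self.convolution(inputs))
        return self.classifier(pooled.flatten(start_dim=1))

logits = PaddedModel()(torch.randn(2, 1, 28, 28))
print(logits.shape)  # torch.Size([2, 16])
```

With the padded head you would train and predict on the first 10 logits only; the extra columns exist purely to keep the GEMM shapes Tensor Core friendly.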

Seed globally (including numpy and CUDA), freeze weights, and check inference time and model size:

import torch
import torchfunc

# Inb4 MNIST, you can use any module with those functions
model = torch.nn.Linear(784, 10)
torchfunc.seed(0)
frozen = torchfunc.module.freeze(model, bias=False)

with torchfunc.Timer() as timer:
    frozen(torch.randn(32, 784))
    print(timer.checkpoint())  # Time since the beginning
    frozen(torch.randn(128, 784))
    print(timer.checkpoint())  # Since last checkpoint

print(f"Overall time {timer}; Model size: {torchfunc.sizeof(frozen)}")
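To make the model-size figure concrete, here is a minimal plain-PyTorch sketch of the same measurement, summing the bytes of all parameters and buffers; this is my approximation, not necessarily how torchfunc computes it:

```python
import torch

def approx_sizeof(module: torch.nn.Module) -> int:
    """Rough in-memory size of a module: total bytes of parameters and buffers."""
    tensors = list(module.parameters()) + list(module.buffers())
    return sum(t.numel() * t.element_size() for t in tensors)

model = torch.nn.Linear(784, 10)
# 784 * 10 weights + 10 biases = 7850 float32 values * 4 bytes each
print(approx_sizeof(model))  # 31400
```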

Record and sum per-layer and per-neuron activation statistics as data passes through the network:

import torch
import torchfunc

# Still MNIST, but any module can be put in its place
model = torch.nn.Sequential(
    torch.nn.Linear(784, 100),
    torch.nn.ReLU(),
    torch.nn.Linear(100, 50),
    torch.nn.ReLU(),
    torch.nn.Linear(50, 10),
)

# Recorder which sums all inputs to layers
recorder = torchfunc.hooks.recorders.ForwardPre(reduction=lambda x, y: x + y)
# Record only for torch.nn.Linear
recorder.children(model, types=(torch.nn.Linear,))

# Train your network normally (or pass data through it)
...

# Activations of all neurons of first layer!
print(recorder[1])
# You can also post-process this data easily with apply
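The same summed-input recording can be sketched with PyTorch's built-in forward pre-hooks, which is roughly the machinery such a recorder abstracts away. This is my plain-PyTorch approximation under that assumption, not torchfunc's actual implementation:

```python
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(784, 100), torch.nn.ReLU(),
    torch.nn.Linear(100, 50), torch.nn.ReLU(),
    torch.nn.Linear(50, 10),
)

summed_inputs = {}  # layer index -> running sum of inputs seen by that layer

def make_hook(index):
    # Forward pre-hooks receive (module, inputs) before forward() runs.
    def hook(module, inputs):
        data = inputs[0].detach()
        summed_inputs[index] = summed_inputs.get(index, 0) + data
    return hook

# Register only on Linear children, mirroring recorder.children(..., types=...)
linear_layers = [m for m in model.children() if isinstance(m, torch.nn.Linear)]
for i, layer in enumerate(linear_layers):
    layer.register_forward_pre_hook(make_hook(i))

for _ in range(3):  # pass a few batches through the network
    model(torch.randn(4, 784))

# Inputs accumulated by the second Linear layer (i.e. post-ReLU activations
# of the first hidden layer), summed over the three batches
print(summed_inputs[1].shape)  # torch.Size([4, 100])
```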



Latest release:

pip install --user torchfunc

Nightly release:

pip install --user torchfunc-nightly

You could also check the project out with Docker; see the README for more info, as this post is getting long enough already.

BTW, there is also another project of mine, torchdata, revolving around data processing with PyTorch; it might be of interest to some of you as well (it was announced here a week ago, in case you missed it).

submitted by /u/szymonmaszke