
[TVM] Adding a New Relay Operation

Official Guide: https://tvm.apache.org/docs/dev/relay_add_op.html

While going through the official guide, it helps to have a bit more context; these notes collect some of it.

Attrs and Type Relation

Attrs declare the compile-time attributes of the operator (axes, strides, layouts, and so on) and define the user-facing interface for configuring it.

Type relations are expressed as functions: given the types of the inputs and the operator's attrs, the relation checks them and deduces the output type, which allows flexible typing behavior such as broadcasting.
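To see what a type relation does in practice, here is a small runnable snippet using the existing relay.add op, whose broadcast type relation infers the output shape from the two input shapes:

```python
import tvm
from tvm import relay

# relay.add's broadcast type relation deduces the output type from the inputs.
x = relay.var("x", shape=(1, 3), dtype="float32")
y = relay.var("y", shape=(4, 3), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x, y], relay.add(x, y)))
mod = relay.transform.InferType()(mod)
print(mod)  # the inferred output type is Tensor[(4, 3), float32]
```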

Defining Compute / Strategy of the Operation

The operator's compute can be defined in several ways, e.g., with te.compute or directly in TIR.
I want to see how the weights are defined; this is easier to observe in operations like convolution or matmul, where the weight enters the compute as just another input tensor.
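As a rough illustration (not the actual TOPI definition), a dense/matmul-style compute in TE looks like the following; the weight is simply another placeholder input, on the same footing as the data:

```python
import tvm
from tvm import te

# Illustrative dense-style compute: out[i, j] = sum_k data[i, k] * weight[j, k]
M, K, N = te.var("M"), te.var("K"), te.var("N")
data = te.placeholder((M, K), name="data")
weight = te.placeholder((N, K), name="weight")  # weights are ordinary input tensors
k = te.reduce_axis((0, K), name="k")
out = te.compute(
    (M, N),
    lambda i, j: te.sum(data[i, k] * weight[j, k], axis=k),
    name="out",
)
s = te.create_schedule(out.op)
print(tvm.lower(s, [data, weight, out], simple_mode=True))
```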

Importantly, to lower Relay operators to the implementations defined in the TOPI library, a compute function and a schedule function need to be registered for each Relay operator. These are usually specialized per target, which is what the operator strategy mechanism handles. It is also important to provide a schedule (or a tunable template) so that AutoTVM or AutoScheduler can optimize the operation; a sketch of such a registration follows below.

https://tvm.apache.org/docs/dev/relay_op_strategy.html

refer to schedule_conv3d_winograd_weight_transform in python/tvm/topi/generic/nn.py#L209-243
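A hedged sketch of such a registration for a hypothetical operator (the op name "my_op", its compute, and its schedule below are placeholders, not real TOPI entries, and the op itself is assumed to have already been registered following the guide):

```python
from tvm import te, topi
from tvm.relay.op import op as _op
from tvm.target import override_native_generic_func

# Placeholder compute: fcompute(attrs, inputs, out_type) -> list of output tensors
def my_op_compute(attrs, inputs, out_type):
    return [topi.add(inputs[0], inputs[1])]

@override_native_generic_func("my_op_strategy")
def my_op_strategy(attrs, inputs, out_type, target):
    strategy = _op.OpStrategy()
    strategy.add_implementation(
        my_op_compute,
        # fschedule(attrs, outs, target) -> schedule; a real op would dispatch
        # to a target-specific TOPI schedule here instead of a default one
        lambda attrs, outs, target: te.create_schedule([o.op for o in outs]),
        name="my_op.generic",
    )
    return strategy

# Assumes "my_op" was already registered as a Relay op (the C++ side of the guide).
_op.register_strategy("my_op", my_op_strategy)
```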

The remaining steps are about creating the Python hooks and related glue, so let's skip them here.

[Deep Learning] Self-Supervised Learning

The motivation is quite straightforward. Producing a dataset with clean labels is expensive but unlabeled data is being generated all the time. To make use of this much larger amount of unlabeled data, one way is to set the learning objectives properly so as to get supervision from the data itself.

The self-supervised task, also known as pretext task, guides us to a supervised loss function. However, we usually don’t care about the final performance of this invented task. Rather we are interested in the learned intermediate representation with the expectation that this representation can carry good semantic or structural meanings and can be beneficial to a variety of practical downstream tasks.

Broadly speaking, all generative models can be considered self-supervised, but with different goals: generative models focus on creating diverse and realistic images, while self-supervised representation learning cares about producing good features that are generally helpful across many tasks.

Image-Based

Many ideas have been proposed for self-supervised representation learning on images. A common workflow is to train a model on one or multiple pretext tasks with unlabeled images and then use one intermediate feature layer of this model to train a multinomial logistic regression classifier on ImageNet classification. The final classification accuracy quantifies how good the learned representation is.
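As a rough PyTorch sketch of that linear-evaluation protocol (the tiny encoder here is just a stand-in for a backbone pretrained on a pretext task, and the data is random):

```python
import torch
import torch.nn as nn

# Stand-in for a backbone pretrained on a pretext task.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 128), nn.ReLU())
for p in encoder.parameters():
    p.requires_grad = False  # freeze the features; only the linear head is trained

linear_probe = nn.Linear(128, 1000)  # multinomial logistic regression head
optimizer = torch.optim.SGD(linear_probe.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 32, 32)      # dummy batch
labels = torch.randint(0, 1000, (8,))   # dummy ImageNet-style labels
with torch.no_grad():
    features = encoder(images)          # frozen intermediate representation
loss = criterion(linear_probe(features), labels)
loss.backward()
optimizer.step()
```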

Generative Modeling

The pretext task in generative modeling is to reconstruct the original input while learning a meaningful latent representation.
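A minimal autoencoder sketch of this idea (toy sizes, random data): the reconstruction loss is the pretext objective, and the encoder output is the latent representation we actually keep.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU())
decoder = nn.Sequential(nn.Linear(64, 28 * 28), nn.Sigmoid())
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

x = torch.rand(16, 1, 28, 28)             # dummy images in [0, 1]
z = encoder(x)                            # latent representation
x_hat = decoder(z).view_as(x)             # reconstruction of the input
loss = nn.functional.mse_loss(x_hat, x)   # reconstruction objective
loss.backward()
optimizer.step()
```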

Contrastive Learning

Contrastive Predictive Coding (CPC) is an approach for unsupervised learning from high-dimensional data by translating a generative modeling problem to a classification problem.
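The classification flavor comes from the InfoNCE loss. Below is a minimal in-batch sketch of that objective (not CPC's exact setup, which also uses an autoregressive context network over sequences of patches or audio frames): each query must identify its positive key among the other samples in the batch.

```python
import torch
import torch.nn.functional as F

def info_nce(query, keys, temperature=0.1):
    # query: (B, D) context embeddings; keys: (B, D), where keys[i] is the
    # positive for query[i] and every other row serves as a negative.
    query = F.normalize(query, dim=-1)
    keys = F.normalize(keys, dim=-1)
    logits = query @ keys.t() / temperature   # (B, B) similarity scores
    labels = torch.arange(query.size(0))      # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

loss = info_nce(torch.randn(32, 128), torch.randn(32, 128))
```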