In recent research, few-shot generation models have attracted increasing interest in computer vision. They aim at generating more data for a given domain from only a few available training examples. Although many methods have been introduced to handle few-shot generation tasks, most of them are unstable during training and can only generate near-duplicate images with limited diversity. To alleviate these issues, we propose a novel few-shot generation method based on a classifier-free conditional diffusion model. Specifically, we train an autoencoder on seen categories, using adversarial training with a patch discriminator to achieve better reconstruction quality. Subsequently, for the k-shot task, we extract k image features and compute the conditional information that guides the training and generation of the diffusion model. To avoid the homogeneity of conditional information caused by the prototype model, we use a Feature Fusion Module (LFM) to learn diverse features. We conduct extensive experiments on three well-known datasets, and the results clearly demonstrate the effectiveness of the proposed method for few-shot image generation.
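As a rough illustration of how the k extracted support features could be fused into conditional information, the sketch below contrasts the mean prototype (whose homogeneity the abstract criticizes) with a small attention-based fusion module. The module's structure, names, and dimensions are assumptions for illustration only, not the paper's actual LFM.

```python
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    """Hypothetical stand-in for the paper's LFM: instead of collapsing
    the k support features into one mean prototype, a learned query
    attends over them, so the fused condition can weight different
    support images rather than always yielding the same average."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.query = nn.Parameter(torch.randn(1, 1, dim))

    def forward(self, feats):                  # feats: (batch, k, dim)
        q = self.query.expand(feats.size(0), -1, -1)
        fused, _ = self.attn(q, feats, feats)  # (batch, 1, dim)
        return fused.squeeze(1)                # (batch, dim)

# k-shot usage with illustrative shapes: feats holds k encoder features
feats = torch.randn(8, 5, 64)       # a batch of 8 five-shot tasks
prototype = feats.mean(dim=1)       # prototype baseline the paper moves away from
cond = FeatureFusion()(feats)       # learned, more diverse condition
```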
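The classifier-free conditioning itself can be sketched as follows: during training the condition is randomly dropped so a single network learns both the conditional and unconditional noise estimates, and at sampling time the two are combined as eps = (1 + w) * eps(x_t, c) - w * eps(x_t, null). This is a minimal sketch of the standard classifier-free guidance recipe; the toy denoiser, noise schedule, and hyperparameters are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    """Minimal epsilon-predictor over flat latents; stands in for the
    diffusion model's U-Net purely for illustration."""
    def __init__(self, dim=64, cond_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + cond_dim + 1, 256), nn.SiLU(),
            nn.Linear(256, dim),
        )

    def forward(self, x_t, t, cond):
        t_emb = t.float().unsqueeze(-1) / 1000.0  # crude timestep embedding
        return self.net(torch.cat([x_t, cond, t_emb], dim=-1))

def train_step(model, x0, cond, p_uncond=0.1):
    """Classifier-free training: the condition is zeroed out with
    probability p_uncond so one network learns both the conditional
    and the unconditional noise estimate."""
    b = x0.size(0)
    t = torch.randint(0, 1000, (b,))
    noise = torch.randn_like(x0)
    a_bar = torch.cos(t.float() / 1000 * torch.pi / 2).pow(2).unsqueeze(-1)  # toy schedule
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
    keep = (torch.rand(b, 1) >= p_uncond).float()
    eps_hat = model(x_t, t, cond * keep)       # zeroed cond = unconditional branch
    return ((eps_hat - noise) ** 2).mean()

@torch.no_grad()
def guided_eps(model, x_t, t, cond, w=2.0):
    """Classifier-free guidance: (1 + w) * eps_cond - w * eps_uncond."""
    eps_c = model(x_t, t, cond)
    eps_u = model(x_t, t, torch.zeros_like(cond))
    return (1 + w) * eps_c - w * eps_u
```

The guidance weight w trades sample diversity against fidelity to the k-shot conditional information.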