The aim of my project for the Wolfram Science Summer School was to build a neural network able to colorize grayscale images in a realistic way. The network follows the approach of [1], in which the authors propose a fully automated method for colorizing grayscale images that combines global image features, extracted from the entire image, with local image features, computed from small image patches. Global priors capture image-level information, such as whether the picture was taken indoors or outdoors, or whether it is day or night, while local features describe the texture or object at a given location. Combining the two makes it possible to exploit semantic information and color images without any human interaction.
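
To make the fusion idea concrete, here is a minimal Wolfram Language sketch of how such a network can be wired up as a NetGraph: a local branch produces a grid of patch features, a global branch reduces the image to a single feature vector, that vector is broadcast back over the grid, and the concatenated features are mapped to chrominance values. All layer counts, strides and the 112x112 grayscale input are illustrative assumptions of mine, not the exact architecture of [1].

    (* Local features from image patches: 1x112x112 -> 256x28x28 (illustrative sizes) *)
    localNet = NetChain[{
        ConvolutionLayer[64, 3, "Stride" -> 2, "PaddingSize" -> 1], Ramp,
        ConvolutionLayer[128, 3, "Stride" -> 1, "PaddingSize" -> 1], Ramp,
        ConvolutionLayer[256, 3, "Stride" -> 2, "PaddingSize" -> 1], Ramp}];

    (* Global features of the whole image: 256x28x28 -> a single 256-vector *)
    globalNet = NetChain[{
        ConvolutionLayer[256, 3, "Stride" -> 2, "PaddingSize" -> 1], Ramp,
        AggregationLayer[Mean],  (* global average pooling over the spatial grid *)
        LinearLayer[256], Ramp}];

    colorizer = NetInitialize@NetGraph[<|
        "local" -> localNet,
        "global" -> globalNet,
        (* broadcast the global vector to every spatial position: 256 -> 256x28x28 *)
        "broadcast" -> NetChain[{ReplicateLayer[28], ReplicateLayer[28], TransposeLayer[1 <-> 3]}],
        "fuse" -> CatenateLayer[],  (* concatenate along the channel dimension: 512x28x28 *)
        (* 1x1 convolutions map the fused features to 2 chrominance channels per grid position *)
        "chroma" -> NetChain[{ConvolutionLayer[128, 1], Ramp, ConvolutionLayer[2, 1], LogisticSigmoid}]|>,
       {NetPort["Input"] -> "local",
        "local" -> "global",
        "global" -> "broadcast",
        {"local", "broadcast"} -> "fuse",
        "fuse" -> "chroma",
        "chroma" -> NetPort["Output"]},
       "Input" -> NetEncoder[{"Image", {112, 112}, "ColorSpace" -> "Grayscale"}]];

Applying colorizer to a grayscale image returns a 2x28x28 array of (still untrained) chrominance values; training on luminance/chrominance pairs in the Lab color space and upsampling the prediction back to the input resolution, as done in [1], is what makes the output a realistic colorization.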