"No one is harder on a talented person than the person themselves" - Linda Wilkinson ; "Trust your guts and don't follow the herd" ; "Validate direction not destination" ;

June 13, 2022

Project Analysis - Color Transfer / GAN

Project #1 - Color Transfer

Git ref - Link

  • XYZ to CIE-LAB color space conversion
  • skimage.color.rgb2lab(rgb[, illuminant, …])
  • Conversion from the sRGB color space (IEC 61966-2-1:1999) to the CIE Lab colorspace under the given illuminant and observer (a short usage example follows this list).
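A minimal example of that conversion with skimage (a sketch only; it reuses the '2.jpg' sample file from the notebook below for illustration):

from skimage.color import rgb2lab
from skimage.io import imread

# rgb2lab expects RGB values in [0, 1]; the result has L in [0, 100]
# and a/b roughly in [-128, 128] (D65 illuminant, 2-degree observer by default)
rgb = imread('2.jpg') / 255.0
lab = rgb2lab(rgb)
L, ab = lab[:, :, 0], lab[:, :, 1:]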

# -*- coding: utf-8 -*-
"""
Automatically generated by Colaboratory.
"""
#Upload files to colab
from google.colab import files
files.upload()
#https://github.com/emilwallner/Coloring-greyscale-images/blob/master/Alpha-version/alpha_version_notebook.ipynb
from tensorflow.keras.layers import Conv2D, UpSampling2D, InputLayer, Conv2DTranspose
from tensorflow.keras.layers import Activation, Dense, Dropout, Flatten
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from skimage.color import rgb2lab, lab2rgb, rgb2gray, xyz2lab
from skimage.io import imsave
import numpy as np
import os
import random
import tensorflow as tf
# Get images
import cv2
image_old = img_to_array(load_img('2.jpg'))
image_old = np.array(image_old, dtype=float)
image = cv2.resize(image_old, (400, 400), interpolation=cv2.INTER_NEAREST)
# L channel is the grayscale input, a/b channels are the prediction targets
X = rgb2lab(1.0/255*image)[:,:,0]
Y = rgb2lab(1.0/255*image)[:,:,1:]
Y /= 128  # scale a/b to roughly [-1, 1] to match the tanh output layer
X = X.reshape(1, 400, 400, 1)
Y = Y.reshape(1, 400, 400, 2)
# Preview the resized input in Colab (cv2 expects BGR, so flip the RGB channels)
from google.colab.patches import cv2_imshow
cv2_imshow(image[:, :, ::-1])
# Building the neural network: a small encoder-decoder that maps the
# L channel to the two a/b channels
model = Sequential()
model.add(InputLayer(input_shape=(None, None, 1)))
# Encoder: stride-2 convolutions downsample the input by a factor of 8
model.add(Conv2D(8, (3, 3), activation='relu', padding='same', strides=2))
model.add(Conv2D(8, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(16, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(16, (3, 3), activation='relu', padding='same', strides=2))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same', strides=2))
# Decoder: upsample back to the input resolution and predict a/b (tanh keeps outputs in [-1, 1])
model.add(UpSampling2D((2, 2)))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model.add(UpSampling2D((2, 2)))
model.add(Conv2D(16, (3, 3), activation='relu', padding='same'))
model.add(UpSampling2D((2, 2)))
model.add(Conv2D(2, (3, 3), activation='tanh', padding='same'))
# Finish model: train on the single training image
model.compile(optimizer='rmsprop', loss='mse')
model.fit(x=X, y=Y, batch_size=1, epochs=1000)
print(model.evaluate(X, Y, batch_size=1))
output = model.predict(X)
output *= 128  # undo the [-1, 1] scaling back to the a/b range
# Output colorizations: recombine the input L channel with the predicted a/b
cur = np.zeros((400, 400, 3))
cur[:,:,0] = X[0][:,:,0]
cur[:,:,1:] = output[0]
imsave("img_result.png", (lab2rgb(cur) * 255).astype(np.uint8))
imsave("img_gray_version.png", (rgb2gray(lab2rgb(cur)) * 255).astype(np.uint8))
from IPython.display import Image, display
display(Image('img_gray_version.png'))
display(Image('img_result.png'))


Project #2 - Deep Koalarization

Paper - link

Key Notes

  • High-level feature extraction using a pre-trained model (Inception-ResNet-v2) to enhance the coloring process.

  • Fusion - the fusion layer takes the feature vector from Inception, replicates it H/8 x W/8 times, and concatenates it with the feature volume output by the encoder along the depth axis (a minimal sketch of this step follows below)
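Roughly what that fusion step looks like in Keras, as a minimal sketch; the 28x28 spatial size, the 1000-d Inception embedding, and the trailing 1x1 convolution are illustrative assumptions, not the paper's exact code:

import tensorflow as tf
from tensorflow.keras import layers

# Assumed shapes: encoder output (28, 28, 256), Inception embedding 1000-d
encoder_out = layers.Input(shape=(28, 28, 256), name="encoder_features")
embed_in = layers.Input(shape=(1000,), name="inception_embedding")

# Replicate the global embedding at every spatial position of the encoder
# volume, then concatenate along the depth (channel) axis
fusion = layers.RepeatVector(28 * 28)(embed_in)                 # (784, 1000)
fusion = layers.Reshape((28, 28, 1000))(fusion)                 # (28, 28, 1000)
fusion = layers.concatenate([encoder_out, fusion], axis=-1)     # (28, 28, 1256)
fusion = layers.Conv2D(256, (1, 1), activation='relu')(fusion)  # mix back down to 256 maps

fusion_model = tf.keras.Model([encoder_out, embed_in], fusion)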

More Reads

Color Spaces

Paper - Colorization Using ConvNet and GAN

  • Colorization is a popular image-to-image translation problem
  • The authors implemented two models for the task, a ConvNet and a conditional GAN, and found that the GAN produces better results both quantitatively and qualitatively
  • Both models are designed to take either grayscale or edge-only images and produce color (RGB) images; a minimal conditional-GAN sketch follows this list
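A minimal sketch of the conditional-GAN wiring (pix2pix-style conditioning, with placeholder layer sizes rather than the paper's architecture): the discriminator sees the grayscale input stacked with either the real or the generated color image.

from tensorflow.keras import layers, Model

def build_generator():
    gray = layers.Input(shape=(256, 256, 1))
    x = layers.Conv2D(64, 3, padding='same', activation='relu')(gray)
    x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
    color = layers.Conv2D(3, 3, padding='same', activation='tanh')(x)  # fake color image
    return Model(gray, color, name='generator')

def build_discriminator():
    gray = layers.Input(shape=(256, 256, 1))
    color = layers.Input(shape=(256, 256, 3))
    # Condition on the grayscale input by stacking it with the (real or fake) color image
    x = layers.concatenate([gray, color], axis=-1)
    x = layers.Conv2D(64, 3, strides=2, padding='same', activation='relu')(x)
    x = layers.Conv2D(128, 3, strides=2, padding='same', activation='relu')(x)
    x = layers.Flatten()(x)
    score = layers.Dense(1, activation='sigmoid')(x)  # real / fake probability
    return Model([gray, color], score, name='discriminator')

generator = build_generator()
discriminator = build_discriminator()
discriminator.compile(optimizer='adam', loss='binary_crossentropy')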

Types of GAN

AttGAN

More Reads

Controlling Colors of GAN-Generated and Real Images via Color Histograms

Keep Exploring!!!
