stylish.core

stylish.core.BATCH_SIZE = 4

Default batch size used for training.

stylish.core.BATCH_SHAPE = (256, 256, 3)

Default shape used for each image within the training dataset.

stylish.core.EPOCHS_NUMBER = 2

Default number of epochs used for training a model.

stylish.core.ITERATIONS_NUMBER = 100

Default number of iterations used for transferring a style to an image.

stylish.core.CONTENT_WEIGHT = 7.5

Default weight of the content for the loss computation.

stylish.core.STYLE_WEIGHT = 100.0

Default weight of the style for the loss computation.

stylish.core.TV_WEIGHT = 200.0

Default weight of the total variation term for the loss computation.

stylish.core.LEARNING_RATE = 0.001

Default learning rate used for training.

stylish.core.create_session()[source]

Create a Tensorflow session and reset the default graph.

Should be used as follows:

>>> with create_session() as session:
...     ...

Returns:

Tensorflow session

stylish.core.extract_style_from_path(path, vgg_mapping, style_layers, image_size=None)[source]

Extract style feature mapping from image path.

This mapping will be used to train a model which should learn to apply those features to any image.

Parameters:
  • path – path to image from which style features will be extracted.
  • vgg_mapping – mapping gathering all weight and bias matrices extracted from a pre-trained Vgg19 model (typically retrieved by stylish.vgg.extract_mapping()).
  • style_layers – Layer names from pre-trained Vgg19 model used to extract the style information with corresponding weights. Default is stylish.vgg.STYLE_LAYERS.
  • image_size – optional shape to resize the style image.


Returns:

mapping in the form of:
{
    "conv1_1/Relu": numpy.array([...]),
    "conv2_1/Relu": numpy.array([...]),
    "conv3_1/Relu": numpy.array([...]),
    "conv4_1/Relu": numpy.array([...]),
    "conv5_1/Relu": numpy.array([...])
}
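
Usage example (a hedged sketch; the argument passed to stylish.vgg.extract_mapping() is assumed to be the path to the pre-trained Vgg19 weights, and the image_size value is only illustrative):

>>> import stylish.vgg
>>> from stylish.core import extract_style_from_path
>>> vgg_mapping = stylish.vgg.extract_mapping("/path/to/vgg19/weights")
>>> style_mapping = extract_style_from_path(
...     "/path/to/style_image.jpg", vgg_mapping,
...     stylish.vgg.STYLE_LAYERS,
...     image_size=(256, 256, 3)
... )
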
stylish.core.optimize_image(image, style_mapping, vgg_mapping, log_path, iterations=None, learning_rate=None, content_weight=None, style_weight=None, tv_weight=None, content_layer=None, style_layer_names=None)[source]

Transfer style mapping features to image and return result.

The training duration can vary depending on the hyperparameters specified (number of iterations) and the power of your workstation.

Parameters:
  • image – 3-D Numpy array representing the loaded image.
  • style_mapping – mapping of pre-computed style features extracted from selected layers from a pre-trained Vgg19 model (typically retrieved by extract_style_from_path()).
  • vgg_mapping – mapping gathering all weight and bias matrices extracted from a pre-trained Vgg19 model (typically retrieved by stylish.vgg.extract_mapping()).
  • log_path – path to save the log information into, so it can be used with Tensorboard to analyze the training.
  • iterations – number of times the image should be trained against the style mapping. Default is ITERATIONS_NUMBER.
  • learning_rate – learning rate value used to train the model. Default is LEARNING_RATE.
  • content_weight – weight of the content feature cost. Default is CONTENT_WEIGHT.
  • style_weight – weight of the style feature cost. Default is STYLE_WEIGHT.
  • tv_weight – weight of the total variation cost. Default is TV_WEIGHT.
  • content_layer – Layer name from pre-trained Vgg19 model used to extract the content information. Default is stylish.vgg.CONTENT_LAYER.
  • style_layer_names – Layer names from pre-trained Vgg19 model used to extract the style information. Default is the list of layer names extracted from stylish.vgg.STYLE_LAYERS tuples.
Returns:

Path to output image generated.
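
Usage example (a hedged sketch, assuming style_mapping and vgg_mapping were built as shown for extract_style_from_path() and image is a 3-D Numpy array loaded beforehand):

>>> from stylish.core import optimize_image
>>> output_path = optimize_image(
...     image, style_mapping, vgg_mapping, "/path/to/logs",
...     iterations=100, learning_rate=0.001
... )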

stylish.core.optimize_model(training_images, style_mapping, vgg_mapping, model_path, log_path, learning_rate=None, batch_size=None, batch_shape=None, epoch_number=None, content_weight=None, style_weight=None, tv_weight=None, content_layer=None, style_layer_names=None)[source]

Create style generator model from a style mapping and a training dataset.

The training duration can vary depending on the hyperparameters specified (number of epochs, batch size, etc.), the power of your workstation and the number of images in the training data.

The trained model will be saved in model_path.

Parameters:
  • training_images – list of images to train the model with.
  • style_mapping – mapping of pre-computed style features extracted from selected layers from a pre-trained Vgg19 model (typically retrieved by extract_style_from_path()).
  • vgg_mapping – mapping gathering all weight and bias matrices extracted from a pre-trained Vgg19 model (typically retrieved by stylish.vgg.extract_mapping()).
  • model_path – path to save the trained model into.
  • log_path – path to save the log information into, so it can be used with Tensorboard to analyze the training.
  • learning_rate – learning rate value used to train the model. Default is LEARNING_RATE.
  • batch_size – number of images to use in one training iteration. Default is BATCH_SIZE.
  • batch_shape – shape used for each image within the training dataset. Default is BATCH_SHAPE.
  • epoch_number – number of times the model should be trained against training_images. Default is EPOCHS_NUMBER.
  • content_weight – weight of the content feature cost. Default is CONTENT_WEIGHT.
  • style_weight – weight of the style feature cost. Default is STYLE_WEIGHT.
  • tv_weight – weight of the total variation cost. Default is TV_WEIGHT.
  • content_layer – Layer name from pre-trained Vgg19 model used to extract the content information. Default is stylish.vgg.CONTENT_LAYER.
  • style_layer_names – Layer names from pre-trained Vgg19 model used to extract the style information. Default is the list of layer names extracted from stylish.vgg.STYLE_LAYERS tuples.
Returns:

None
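
Usage example (a hedged sketch, assuming training_images lists the training images and style_mapping and vgg_mapping were prepared as shown above):

>>> from stylish.core import optimize_model
>>> optimize_model(
...     training_images, style_mapping, vgg_mapping,
...     "/path/to/model", "/path/to/logs",
...     batch_size=4, epoch_number=2
... )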

stylish.core.compute_cost(session, style_mapping, output_node, batch_size=None, content_weight=None, style_weight=None, tv_weight=None, content_layer=None, style_layer_names=None, input_namespace='vgg1', output_namespace='vgg2')[source]

Compute total cost.

Parameters:
  • session – Tensorflow session.
  • style_mapping – mapping of pre-computed style features extracted from selected layers from a pre-trained Vgg19 model (typically retrieved by extract_style_from_path()).
  • output_node – output node of the model to train.
  • batch_size – number of images to use in one training iteration. Default is BATCH_SIZE.
  • content_weight – weight of the content feature cost. Default is CONTENT_WEIGHT.
  • style_weight – weight of the style feature cost. Default is STYLE_WEIGHT.
  • tv_weight – weight of the total variation cost. Default is TV_WEIGHT.
  • content_layer – Layer name from pre-trained Vgg19 model used to extract the content information. Default is stylish.vgg.CONTENT_LAYER.
  • style_layer_names – Layer names from pre-trained Vgg19 model used to extract the style information. Default is the list of layer names extracted from stylish.vgg.STYLE_LAYERS tuples.
  • input_namespace – Namespace used for the pre-trained Vgg19 model added after the input node. Default is “vgg1”.
  • output_namespace – Namespace used for the pre-trained Vgg19 model added after output_node. Default is “vgg2”.
Returns:

Tensor computing the total cost.
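
Usage example (a hedged sketch, assuming session and output_node come from a training graph such as the one built internally by optimize_model()):

>>> cost = compute_cost(
...     session, style_mapping, output_node,
...     batch_size=4, content_weight=7.5,
...     style_weight=100.0, tv_weight=200.0
... )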

stylish.core.compute_content_cost(session, layer_name1, layer_name2, batch_size=4, content_weight=7.5)[source]

Compute content cost.

Parameters:
  • session – Tensorflow session.
  • layer_name1 – Layer name from pre-trained Vgg19 model used to extract the content information of input node.
  • layer_name2 – Layer name from pre-trained Vgg19 model used to extract the content information of output node.
  • batch_size – number of images to use in one training iteration. Default is BATCH_SIZE.
  • content_weight – weight of the content feature cost. Default is CONTENT_WEIGHT.
Returns:

Tensor computing the content cost.
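
Usage example (a hedged sketch; the layer names are hypothetical and presumably correspond to stylish.vgg.CONTENT_LAYER prefixed with the input and output namespaces used by compute_cost()):

>>> content_cost = compute_content_cost(
...     session, "vgg1/conv4_2/Relu", "vgg2/conv4_2/Relu",
...     batch_size=4, content_weight=7.5
... )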

stylish.core.compute_style_cost(session, style_mapping, layer_names1, layer_names2, batch_size=4, style_weight=100.0)[source]

Compute style cost.

Parameters:
  • session – Tensorflow session.
  • style_mapping – mapping of pre-computed style features extracted from selected layers from a pre-trained Vgg19 model (typically retrieved by extract_style_from_path()).
  • layer_names1 – Sorted layer names used in style_mapping.
  • layer_names2 – Layer names from pre-trained Vgg19 model used to extract the style information of the output node.
  • batch_size – number of images to use in one training iteration. Default is BATCH_SIZE.
  • style_weight – weight of the style feature cost. Default is STYLE_WEIGHT.
Returns:

Tensor computing the style cost.
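
For reference, a common Gram-matrix formulation of a single-layer style term is sketched below; it is shown purely for illustration and is not necessarily the internal implementation. layer is assumed to be a 4-D feature tensor from the output branch and style_gram the pre-computed matrix for the same layer taken from style_mapping:

>>> import tensorflow as tf
>>> batch, height, width, channels = layer.get_shape().as_list()
>>> features = tf.reshape(layer, (batch, height * width, channels))
>>> gram = tf.matmul(features, features, transpose_a=True)
>>> gram = gram / (height * width * channels)
>>> layer_style_cost = style_weight * (
...     2 * tf.nn.l2_loss(gram - style_gram) / style_gram.size
... )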

stylish.core.compute_total_variation_cost(output_node, batch_size, tv_weight=200.0)[source]

Compute total variation cost.

Parameters:
  • output_node – output node of the model to train.
  • batch_size – number of images to use in one training iteration.
  • tv_weight – weight of the total variation cost. Default is TV_WEIGHT.
Returns:

Tensor computing the total variation cost.
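
For reference, a common total variation formulation is sketched below; it is shown purely for illustration and is not necessarily the internal implementation. output_node is assumed to be a 4-D batch of generated images:

>>> import tensorflow as tf
>>> y_diff = tf.nn.l2_loss(output_node[:, 1:, :, :] - output_node[:, :-1, :, :])
>>> x_diff = tf.nn.l2_loss(output_node[:, :, 1:, :] - output_node[:, :, :-1, :])
>>> tv_cost = tv_weight * 2 * (x_diff + y_diff) / batch_size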

stylish.core.load_dataset_batch(index, training_images, batch_size=None, batch_shape=None)[source]

Return list of images for current batch index.

Usage example:

>>> for index in range(len(training_images) // batch_size):
...     images = load_dataset_batch(
...         index, training_images,
...         batch_size=batch_size
...     )
Parameters:
  • index – index number of the current batch to load.
  • training_images – complete list of images to train the model with.
  • batch_size – number of images to use in one training iteration. Default is BATCH_SIZE.
  • batch_shape – shape used for each image within the training dataset. Default is BATCH_SHAPE.
Returns:

4-dimensional matrix storing the images of the current batch.

stylish.core.save_model(session, input_node, output_node, path)[source]

Save trained model from session.

Parameters:
  • session – Tensorflow session.
  • input_node – input placeholder node of the model trained.
  • output_node – output node of the model trained.
  • path – Path to save the model into.
Returns:

None
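
Usage example (a hedged sketch, assuming session, input_node and output_node come from the trained graph):

>>> from stylish.core import save_model
>>> save_model(session, input_node, output_node, "/path/to/model")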

stylish.core.infer_model(model_path, input_path)[source]

Run inference with the trained model to convert the input image.

Parameters:
  • model_path – path to the saved trained model.
  • input_path – path to the image on which the inferred model should be applied.
Returns:

Path to output image generated.
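
Usage example (a hedged sketch; paths are illustrative):

>>> from stylish.core import infer_model
>>> output_path = infer_model("/path/to/model", "/path/to/input_image.jpg")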