In this post, we will see how to resolve the question: Why is the graph for the ‘TFOpLambda’ layer’s ‘call’ method not stored in the SavedModel object_graph_def?
Question:
I work on importing TensorFlow models into MATLAB. This includes de-serializing the SavedModel format, identifying the different tf.keras.layers present in the SavedModel, and creating an equivalent deep learning network in MATLAB. In TensorFlow 2.3.0 and earlier, if we used a TF symbol (like tf.nn.relu()) between tf.keras.layers instances, it was serialized as a ‘TensorFlowOpLayer’. That behaved like a layer subclassing tf.keras.layers.Layer, i.e., the graph for its ‘call’ method (call_and_return_conditional_losses) was stored in the SavedModel. Specifically, this ‘call_and_return_conditional_losses’ function was stored as a child of the node corresponding to the TensorFlowOpLayer in the SavedModel’s object_graph_def.
In TensorFlow 2.6.0 and later, a TF symbol used between tf.keras.layers instances is serialized as a ‘TFOpLambda’ layer. Saving models containing these TFOpLambda layers into a SavedModel no longer serializes the graph for the ‘call’ method (call_and_return_conditional_losses): there is no longer a child node of the TFOpLambda node in the SavedModel’s object_graph_def corresponding to the ‘call_and_return_conditional_losses’ function.
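To see the difference concretely, one can parse the SavedModel protocol buffer and scan the object_graph_def for child references named ‘call_and_return_conditional_losses’ (a minimal sketch; the path "saved_model_dir" is a placeholder, and the fields follow the public saved_model.proto definition):

```python
from tensorflow.core.protobuf import saved_model_pb2

# Parse the on-disk SavedModel proto ("saved_model_dir" is a placeholder path).
with open("saved_model_dir/saved_model.pb", "rb") as f:
    sm = saved_model_pb2.SavedModel()
    sm.ParseFromString(f.read())

# Scan every node's children for a stored call graph. Under TF 2.3 the
# TensorFlowOpLayer node has such a child; under TF 2.6+ the TFOpLambda
# node does not.
object_graph = sm.meta_graphs[0].object_graph_def
for node_id, node in enumerate(object_graph.nodes):
    for child in node.children:
        if child.local_name == "call_and_return_conditional_losses":
            print(f"node {node_id} stores a call graph (child node {child.node_id})")
```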
This creates a problem for me, since I rely on decoding the ‘call_and_return_conditional_losses’ function in order to import these TensorFlowOpLayer / TFOpLambda layers into MATLAB. For instance, consider the following kind of model:
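A minimal sketch of such a model (a hypothetical example; any bare TF symbol, here tf.nn.relu, called between tf.keras.layers instances triggers the behavior described above):

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(8)(inputs)
x = tf.nn.relu(x)  # bare TF symbol between Keras layers -> TFOpLambda in TF >= 2.6
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)

# Saving to a directory path produces the SavedModel format in TF 2.x.
model.save("saved_model_dir")
```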
- Is there a way to always save the ‘call’ method graph in SavedModels for TFOpLambda layers? Basically a way to keep the older TensorFlowOpLayer behavior in newer versions of TensorFlow?
- What is the benefit of saving TF Symbols as TFOpLambda layers over the original TensorFlowOpLayers?
Best Answer:
Quoting the answers I got from TensorFlow developers here:
- There is no way to force the serialization of the ‘call’ method graph for TFOpLambda layers in newer versions of TensorFlow. This change in serialization was made intentionally.
- The main benefit of using TFOpLambda layers over TensorFlowOpLayers is that they are more general and flexible. TFOpLambda layers allow you to use any TensorFlow operation (not just those provided by tf.keras.layers) as a layer in your model. This can be useful, for example, when you want to use a custom TensorFlow operation that is not available as a pre-built layer in tf.keras. TFOpLambda layers also allow you to easily wrap a TensorFlow function (or any Python function) as a layer. This can be useful when you want to use a complex computation that cannot be expressed as a single TensorFlow operation, but you still want to treat it as a layer in your model.
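As a quick illustration of that auto-wrapping (a hedged sketch under TF 2.6+; the layer class name can be verified by inspecting model.layers):

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(4,))
x = tf.math.square(inputs)  # arbitrary TF op applied to a Keras symbolic tensor
model = tf.keras.Model(inputs, x)

# In TF >= 2.6 the op is auto-wrapped as a TFOpLambda layer:
print([type(layer).__name__ for layer in model.layers])
# e.g. ['InputLayer', 'TFOpLambda']
```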
If you have a better answer, please add a comment about this. Thank you!
Source: Stackoverflow.com