python - AttributeError: The layer has no inbound nodes, or AttributeError: The layer has never been called


I need a way to get the shape of the output tensor of any kind of layer (i.e. Dense, Conv2D, etc.) in TensorFlow. According to the documentation, there is an output_shape attribute that should solve this. However, every time I access it I get an AttributeError.

Here is a code example that shows the problem:

import numpy as np
import tensorflow as tf


x = np.arange(0,8,dtype=np.float32).reshape((1,8))
x = tf.constant(value=x,dtype=tf.float32,verify_shape=True)

dense = tf.layers.Dense(units=2)

out = dense(x)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    res = sess.run(fetches=out)
    print(res)
    print(dense.output_shape)

The print(dense.output_shape) statement produces the error message:

AttributeError: The layer has never been called and thus has no defined output shape.

and print(dense.output) produces:

AttributeError('Layer ' + self.name + ' has no inbound nodes.')
AttributeError: Layer dense_1 has no inbound nodes.

Is there any way to fix this error?

P.S.: I know that in the example above I can get the shape of the output tensor via out.get_shape(), but I would like to understand why the output_shape attribute does not work and how it can be fixed.
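
For illustration, a minimal sketch of what I mean (assuming TensorFlow 1.x, as in the example above): the shape can be read from the output tensor itself, while the layer attribute raises:

import numpy as np
import tensorflow as tf

x = tf.constant(np.arange(0, 8, dtype=np.float32).reshape((1, 8)))
dense = tf.layers.Dense(units=2)
out = dense(x)

print(out.get_shape())            # (1, 2) -- read from the tensor, works
try:
    print(dense.output_shape)     # raises the AttributeError described above
except AttributeError as err:
    print(err)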

Best Answer
TL;DR

How do I fix it? Define an input layer:

import tensorflow as tf

x = tf.keras.layers.Input(tensor=tf.ones(shape=(1,8)))
dense = tf.layers.Dense(units=2)

out = dense(x)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    res = sess.run(fetches=out)
    print(dense.output_shape) # shape = (1,2)

According to the Keras documentation, if a layer has a single node, you can retrieve its input tensor, output tensor, input shape and output shape through the following attributes (see the sketch after this list):

> layer.input
> layer.output
> layer.input_shape
> layer.output_shape
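
A minimal sketch of how these attributes behave once the layer is connected through an input layer (assuming TensorFlow 1.x with the bundled tf.keras; the printed shapes are illustrative):

import tensorflow as tf

x = tf.keras.layers.Input(shape=(8,))   # an input layer, not a plain tensor
dense = tf.layers.Dense(units=2)
out = dense(x)

print(dense.input)         # the input tensor, shape (?, 8)
print(dense.output)        # the output tensor, shape (?, 2)
print(dense.input_shape)   # (None, 8)
print(dense.output_shape)  # (None, 2)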

But in the example above, when we access layer.output_shape or one of the other attributes, it throws an exception that may seem a little strange.

If we dig into the source code, the error is caused by the inbound nodes:

if not self._inbound_nodes:
  raise AttributeError('The layer has never been called '
                       'and thus has no defined output shape.')

What are these inbound nodes?

A Node describes the connectivity between two layers. Each time a layer is connected to some new input,
a node is added to layer._inbound_nodes.
Each time the output of a layer is used by another layer,
a node is added to layer._outbound_nodes.

As shown above, when self._inbound_nodes is empty, the exception is raised. This means that when a layer is not connected to an input layer, or more generally when none of the preceding layers is connected to an input layer, self._inbound_nodes stays empty, which causes the problem.
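
As a minimal sketch of this (assuming TensorFlow 1.x; _inbound_nodes is a private attribute and is read here only for illustration), the list stays empty when the layer is called on a plain tensor and gets a node when it is called on an input layer:

import tensorflow as tf

plain = tf.ones(shape=(1, 8))                 # a plain tensor
keras_in = tf.keras.layers.Input(shape=(8,))  # an input layer

dense_a = tf.layers.Dense(units=2)
dense_b = tf.layers.Dense(units=2)

dense_a(plain)     # connectivity is not tracked for a plain tensor
dense_b(keras_in)  # connectivity is tracked for an input layer

print(len(dense_a._inbound_nodes))  # 0 -> output_shape raises
print(len(dense_b._inbound_nodes))  # 1 -> output_shape is defined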

Note that x in your example is a plain tensor, not an input layer. See another example for further clarification:

import numpy as np
import tensorflow as tf

x = tf.keras.layers.Input(shape=(8,))
dense = tf.layers.Dense(units=2)

out = dense(x)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    res = sess.run(fetches=out, feed_dict={x: np.ones(shape=(1,8))})
    print(res)
    print(res.shape)  # shape = (1,2)
    print(dense.output_shape)  # shape = (None,2)

This works fine because the input layer is defined.

Note that in your example, out is a tensor. The difference between the tf.shape() function and the .shape attribute (i.e. get_shape()) is:

tf.shape(x) returns a 1-D integer tensor representing the dynamic
shape of x. A dynamic shape will be known only at graph execution time.

x.shape returns a Python tuple representing the static
shape of x. A static shape, known at graph definition time.
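
A minimal sketch of the difference (assuming TensorFlow 1.x): the static shape is available as soon as the graph is defined, while the dynamic shape is itself a tensor that has to be evaluated in a session:

import numpy as np
import tensorflow as tf

x = tf.placeholder(dtype=tf.float32, shape=(None, 8))

print(x.shape)         # static shape (?, 8), known at graph definition time

dynamic = tf.shape(x)  # a 1-D int32 tensor, known only at execution time
with tf.Session() as sess:
    print(sess.run(dynamic, feed_dict={x: np.ones(shape=(3, 8))}))  # [3 8]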

For more information about tensor shapes, see: https://pgaleone.eu/tensorflow/2018/07/28/understanding-tensorflow-tensors-shape-static-dynamic/
