This article explains in detail how to get a variable and print its weights in TensorFlow. The technique is quite practical, so it is shared here for reference; hopefully you will take something away from it.
When working with TensorFlow we often need to read the value of a particular variable, for example to print the weights of a certain layer. Normally we can fetch it directly through the variable's name attribute, but when we build network layers with a third-party library there is a catch: we do not define the layer's variables ourselves, because the library creates them automatically.
For example, when using TensorFlow's slim library:
def resnet_stack(images, output_shape, hparams, scope=None):
  """Create a resnet style transfer block.

  Args:
    images: [batch-size, height, width, channels] image tensor to feed as input
    output_shape: output image shape in form [height, width, channels]
    hparams: hparams objects
    scope: Variable scope

  Returns:
    Images after processing with resnet blocks.
  """
  end_points = {}
  if hparams.noise_channel:
    # separate the noise for visualization
    end_points['noise'] = images[:, :, :, -1]
  assert images.shape.as_list()[1:3] == output_shape[0:2]

  with tf.variable_scope(scope, 'resnet_style_transfer', [images]):
    with slim.arg_scope(
        [slim.conv2d],
        normalizer_fn=slim.batch_norm,
        kernel_size=[hparams.generator_kernel_size] * 2,
        stride=1):
      net = slim.conv2d(
          images,
          hparams.resnet_filters,
          normalizer_fn=None,
          activation_fn=tf.nn.relu)
      for block in range(hparams.resnet_blocks):
        net = resnet_block(net, hparams)
        end_points['resnet_block_{}'.format(block)] = net

      net = slim.conv2d(
          net,
          output_shape[-1],
          kernel_size=[1, 1],
          normalizer_fn=None,
          activation_fn=tf.nn.tanh,
          scope='conv_out')
  end_points['transferred_images'] = net
  return net, end_points
Suppose we want to get the weights of the first convolutional layer here. How do we do that?
During training, TensorFlow keeps these trainable variables in tf.trainable_variables(), so we can print tf.trainable_variables() to find the name of that convolutional layer's weight (or work the variable's name out yourself from the surrounding scopes), and then fetch the tensor with tf.get_default_graph().get_tensor_by_name().
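Applied to the resnet_stack above, a minimal sketch (TF 1.x) might look like the following. Note that the variable name "resnet_style_transfer/Conv/weights:0" is an assumption based on slim's default naming for the first unscoped conv2d inside the 'resnet_style_transfer' variable scope; always check the names printed by tf.trainable_variables() first.

# A minimal sketch, assuming the graph was built with resnet_stack(...) above.
# List every trainable variable to find the real name in your graph.
for v in tf.trainable_variables():
    print(v.name)

# Assuming the listing shows "resnet_style_transfer/Conv/weights:0":
first_conv_w = tf.get_default_graph().get_tensor_by_name(
    "resnet_style_transfer/Conv/weights:0")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Prints the kernel values of the first convolutional layer.
    print(sess.run(first_conv_w))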
To make the mechanism clearer, here is a simple, self-contained example:
import tensorflow as tf

with tf.variable_scope("generate"):
    with tf.variable_scope("resnet_stack"):
        # For simplicity, no third-party library is used here.
        bias = tf.Variable(0.0, name="bias")
        weight = tf.Variable(0.0, name="weight")

for tv in tf.trainable_variables():
    print(tv.name)

b = tf.get_default_graph().get_tensor_by_name("generate/resnet_stack/bias:0")
w = tf.get_default_graph().get_tensor_by_name("generate/resnet_stack/weight:0")

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    print(sess.run(b))
    print(sess.run(w))
The output is as follows:
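Running the script above should print roughly the following: the two variable names from the loop, followed by the zero-initialized values of bias and weight.

generate/resnet_stack/bias:0
generate/resnet_stack/weight:0
0.0
0.0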
That wraps up this look at how to get variables and print weights in TensorFlow. Hopefully the content above is helpful; if you found the article useful, feel free to share it so more people can see it.