How to use TensorBoard

After writing the code, open a terminal and first activate the Anaconda environment (on newer versions of conda the command is conda activate tensorflow):

source activate tensorflow

Then change into the parent directory of the folder that holds the events.out.tfevents.xxxxxxx file (that folder is logs/ in the command below), and run:

tensorboard --logdir='logs/'

The logs/ in the command is the folder where the events.out.tfevents.xxxxxxx file is stored. It is called logs rather than anything else simply because that is the folder name I used in the code.
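In other words, the directory you pass to --logdir should look roughly like this (the digits at the end of the event file name differ per run):

logs/
└── events.out.tfevents.xxxxxxx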

Then open the link printed in the terminal (by default http://localhost:6006) in a browser to view the result.

The reference code is given below.

The most important parts of it are the use of

tf.name_scope('scope_name')

and

sess = tf.Session()
writer = tf.summary.FileWriter('logs/', sess.graph)
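As an isolated illustration, here is a minimal sketch that uses only these two pieces (TensorFlow 1.x API; the scope name and the logs/ path are just examples):

# -*- coding: utf-8 -*-
import tensorflow as tf

# group a few ops under one scope; they collapse into a single
# expandable node in TensorBoard's GRAPHS tab
with tf.name_scope('scope_name'):
    a = tf.constant(1.0, name='a')
    b = tf.constant(2.0, name='b')
    c = tf.add(a, b, name='c')

sess = tf.Session()
# serialize the current graph into logs/ as an events.out.tfevents.* file
writer = tf.summary.FileWriter('logs/', sess.graph)
writer.close()
sess.close()

The full reference script follows.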

# -*- coding: utf-8 -*-
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' # silence TensorFlow INFO and WARNING messages

def add_layer(inputs, in_size, out_size, activation_function=None):
    # add one more layer and return the output of this layer
    with tf.name_scope('layer'):  # added for the graph visualization - the outer scope
        with tf.name_scope('Weights'):
            Weights = tf.Variable(tf.random_normal([in_size, out_size]), name='weights')
        with tf.name_scope('Biases'):
            biases = tf.Variable(tf.zeros([1, out_size]) + 0.1, name='biases') # biases are recommended to start nonzero
        with tf.name_scope('Wx_plus_b'):
            Wx_plus_bias = tf.matmul(inputs, Weights) + biases # using tf.add here raised an error for me; the cause is still unclear
        if activation_function is None:
            outputs = Wx_plus_bias
        else:
            outputs = activation_function(Wx_plus_bias) # activation functions come with names by default, e.g. the relu used in this example
        return outputs

# toy data: 300 samples of y = x^2 - 0.5 plus gaussian noise
x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) - 0.5 + noise

# define placeholder for inputs to network
with tf.name_scope('inputs'): # added for the graph visualization
    xs = tf.placeholder(tf.float32, [None, 1], name='x_input') # placeholder for x_data
    ys = tf.placeholder(tf.float32, [None, 1], name='y_input') # None means any batch size

# add hidden layer
input_hidden_layer = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
# add output layer
hidden_output_layer = add_layer(input_hidden_layer, 10, 1, activation_function=None)

# the error between prediction and real data
with tf.name_scope('loss'):
    loss = tf.reduce_mean(
        tf.reduce_sum(tf.square(ys - hidden_output_layer, name='loss_square'),
                      reduction_indices=[1], name='loss_sum'),
        name='loss_mean')

with tf.name_scope('train'):
    train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss) # learning rate = 0.1 < 1

sess = tf.Session()
writer = tf.summary.FileWriter('logs/', sess.graph) # very important: collect the graph built above and write it into logs/

init = tf.global_variables_initializer()

fig = plt.figure() # create the figure
ax = fig.add_subplot(1, 1, 1)
ax.scatter(x_data, y_data)
plt.ion() # interactive mode, so plotting does not block the program
plt.show()

# reuse the session created above (the original opened a second,
# redundant tf.Session here) so the writer and the training share one session
sess.run(init)
for i in range(1000):
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    # placeholders are convenient because the fed x_data need not be the full training set
    if i % 50 == 0:
        # to see the step improvement
        # print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))
        try:
            ax.lines.remove(lines[0]) # lines is not defined yet on the first iteration
        except NameError:
            pass # do nothing
        prediction_value = sess.run(hidden_output_layer, feed_dict={xs: x_data})
        lines = ax.plot(x_data, prediction_value, 'r-', lw=5)

        plt.pause(0.1)
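The FileWriter above records only the graph. If you also want the loss curve in TensorBoard's SCALARS tab, a common TF 1.x extension (a sketch building on the script above, not part of the original code) is to register a scalar summary and write the merged result inside the training loop:

# sketch: scalar summaries on top of the network defined above (TF 1.x API)
tf.summary.scalar('loss', loss)   # register loss under the SCALARS tab
merged = tf.summary.merge_all()   # bundle all registered summary ops

sess.run(init)
for i in range(1000):
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        # evaluate the merged summaries and tag them with the step index,
        # so TensorBoard can plot loss over training steps
        summary_result = sess.run(merged, feed_dict={xs: x_data, ys: y_data})
        writer.add_summary(summary_result, i)
writer.close()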



 
