Local experiment environment:
Ubuntu 16.04 64-bit, laptop
Hardware configuration (relevant to system installation and setup; for reference only):
CPU: i7-6700HQ
GPU: GTX 960M
RAM: 8 GB DDR3L x 2, dual channel, 16 GB in total
SSD: 256 GB M.2 2280 NVMe solid-state drive
HDD: 1 TB 5400 RPM mechanical hard drive
Experiment prerequisites:
1. Complete the previous four sections.
2. Have a Python interpreter, pip, Anaconda, and PyCharm installed on this machine.
Experiment goals:
1. Install the CPU version of TensorFlow.
2. Run a TensorFlow example program.
Experiment steps:
1. Install Virtualenv
Open a terminal and run:
sudo apt-get install python-virtualenv
This downloads Virtualenv through the package manager.
Once it is installed, create a virtual environment.
For Python 2.x, run:
virtualenv --system-site-packages targetDirectory
For Python 3.x, run:
virtualenv --system-site-packages -p python3 targetDirectory
Here targetDirectory is the target directory; I replace it with
~/tensorflow
After the virtual environment has been created, it needs to be activated:
source ~/tensorflow/bin/activate
Wait for the prompt to change; if it shows (tensorflow)$
then you are inside the virtual environment.
Do not exit the virtual environment yet; we will install TensorFlow inside it.
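If you want to confirm that the activated environment is really the one in use, a quick optional check is to start Python inside it and print the interpreter path (the exact home directory will of course differ on your machine):

import sys
print(sys.executable)  # should point into ~/tensorflow/bin, not /usr/bin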
2. Install TensorFlow (CPU version)
Continuing in the terminal from the previous step, with the virtual environment still activated, run one of the following commands, again choosing the one that matches your Python version.
For Python 2.x:
pip install --upgrade tensorflow
For Python 3.x:
pip3 install --upgrade tensorflow
Once the installation finishes, this step is done.
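Before moving on, it is worth running a minimal smoke test to make sure the installation works. Still inside the virtual environment, start Python and run something like the following (this is just the standard TensorFlow 1.x hello-world check, not part of the experiment itself):

import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
with tf.Session() as sess:
    print(sess.run(hello))  # prints b'Hello, TensorFlow!' on Python 3, Hello, TensorFlow! on Python 2
print(tf.__version__)       # prints the installed TensorFlow version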
3. Run the TensorFlow example program
Here is a TensorFlow example program that recognizes handwritten digits.
You need to download the dataset first.
Dataset download
The dataset consists of:
train-images-idx3-ubyte.gz: training set images (9912422 bytes)
train-labels-idx1-ubyte.gz: training set labels (28881 bytes)
t10k-images-idx3-ubyte.gz: test set images (1648877 bytes)
t10k-labels-idx1-ubyte.gz: test set labels (4542 bytes)
After downloading, you can put all four files on the desktop for now; they will be moved into the data folder later.
Here is the Python file with the program:
# fc_Feed.py
# coding:utf-8
import argparse
import math
import sys

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

image_size = 28
FLAGS = None
NUM_CLASSES = 10
keep_p = tf.constant(0.5, dtype=tf.float32)  # keep probability for the (optional) dropout layers


def train():
    # Load the training, validation and test datasets
    data_sets = input_data.read_data_sets(FLAGS.input_data_dir, FLAGS.fake_data)
    train_data = data_sets.train
    valid_data = data_sets.validation
    test_data = data_sets.test

    graph = tf.Graph()
    with graph.as_default():
        # Placeholders for the flattened 28x28 images and their labels
        x_placeholder = tf.placeholder(tf.float32, shape=(None, image_size * image_size), name='input_x')
        y_placeholder = tf.placeholder(tf.float32, shape=(None), name='input_y')

        # Weights and biases for two hidden layers plus the output layer
        weights = {
            'h1': tf.Variable(tf.truncated_normal([image_size * image_size, FLAGS.hidden1],
                                                  stddev=1.0 / math.sqrt(float(image_size * image_size))),
                              name='h1/weights'),
            'h2': tf.Variable(tf.truncated_normal([FLAGS.hidden1, FLAGS.hidden2],
                                                  stddev=1.0 / math.sqrt(float(FLAGS.hidden1))),
                              name='h2/weights'),
            'out': tf.Variable(tf.truncated_normal([FLAGS.hidden2, NUM_CLASSES],
                                                   stddev=1.0 / math.sqrt(float(FLAGS.hidden2))),
                               name='out/weights')
        }
        biases = {
            'h1': tf.Variable(tf.zeros([FLAGS.hidden1]), name='h1/biases'),
            'h2': tf.Variable(tf.zeros([FLAGS.hidden2]), name='h2/biases'),
            'out': tf.Variable(tf.zeros([NUM_CLASSES]), name='out/biases')
        }

        # Forward pass: two fully connected ReLU layers followed by a linear output layer
        hidden1 = tf.nn.relu(tf.matmul(x_placeholder, weights['h1']) + biases['h1'])
        # hidden1 = tf.nn.dropout(hidden1, keep_prob=keep_p)
        hidden2 = tf.nn.relu(tf.matmul(hidden1, weights['h2']) + biases['h2'])
        # hidden2 = tf.nn.dropout(hidden2, keep_prob=keep_p)
        logits = tf.matmul(hidden2, weights['out']) + biases['out']

        # Softmax cross-entropy loss averaged over the batch
        labels = tf.to_int64(y_placeholder)
        cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits, name='xentropy')
        loss = tf.reduce_mean(cross_entropy, name='xentropy_mean')
        # loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
        #                       + 0.5 * 0.01 * tf.nn.l2_loss(weights['h1'])
        #                       + 0.5 * 0.01 * tf.nn.l2_loss(weights['h2'])
        #                       + 0.5 * 0.01 * tf.nn.l2_loss(weights['out']))

        # Count how many predictions place the true label at the top
        correct = tf.nn.in_top_k(logits, labels, 1)
        eval_correct = tf.reduce_sum(tf.cast(correct, tf.int32))

        # Plain gradient descent on the loss
        optimizer = tf.train.GradientDescentOptimizer(FLAGS.learning_rate)
        global_step = tf.Variable(0, name='global_step', trainable=False)
        train_op = optimizer.minimize(loss, global_step=global_step)

    with tf.Session(graph=graph) as sess:
        sess.run(tf.global_variables_initializer())
        for step in range(FLAGS.max_steps):
            images, labels = train_data.next_batch(batch_size=FLAGS.batch_size)
            _, loss_value = sess.run([train_op, loss],
                                     feed_dict={x_placeholder: images, y_placeholder: labels})
            del images, labels
            if step % 100 == 0:
                print('Step %d: loss = %.2f' % (step, loss_value))
            if (step + 1) % 1000 == 0 or (step + 1) == FLAGS.max_steps:
                # Evaluate against the validation set.
                print('Validation Data Eval:')
                true_count = sess.run(eval_correct,
                                      feed_dict={x_placeholder: valid_data.images,
                                                 y_placeholder: valid_data.labels})
                num_examples = valid_data.num_examples
                precision = float(true_count) / num_examples
                print(' Num examples: %d Num correct: %d Precision @ 1: %0.04f' %
                      (num_examples, true_count, precision))
                # Evaluate against the test set.
                print('Test Data Eval:')
                true_count = sess.run(eval_correct,
                                      feed_dict={x_placeholder: test_data.images,
                                                 y_placeholder: test_data.labels})
                num_examples = test_data.num_examples
                precision = float(true_count) / num_examples
                print(' Num examples: %d Num correct: %d Precision @ 1: %0.04f' %
                      (num_examples, true_count, precision))


def main(_):
    # Start from a clean log directory on every run
    if tf.gfile.Exists(FLAGS.log_dir):
        tf.gfile.DeleteRecursively(FLAGS.log_dir)
    tf.gfile.MakeDirs(FLAGS.log_dir)
    train()


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--learning_rate', type=float, default=0.01, help='Initial learning rate.'
    )
    parser.add_argument(
        '--max_steps', type=int, default=2000, help='Number of steps to run trainer.'
    )
    parser.add_argument(
        '--hidden1', type=int, default=128, help='Number of units in hidden layer 1.'
    )
    parser.add_argument(
        '--hidden2', type=int, default=32, help='Number of units in hidden layer 2.'
    )
    parser.add_argument(
        '--batch_size', type=int, default=100, help='Batch size. Must divide evenly into the dataset sizes.'
    )
    parser.add_argument(
        '--input_data_dir', type=str, default='data', help='Directory to put the input data.'
    )
    parser.add_argument(
        '--log_dir', type=str, default='log', help='Directory to put the log data.'
    )
    parser.add_argument(
        '--model_dir', type=str, default='models', help='Directory to put the model data.'
    )
    parser.add_argument(
        '--fake_data', default=False, help='If true, uses fake data for unit testing.', action='store_true'
    )
    FLAGS, unparsed = parser.parse_known_args()
    tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
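Below the script is run from PyCharm, but it can equally be launched from the activated virtualenv terminal, in which case the argparse flags above let you override the defaults. For example (the values here are only illustrative):
python fc_Feed.py --max_steps 3000 --learning_rate 0.05 --input_data_dir data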
Open this file in PyCharm.
PyCharm will prompt you to configure a Python interpreter; click the link on the right - Configure Python Interpreter.
Choose to add a local interpreter,
and select the Python interpreter in the bin directory of your tensorflow virtualenv folder.
Wait for the packages to finish loading, then click OK.
Run the program. It will report an error; ignore it for now. Go back to the project root directory and you will find that data and log folders have been generated automatically. Move the dataset files you downloaded earlier into the data folder.
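If you want to verify that the files ended up in the right place before re-running the full script, a small optional check from the project root is to load the dataset directly (the split sizes in the comments are the defaults used by read_data_sets):

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('data')  # reads the four .gz files; downloads them only if missing
print(mnist.train.num_examples)       # expected: 55000
print(mnist.validation.num_examples)  # expected: 5000
print(mnist.test.num_examples)        # expected: 10000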
Go back to PyCharm and run the program again.
If the console output looks like the following, the example program ran successfully and the experiment is complete.