How to Monitor Your TensorFlow Training with WeChat

Leiphone AI Tech Review note: this article was written by Coldwings, and Leiphone AI Tech Review publishes it with the author's permission. While answering the question "During machine learning model training, which takes anywhere from tens of minutes to several hours, what does everyone do while waiting for the experiment?", I mentioned that you can use WeChat to keep an eye on training, with no need to sit by the machine at all. I didn't expect it to be so popular... My answer under the original question was as follows.
I don't know how many of you drive TF/Keras/Chainer/MXNet and similar frameworks from Python... but this is Python we are talking about. Install itchat, set up a WeChat account and add yourself as a friend (or simply message yourself), and have the training send progress messages to you as it goes; if you have visualization set up, send the plots over as well. Then you can sleep, go shopping, or write answers in peace. Honestly, even simple parameter tuning can be done from the phone this way. The overall effect is roughly as shown in the screenshots.

Of course you can make this more complete. The most reliable approach is to build a proper HTTP service or an RPC interface, but that is usually too much hassle. In the spirit of simplicity and efficiency, a few lines of code that get the job done are ideal, and hooking into WeChat or a small web page is a perfectly good choice. If all you want is to watch metrics, TensorBoard is already great, but if you want to add custom actions you still have to roll your own. Building a web front end with echat.js, or a WeChat service with itchat, are both decent options.

Main text: let's work through an example. Taking the program from TensorFlow's examples that classifies MNIST with a CNN, we make a few small modifications. Right below is a minimal sketch of the core send-to-WeChat pattern, and after it the complete modified code.
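A minimal sketch of the pattern, assuming only that itchat is installed and that you scan the login QR code once; the loop and the loss value are placeholders standing in for a real training run, and 'filehelper' is WeChat's built-in File Transfer target, so the messages simply go to yourself:

import itchat

itchat.auto_login(hotReload=True)   # pops a QR code; scan it with your phone once

# Wherever the training loop already prints a metric, also push it to WeChat.
for epoch in range(3):
    loss = 1.0 / (epoch + 1)        # placeholder standing in for a real metric
    itchat.send('epoch %d, loss %.4f' % (epoch, loss), toUserName='filehelper')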

#!/usr/bin/env python
# coding: utf-8
"""
A Convolutional Network implementation example using TensorFlow library.
This example is using the MNIST database of handwritten digits
(http://yann.lecun.com/exdb/mnist/)

Author: Aymeric Damien
Project: https://github.com/aymericdamien/TensorFlow-Examples/

Add an itchat controller with multi thread
"""
from __future__ import print_function

import tensorflow as tf

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data

# Import itchat & threading
import itchat
import threading

# Create a running status flag
lock = threading.Lock()
running = False

# Parameters
learning_rate = 0.001
training_iters = 200000
batch_size = 128
display_step = 10
def nn_train(wechat_name, param):
    global lock, running
    # Lock
    with lock:
        running = True

    # mnist data reading
    mnist = input_data.read_data_sets("data/", one_hot=True)

    # Parameters
    # learning_rate = 0.001
    # training_iters = 200000
    # batch_size = 128
    # display_step = 10
    learning_rate, training_iters, batch_size, display_step = param

    # Network Parameters
    n_input = 784    # MNIST data input (img shape: 28*28)
    n_classes = 10   # MNIST total classes (0-9 digits)
    dropout = 0.75   # Dropout, probability to keep units

    # tf Graph input
    x = tf.placeholder(tf.float32, [None, n_input])
    y = tf.placeholder(tf.float32, [None, n_classes])
    keep_prob = tf.placeholder(tf.float32)  # dropout (keep probability)

    # Create some wrappers for simplicity
    def conv2d(x, W, b, strides=1):
        # Conv2D wrapper, with bias and relu activation
        x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
        x = tf.nn.bias_add(x, b)
        return tf.nn.relu(x)

    def maxpool2d(x, k=2):
        # MaxPool2D wrapper
        return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
                              padding='SAME')

    # Create model
    def conv_net(x, weights, biases, dropout):
        # Reshape input picture
        x = tf.reshape(x, shape=[-1, 28, 28, 1])

        # Convolution Layer
        conv1 = conv2d(x, weights['wc1'], biases['bc1'])
        # Max Pooling (down-sampling)
        conv1 = maxpool2d(conv1, k=2)

        # Convolution Layer
        conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
        # Max Pooling (down-sampling)
        conv2 = maxpool2d(conv2, k=2)

        # Fully connected layer
        # Reshape conv2 output to fit fully connected layer input
        fc1 = tf.reshape(conv2, [-1, weights['wd1'].get_shape().as_list()[0]])
        fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
        fc1 = tf.nn.relu(fc1)
        # Apply Dropout
        fc1 = tf.nn.dropout(fc1, dropout)

        # Output, class prediction
        out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
        return out

    # Store layers weight & bias
    weights = {
        # 5x5 conv, 1 input, 32 outputs
        'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
        # 5x5 conv, 32 inputs, 64 outputs
        'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
        # fully connected, 7*7*64 inputs, 1024 outputs
        'wd1': tf.Variable(tf.random_normal([7*7*64, 1024])),
        # 1024 inputs, 10 outputs (class prediction)
        'out': tf.Variable(tf.random_normal([1024, n_classes]))
    }

    biases = {
        'bc1': tf.Variable(tf.random_normal([32])),
        'bc2': tf.Variable(tf.random_normal([64])),
        'bd1': tf.Variable(tf.random_normal([1024])),
        'out': tf.Variable(tf.random_normal([n_classes]))
    }

    # Construct model
    pred = conv_net(x, weights, biases, keep_prob)

    # Define loss and optimizer
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

    # Evaluate model
    correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

    # Initializing the variables
    init = tf.global_variables_initializer()

    # Launch the graph
    with tf.Session() as sess:
        sess.run(init)
        step = 1
        # Keep training until reach max iterations
        print('Wait for lock')
        with lock:
            run_state = running
        print('Start')
        while step * batch_size < training_iters and run_state:
            batch_x, batch_y = mnist.train.next_batch(batch_size)
            # Run optimization op (backprop)
            sess.run(optimizer, feed_dict={x: batch_x, y: batch_y,
                                           keep_prob: dropout})
            if step % display_step == 0:
                # Calculate batch loss and accuracy
                loss, acc = sess.run([cost, accuracy], feed_dict={x: batch_x,
                                                                  y: batch_y,
                                                                  keep_prob: 1.})
                print("Iter " + str(step*batch_size) + ", Minibatch Loss= " +
                      "{:.6f}".format(loss) + ", Training Accuracy= " +
                      "{:.5f}".format(acc))
                itchat.send("Iter " + str(step*batch_size) + ", Minibatch Loss= " +
                            "{:.6f}".format(loss) + ", Training Accuracy= " +
                            "{:.5f}".format(acc), wechat_name)
            step += 1
            with lock:
                run_state = running
        print("Optimization Finished!")
        itchat.send("Optimization Finished!", wechat_name)

        # Calculate accuracy for 256 mnist test images
        print("Testing Accuracy:",
              sess.run(accuracy, feed_dict={x: mnist.test.images[:256],
                                            y: mnist.test.labels[:256],
                                            keep_prob: 1.}))
        itchat.send("Testing Accuracy: %s" %
                    sess.run(accuracy, feed_dict={x: mnist.test.images[:256],
                                                  y: mnist.test.labels[:256],
                                                  keep_prob: 1.}), wechat_name)

    with lock:
        running = False
@itchat.msg_register([itchat.content.TEXT])
def chat_trigger(msg):
    global lock, running, learning_rate, training_iters, batch_size, display_step
    if msg['Text'] == u'開始':
        print('Starting')
        with lock:
            run_state = running
        if not run_state:
            try:
                threading.Thread(target=nn_train, args=(msg['FromUserName'], (learning_rate, training_iters, batch_size, display_step))).start()
            except:
                msg.reply('Running')
    elif msg['Text'] == u'停止':
        print('Stopping')
        with lock:
            running = False
    elif msg['Text'] == u'參數':
        itchat.send('lr=%f, ti=%d, bs=%d, ds=%d' % (learning_rate, training_iters, batch_size, display_step), msg['FromUserName'])
    else:
        try:
            param = msg['Text'].split()
            key, value = param
            print(key, value)
            if key == 'lr':
                learning_rate = float(value)
            elif key == 'ti':
                training_iters = int(value)
            elif key == 'bs':
                batch_size = int(value)
            elif key == 'ds':
                display_step = int(value)
        except:
            pass


if __name__ == '__main__':
    itchat.auto_login(hotReload=True)
    itchat.run()

The main changes I made to this code are:

0. Import itchat and threading.
1. Move the network construction and training part of the original script into a function, nn_train:

def nn_train(wechat_name, param):
    global lock, running
    # Lock
    with lock:
        running = True

    # mnist data reading
    mnist = input_data.read_data_sets("data/", one_hot=True)

    # Parameters
    # learning_rate = 0.001
    # training_iters = 200000
    # batch_size = 128
    # display_step = 10
    learning_rate, training_iters, batch_size, display_step = param

    # Network Parameters
    n_input = 784    # MNIST data input (img shape: 28*28)
    n_classes = 10   # MNIST total classes (0-9 digits)
    dropout = 0.75   # Dropout, probability to keep units

    # tf Graph input
    x = tf.placeholder(tf.float32, [None, n_input])
    y = tf.placeholder(tf.float32, [None, n_classes])
    keep_prob = tf.placeholder(tf.float32)  # dropout (keep probability)

    # Create some wrappers for simplicity
    def conv2d(x, W, b, strides=1):
        # Conv2D wrapper, with bias and relu activation
        x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
        x = tf.nn.bias_add(x, b)
        return tf.nn.relu(x)

    def maxpool2d(x, k=2):
        # MaxPool2D wrapper
        return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
                              padding='SAME')

    # Create model
    def conv_net(x, weights, biases, dropout):
        # Reshape input picture
        x = tf.reshape(x, shape=[-1, 28, 28, 1])

        # Convolution Layer
        conv1 = conv2d(x, weights['wc1'], biases['bc1'])
        # Max Pooling (down-sampling)
        conv1 = maxpool2d(conv1, k=2)

        # Convolution Layer
        conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
        # Max Pooling (down-sampling)
        conv2 = maxpool2d(conv2, k=2)

        # Fully connected layer
        # Reshape conv2 output to fit fully connected layer input
        fc1 = tf.reshape(conv2, [-1, weights['wd1'].get_shape().as_list()[0]])
        fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
        fc1 = tf.nn.relu(fc1)
        # Apply Dropout
        fc1 = tf.nn.dropout(fc1, dropout)

        # Output, class prediction
        out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
        return out

    # Store layers weight & bias
    weights = {
        # 5x5 conv, 1 input, 32 outputs
        'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
        # 5x5 conv, 32 inputs, 64 outputs
        'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
        # fully connected, 7*7*64 inputs, 1024 outputs
        'wd1': tf.Variable(tf.random_normal([7*7*64, 1024])),
        # 1024 inputs, 10 outputs (class prediction)
        'out': tf.Variable(tf.random_normal([1024, n_classes]))
    }

    biases = {
        'bc1': tf.Variable(tf.random_normal([32])),
        'bc2': tf.Variable(tf.random_normal([64])),
        'bd1': tf.Variable(tf.random_normal([1024])),
        'out': tf.Variable(tf.random_normal([n_classes]))
    }

    # Construct model
    pred = conv_net(x, weights, biases, keep_prob)

    # Define loss and optimizer
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

    # Evaluate model
    correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

    # Initializing the variables
    init = tf.global_variables_initializer()

    # Launch the graph
    with tf.Session() as sess:
        sess.run(init)
        step = 1
        # Keep training until reach max iterations
        print('Wait for lock')
        with lock:
            run_state = running
        print('Start')
        while step * batch_size < training_iters and run_state:
            batch_x, batch_y = mnist.train.next_batch(batch_size)
            # Run optimization op (backprop)
            sess.run(optimizer, feed_dict={x: batch_x, y: batch_y,
                                           keep_prob: dropout})
            if step % display_step == 0:
                # Calculate batch loss and accuracy
                loss, acc = sess.run([cost, accuracy], feed_dict={x: batch_x,
                                                                  y: batch_y,
                                                                  keep_prob: 1.})
                print("Iter " + str(step*batch_size) + ", Minibatch Loss= " +
                      "{:.6f}".format(loss) + ", Training Accuracy= " +
                      "{:.5f}".format(acc))
                itchat.send("Iter " + str(step*batch_size) + ", Minibatch Loss= " +
                            "{:.6f}".format(loss) + ", Training Accuracy= " +
                            "{:.5f}".format(acc), wechat_name)
            step += 1
            with lock:
                run_state = running
        print("Optimization Finished!")
        itchat.send("Optimization Finished!", wechat_name)

        # Calculate accuracy for 256 mnist test images
        print("Testing Accuracy:",
              sess.run(accuracy, feed_dict={x: mnist.test.images[:256],
                                            y: mnist.test.labels[:256],
                                            keep_prob: 1.}))
        itchat.send("Testing Accuracy: %s" %
                    sess.run(accuracy, feed_dict={x: mnist.test.images[:256],
                                                  y: mnist.test.labels[:256],
                                                  keep_prob: 1.}), wechat_name)

    with lock:
        running = False

Most of this is the same as the original code. The differences: first, every place that prints also calls itchat.send, so each log line is pushed to WeChat as well; second, a lock-protected status flag, running, acts as the run switch; and third, some of the parameters are now passed in through the function arguments. The print-plus-send pairs could also be folded into a single helper (sketched just below), and after that sketch comes the itchat handler I wrote.
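A possible shape for such a helper; the report name and its signature are my own, not part of the original script:

def report(text, wechat_name):
    # Print to the console and mirror the same line to the WeChat contact.
    print(text)
    itchat.send(text, wechat_name)

# e.g. inside the training loop:
# report("Iter " + str(step * batch_size) + ", Minibatch Loss= " + "{:.6f}".format(loss), wechat_name)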

@itchat.msg_register([itchat.content.TEXT])
def chat_trigger(msg):
    global lock, running, learning_rate, training_iters, batch_size, display_step
    if msg['Text'] == u'開始':
        print('Starting')
        with lock:
            run_state = running
        if not run_state:
            try:
                threading.Thread(target=nn_train, args=(msg['FromUserName'], (learning_rate, training_iters, batch_size, display_step))).start()
            except:
                msg.reply('Running')

What this does: when a WeChat message arrives whose text is '開始' (start), it runs the training function (in a separate thread, of course, so the handler is not blocked). Finally, in the script's main flow, we log into WeChat with itchat and start the itchat service, which gives us basic control:

if __name__ == '__main__':
    itchat.auto_login(hotReload=True)
    itchat.run()
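The answer at the top also mentioned sending visualization plots along with the text updates; that fits the same pattern. Here is a sketch of one way to do it in the context of the script above (itchat already imported and logged in); it assumes matplotlib is installed, and the loss_history argument and the loss.png file name are my own placeholders:

import matplotlib
matplotlib.use('Agg')              # render without a display, e.g. on a headless server
import matplotlib.pyplot as plt

def send_loss_curve(loss_history, wechat_name):
    # Plot the losses collected so far, save the figure, and push the image to WeChat.
    plt.figure()
    plt.plot(loss_history)
    plt.xlabel('step')
    plt.ylabel('loss')
    plt.savefig('loss.png')
    plt.close()
    itchat.send_image('loss.png', wechat_name)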

But we need not stop there. I also want some control over the process and the ability to change a few parameters, so:

@itchat.msg_register([itchat.content.TEXT])
def chat_trigger(msg):
    global lock, running, learning_rate, training_iters, batch_size, display_step
    if msg['Text'] == u'開始':
        print('Starting')
        with lock:
            run_state = running
        if not run_state:
            try:
                threading.Thread(target=nn_train, args=(msg['FromUserName'], (learning_rate, training_iters, batch_size, display_step))).start()
            except:
                msg.reply('Running')
    elif msg['Text'] == u'停止':
        print('Stopping')
        with lock:
            running = False
    elif msg['Text'] == u'參數':
        itchat.send('lr=%f, ti=%d, bs=%d, ds=%d' % (learning_rate, training_iters, batch_size, display_step), msg['FromUserName'])
    else:
        try:
            param = msg['Text'].split()
            key, value = param
            print(key, value)
            if key == 'lr':
                learning_rate = float(value)
            elif key == 'ti':
                training_iters = int(value)
            elif key == 'bs':
                batch_size = int(value)
            elif key == 'ds':
                display_step = int(value)
        except:
            pass
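For concreteness, a phone-side session with this handler might look like the exchange below; the lines are my own illustration, but each command maps onto one branch of chat_trigger:

    參數          -> the bot replies "lr=0.001000, ti=200000, bs=128, ds=10"
    lr 0.0005     -> learning_rate for the next run becomes 0.0005
    bs 64         -> batch_size becomes 64
    開始          -> training starts in a background thread and begins reporting
    停止          -> clears the running flag; training stops at its next check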

With this, we can stop partway through a run (nn_train checks the running flag to decide whether it should stop), and we can adjust learning_rate and the other parameters before training starts. It really is that simple... This is a copyrighted Leiphone article; reproduction without authorization is prohibited. See the reprint notice for details.

source:https://www.leiphone.com/news/201709/Uqu8GJhDp8E4tN11.html