I want to load images into the program and train at the same time, feeding them in batches of batch_size.
But I keep running into this:
2018-09-06 15:12:24.930926: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2018-09-06 15:12:25.951985: W T:\src\github\tensorflow\tensorflow\core\framework\allocator.cc:108] Allocation of 1511424000 exceeds 10% of system memory.
I used the TFRecords approach:
import numpy as np
import tensorflow as tf


def read_and_decode(filename):
    # Queue-based reader: parse one example at a time from the TFRecord file
    filename_queue = tf.train.string_input_producer([filename])
    reader = tf.TFRecordReader()
    _, serialized_example = reader.read(filename_queue)
    features = tf.parse_single_example(
        serialized_example,
        features={
            'label': tf.FixedLenFeature([], tf.int64),
            'img_raw': tf.FixedLenFeature([], tf.string),
        })
    img = tf.decode_raw(features['img_raw'], tf.uint8)
    img = tf.reshape(img, [600, 328, 1])
    img = tf.cast(img, tf.float32) * (1. / 255) - 0.5   # scale pixels to [-0.5, 0.5]
    label = tf.cast(features['label'], tf.int32)
    return img, label


def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)


def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)


def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')


def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')


def avg_pool_82x150(x):
    # Global average pooling over the full 150x82 feature map
    return tf.nn.avg_pool(x, ksize=[1, 150, 82, 1], strides=[1, 150, 82, 1], padding='SAME')


if __name__ == '__main__':
    img, label = read_and_decode("train.tfrecords")
    img_train, label_train = tf.train.shuffle_batch([img, label],
                                                    batch_size=30, capacity=2000,
                                                    min_after_dequeue=1000)
    print("begin")
    print("begin data")

    x = tf.placeholder(tf.float32, [None, 600, 328, 1])
    y = tf.placeholder(tf.float32, [None, 6])

    W_conv1 = weight_variable([5, 5, 1, 64])
    b_conv1 = bias_variable([64])
    x_image = tf.reshape(x, [-1, 600, 328, 1])
    h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
    h_pool1 = max_pool_2x2(h_conv1)

    W_conv2 = weight_variable([5, 5, 64, 64])
    b_conv2 = bias_variable([64])
    h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
    h_pool2 = max_pool_2x2(h_conv2)

    W_conv3 = weight_variable([5, 5, 64, 6])
    b_conv3 = bias_variable([6])
    h_conv3 = tf.nn.relu(conv2d(h_pool2, W_conv3) + b_conv3)

    # After two rounds of 2x2 pooling, the 600x328 map has shrunk to 150x82 (H x W)
    nt_hpool3 = avg_pool_82x150(h_conv3)
    nt_hpool3_flat = tf.reshape(nt_hpool3, [-1, 6])
    y_conv = tf.nn.softmax(nt_hpool3_flat)

    cross_entropy = -tf.reduce_sum(y * tf.log(y_conv))
    train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
    correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

    # Start the session and train
    sess = tf.Session()
    sess.run(tf.global_variables_initializer())
    tf.train.start_queue_runners(sess=sess)

    for i in range(1000):
        image_batch, label_batch = sess.run([img_train, label_train])
        label_b = np.eye(6, dtype=float)[label_batch]   # one-hot encode the 6 classes
        train_step.run(feed_dict={x: image_batch, y: label_b}, session=sess)
        if i % 200 == 0:
            train_accuracy = accuracy.eval(feed_dict={x: image_batch, y: label_b}, session=sess)
            print("step %d, training accuracy %g" % (i, train_accuracy))
First, memory usage maxes out, and then all I see is:
begin
begin data
2018-09-06 15:25:13.005857: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2018-09-06 15:25:14.016915: W T:\src\github\tensorflow\tensorflow\core\framework\allocator.cc:108] Allocation of 1511424000 exceeds 10% of system memory.
2018-09-06 15:25:15.642008: W T:\src\github\tensorflow\tensorflow\core\framework\allocator.cc:108] Allocation of 377856000 exceeds 10% of system memory.
2018-09-06 15:25:15.974027: W T:\src\github\tensorflow\tensorflow\core\framework\allocator.cc:108] Allocation of 377856000 exceeds 10% of system memory.
2018-09-06 15:25:31.735929: W T:\src\github\tensorflow\tensorflow\core\framework\allocator.cc:108] Allocation of 377856000 exceeds 10% of system memory.
2018-09-06 15:25:35.883166: W T:\src\github\tensorflow\tensorflow\core\framework\allocator.cc:108] Allocation of 377856000 exceeds 10% of system memory.
step 0, training accuracy 0.0333333
and then it just hangs there.
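My rough guess (just an assumption, not confirmed) is that the ~1.5 GB allocation corresponds to the shuffle_batch queue buffering up to capacity=2000 decoded float32 images:

# Back-of-the-envelope estimate (assumption): shuffle_batch buffers up to
# `capacity` decoded float32 images of shape 600x328x1.
bytes_per_image = 600 * 328 * 1 * 4          # float32 => 787,200 bytes (~0.75 MB)
queue_bytes = 2000 * bytes_per_image         # capacity=2000 => ~1.57 GB
print(queue_bytes)  # same order of magnitude as the 1511424000-byte allocation in the log

If that guess is right, lowering capacity and min_after_dequeue (or keeping queued images as uint8 and converting to float32 only per batch) should shrink that buffer, but I'm not sure it explains the hang, so any pointers are appreciated.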