tf.nn.rnn_cell.LSTMCell
Also known as: tf.contrib.rnn.LSTMCell (an alias); related: tf.nn.rnn_cell.BasicLSTMCell (a simplified variant)
See: tf.nn.rnn_cell.LSTMCell
Outputs:
- output: the LSTM cell output h, of shape [batch_size, num_units]. It differs from the LSTM cell state c in that c is additionally passed through an activation (tanh by default) and multiplied by the output gate's sigmoid output to produce h (see the sketch after this list).
- new_state: the LSTM cell state and LSTM output at the current time step, packed as LSTMStateTuple (c, h), where h is the same tensor as the output above. Shape: ([batch_size, num_units], [batch_size, num_units])
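In equation form, a minimal sketch of how h is derived from c, using generic names and the standard LSTM formulation (this is not TensorFlow's internal code):
import numpy as np

def lstm_output_from_state(c, output_gate_logits):
    # h = sigmoid(output gate) * tanh(cell state)
    o = 1.0 / (1.0 + np.exp(-output_gate_logits))
    return o * np.tanh(c)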
Example:
import tensorflow as tf

batch_size = 10
embedding_dim = 300
inputs = tf.Variable(tf.random_normal([batch_size, embedding_dim]))
# previous (c, h) state, each of shape [batch_size, num_units]
previous_state = (tf.Variable(tf.random_normal([batch_size, 128])),
                  tf.Variable(tf.random_normal([batch_size, 128])))
lstmcell = tf.nn.rnn_cell.LSTMCell(128)
outputs, (c_state, h_state) = lstmcell(inputs, previous_state)
Output:
(<tf.Tensor 'lstm_cell/mul_2:0' shape=(10, 128) dtype=float32>,
LSTMStateTuple(c=<tf.Tensor 'lstm_cell/add_1:0' shape=(10, 128) dtype=float32>, h=<tf.Tensor 'lstm_cell/mul_2:0' shape=(10, 128) dtype=float32>))
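Note that the returned output and the h in the state tuple are the same tensor (both are 'lstm_cell/mul_2:0' in the dump above); a quick check:
print(outputs.name == h_state.name)  # True: the output is exactly the state's h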
tf.nn.rnn_cell.MultiRNNCell
See: tf.nn.rnn_cell.MultiRNNCell
Outputs:
- outputs: the output of the topmost cell at the current time step (a cell call processes a single step). Shape: [batch_size, cell.output_size]
- states: the state of every layer; for an M-layer LSTM this is a tuple of M LSTMStateTuples.
Example:
batch_size = 10
inputs = tf.Variable(tf.random_normal([batch_size, 128]))
# per-layer previous (c, h) states; the layer sizes are 100, 200 and 300
previous_state0 = (tf.random_normal([batch_size, 100]), tf.random_normal([batch_size, 100]))
previous_state1 = (tf.random_normal([batch_size, 200]), tf.random_normal([batch_size, 200]))
previous_state2 = (tf.random_normal([batch_size, 300]), tf.random_normal([batch_size, 300]))
num_units = [100, 200, 300]
cells = [tf.nn.rnn_cell.LSTMCell(num_unit) for num_unit in num_units]
mul_cells = tf.nn.rnn_cell.MultiRNNCell(cells)
outputs, states = mul_cells(inputs, (previous_state0, previous_state1, previous_state2))
Output:
outputs:
<tf.Tensor 'multi_rnn_cell_1/cell_2/lstm_cell/mul_2:0' shape=(10, 300) dtype=float32>
states:
(LSTMStateTuple(c=<tf.Tensor 'multi_rnn_cell_1/cell_0/lstm_cell/add_1:0' shape=(10, 100) dtype=float32>, h=<tf.Tensor 'multi_rnn_cell_1/cell_0/lstm_cell/mul_2:0' shape=(10, 100) dtype=float32>),
LSTMStateTuple(c=<tf.Tensor 'multi_rnn_cell_1/cell_1/lstm_cell/add_1:0' shape=(10, 200) dtype=float32>, h=<tf.Tensor 'multi_rnn_cell_1/cell_1/lstm_cell/mul_2:0' shape=(10, 200) dtype=float32>),
LSTMStateTuple(c=<tf.Tensor 'multi_rnn_cell_1/cell_2/lstm_cell/add_1:0' shape=(10, 300) dtype=float32>, h=<tf.Tensor 'multi_rnn_cell_1/cell_2/lstm_cell/mul_2:0' shape=(10, 300) dtype=float32>))
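As with a single cell, outputs is just the h of the topmost layer's state ('multi_rnn_cell_1/cell_2/lstm_cell/mul_2:0' in both dumps above); a quick check:
print(outputs.name == states[-1].h.name)  # True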
tf.nn.dynamic_rnn
See: tf.nn.dynamic_rnn
Outputs:
- outputs: the LSTM output at every time step; with stacked LSTM layers, this is the topmost layer's output at each time step. Shape: [batch_size, max_time, cell.output_size]
- state: the state at the last time step, returned as an LSTMStateTuple. A single-layer LSTM returns one LSTMStateTuple, each component of shape [batch_size, cell.output_size]; an M-layer LSTM returns a tuple of M LSTMStateTuples. In other words, outputs[:, -1, :] == state[-1].h (verified in a sketch after the multi-layer example below).
Example:
batch_size = 10
max_time = 20
data = tf.Variable(tf.random_normal([batch_size, max_time, 128]))
# create a BasicRNNCell
rnn_cell = tf.nn.rnn_cell.BasicRNNCell(num_units=128)
# define the initial state
initial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32)
# 'outputs' is a tensor of shape [batch_size, max_time, cell_state_size]
# 'state' is a tensor of shape [batch_size, cell_state_size]
outputs, state = tf.nn.dynamic_rnn(cell=rnn_cell, inputs=data,
                                   initial_state=initial_state,
                                   dtype=tf.float32)
Output:
outputs:
<tf.Tensor 'rnn_2/transpose_1:0' shape=(10, 20, 128) dtype=float32>
state:
<tf.Tensor 'rnn_2/while/Exit_3:0' shape=(10, 128) dtype=float32>
Example (two stacked LSTM cells):
batch_size = 10
max_time = 20
data = tf.Variable(tf.random_normal([batch_size, max_time, 128]))
# create 2 LSTMCells
rnn_layers = [tf.nn.rnn_cell.LSTMCell(size) for size in [128, 256]]
# create an RNN cell composed sequentially of a number of RNNCells
multi_rnn_cell = tf.nn.rnn_cell.MultiRNNCell(rnn_layers)
# 'outputs' is a tensor of shape [batch_size, max_time, 256]
# 'state' is an N-tuple where N is the number of LSTMCells, containing a
# tf.nn.rnn_cell.LSTMStateTuple for each cell
outputs, state = tf.nn.dynamic_rnn(cell=multi_rnn_cell,
                                   inputs=data,
                                   dtype=tf.float32)
Output:
outputs:
<tf.Tensor 'rnn_1/transpose_1:0' shape=(10, 20, 256) dtype=float32>
state:
(LSTMStateTuple(c=<tf.Tensor 'rnn_1/while/Exit_3:0' shape=(10, 128) dtype=float32>, h=<tf.Tensor 'rnn_1/while/Exit_4:0' shape=(10, 128) dtype=float32>),
LSTMStateTuple(c=<tf.Tensor 'rnn_1/while/Exit_5:0' shape=(10, 256) dtype=float32>, h=<tf.Tensor 'rnn_1/while/Exit_6:0' shape=(10, 256) dtype=float32>))
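To confirm the relationship stated above (outputs[:, -1, :] == state[-1].h), run the multi-layer graph in a session; a minimal sketch, assuming TF 1.x:
import numpy as np

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out_v, st_v = sess.run([outputs, state])
    # the last time step of the top layer's outputs equals its final hidden state
    print(np.allclose(out_v[:, -1, :], st_v[-1].h))  # True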
tf.nn.bidirectional_dynamic_rnn
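The original notes give no example here; a minimal sketch of the call signature and output structure, reusing the data tensor from above (variable names are illustrative):
fw_cell = tf.nn.rnn_cell.LSTMCell(128)
bw_cell = tf.nn.rnn_cell.LSTMCell(128)
# outputs is a pair (output_fw, output_bw), each [batch_size, max_time, 128];
# output_states is a pair of LSTMStateTuples, one per direction
(output_fw, output_bw), (state_fw, state_bw) = tf.nn.bidirectional_dynamic_rnn(
    cell_fw=fw_cell, cell_bw=bw_cell, inputs=data, dtype=tf.float32)
# the two directions are commonly concatenated on the feature axis
combined = tf.concat([output_fw, output_bw], axis=-1)  # [batch_size, max_time, 256]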
tf.contrib.seq2seq.dynamic_decode
Outputs:
- final_outputs: contains rnn_output and sample_id, accessible as final_outputs.rnn_output and final_outputs.sample_id respectively.
- final_state: when the decoder cell is an AttentionWrapper created with alignment_history=True, the alignments can be recovered from the final decoder state:
alignments = tf.transpose(final_decoder_state.alignment_history.stack(), [1, 2, 0])
- final_sequence_lengths: the decoded length of each sequence in the batch.
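A minimal usage sketch, assuming a decoder_cell, embedding matrix, start_tokens, end_token and initial_state are already defined (these names are illustrative, not from the original text):
helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(embedding, start_tokens, end_token)
decoder = tf.contrib.seq2seq.BasicDecoder(decoder_cell, helper, initial_state)
final_outputs, final_state, final_sequence_lengths = tf.contrib.seq2seq.dynamic_decode(
    decoder, maximum_iterations=20)
logits = final_outputs.rnn_output      # [batch_size, decoded_len, output_size]
predictions = final_outputs.sample_id  # [batch_size, decoded_len]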
Source: http://www./content-4-115051.html