Recurrent Neural Networks in TensorFlow and Their Wrappers

 印度阿三17 2019-02-15
  • tf.nn.rnn_cell.LSTMCell

    • Related classes: tf.nn.rnn_cell.BasicLSTMCell (a simplified variant) and tf.contrib.rnn.LSTMCell (an alias of this class)

    • See: tf.nn.rnn_cell.LSTMCell

    • Outputs:

      • output: the LSTM unit's output h. It differs from the LSTM cell state c in that h is obtained by passing c through an activation and multiplying elementwise by the output gate's sigmoid, i.e. h = o ⊙ tanh(c). shape: [batch_size, num_units]
      • new_state: the LSTM cell state and LSTM output at the current time step, packed as an LSTMStateTuple (c, h), where h is the same tensor as output above (the verification sketch after the example confirms this). shape: ([batch_size, num_units], [batch_size, num_units])
    • Example:

      batch_size = 10
      embedding_dim = 300
      inputs = tf.Variable(tf.random_normal([batch_size, embedding_dim]))
      # Previous (c, h) state: one tensor each of shape [batch_size, num_units].
      previous_state = (tf.Variable(tf.random_normal([batch_size, 128])),
                        tf.Variable(tf.random_normal([batch_size, 128])))
      lstmcell = tf.nn.rnn_cell.LSTMCell(128)
      outputs, (c_state, h_state) = lstmcell(inputs, previous_state)

      Output:

      (<tf.Tensor 'lstm_cell/mul_2:0' shape=(10, 128) dtype=float32>,
       LSTMStateTuple(c=<tf.Tensor 'lstm_cell/add_1:0' shape=(10, 128) dtype=float32>, h=<tf.Tensor 'lstm_cell/mul_2:0' shape=(10, 128) dtype=float32>))
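    • Verification sketch:

      A minimal runnable sketch, assuming TensorFlow 1.x, confirming that the h in the returned LSTMStateTuple is the very tensor returned as output:

      import tensorflow as tf

      batch_size, embedding_dim, num_units = 10, 300, 128
      inputs = tf.random_normal([batch_size, embedding_dim])
      cell = tf.nn.rnn_cell.LSTMCell(num_units)
      # Start from the all-zeros (c, h) state instead of hand-built tensors.
      state = cell.zero_state(batch_size, dtype=tf.float32)
      output, (c_state, h_state) = cell(inputs, state)

      with tf.Session() as sess:
          sess.run(tf.global_variables_initializer())
          out_val, h_val = sess.run([output, h_state])
          print((out_val == h_val).all())  # True: output is exactly state.h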
  • tf.nn.rnn_cell.MultiRNNCell

    • See: tf.nn.rnn_cell.MultiRNNCell

    • Outputs:

      • outputs: the output of the topmost cell at the current time step. shape: [batch_size, cell.output_size]
      • states: the state of every layer; an M-layer LSTM yields a tuple of M LSTMStateTuples (zero_state, shown in the sketch after the example, builds this structure for you).
    • Example:

      batch_size = 10
      inputs = tf.Variable(tf.random_normal([batch_size, 128]))
      # One previous (c, h) pair per layer, sized to that layer's num_units.
      previous_state0 = (tf.random_normal([batch_size, 100]), tf.random_normal([batch_size, 100]))
      previous_state1 = (tf.random_normal([batch_size, 200]), tf.random_normal([batch_size, 200]))
      previous_state2 = (tf.random_normal([batch_size, 300]), tf.random_normal([batch_size, 300]))
      num_units = [100, 200, 300]
      cells = [tf.nn.rnn_cell.LSTMCell(num_unit) for num_unit in num_units]
      mul_cells = tf.nn.rnn_cell.MultiRNNCell(cells)
      outputs, states = mul_cells(inputs, (previous_state0, previous_state1, previous_state2))

      Output:

      outputs:
      <tf.Tensor 'multi_rnn_cell_1/cell_2/lstm_cell/mul_2:0' shape=(10, 300) dtype=float32>
      states:
      
      (LSTMStateTuple(c=<tf.Tensor 'multi_rnn_cell_1/cell_0/lstm_cell/add_1:0' shape=(10, 100) dtype=float32>, h=<tf.Tensor 'multi_rnn_cell_1/cell_0/lstm_cell/mul_2:0' shape=(10, 100) dtype=float32>),
       LSTMStateTuple(c=<tf.Tensor 'multi_rnn_cell_1/cell_1/lstm_cell/add_1:0' shape=(10, 200) dtype=float32>, h=<tf.Tensor 'multi_rnn_cell_1/cell_1/lstm_cell/mul_2:0' shape=(10, 200) dtype=float32>),
       LSTMStateTuple(c=<tf.Tensor 'multi_rnn_cell_1/cell_2/lstm_cell/add_1:0' shape=(10, 300) dtype=float32>, h=<tf.Tensor 'multi_rnn_cell_1/cell_2/lstm_cell/mul_2:0' shape=(10, 300) dtype=float32>))
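    • State-construction sketch:

      Rather than hand-building one (c, h) pair per layer as above, zero_state on the MultiRNNCell returns the nested structure the cell expects. A sketch, assuming TensorFlow 1.x:

      import tensorflow as tf

      batch_size = 10
      inputs = tf.random_normal([batch_size, 128])
      cells = [tf.nn.rnn_cell.LSTMCell(n) for n in [100, 200, 300]]
      mul_cells = tf.nn.rnn_cell.MultiRNNCell(cells)

      # A 3-tuple of LSTMStateTuples, shaped (10,100)/(10,200)/(10,300) per layer.
      init_state = mul_cells.zero_state(batch_size, dtype=tf.float32)
      outputs, states = mul_cells(inputs, init_state)

      print(outputs)      # Tensor of shape (10, 300): the top layer's output
      print(len(states))  # 3: one LSTMStateTuple per layer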
  • tf.nn.dynamic_rnn

    • See: tf.nn.dynamic_rnn

    • Outputs:

      • outputs: the LSTM output at every time step; with stacked layers, the output of the topmost layer at each time step. shape: [batch_size, max_time, cell.output_size]
      • state: the state at the final time step, returned as an LSTMStateTuple; a single layer yields one LSTMStateTuple of shape [batch_size, cell.output_size], while M layers yield a tuple of M LSTMStateTuples. In other words, outputs[:, -1, :] equals the top layer's final hidden output, state[-1].h, provided every sequence runs the full max_time (verified in the sketch after the second example below).
    • Example:

      batch_size = 10
      max_time = 20
      data = tf.Variable(tf.random_normal([batch_size, max_time, 128]))
      # create a BasicRNNCell
      rnn_cell = tf.nn.rnn_cell.BasicRNNCell(num_units=128)
      
      # 'outputs' is a tensor of shape [batch_size, max_time, cell_state_size]
      
      # defining initial state
      initial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32)
      
      # 'state' is a tensor of shape [batch_size, cell_state_size]
      outputs, state = tf.nn.dynamic_rnn(cell=rnn_cell, inputs=data,
                                         initial_state=initial_state,
                                         dtype=tf.float32)

      Output:

      outputs:
      <tf.Tensor 'rnn_2/transpose_1:0' shape=(10, 20, 128) dtype=float32>
      state:
      <tf.Tensor 'rnn_2/while/Exit_3:0' shape=(10, 128) dtype=float32>
      A second example with a stacked (multi-layer) cell:

      batch_size = 10
      max_time = 20
      data = tf.Variable(tf.random_normal([batch_size, max_time, 128]))
      # create 2 LSTMCells
      rnn_layers = [tf.nn.rnn_cell.LSTMCell(size) for size in [128, 256]]
      
      # create a RNN cell composed sequentially of a number of RNNCells
      multi_rnn_cell = tf.nn.rnn_cell.MultiRNNCell(rnn_layers)
      
      # 'outputs' is a tensor of shape [batch_size, max_time, 256]
      # 'state' is a N-tuple where N is the number of LSTMCells containing a
      # tf.contrib.rnn.LSTMStateTuple for each cell
      outputs, state = tf.nn.dynamic_rnn(cell=multi_rnn_cell,
                                         inputs=data,
                                         dtype=tf.float32)
      Output:

      outputs:
      <tf.Tensor 'rnn_1/transpose_1:0' shape=(10, 20, 256) dtype=float32>
      state:
      
      (LSTMStateTuple(c=<tf.Tensor 'rnn_1/while/Exit_3:0' shape=(10, 128) dtype=float32>, h=<tf.Tensor 'rnn_1/while/Exit_4:0' shape=(10, 128) dtype=float32>),
       LSTMStateTuple(c=<tf.Tensor 'rnn_1/while/Exit_5:0' shape=(10, 256) dtype=float32>, h=<tf.Tensor 'rnn_1/while/Exit_6:0' shape=(10, 256) dtype=float32>))
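    • Verification sketch:

      A sketch, assuming TensorFlow 1.x, checking the claim above that outputs[:, -1, :] equals state[-1].h; note it only holds when every sequence runs the full max_time (no sequence_length passed):

      import tensorflow as tf

      batch_size, max_time = 10, 20
      data = tf.random_normal([batch_size, max_time, 128])
      rnn_layers = [tf.nn.rnn_cell.LSTMCell(size) for size in [128, 256]]
      multi_rnn_cell = tf.nn.rnn_cell.MultiRNNCell(rnn_layers)
      outputs, state = tf.nn.dynamic_rnn(multi_rnn_cell, data, dtype=tf.float32)

      with tf.Session() as sess:
          sess.run(tf.global_variables_initializer())
          last_out, last_h = sess.run([outputs[:, -1, :], state[-1].h])
          print((last_out == last_h).all())  # True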
  • tf.nn.bidirectional_dynamic_rnn

    • See: tf.nn.bidirectional_dynamic_rnn

    • Outputs:

      • outputs: a tuple (output_fw, output_bw) holding the forward cell's and backward cell's outputs,

        where output_fw and output_bw each have shape [batch_size, max_time, cell.output_size].

      • state: a tuple (output_state_fw, output_state_bw) holding the forward and backward final states,

        where output_state_fw and output_state_bw are each an LSTMStateTuple (c, h), i.e. the cell state and the hidden output respectively. A sketch follows below.
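    • Example (sketch):

      A minimal sketch, assuming TensorFlow 1.x and independent forward/backward cells; concatenating the two output directions at the end is a common pattern, not part of the API:

      import tensorflow as tf

      batch_size, max_time = 10, 20
      data = tf.random_normal([batch_size, max_time, 128])
      cell_fw = tf.nn.rnn_cell.LSTMCell(64)
      cell_bw = tf.nn.rnn_cell.LSTMCell(64)

      (output_fw, output_bw), (state_fw, state_bw) = tf.nn.bidirectional_dynamic_rnn(
          cell_fw, cell_bw, inputs=data, dtype=tf.float32)

      # Each direction yields [batch_size, max_time, 64]; concatenated: (10, 20, 128).
      merged = tf.concat([output_fw, output_bw], axis=-1)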

  • tf.contrib.seq2seq.dynamic_decode

    • Outputs:
      • final_outputs: contains rnn_output and sample_id, retrievable as final_outputs.rnn_output and final_outputs.sample_id.
      • final_state: when the decoder cell is an AttentionWrapper created with alignment_history=True, the attention alignments can be recovered from the final decoder state: alignments = tf.transpose(final_decoder_state.alignment_history.stack(), [1, 2, 0])
      • final_sequence_lengths: the length of each decoded sequence. A sketch follows below.
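    • Example (sketch):

      A hedged end-to-end sketch of the training-time decoding path, assuming TensorFlow 1.x contrib APIs; the dimensions, vocabulary size, and the decoder_inputs/decoder_lengths tensors are illustrative assumptions, and the AttentionWrapper needed for alignment_history is omitted for brevity:

      import tensorflow as tf

      batch_size, max_time, embed_dim, num_units, vocab_size = 10, 20, 300, 128, 5000

      # Illustrative embedded decoder inputs and their (full) lengths.
      decoder_inputs = tf.random_normal([batch_size, max_time, embed_dim])
      decoder_lengths = tf.fill([batch_size], max_time)

      cell = tf.nn.rnn_cell.LSTMCell(num_units)
      helper = tf.contrib.seq2seq.TrainingHelper(decoder_inputs, decoder_lengths)
      decoder = tf.contrib.seq2seq.BasicDecoder(
          cell, helper,
          initial_state=cell.zero_state(batch_size, tf.float32),
          output_layer=tf.layers.Dense(vocab_size))  # project onto the vocabulary

      final_outputs, final_state, final_sequence_lengths = \
          tf.contrib.seq2seq.dynamic_decode(decoder)

      logits = final_outputs.rnn_output      # [batch_size, max_time, vocab_size]
      predictions = final_outputs.sample_id  # [batch_size, max_time]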
Source: http://www./content-4-115051.html
