TF: Commonly used functions in the TensorFlow framework — tf.Variable() and tf.get_variable(): usage and differences
TensorFlow framework
tf.Variable() and tf.get_variable() create variables in essentially the same way. The biggest difference between them lies in the parameter that specifies the variable's name.
- tf.Variable(): the name argument is optional.
- tf.get_variable(): the name argument is required.
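The naming difference can be sketched in plain Python. The `MiniStore` class below is a hypothetical illustration, not TensorFlow's real implementation: a tf.Variable-style name is optional and gets uniquified on collision ("v", "v_1", ...), while a tf.get_variable-style name is required and a duplicate raises unless reuse is requested.

```python
# Hypothetical sketch of the two naming behaviors (not TensorFlow code).

class MiniStore:
    def __init__(self):
        self._vars = {}

    def variable(self, value, name="Variable"):
        # tf.Variable-style: name is optional and is uniquified automatically.
        unique = name
        i = 1
        while unique in self._vars:
            unique = f"{name}_{i}"
            i += 1
        self._vars[unique] = value
        return unique

    def get_variable(self, name, value=None, reuse=False):
        # tf.get_variable-style: name is required; a duplicate name raises
        # unless the caller explicitly asks to reuse the existing variable.
        if name in self._vars:
            if reuse:
                return name
            raise ValueError(f"Variable {name} already exists")
        self._vars[name] = value
        return name

store = MiniStore()
print(store.variable(1.0, name="v"))        # "v"
print(store.variable(2.0, name="v"))        # "v_1" (uniquified, no error)
print(store.get_variable("w", 3.0))         # "w"
print(store.get_variable("w", reuse=True))  # "w" (without reuse this raises)
```

This mirrors the behavior documented below: `tf.Variable`'s `name` "defaults to `'Variable'` and gets uniquified automatically", whereas `tf.get_variable` looks the name up in a shared variable store.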
The tensorflow.Variable() function
@tf_export("Variable")
class Variable(checkpointable.CheckpointableBase):
"""See the @{$variables$Variables How To} for a high level overview.
A variable maintains state in the graph across calls to `run()`. You add a
variable to the graph by constructing an instance of the class `Variable`.
The `Variable()` constructor requires an initial value for the variable,
which can be a `Tensor` of any type and shape. The initial value defines the
type and shape of the variable. After construction, the type and shape of
the variable are fixed. The value can be changed using one of the assign
methods.
If you want to change the shape of a variable later you have to use an
`assign` Op with `validate_shape=False`.
Just like any `Tensor`, variables created with `Variable()` can be used as
inputs for other Ops in the graph. Additionally, all the operators
overloaded for the `Tensor` class are carried over to variables, so you can
also add nodes to the graph by just doing arithmetic on variables.
```python
import tensorflow as tf
# Create a variable.
w = tf.Variable(<initial-value>, name=<optional-name>)
# Use the variable in the graph like any Tensor.
y = tf.matmul(w, ...another variable or tensor...)
# The overloaded operators are available too.
z = tf.sigmoid(w + y)
# Assign a new value to the variable with `assign()` or a related method.
w.assign(w + 1.0)
w.assign_add(1.0)
```
When you launch the graph, variables have to be explicitly initialized before
you can run Ops that use their value. You can initialize a variable by
running its *initializer op*, restoring the variable from a save file, or
simply running an `assign` Op that assigns a value to the variable. In fact,
the variable *initializer op* is just an `assign` Op that assigns the
variable's initial value to the variable itself.
```python
# Launch the graph in a session.
with tf.Session() as sess:
# Run the variable initializer.
sess.run(w.initializer)
# ...you now can run ops that use the value of 'w'...
```
The most common initialization pattern is to use the convenience function
`global_variables_initializer()` to add an Op to the graph that initializes
all the variables. You then run that Op after launching the graph.
```python
# Add an Op to initialize global variables.
init_op = tf.global_variables_initializer()
# Launch the graph in a session.
with tf.Session() as sess:
# Run the Op that initializes global variables.
sess.run(init_op)
# ...you can now run any Op that uses variable values...
```
If you need to create a variable with an initial value dependent on another
variable, use the other variable's `initialized_value()`. This ensures that
variables are initialized in the right order.
All variables are automatically collected in the graph where they are
created. By default, the constructor adds the new variable to the graph
collection `GraphKeys.GLOBAL_VARIABLES`. The convenience function
`global_variables()` returns the contents of that collection.
When building a machine learning model it is often convenient to distinguish
between variables holding the trainable model parameters and other variables
such as a `global step` variable used to count training steps. To make this
easier, the variable constructor supports a `trainable=<bool>` parameter. If
`True`, the new variable is also added to the graph collection
`GraphKeys.TRAINABLE_VARIABLES`. The convenience function
`trainable_variables()` returns the contents of this collection. The
various `Optimizer` classes use this collection as the default list of
variables to optimize.
WARNING: tf.Variable objects have a non-intuitive memory model. A Variable is
represented internally as a mutable Tensor which can non-deterministically
alias other Tensors in a graph. The set of operations which consume a Variable
and can lead to aliasing is undetermined and can change across TensorFlow
versions. Avoid writing code which relies on the value of a Variable either
changing or not changing as other operations happen. For example, using
Variable objects or simple functions thereof as predicates in a `tf.cond` is
dangerous and error-prone:
```
v = tf.Variable(True)
tf.cond(v, lambda: v.assign(False), my_false_fn) # Note: this is broken.
```
Here replacing tf.Variable with tf.contrib.eager.Variable will fix any
nondeterminism issues.
To use the replacement for variables which does
not have these issues:
* Replace `tf.Variable` with `tf.contrib.eager.Variable`;
* Call `tf.get_variable_scope().set_use_resource(True)` inside a
`tf.variable_scope` before the `tf.get_variable()` call.
@compatibility(eager)
`tf.Variable` is not compatible with eager execution. Use
`tf.contrib.eager.Variable` instead which is compatible with both eager
execution and graph construction. See [the TensorFlow Eager Execution
guide](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/g3doc/guide.md#variables-and-optimizers)
for details on how variables work in eager execution.
@end_compatibility
"""
def __init__(self,
initial_value=None,
trainable=True,
collections=None,
validate_shape=True,
caching_device=None,
name=None,
variable_def=None,
dtype=None,
expected_shape=None,
import_scope=None,
constraint=None):
"""Creates a new variable with value `initial_value`.
The new variable is added to the graph collections listed in `collections`,
which defaults to `[GraphKeys.GLOBAL_VARIABLES]`.
If `trainable` is `True` the variable is also added to the graph collection
`GraphKeys.TRAINABLE_VARIABLES`.
This constructor creates both a `variable` Op and an `assign` Op to set the
variable to its initial value.
Args:
initial_value: A `Tensor`, or Python object convertible to a `Tensor`,
which is the initial value for the Variable. The initial value must have
a shape specified unless `validate_shape` is set to False. Can also be a
callable with no argument that returns the initial value when called. In
that case, `dtype` must be specified. (Note that initializer functions
from init_ops.py must first be bound to a shape before being used here.)
trainable: If `True`, the default, also adds the variable to the graph
collection `GraphKeys.TRAINABLE_VARIABLES`. This collection is used as
the default list of variables to use by the `Optimizer` classes.
collections: List of graph collections keys. The new variable is added to
these collections. Defaults to `[GraphKeys.GLOBAL_VARIABLES]`.
validate_shape: If `False`, allows the variable to be initialized with a
value of unknown shape. If `True`, the default, the shape of
`initial_value` must be known.
caching_device: Optional device string describing where the Variable
should be cached for reading. Defaults to the Variable's device.
If not `None`, caches on another device. Typical use is to cache
on the device where the Ops using the Variable reside, to deduplicate
copying through `Switch` and other conditional statements.
name: Optional name for the variable. Defaults to `'Variable'` and gets
uniquified automatically.
variable_def: `VariableDef` protocol buffer. If not `None`, recreates
the Variable object with its contents, referencing the variable's nodes
in the graph, which must already exist. The graph is not changed.
`variable_def` and the other arguments are mutually exclusive.
dtype: If set, initial_value will be converted to the given type.
If `None`, either the datatype will be kept (if `initial_value` is
a Tensor), or `convert_to_tensor` will decide.
expected_shape: A TensorShape. If set, initial_value is expected
to have this shape.
import_scope: Optional `string`. Name scope to add to the
`Variable.` Only used when initializing from protocol buffer.
constraint: An optional projection function to be applied to the variable
after being updated by an `Optimizer` (e.g. used to implement norm
constraints or value constraints for layer weights). The function must
take as input the unprojected Tensor representing the value of the
variable and return the Tensor for the projected value
(which must have the same shape). Constraints are not safe to
use when doing asynchronous distributed training.
Raises:
ValueError: If both `variable_def` and initial_value are specified.
ValueError: If the initial value is not specified, or does not have a
shape and `validate_shape` is `True`.
RuntimeError: If eager execution is enabled.
@compatibility(eager)
`tf.Variable` is not compatible with eager execution. Use
`tfe.Variable` instead which is compatible with both eager execution
and graph construction. See [the TensorFlow Eager Execution
guide](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/g3doc/guide.md#variables-and-optimizers)
for details on how variables work in eager execution.
@end_compatibility
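The docstring's remark that the variable initializer op is "just an `assign` Op that assigns the variable's initial value to the variable itself" can be illustrated with a small pure-Python sketch. `SketchVariable` is a hypothetical stand-in, not TensorFlow code: the value is undefined until the initializer, which is nothing but an assignment of the initial value, is run.

```python
# Hypothetical sketch: the initializer is just an assign of the initial value.

class SketchVariable:
    _UNSET = object()

    def __init__(self, initial_value):
        self._initial_value = initial_value
        self._value = self._UNSET  # undefined until initialized

    def assign(self, value):
        self._value = value
        return self._value

    @property
    def initializer(self):
        # The initializer op is merely assign(initial_value).
        return lambda: self.assign(self._initial_value)

    def value(self):
        if self._value is self._UNSET:
            raise RuntimeError("Attempting to use uninitialized variable")
        return self._value

w = SketchVariable(1.0)
w.initializer()            # analogous to sess.run(w.initializer)
w.assign(w.value() + 1.0)  # analogous to w.assign(w + 1.0)
print(w.value())           # 2.0
```

Reading the value before running the initializer raises, which parallels the "Attempting to use uninitialized value" error TensorFlow emits when an op consumes an uninitialized variable.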
The tensorflow.get_variable() function
# The argument list for get_variable must match arguments to get_local_variable.
# So, if you are updating the arguments, also update arguments to
# get_local_variable below.
@tf_export("get_variable")
def get_variable(name,
shape=None,
dtype=None,
initializer=None,
regularizer=None,
trainable=None,
collections=None,
caching_device=None,
partitioner=None,
validate_shape=True,
use_resource=None,
custom_getter=None,
constraint=None,
synchronization=VariableSynchronization.AUTO,
aggregation=VariableAggregation.NONE):
return get_variable_scope().get_variable(
_get_default_variable_store(),
name,
shape=shape,
dtype=dtype,
initializer=initializer,
regularizer=regularizer,
trainable=trainable,
collections=collections,
caching_device=caching_device,
partitioner=partitioner,
validate_shape=validate_shape,
use_resource=use_resource,
custom_getter=custom_getter,
constraint=constraint,
synchronization=synchronization,
aggregation=aggregation)
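As the source shows, `get_variable` simply delegates to the current variable scope's shared store. Its core get-or-create semantics, including scope-prefixed names and the reuse flag, can be sketched in plain Python. This is a hypothetical illustration only; the real `_VariableStore` also handles shapes, initializers, partitioners, and collections.

```python
# Hypothetical sketch of get_variable's get-or-create semantics
# (not TensorFlow's real _VariableStore).
from contextlib import contextmanager

class SketchVariableStore:
    def __init__(self):
        self._vars = {}
        self._scope = ""
        self._reuse = False

    @contextmanager
    def variable_scope(self, name, reuse=False):
        # Scopes prefix variable names and can switch the store to reuse mode.
        old_scope, old_reuse = self._scope, self._reuse
        self._scope = f"{self._scope}{name}/"
        self._reuse = reuse or self._reuse
        try:
            yield
        finally:
            self._scope, self._reuse = old_scope, old_reuse

    def get_variable(self, name, initial_value=None):
        full_name = self._scope + name
        if self._reuse:
            # Reuse mode: the variable must already exist.
            if full_name not in self._vars:
                raise ValueError(f"Variable {full_name} does not exist")
            return self._vars[full_name]
        # Create mode: the variable must not already exist.
        if full_name in self._vars:
            raise ValueError(f"Variable {full_name} already exists")
        self._vars[full_name] = initial_value
        return self._vars[full_name]

store = SketchVariableStore()
with store.variable_scope("layer1"):
    store.get_variable("weights", initial_value=0.5)   # creates "layer1/weights"
with store.variable_scope("layer1", reuse=True):
    shared = store.get_variable("weights")             # fetches the same variable
print(shared)  # 0.5
```

This is the mechanism that makes `tf.get_variable` suitable for weight sharing: two calls with the same scoped name refer to one variable, whereas two `tf.Variable` calls always create two distinct variables.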