
recent changes squash #25

Merged
merged 1,052 commits
Sep 21, 2018

Conversation

ganny26 (Owner) commented Sep 21, 2018

No description provided.

annarev and others added 30 commits September 14, 2018 12:50
PiperOrigin-RevId: 213028338
PiperOrigin-RevId: 213040362
Also added some experimental C APIs to facilitate the use of eager C APIs in
the S4TF compiler.

PiperOrigin-RevId: 213041780
…nt ever since xla::DotGeneral was added.

PiperOrigin-RevId: 213052269
PiperOrigin-RevId: 213053512
…adability.

- Logic change: moved getting the metric name and function out of the training/eval loops in eager mode.
- Moved setting metric attributes on the model out of the function which calls metric functions.

PiperOrigin-RevId: 213060143
…ehavior of Optimizer.compute_gradients().

PiperOrigin-RevId: 213060585
PiperOrigin-RevId: 213062112
Previously, tf.Variable arguments to a defun-d Python function were made captured inputs. This change makes it possible to parameterize functions on DT_RESOURCE inputs.

PiperOrigin-RevId: 213064739
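The change above can be pictured with a small plain-Python sketch (no TensorFlow dependency; `Variable`, `defun_capturing`, and `defun_parameterized` are invented names for illustration): under the old behavior a variable passed to a defun-d function is baked in as a captured input, while the new behavior treats the DT_RESOURCE input as an ordinary parameter, so one traced function can be reused with different variables.

```python
class Variable:
    """Stand-in for tf.Variable: a value behind an opaque resource handle."""
    def __init__(self, value):
        self.handle = object()   # opaque resource handle
        self.value = value

def defun_capturing(fn, var):
    # Old behavior: the variable is frozen in as a captured input at trace time.
    def traced(x):
        return fn(x, var)
    return traced

def defun_parameterized(fn):
    # New behavior: the resource-typed input is an explicit parameter,
    # so the same traced function works for any variable.
    def traced(x, var):
        return fn(x, var)
    return traced

v1, v2 = Variable(10), Variable(20)
add = lambda x, var: x + var.value

captured = defun_capturing(add, v1)
parameterized = defun_parameterized(add)

print(captured(1))           # 11 (always uses v1)
print(parameterized(1, v1))  # 11
print(parameterized(1, v2))  # 21
```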
Mixing index types doesn't work well with the latest Eigen.

PiperOrigin-RevId: 213067224
It breaks; should be s/input_shape/inputs_shape.

PiperOrigin-RevId: 213070141
…p in lieu of a new num_cores_per_replica.

PiperOrigin-RevId: 213111326
I need these to write readable unit tests for TF graph transformations.  All of
my use cases will live inside tensorflow/compiler, so I'm putting them in
tensorflow/compiler/jit for now; we can move them out if other users are
interested.

In the future we may want to auto-generate type safe versions of these from the
op registrations like we generate C++ wrappers today.

PiperOrigin-RevId: 213186810
…ild_link_issue

PiperOrigin-RevId: 213208519
phawkins@ suggested these in cr/212715067 but I accidentally made the changes in
another client.

PiperOrigin-RevId: 213208811
PiperOrigin-RevId: 213210253
PiperOrigin-RevId: 213212445
Rachel Lim and others added 29 commits September 20, 2018 14:52
PiperOrigin-RevId: 213886813
…OS & environment configurations to a separate test target, and disables running them on Windows.

PiperOrigin-RevId: 213895372
This CL splits the functionality in XlaLaunch into two separate operations:

 - XlaCompile, responsible for compiling a TF function into a LocalExecutable
 - XlaRun, responsible for executing a LocalExecutable created by XlaCompile

This CL is a stepping stone towards implementing lazy compilation for TF/XLA.
The XlaCompile op is spec'ed to return a boolean indicating whether the
compilation was successful.  Right now that boolean is always set to true by
XlaCompile and its value is otherwise ignored, but in the future it will be used
to indicate whether the TF function was compiled or not, and thus whether we
should execute XlaRun or just directly call the TF function.

XlaLaunch still exists, and will be created by create_xla_launch_op.cc.  In the
future we may consider removing it altogether.  build_xla_launch_ops.cc, now
renamed to build_xla_ops.cc, creates an XlaCompile/XlaRun pair instead of
XlaLaunch.

This CL is organized as follows:

 - jit/ops/xla_ops.cc gets two new XLA-specific operations, XlaCompile and
   XlaRun, described above.  XlaRun redundantly takes the must-be-constant
   inputs to the TensorFlow cluster to keep the implementation simple (simple in
   the sense of similar to XlaLaunch), but I will remove this in a subsequent
   cleanup CL.

 - jit/kernels/xla_ops.cc implements XlaCompile and XlaRun in a fairly
   straightforward manner.  XlaCompile compiles the TF function, puts it in a
   process-global storage, XlaExecutableClosureStore, and produces an int64 key.
   XlaRun uses the key to read out the LocalExecutable and execute it.  I'm not
   sure if XlaExecutableClosureStore should be a resource like
   XlaCompilationCache; I did not immediately see any reason to make it so.

 - There are changes to the various _device files to register XlaCompile and
   XlaRun for the XLA_* devices.

 - Finally, I had to fix some tests that were expecting XlaLaunch in the
   execution timeline.

PiperOrigin-RevId: 213895405
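The XlaCompile/XlaRun handoff described above can be sketched in plain Python (a hedged stand-in for the real C++ implementation; `ExecutableClosureStore`, `xla_compile`, and `xla_run` are illustrative names): compile stores an "executable" in a process-global store and returns a key plus a success flag, and run consumes the key to fetch and invoke the executable.

```python
import itertools
import threading

class ExecutableClosureStore:
    """Process-global map from an integer key to a compiled executable."""
    def __init__(self):
        self._lock = threading.Lock()
        self._next_key = itertools.count()
        self._closures = {}

    def produce(self, executable):
        with self._lock:
            key = next(self._next_key)
            self._closures[key] = executable
            return key

    def consume(self, key):
        with self._lock:
            return self._closures.pop(key)

STORE = ExecutableClosureStore()

def xla_compile(fn):
    # "Compilation" is a no-op here; return (key, ok) as the op is spec'ed to.
    executable = fn  # stand-in for a LocalExecutable
    return STORE.produce(executable), True

def xla_run(key, *args):
    executable = STORE.consume(key)
    return executable(*args)

key, ok = xla_compile(lambda a, b: a * b)
result = xla_run(key, 6, 7) if ok else None
print(result)  # 42
```

Once the success flag starts reporting compilation failures, the caller would branch on `ok` and fall back to running the uncompiled TF function instead of `xla_run`.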
depthwise convolution instead of a full convolution now that it exists in XLA.

PiperOrigin-RevId: 213896333
…refactoring the API for exposing tunable parameters, and removing `model::Node` from the public API.

PiperOrigin-RevId: 213907565
PiperOrigin-RevId: 213912651
PiperOrigin-RevId: 213913013
PiperOrigin-RevId: 213917881
PiperOrigin-RevId: 213917946
…ction into the number of shards used. This is a variant of threadpool::parallelFor

PiperOrigin-RevId: 213920649
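A minimal sketch of the sharded parallel-for described above (plain Python with a thread pool; the real variant lives in TensorFlow's C++ threadpool and `parallel_for_sharded` is an invented name): the index range is split into a fixed number of shards and the body is invoked once per shard with the shard's bounds.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_for_sharded(total, num_shards, fn):
    """Call fn(begin, end) once per shard, the shards covering [0, total)."""
    shard_size = (total + num_shards - 1) // num_shards  # ceil division
    ranges = [(i, min(i + shard_size, total))
              for i in range(0, total, shard_size)]
    with ThreadPoolExecutor(max_workers=num_shards) as pool:
        # Each task processes one contiguous [begin, end) slice.
        list(pool.map(lambda r: fn(*r), ranges))

out = [0] * 10
parallel_for_sharded(10, 3, lambda b, e: out.__setitem__(slice(b, e), [1] * (e - b)))
print(sum(out))  # 10
```

Because each shard owns a disjoint slice of the output, the tasks need no locking.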
… in python3 threading.local cannot be pickled.

PiperOrigin-RevId: 213928766
allowing callers to know if we up-converted a SessionBundle to
SavedModel format.

PiperOrigin-RevId: 213937542
self.test_session() has been deprecated in 9962eb5 as its name confuses readers of the test. Moving to cached_session() instead, which is more explicit about:
* the fact that the session may be reused.
* the fact that the session is not closed even when a "with self.test_session()" statement is used.

PiperOrigin-RevId: 213944355
self.test_session() has been deprecated in 9962eb5 as its name confuses readers of the test. Moving to cached_session() instead, which is more explicit about:
* the fact that the session may be reused.
* the fact that the session is not closed even when a "with self.test_session()" statement is used.

PiperOrigin-RevId: 213944932
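The cached_session() semantics described in the two commits above can be mimicked in plain Python (hypothetical `FakeSession`/`TestCase` classes, no TensorFlow dependency): the same session object is handed out across calls, and exiting the `with` block deliberately does not close it.

```python
class FakeSession:
    """Stand-in for a TF session that can be closed."""
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

class _NonClosing:
    """Context manager that yields a session but never closes it on exit."""
    def __init__(self, sess):
        self.sess = sess
    def __enter__(self):
        return self.sess
    def __exit__(self, *exc):
        return False  # deliberately do NOT close the session

class TestCase:
    _cached = None

    def cached_session(self):
        # Reuse one session for the whole test case, recreating it only
        # if someone closed it explicitly.
        if TestCase._cached is None or TestCase._cached.closed:
            TestCase._cached = FakeSession()
        return _NonClosing(TestCase._cached)

tc = TestCase()
with tc.cached_session() as s1:
    pass
with tc.cached_session() as s2:
    pass
print(s1 is s2, s1.closed)  # True False
```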
PiperOrigin-RevId: 213948394
This was blocked by an LLVM bug, which was fixed in r342542.

PiperOrigin-RevId: 213953743
Given a class

@attr.s()
class SampleAttr(object):
  field_1 = attr.ib()
  field_2 = attr.ib()

we will be able to run

obj = SampleAttr(tensor_1, tensor_2)
session.run(obj) # equivalent to session.run([obj.field_1, obj.field_2])

Please note, this does not need nest flatten support (which is only relevant to the feed_dict argument).

Also, the information in __attrs_attrs__ is provided for extensions (as per the docs: http://www.attrs.org/en/stable/extending.html#extending-metadata) like this and is not an "implementation detail".

PiperOrigin-RevId: 213963978
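How a fetch handler could flatten an attrs-decorated object via __attrs_attrs__ can be sketched as follows (the attrs metadata is emulated with a namedtuple here so the example has no dependency on the attrs package; `flatten_fetches` is an invented name, not the TensorFlow internal):

```python
from collections import namedtuple

_Attribute = namedtuple("_Attribute", ["name"])

class SampleAttr:
    # attr.s() generates this metadata on the class; we fake it for the sketch.
    __attrs_attrs__ = (_Attribute("field_1"), _Attribute("field_2"))

    def __init__(self, field_1, field_2):
        self.field_1 = field_1
        self.field_2 = field_2

def flatten_fetches(obj):
    """Expand an attrs instance into its field values, in attribute order."""
    if hasattr(type(obj), "__attrs_attrs__"):
        return [getattr(obj, a.name) for a in type(obj).__attrs_attrs__]
    return [obj]

obj = SampleAttr("tensor_1", "tensor_2")
print(flatten_fetches(obj))  # ['tensor_1', 'tensor_2']
```

Keying off `__attrs_attrs__` on the class, rather than instance attributes, is what makes this robust: it is the documented extension point and preserves field declaration order.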
…umber of circular references. Replace unnecessary OrderedDict with a regular dict.

PiperOrigin-RevId: 213982097
…, logical core) indexing scheme for cores.

Previously, the DeviceAssignment class mixed both a general concept (a mapping from (replica, logical core) to a physical TPU core) and a specific instantiation of that concept, by imposing a particular 3D grid structure on the logical core numbers. This was excessive: while the physical core numbers have a particular structure, there is no need to impose any particular structure on the logical core numbers.

This change simplifies the DeviceAssignment scheme, changing it so logical cores within a replica are numbered sequentially without any particular semantics.

PiperOrigin-RevId: 213984629
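The simplified scheme can be sketched as a flat mapping (assumed structure for illustration, not the real XLA class): logical cores within a replica are numbered sequentially, and (replica, logical_core) indexes directly into a replica-major list of physical cores with no 3D-grid semantics.

```python
class DeviceAssignment:
    """Maps (replica, logical_core) to a physical TPU core number."""
    def __init__(self, num_replicas, num_cores_per_replica, physical_cores):
        assert len(physical_cores) == num_replicas * num_cores_per_replica
        self._cores_per_replica = num_cores_per_replica
        self._physical = physical_cores  # flat list, replica-major order

    def lookup(self, replica, logical_core):
        # Logical cores are just sequential indices within a replica.
        return self._physical[replica * self._cores_per_replica + logical_core]

# 2 replicas x 2 logical cores per replica, mapped onto physical cores 0..3.
da = DeviceAssignment(2, 2, [0, 1, 2, 3])
print(da.lookup(1, 0))  # 2
```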
@ganny26 ganny26 merged commit 3b1a7ec into ganny26:master Sep 21, 2018