[D] A primer on TensorFlow 2.0

I’ve noticed some confusion on this subreddit about what TensorFlow 2.0 is. I worked as an engineer on parts of TensorFlow 2.0, specifically on imperative (or “eager”) execution. I’ll try to clear up some of the confusion here. I’m also happy to answer any questions to the best of my ability.

(I’m no longer employed by Alphabet / Google Brain, so these words are my own.)

TF 2.0 is a backwards-incompatible update to TF’s (1) execution model and (2) API. It is currently in alpha.

(1) TF 2.0 executes operations imperatively (or “eagerly”) by default; this means that it will feel similar to PyTorch or NumPy. It also provides a just-in-time tracer (tf.function) that rewrites Python functions that execute TF (2.0) operations into graphs. This tracer also rewrites Python ASTs, using autograph, to replace tensor-dependent Python control flow with TF control flow, meaning that you don’t need to use constructs like tf.cond or tf.while_loop. Using this tracer is optional. The tracer is similar in spirit to torch.jit.trace and TorchScript, but the usage and semantics are different. It’s also similar to JAX’s jit.
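To make that concrete, here’s a minimal sketch (the function and values are mine, just for illustration): the Python while and if below depend on tensor values, so when tf.function traces the function, autograph rewrites them into graph control flow.

    import tensorflow as tf  # assumes TF 2.0

    # Eager by default: this runs immediately and returns a concrete tensor.
    print(tf.constant(2.0) * 3.0)  # tf.Tensor(6.0, ...)

    # tf.function traces the Python function into a graph. The `while` and
    # `if` below depend on tensor values, so autograph rewrites them into
    # TF control flow; no tf.while_loop or tf.cond is written by hand.
    @tf.function
    def collatz_steps(n):
        steps = tf.constant(0)
        while n > 1:
            if n % 2 == 0:
                n = n // 2
            else:
                n = 3 * n + 1
            steps += 1
        return steps

    print(collatz_steps(tf.constant(6)))  # 6->3->10->5->16->8->4->2->1: 8 steps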

One consequence of this change is that in 2.0, there’s no global graph, no global collections, no get_variable, no custom_getters, no Session, no feeds, no fetches, no placeholders, no control_dependencies, no variable initializers, etc., even when you’re using tf.function. There are many other things that have been excised from the API.
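For illustration, a sketch of the before/after (the 1.x snippet is left in comments, since those symbols no longer exist in 2.0; shapes and names are mine):

    import tensorflow as tf

    # TF 1.x style, gone in 2.0: build a graph with placeholders,
    # initialize variables, then feed and fetch through a Session.
    #
    #   x = tf.placeholder(tf.float32, shape=[None, 3])
    #   w = tf.get_variable("w", shape=[3, 1])
    #   y = tf.matmul(x, w)
    #   with tf.Session() as sess:
    #       sess.run(tf.global_variables_initializer())
    #       out = sess.run(y, feed_dict={x: data})

    # TF 2.0 style: values in, values out.
    w = tf.Variable(tf.random.normal([3, 1]))  # initialized on creation
    x = tf.random.normal([4, 3])
    y = tf.matmul(x, w)                        # runs immediately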

(2) In TF 1.x, there were many high-level APIs for neural networks (e.g., see everything under tf.contrib, which no longer exists in 2.0). Many users found this confusing, especially because these APIs were similar to each other but subtly different and incompatible. With 2.0, TF has standardized on tf.keras, which is essentially an implementation of the Keras API specification, customized for TF’s needs.
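As a sketch of what that looks like (layer sizes and the data variables are placeholders of mine, not anything canonical):

    import tensorflow as tf

    # A small classifier built with the standardized tf.keras API.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_x, train_y, epochs=5)  # train_x / train_y: your data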

That said, TF 2.0 has many low-level APIs, for things like numerical computation (tf, tf.math), linear algebra (tf.linalg), automatic differentiation (tf.GradientTape), state (tf.Variable), neural networks (tf.nn), stochastic gradient-based optimization (tf.optimizers, tf.losses), and dataset munging (tf.data). I’ve only named a few of these low-level APIs. If you don’t want to use tf.keras, you’re free to use these low-level APIs directly. Note that you can also use the object-oriented layers in tf.keras.layers directly, without wrapping them in tf.keras.Sequential or tf.keras.Model.
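Here’s a sketch of how those low-level pieces compose (the toy regression problem is mine): a layer object used directly, differentiated with tf.GradientTape, and updated with an optimizer, with no Model or Sequential wrapper.

    import tensorflow as tf

    # A layer used on its own, trained with tf.GradientTape and an
    # optimizer; no tf.keras.Model or tf.keras.Sequential involved.
    layer = tf.keras.layers.Dense(1)
    optimizer = tf.optimizers.SGD(learning_rate=0.1)

    # Toy data: learn y = 3x.
    x = tf.random.normal([64, 1])
    y = 3.0 * x

    for _ in range(100):
        with tf.GradientTape() as tape:
            loss = tf.reduce_mean(tf.square(layer(x) - y))
        grads = tape.gradient(loss, layer.trainable_variables)
        optimizer.apply_gradients(zip(grads, layer.trainable_variables))

    print(layer.kernel.numpy())  # should be close to [[3.0]]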

I’ve written a more comprehensive, technical primer on TF 2.0, which is available as a blog post and as a Python notebook. There’s also an official guide from the TF team.

submitted by /u/akshayka
