Keras TPU Integration

This directory contains examples of using the experimental Cloud TPU-Keras integration that was added in TF 1.9. To learn more about this new integration, check out the documentation (coming soon!).

MNIST: a simple sequential convolutional network that recognizes handwritten digits, and a straightforward example of how to use the new Keras integration.

ResNet-50: ResNet-50 is a 50-layer residual network commonly used for image classification.
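For orientation, here is a minimal sketch of the kind of sequential convnet the MNIST example describes; the layer sizes are illustrative assumptions rather than the example's exact architecture, and the commented-out conversion call is assumed from the TF 1.9-era contrib API rather than taken from this directory.

```python
import tensorflow as tf
from tensorflow import keras

# A small sequential convnet for 28x28 grayscale digits (sizes are illustrative).
model = keras.Sequential([
    keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# With the experimental TF 1.9 integration, the model could then be rewritten
# for a TPU roughly like this (contrib API, removed in later releases;
# TPU_ADDRESS is a placeholder):
# tpu_model = tf.contrib.tpu.keras_to_tpu_model(
#     model,
#     strategy=tf.contrib.tpu.TPUDistributionStrategy(
#         tf.contrib.cluster_resolver.TPUClusterResolver(tpu=TPU_ADDRESS)))
```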
TensorFlow Lite for Microcontrollers

This is an experimental port of TensorFlow Lite aimed at microcontrollers and other devices with only kilobytes of memory. It doesn't require any operating system support, any standard C or C++ libraries, or dynamic memory allocation, so it's designed to be portable even to 'bare metal' systems. The core runtime fits in 16 KB on a Cortex-M3 and, with enough operators to run a speech keyword detection model, takes up a total of 22 KB.
Please go to Stack Overflow for help and support: https://stackoverflow.com/questions/tagged/tensorflow

If you open a GitHub issue, here is our policy:
1. It must be a bug, a feature request, or a significant problem with documentation (for small docs fixes please send a PR instead).
2. The form below must be filled out.
3. It shouldn't be a TensorBoard issue. Those go here.

Here's why we have that policy:
TensorFlow for Java: Examples

These examples include using pre-trained models for image classification and object detection, and driving the training of a pre-defined model, all using the TensorFlow Java API. The TensorFlow Java API does not have feature parity with the Python API; it is most suitable for inference using pre-trained models and for training pre-defined models from a single Java process.
Introduction

Mesh TensorFlow (mtf) is a language for distributed deep learning, capable of specifying a broad class of distributed tensor computations. The purpose of Mesh TensorFlow is to formalize and implement distribution strategies for your computation graph over your hardware/processors. For example: "Split the batch over rows of processors and split the units in the hidden layer across columns of processors."
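To make the rows/columns picture concrete, here is a minimal sketch of how such a split is expressed in Mesh TensorFlow: named tensor dimensions are declared up front, and layout rules map them onto named mesh axes. The dimension sizes and the 2x4 mesh shape below are illustrative assumptions, not taken from the text above.

```python
import mesh_tensorflow as mtf

graph = mtf.Graph()
mesh = mtf.Mesh(graph, "my_mesh")

# Named tensor dimensions (sizes are illustrative).
batch_dim = mtf.Dimension("batch", 64)
hidden_dim = mtf.Dimension("hidden", 1024)

# A 2x4 mesh of processors with axes named "rows" and "cols".
mesh_shape = mtf.convert_to_shape("rows:2,cols:4")

# Layout rules: split "batch" across mesh rows and "hidden" across mesh columns.
layout_rules = mtf.convert_to_layout_rules("batch:rows,hidden:cols")
```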
Hello, I would like to ask whether the current Dataset API allows for the implementation of an oversampling algorithm. I am dealing with a highly imbalanced class problem, and I was thinking it would be nice to oversample specific classes during dataset parsing, i.e. online generation. I've seen the implementation of the rejection_resample function; however, it removes samples instead of duplicating them, and it slows down batch generation.
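One way to get duplication-based oversampling with the existing Dataset ops (instead of rejection_resample) is to repeat minority-class examples inside a flat_map. This is only a sketch of the idea, not an official recipe; the class id and repeat factor are illustrative, and `dataset` is assumed to yield (features, label) pairs.

```python
import tensorflow as tf

def oversample(features, label):
    # Emit minority-class examples several times and everything else once.
    # Class id 1 and the repeat factor 4 are illustrative assumptions.
    repeats = tf.cond(tf.equal(label, 1),
                      lambda: tf.constant(4, tf.int64),
                      lambda: tf.constant(1, tf.int64))
    return tf.data.Dataset.from_tensors((features, label)).repeat(repeats)

dataset = dataset.flat_map(oversample)
# Shuffle afterwards so the duplicated examples are spread across batches.
dataset = dataset.shuffle(buffer_size=10000)
```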
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): custom, yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 14.04
- TensorFlow installed from (source or binary): from pip
- TensorFlow version (use command below): ('v1.2.0-5-g435cdfc', '1.2.1')
- Python version: python2.7
- Bazel version (if compiling from source): -
- CUDA/cuDNN version:
Release 1.8.0

Major Features And Improvements
- Can now pass tf.contrib.distribute.MirroredStrategy() to tf.estimator.RunConfig() to run an Estimator model on multiple GPUs on one machine.
- Added tf.contrib.data.prefetch_to_device(), which supports prefetching to GPU memory.
- Added Gradient Boosted Trees as pre-made Estimators: BoostedTreesClassifier, BoostedTreesRegressor.
- Added 3rd generation pipeline config for Cloud TPUs, which improves performance and usability.
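As a rough sketch of the first item: the strategy is handed to RunConfig and the Estimator picks it up from there. The `train_distribute` argument name and the `my_model_fn`/`my_input_fn` callables below are assumptions for illustration, not quoted from the release notes.

```python
import tensorflow as tf

# Minimal sketch (TF 1.8-era contrib API): run an Estimator on all GPUs
# of one machine via MirroredStrategy.
strategy = tf.contrib.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)

# `my_model_fn` and `my_input_fn` are placeholders for user-defined functions.
estimator = tf.estimator.Estimator(model_fn=my_model_fn, config=config)
estimator.train(input_fn=my_input_fn)
```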