Please make sure that the boxes below are checked before you submit your issue. If your issue is an implementation question, please ask your question on StackOverflow or join the Keras Slack channel and ask there instead of filing a GitHub issue. Thank you!
[x] Check that you are up-to-date with the master branch of Keras. You can update with: pip install git+git://github.com/fchollet/keras.git
I have installed Keras on a Red Hat 6 server. It is a really attractive framework because it makes it so easy to build a deep neural network. However, I found that my Keras uses only a single thread (a single core). I ran the examples shipped with the source package, such as "mnist_mlp.py", and watched them with the "top" command: CPU usage stays at 100%, never more than one core. I have 6 cores, each with four threads, and no GPU.
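One thing worth checking is how many threads the backend is allowed to use. Assuming the TensorFlow backend (the Theano backend is instead tuned through OpenMP/BLAS environment variables such as OMP_NUM_THREADS), a minimal sketch for raising the CPU thread counts looks like this; the specific thread numbers are placeholders to adjust for the machine:

```python
# Sketch: let the TensorFlow backend use more CPU threads.
# The thread counts below are assumptions, not tuned values.
import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto(intra_op_parallelism_threads=6,   # threads inside one op
                        inter_op_parallelism_threads=2)   # ops run in parallel
K.set_session(tf.Session(config=config))
```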
The nb_epoch argument has been renamed epochs everywhere. The methods fit_generator, evaluate_generator and predict_generator now work by drawing a number of batches from a generator (a number of training steps), rather than a number of samples. samples_per_epoch was changed to steps_per_epoch in fit_generator; it now refers to the number of batches after which an epoch is considered done. nb_val_samples was changed to validation_steps.
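A small, self-contained sketch of the rename, using a toy model and generator that are illustrative assumptions rather than anything from the changelog:

```python
# Keras 1 counted samples; Keras 2 counts batches (steps).
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

def train_gen(batch_size=32):
    # fit_generator expects an infinite generator
    while True:
        x = np.random.rand(batch_size, 10)
        y = np.random.randint(0, 2, size=(batch_size, 1))
        yield x, y

model = Sequential([Dense(1, activation='sigmoid', input_shape=(10,))])
model.compile(optimizer='sgd', loss='binary_crossentropy')

# Keras 1 style (samples):
#   model.fit_generator(train_gen(), samples_per_epoch=3200, nb_epoch=2)
# Keras 2 style (steps/batches):
model.fit_generator(train_gen(), steps_per_epoch=100, epochs=2)
```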
>>> model = Sequential()
>>> model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(3, None, None)))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/nfs/isicvlnas01/share/anaconda/lib/python2.7/site-packages/keras/models.py", line 422, in add
    layer(x)
  File "/nfs/isicvlnas01/share/anaconda/lib/python2.7/site-packages/keras/engine/topology.p
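The traceback above is cut off, so the exact cause is not visible, but input_shape=(3, None, None) is a channels-first shape while the default image data format is usually channels_last. A minimal sketch, assuming the goal is a convolutional layer that accepts images of variable height and width:

```python
# Sketch: variable spatial dimensions with a channels-first input shape.
# The explicit data_format is an assumption about the fix, not the poster's code.
from keras.models import Sequential
from keras.layers import Conv2D

model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu',
                 data_format='channels_first',   # matches (channels, height, width)
                 input_shape=(3, None, None)))   # height and width left unspecified
model.summary()
```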
I am already aware of some discussions on how to use Keras for very large datasets (>1,000,000 images), such as this and this. However, for my scenario, I can't figure out the appropriate way to use ImageDataGenerator or to write my own data generator. Specifically, I have the following four questions. From this link: when we do datagen.fit(X_sample), do we assume that X_sample is a big enough chunk
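For datasets that do not fit in memory, the usual pattern is to stream batches from disk. A minimal sketch, assuming the images sit in one sub-directory per class (the paths and sizes below are placeholders, not from the question):

```python
# Sketch: streaming a large on-disk dataset with ImageDataGenerator.
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1. / 255)

train_flow = datagen.flow_from_directory(
    'data/train',              # hypothetical directory, one sub-folder per class
    target_size=(224, 224),
    batch_size=32,
    class_mode='categorical')

# Batches are read lazily, so the full dataset never has to fit in memory:
# model.fit_generator(train_flow,
#                     steps_per_epoch=train_flow.samples // 32,
#                     epochs=10)
```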
Problem 1: I changed my code from fit to fit_generator; however, I found it converges more slowly than with fit. With fit it converges at the 5th epoch, but with fit_generator it has not converged even at the 10th epoch. Moreover, when the validation data also comes from a generator, the performance becomes much worse. Does my generator function have errors?

def batch_iter(x, y, batch_size):
    """Generat
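The definition of batch_iter is cut off above, so the following is only a sketch of how such a generator is commonly written, not the poster's code. Two details often explain slower convergence with fit_generator: the generator must loop forever, and it should reshuffle the data each epoch, as fit() does internally; steps_per_epoch also has to cover the whole dataset.

```python
import numpy as np

def batch_iter(x, y, batch_size):
    """Yield (x, y) batches indefinitely, reshuffling every epoch."""
    num_samples = len(x)
    while True:
        idx = np.random.permutation(num_samples)        # reshuffle each epoch
        for start in range(0, num_samples, batch_size):
            batch = idx[start:start + batch_size]
            yield x[batch], y[batch]

# Hypothetical usage, assuming x_train/y_train arrays:
# model.fit_generator(batch_iter(x_train, y_train, 64),
#                     steps_per_epoch=int(np.ceil(len(x_train) / 64)),
#                     epochs=10)
```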